We present an approach for expanding taxonomies with synonyms, or aliases. We target large shopping taxonomies, with thousands of nodes. A comprehensive set of entity aliases is an important component of identifying entities in unstructured text such as product reviews or search queries. Our method consists of two stages: we generate synonym candidates from WordNet and shopping search queries, then use a binary classifier to filter candidates. We process taxonomies with thousands of nodes in order to generate over 90,000 synonyms. We show that using the taxonomy to derive contextual features improves classification performance over using features from the target node alone. We show that our approach has potential for transfer learning between different taxonomy domains, which reduces the need to collect training data for new taxonomies.

Semantic Networks (SN) represent entities, relationships between entities, and their properties. Semantic Networks may represent a broad variety of information, from named entities, such as persons or places, to abstract concepts. The term "knowledge graph" is also used to describe this form of structured data. Among the properties commonly encoded in a SN are the primary name and aliases of an entity in multiple languages. For example, the Wikidata entity Q2 has multilingual names, such as Earth or Blue Planet (in English), or Tierra (in Spanish). Semantic networks may include sub-structures based on a subset of the relations defined, for example, taxonomies which define type-subtype relations; for example, ConceptNet includes the WordNet taxonomy BID28 as a subset of its nodes and relations.

Synonyms, or aliases, are equivalent names for entities in a SN. For example, "washing machine" and "washer" can refer to the same concept of an appliance type. Synonyms enable improved performance in a variety of SN applications. For entity extraction from text BID7, wikification BID18 BID5, or natural language instruction grounding BID16, a broader set of synonyms improves recall. In applications which use a SN to generate prompts for users, such as conversational agents BID11 BID31 or generating explanations of the system's state in natural language, a richer set of synonyms results in more varied utterances.

In this paper, we focus on the problem of expanding taxonomies with synonyms for applications in which entities are complex concepts arranged into taxonomies designed to facilitate browsing the product catalog on amazon.com. The ontologies contain product type taxonomies, which are the focus of this work, in addition to other information such as attributes for refining products in search. In addition to distinct product types, the taxonomies contain nodes which are complex concepts, for example combinations of types and attributes, or groupings of multiple types. For example, the node "Gloves & Protective Gear" groups together gloves and other gear; the node "Automatic Irrigation Equipment" describes irrigation equipment that has automation features. The primary application of the synonyms generated using our method is to identify direct references to the taxonomy nodes in text such as search queries. Having a broader set of synonyms for taxonomy nodes enables broader query coverage for experiences that are specific to products in the taxonomy, for example, showing the best selling products under a given category. It is thus important to the users' experience that node synonyms are as accurate as possible, within the broader context of the taxonomy.
For example, given the node "household bathroom surface cleaners" we output synonyms such as "family bathroom surface cleaner" and "house bath surface cleansing." Our method is robust to errors of word sense compatibility; for example, we reject "mack game restrainer" as a synonym for "mac game controllers," and "store circuit board" as a candidate for "memory cards."

The taxonomies are authored by experts familiar with the respective shopping domains to facilitate navigation and browsing (Section 4.1). They contain over 4,300 nodes and have depths of over 30 nodes; in addition to taxonomical relationships, they represent type properties, possible values, node equivalence, and other information. In this paper, we identify each taxonomy by its root node name. For the example shown in Figure 1, the taxonomy "Baby Products" includes, among 15 other nodes, a category node named "Car Seats and Accessories." This has the children "Car Seats," "Car Seat Bases," "Car Beds," and "Accessories." The "Accessories" node has 17 children (e.g. "Cup Holders" and "Seat Liners"), while the "Car Seats" node has five children grouped by age group and chair type. We note the fine granularity of nodes, which includes distinctions based on product types, features, intended use, and other criteria dependent on the domain; concepts range from general to specific in fine increments, with children refining and specifying the parent node. The taxonomy nodes we target have complex names, for example "Convertible Child Safety Car Seats," and are thus unlikely to be found in large natural language text corpora with sufficient frequency to extract synonyms from unstructured text.

We present a method that leverages similarity within the taxonomy to evaluate synonym candidates obtained using low-precision, high-recall methods. Our goal is to enable collecting possible synonyms from a broad range of sources, and to output a final set of synonyms consistent with a single standard. This method enables expansion with synonyms for complex SN that are not common in typical text corpora, such as shopping taxonomies for browsing. The main advantages of our approach are that: 1) it does not depend on frequent mentions in corpora of entities in the taxonomy; 2) it identifies synonyms that fit within the broader structure of the taxonomy contained within the graph, and outputs synonyms of similar specificity to the original name; 3) the classifier uses domain-independent features, enabling cross-domain predictions.

Figure 1: Sample section of a taxonomy used in this work, which is designed for exploring and filtering online shopping catalogs. We highlight the path from the root node, "Baby Products," to a leaf node, "Child Safety Booster Car Seats." Each node prefixed by a + sign indicates the node has children; leaf nodes are marked by a -. For compactness, we enumerate instead of indenting some of the 15 children of the root node.

Figure 2: Overview of our method. We start with product taxonomies designed for browsing a large online shopping catalog, described in Section 4.1, and generate synonym candidates for each node using a thesaurus such as WordNet (Section 3.1). We then classify the set of candidates using a binary classifier (Section 3.2) to output the final set of synonyms.

Our method consists of the following stages (Figure 2): 1. Generate synonym candidates for each node of the taxonomy. We experimented with two methods of candidate generation. First, we primarily used a method based on
WordNet BID23, to generate the Cartesian product of concept-level synonyms that are present in the node's name (Section 3.1). Second, we show additional results on classifying shopping search queries (Section 4.4). 2. Filter synonym candidates using a binary classifier (Section 3.2). The classifier uses features derived from a) similarity between the candidate and the target node, and b) similarity features between the candidate and other nodes in the taxonomy. Our goal is to avoid producing synonyms more general or more specific than the original node name, such that the synonyms are consistent with the taxonomy as a whole. The classifier uses features independent of the taxonomy vocabulary, making our method suitable for transfer learning by predicting on new taxonomies that do not have training data available. Transfer learning is one method of interest to reduce the need to collect training labels for new taxonomies.

The rest of the paper is structured as follows. We first review relevant literature. We then describe the taxonomies we use in this work (Section 4.1), and the methods of obtaining synonym candidates and classifying them. We then evaluate the binary synonym classifier using a corpus of annotations collected using crowdsourcing for synonyms generated using the thesaurus. We also include cross-domain learning experiments to evaluate the potential for training the classifier on one taxonomy and predicting on synonyms for a different taxonomy (Section 4.3). Furthermore, we conducted a separate evaluation using an alternative method of selecting synonym candidates, which we briefly summarize: we associated search queries with taxonomy names using customer purchases, and used these search terms as synonym candidates (Section 4.4). We evaluate the impact of using domain-specific knowledge, specifically lists of known brand names, which may be closely associated but not synonymous with product categories, to improve synonym filtering. We conclude the paper with observations about the role of taxonomy-wide similarity in predicting synonymy and describe future directions.

Methods of automatically building knowledge bases from unstructured text include identifying salient terms (keywords and keyphrases), identifying synonymy between terms, forming entities and entity hierarchies from keywords, and inferring relationships and rules between entities BID4; this covers the spectrum from keyword extraction to semantic inference. Synonym extraction has been defined as one of the base steps of automated ontology and knowledge base building. Synonym expansion has been used to enrich search keywords in order to improve recall. WordNet synonym sets have been used to expand search keywords BID29. This work used rules on the WordNet taxonomy to expand queries with keywords, increasing the number of words that are matched. Other work has used text mapping to concepts in a thesaurus for query expansion BID1. In life sciences, domain-specific ontologies have been used in similar ways to expand search terms BID33. Previous work in extracting synonyms from text relies on frequent mentions of the entities in text corpora BID6 BID19 BID32 BID15. These works identify synonyms based on statistically similar contexts or phrases in which they are used. The taxonomies we use in this work are engineered to facilitate exploring a product catalog for online shopping, for example by choosing subtypes of products or filtering by feature.
As such, we cannot expect that all node names and their potential synonyms will occur in text. Other work has used clustering methods that identify synonyms such as "birth date" and "date of birth" in search queries BID14. Topic-based methods have also been used in medical document search BID17 to add aliases as encountered in text. We show an approach that uses search queries and past customer purchases to propose synonym candidates (Section 4.4); this method benefits from using domain-specific knowledge, such as product brand names.

Other work in identifying aliases has focused on named entities, as opposed to common words. Named entity recognition and extraction (NER) are rich research areas, focusing on identifying and categorizing proper names in text BID8 BID25 BID24 BID13. Named entities may refer to persons, places, brands, or authors. It is a problem related to synonym detection, since the same person may be referred to in different contexts using different names. The problem we address in this paper is different from existing work in NER and synonym extraction from unstructured text. Our entities cannot be considered named entities, and as mentioned above, we do not expect to find them mentioned frequently in text corpora.

Structure mapping is an established approach to comparing semantically-complex structures, with applications in analogy modeling BID12 BID20 BID9 BID10. Structural similarity methods consider correspondence at a relational level to be indicative of higher similarity, as opposed to feature-based similarity models that compare attributes directly. For example, in a structural similarity approach, the function an entity performs is more important than individual features such as color. Previous work has used structural similarity in contexts derived from robot tasks in order to identify equivalent objects for a given task. Other work using structural similarity for comparison identified significant discrepancies in superficially similar structures, for example to make the distinction between an arch and a bridge BID21. In designing the synonym filtering classifier, we incorporated concepts from structure mapping by adding features that compare the synonym candidate with multiple nodes in the taxonomy; we refer to these as structural similarity features. The rationale behind this decision was to calibrate the level of generality of a synonym by considering the surrounding nodes of the taxonomy.

Our approach consists of two stages. First, we identify synonym candidates. We describe a method based on a thesaurus containing concepts that are part of a node's primary name (Section 3.1). Second, we filter synonym candidates using a classifier (Section 3.2). The rationale behind our design is that multiple methods can be used to generate candidates; we demonstrate a generally-applicable method. The filtering stage processes candidates regardless of the method used to obtain them. We start from the observation that many taxonomy node names are composed of common words for which we could identify synonyms. For example, the node "Baby Clothing & Shoes" has the synonym "Baby Clothing & Footwear." One of the challenges in this approach is selecting word-level synonyms that are consistent with the sense of the original word. We identify word synonyms using WordNet BID23. WordNet represents concepts as synonym sets, or synsets, arranged in a taxonomy. Each synset consists of synonyms for a concept corresponding to a particular sense of the word.
For example, the word "car" may have the meanings Car.n.01: car, auto, automobile; Car.n.02: car, rail car, railroad car; Car.n.03: car, gondola. For each node in the taxonomies:

1. We split the node's name into separate concepts found in WordNet. We combine individual consecutive words into concepts whenever possible.
2. We select all synsets for the words in the node name.
3. We perform word sense disambiguation in order to fit the WordNet synset to the taxonomy. We select the WordNet synset most similar to the node and its context for each concept identified in the node name. We define context as the node name and the other node names in the taxonomy. We compute similarity by averaging cosine similarity over word embeddings for all pairs of concepts between the synonym set and the node context. We choose the synset that results in the highest average similarity to use for permuting word synonyms. Because the majority of words in our taxonomies have more than one synset in WordNet, we designed this step to prune the set of incorrect candidates.
4. We generate the Cartesian product of all synset words extracted from WordNet for each concept in the node's name.

The following is an example:

1. Given a node name "washing machine parts," we identify the corresponding concepts, "washing machine" and "parts."
2. We build a context vocabulary by sampling other node names from the Electronics taxonomy. We compare all synsets corresponding to "washing machine" and "parts" to the context vocabulary. We select the most similar sense for each concept, "washer.n.03" and "part.n.01."
3. We extract all lemmas for each synset, i.e. "washer," "automatic washer" and "washing machine," and "part," "portion," "component part," "component."
4. We generate the Cartesian product of these lemmas starting from the original node name, for example "washer parts" or "automatic washer components," for a total of 12 phrases.

Each resulting list of lemmas is a synonym candidate, which is then accepted or rejected using the method described below. We select candidates that 1) have similar meaning to the original node name, and 2) have a similar level of generality to the original node name, in the context of the taxonomy. In common speech, the product types "TV" and "LED TV" may be considered equivalent, with most shoppers referring to an LED TV as simply TV, assuming that LED is the most common display technology at a given time. However, from our taxonomy standpoint they are not synonyms: the type TV has other sub-types, such as OLED TV, Plasma TV, CRT TV, and LED TV. We hypothesized that taking the structure of the taxonomy into account is an essential factor in making such distinctions when identifying synonyms.

We use vector word representations, or word embeddings, to compute some of the classifier's features by comparing synonym candidates with various node names in the taxonomy. To compute features, we sum the corresponding vectors for each word in a taxonomy node name or the synonym candidate (using bi- and tri-grams when available); then, we compute the cosine similarity value between the resulting sum vectors. We used Numberbatch, a set of word embeddings generated from the ConceptNet semantic network, because it is publicly available, it includes WordNet concepts, and has low bias BID28. We implemented a binary gradient boosting classifier using the Python scikit-learn toolkit BID26. Many entity names and synonym candidates consist of more than one word, and have corresponding embeddings, such as "washing machine."
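The candidate generation pipeline above can be sketched compactly. The following is a minimal illustration using NLTK's WordNet interface; the `embed` helper (standing in for summed pre-trained vectors such as Numberbatch) and the exact disambiguation details are our assumptions for illustration, not the paper's published code. The sketch also includes the greedy longest-match concept splitting described next.

```python
# Minimal sketch of WordNet-based candidate generation; `embed` is a
# placeholder and should be replaced with real pre-trained vectors.
from itertools import product

import numpy as np
from nltk.corpus import wordnet as wn

def embed(phrase):
    # Stand-in: deterministic pseudo-embedding (stable within one run);
    # replace with summed Numberbatch vectors in a real setting.
    rng = np.random.default_rng(abs(hash(phrase)) % (2**32))
    return rng.normal(size=300)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def find_concepts(name):
    """Greedily split a node name into the longest WordNet-known n-grams."""
    words, concepts, i = name.lower().split(), [], 0
    while i < len(words):
        for j in range(len(words), i, -1):          # longest n-gram first
            ngram = "_".join(words[i:j])
            if wn.synsets(ngram):
                concepts.append(ngram)
                i = j
                break
        else:                                       # unknown word: keep as-is
            concepts.append(words[i])
            i += 1
    return concepts

def best_synset(concept, context_vec):
    """Word sense disambiguation: pick the synset closest to the context."""
    scored = [(np.mean([cosine(embed(l.name().replace("_", " ")), context_vec)
                        for l in syn.lemmas()]), syn)
              for syn in wn.synsets(concept)]
    return max(scored, key=lambda s: s[0])[1] if scored else None

def candidates(node_name, context_vec):
    """Cartesian product of lemma choices for each disambiguated concept."""
    options = []
    for c in find_concepts(node_name):
        syn = best_synset(c, context_vec)
        lemmas = {l.name().replace("_", " ") for l in syn.lemmas()} if syn else {c}
        options.append(sorted(lemmas))
    return {" ".join(p) for p in product(*options)} - {node_name}
```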
We identify multi-word concepts by searching WordNet with adjacent n-grams in decreasing order of length, in a greedy approach that extracts the longest match from the node name. Synonym candidates generated using WordNet may similarly consist of multi-word concepts as synonyms for single words. We compute the word embedding distances between node names and synonym candidates by first summing all respective vectors for the node and the synonym candidate, then computing cosine similarity between the resulting vectors. We group features into local and structural, where local features are computed only with respect to the target node, and structural features refer to other nodes in the taxonomy.

• Local features:
- Word frequency in search queries: We compute four features, for the average and minimum frequency of words in the original node name and in the synonym string. We denote this group of features WF.
- Character and word edit distance: We compute two features from edit distance in characters and in words between the candidate and the original name (edit(SC, N)).
- Cosine similarity: We compute one feature by summing the word embedding vectors of the synonym candidate words (SC), and comparing this sum with the word embedding sum of the original node name (d(SC, N)).

• Structural features: We compute word embedding similarity between the synonym candidate and the following, and add one feature for each (examples refer to nodes shown in Figure 1):
- The node's parent name, e.g. "Child Safety Car Seats & Accessories" for "Car Seats" (d(SC, P)).
- The name of the taxonomy root, "Baby Products" (d(SC, R)).
- The average distance to the node's direct children (d(SC, ci), where ci ∈ N.children), if the node has children, e.g. all nodes including "Infant Safety Car Seats," "Forward Facing Child Safety Car Seats," and "Baby Stroller Travel Systems."

Our hypothesis, which we test in Section 4.3 using feature ablation, is that comparing a synonym candidate with the broader structure of the taxonomy models how specific a node is with respect to others. A sketch of how these features can be computed follows at the end of this section.

We first describe the taxonomies we used in the evaluation (Section 4.1). We then describe the methodology for collecting training data for the classifiers and show results for classifying synonyms generated using WordNet, exploring feature contribution and transfer learning between taxonomies (Sections 4.2 and 4.3). We include separate results of predicting synonyms from search queries and the effect of using domain-specific features (Section 4.4).

We use taxonomies created by manual ontology design, aimed at enabling navigation through the amazon.com product catalog. The taxonomies enable users to refine products by category and features, for example by selecting a product type after performing a search. We used the following eight taxonomies, with the total number of nodes shown in parentheses, which we selected to cover a broad range of product types: The taxonomies contain type nodes at the leaves, for example "Bulb Planters," and grouping nodes at higher levels; for example, "Gardening & Lawn Care" is a grouping node with children such as "Hand Tools," which in turn has children such as "Picks," "Bulb Planters," or "Manual Lawn Aerators." FIG1 shows a histogram of the number of words in node names in our taxonomies; for example, the node name "women's contemporary clothes designer base layer sets" has seven words. The majority of nodes have three or more words in the name. The taxonomies contain 2379 distinct words.
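Returning to the classifier, the following sketch shows how the feature vector could be assembled and fed to scikit-learn's gradient boosting classifier. It reuses the illustrative `embed` and `cosine` helpers from the previous sketch, assumes a hypothetical `node` object exposing name, parent, root, and children, and omits the word-frequency features for brevity; none of this is the paper's exact implementation.

```python
# Sketch of the local and structural features from Section 3.2.
import numpy as np
import editdistance                      # pip install editdistance
from sklearn.ensemble import GradientBoostingClassifier

def features(candidate, node):
    sc, n = embed(candidate), embed(node.name)
    f = [
        editdistance.eval(candidate, node.name),                  # chars
        editdistance.eval(candidate.split(), node.name.split()),  # words
        cosine(sc, n),                                            # d(SC, N)
        cosine(sc, embed(node.parent.name)),                      # d(SC, P)
        cosine(sc, embed(node.root.name)),                        # d(SC, R)
    ]
    children = [cosine(sc, embed(c.name)) for c in node.children]
    f.append(np.mean(children) if children else 0.0)              # d(SC, ci)
    return f

# X = [features(cand, node) for node, cand, _ in labelled_pairs]
# y = [label for _, _, label in labelled_pairs]
clf = GradientBoostingClassifier()
# clf.fit(X, y); clf.predict(X_new)
```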
We generated candidates using the WordNet method described in Section 3.1. For the total of 15,197 nodes in the eight taxonomies, we generated 182,974 synonym candidates. We took a uniform sample and labeled it using a crowdsourced survey. We sampled uniformly 3331 nodes with distinct names and a total of 4488 corresponding synonym candidates for the eight taxonomies. The candidate synonym set contains 3056 distinct words (0.91 distinct words per node). We designed the survey to present the original node name, a synonym candidate, and the category name. We chose the context as the name of the root in each taxonomy tree, for example "Electronics." Participants were asked to answer a binary yes/no question, "Do these phrases mean the same in the following context?" We collected 10 answers from separate participants for each name-synonym candidate pair. We calculated the proportion of "yes" answers and used a threshold, which we refer to as the annotation threshold, to assign the synonym example a final label, positive or negative. We used all eight taxonomies and collected ten answers per question, totaling 45,000 responses. FIG2 shows the proportion of positive answers per evaluation. We observe a skew towards positive answers, which is to be expected given that the generation method incorporates word sense disambiguation to reduce the number of implausible candidates. We use this label set in Section 4.3, and explore the effect of choosing an annotation threshold for this label set.

We trained the classifier on 90% of the labeled data and tested it on the remaining 10%. We supplemented each instance in the training set with ten negative examples by selecting names of other nodes in the taxonomy; for example, in the Electronics taxonomy, we provided examples such as "mp3 player" and "televisions" as negative synonym examples for "LED TVs." We trained and tested using this process 50 times for each condition, and report average precision and recall values. We explored the effect of the annotator agreement threshold at which the annotation is considered positive; for this set of experiments we enabled all features of the classifier. Table 1 shows classification performance for different annotation thresholds and train/test splits in the Electronics taxonomy. For the remainder of the experiments using crowdsourced labels, we select an annotation threshold of 0.6, as it achieved high precision (our primary consideration) and satisfactory recall. We observed that using a combination of local and structural features resulted in the best performance (0.92 F-1 score), compared to local features or structural features alone (0.56 and 0.79 respective F-1 scores).

Table 2: Classification performance (precision, P, recall, R, and F-1 score) for the Electronics taxonomy for classifiers using subsets of the features described in Section 3.2. We observe that including some structural features, by comparing synonym candidates with the root, parent or children of the target node, improves classification accuracy.

We conducted an ablation study to investigate the contribution of each category of feature. We disabled different combinations of features and trained separate classifiers for each subset of features. For brevity, we report representative findings in the Electronics category (Table 2). We note that in isolation, either local features (row 2) or structural features (row 3) result in lower recall and precision, with combinations of the two resulting in improved performance.
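As an aside, the label construction and negative sampling just described reduce to a few lines; the data-access helpers here are hypothetical.

```python
# Label construction from crowd answers plus sampled negatives.
import random

def label(votes, threshold=0.6):
    """votes: list of 0/1 crowd answers for one (node, candidate) pair."""
    return int(sum(votes) / len(votes) >= threshold)

def add_negatives(node, all_node_names, k=10):
    """Use names of k other nodes in the taxonomy as negative candidates."""
    others = [n for n in all_node_names if n != node.name]
    return [(node, neg, 0) for neg in random.sample(others, k)]
```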
The cosine similarity between the target name and the candidate is important: removing it decreases recall to 67% (row 7), and on its own this feature achieves an F-1 score of 0.77 (row 11). Including a mix of local and structural features yields similar performance of 94% precision and 89-90% recall (rows 4, 5, 8, 10). We observe lower F-1 scores when using all three structural features, and improvements when selecting two of the three. Frequency-based features have high precision but low recall (row 12), which is expected since, if the keywords are present in large volume in search queries, it is likely that they are meaningful phrases. The thesaurus-based method generates invalid candidates primarily because the individual word senses do not have the same meaning when put together, which means that the combination will have lower search term frequency. However, frequency does not identify less frequent terms, which is where synonyms are most useful in practice, since they allow interpreting infrequent queries. We observed similar performance in the other taxonomies, with F-1 scores of over 0.83 (with the exception of "Clothing, Shoes & Jewelry"). We selected the feature set on row 4 in Table 2, i.e. all features except for d(SC, R), and evaluated prediction performance in all other categories. Table 3 shows the best performance for each of the eight product type taxonomies. We note similar scores, with the exception of Clothing, which had a significant skew towards positive annotations. Including the candidate-root similarity feature resulted in lower performance, similar to our previous observation (average F-1 score of 0.78, lower than the average of 0.86 shown in Table 3).

Table 3: Classification results for each product taxonomy, using 10% of annotations for testing; showing average values for 50 train/test samples. We used the feature set shown on row 4 in Table 2.

Finally, we conducted cross-domain experiments and observed degraded but comparable classification performance relative to using the same taxonomy for training and testing. We trained a classifier using one taxonomy and used it to predict on another. This is possible because none of the features described in this section rely on domain-specific information such as known word-level labels. We experimented with all 56 source-target combinations. For this set of experiments, we used the full feature set (i.e. not excluding d(SC, R)), since it resulted in a higher average F-1 score. In Table 4 we show results, in order of F-1 score, of the top 5 best performing and top 5 worst performing of all pairwise combinations of taxonomies; for the sake of brevity we exclude the full list. The average F-1 score over all 56 combinations was 0.766 with a standard deviation of 0.087. We observe that for all top five worst performing pairs, the target taxonomy is Clothing, Shoes & Jewelry; this was the taxonomy with the lowest score in our in-domain experiments (Table 3). The low F-1 scores are due to low recall (0.35); precision is competitive (0.94). Similarly, among the top performing pairs the target taxonomy Electronics is the most common: precision and recall are both similar but slightly lower on average than in-domain (0.90 precision compared to 0.94, and 0.84 recall compared to 0.90). The prediction from Electronics to Clothing, not shown in the table, has an F-1 score of 0.67. We attribute these changes in performance to label noise, but also to how diverse the taxonomies are.
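The cross-domain protocol reduces to a loop over ordered taxonomy pairs; in this hypothetical sketch, `datasets` maps each of the eight taxonomy names to its feature matrix and labels.

```python
# Train on one taxonomy, evaluate on another: 8 x 7 = 56 ordered pairs.
from itertools import permutations

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score

def cross_domain(datasets):
    scores = {}
    for src, tgt in permutations(datasets, 2):
        X_tr, y_tr = datasets[src]
        X_te, y_te = datasets[tgt]
        clf = GradientBoostingClassifier().fit(X_tr, y_tr)
        scores[(src, tgt)] = f1_score(y_te, clf.predict(X_te))
    return scores  # the paper reports a 0.766 average F-1 over all 56
```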
In this section we describe additional results using a separate source of synonym candidates. We experimented with selecting synonym candidates from search queries on amazon.com. These associations are derived using product purchases that follow search queries, and using the taxonomy associations for these products BID27. We reference a related method, also applied to an online shopping domain, of associating product attribute values indexed in a shopping catalog BID30.

Table 4: Classification performance for cross-domain prediction, in which we trained on one taxonomy and predicted on another. The table shows the lowest and highest five pairs of source and target taxonomy, ranked by F-1 score. We used the feature set shown on row 1 in Table 2; the full set of features resulted in the highest average F-1 performance.

We use user behavior, such as product purchases after issuing queries, to infer statistical associations between a query and the product; we then use the product's assignment to taxonomy nodes to infer associations between the query and the node. This model associates unique search keyword queries to taxonomy nodes, and outputs a probability distribution over the taxonomy. We selected as synonym candidates queries that occurred frequently, at least once every day for an entire month, and that had a probability of over 80% of leading to purchases from the respective taxonomy node. The following are examples of node names in the Electronics taxonomy, followed by a sample of queries selected as synonym candidates; we selected examples that are not covered by the thesaurus method:

• Portable cell phone power banks: "battery bank"
• Cell phone cases: "iphone 4s case," "lg g4 phone case"
• Repeaters: "wifi repeater"
• Hdmi cables: "hdmi to mini displayport cable"

While this method has the advantage of accessing a broad and current vocabulary, we also observe a mix of brands, product models, and manufacturer names in those categories; for the task of generating synonyms, these examples are undesirable since they are specific to a subset of items under the taxonomy node, and not to the node globally. This method has the potential of identifying candidates that are not reformulations of the taxonomy nodes. For example, the taxonomy node "Self-Balancing Scooters" may be referred to as "hover boards" in search queries. Using information from search queries enables us to identify emerging synonyms before this information is included in a thesaurus such as WordNet. The queries selected using this method have high lexical variability as a result of the variety of intents of search queries. For example, a query for a specific product that is assigned to a single taxonomy node results in a high probability of the query targeting the corresponding node, because the query is unlikely to lead to clicks or purchases other than for the target product. This makes the input vocabulary to the classifier more varied, both in unique words and in the generality of those words; some candidates contain tokens such as model numbers and brands. Using the same methodology as described in Section 4.2, we collected annotations for a uniform sample of 2000 candidate search queries for 337 nodes in the Electronics taxonomy, also with 10 answers per candidate (20,000 answers in total). The 2000 synonym candidates contain 1307 unique words. The ratio of unique candidate words per node is 3.87, 300% higher than the ratio of unique words in original node names per node, of 1.27.
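The selection rule described earlier in this section (a query seen at least daily for a month, with more than 80% of its purchase probability mass on one node) can be sketched as follows; the input record format is an assumption, not the production schema.

```python
# Select search queries as synonym candidates for taxonomy nodes.
def query_candidates(query_stats, min_days=30, p_min=0.8):
    """query_stats: {query: (days_seen, {node: purchase probability})}."""
    out = {}
    for query, (days_seen, node_probs) in query_stats.items():
        if days_seen < min_days:          # must occur every day for a month
            continue
        node, p = max(node_probs.items(), key=lambda kv: kv[1])
        if p > p_min:                     # >80% of purchases from one node
            out.setdefault(node, []).append(query)
    return out
```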
The synonym candidate vocabulary is significantly more diverse than for WordNet-generated candidates: for the 594 nodes used for collection in Electronics in Section 4.2, there were 1.41 distinct words per node for synonym candidates, only 20% higher than the ratio of 1.17 distinct words in original names per node. Similar to the WordNet survey, we observed a skew towards positive answers. In addition, these annotations showed more positive responses for popular brands in a given product category.

We evaluate the use of domain-specific features, such as known brand names. For candidates that included names of manufacturers or brands, we observed a higher proportion of positive answers for brands that are more common in their respective product category, similar to genericized brands. To control for this effect, we collected an additional set of annotations, authored by experts familiar with product search. For the experiments in this section, we used a set of unigram and bigram embedding vectors generated by applying the word2vec algorithm to the search query dataset BID22. We used this set of embeddings as they represent brand names in relation to a rich vocabulary, which we evaluate towards the end of this section. Following the same methodology as in Section 4.3, we trained and tested separate classifiers using the crowdsourced or the expert annotations, using all available features. We included a feature in the model, β, that activates when a brand, product line, or manufacturer is present in the synonym candidate. The list of brands is domain-specific; for example, "apple" would be a brand name in Electronics but not in Grocery. We computed β using a subset of 120 brands and manufacturer names extracted from the product catalog. Table 5 shows the effect of the annotation consensus threshold and of using the feature β, both when using the crowdsourced annotations and when training on the expert-annotated candidates. In all these experiments, we used the full feature set described in Section 3.2 and the same setup as in the previous section. Overall, we observe a decline in recall compared to classification performance on thesaurus-generated synonyms, which we attribute to the greater lexical variety in the valid candidates set; precision is maintained or improved compared to candidates generated with WordNet. Classification performance is lower for the expert annotations than for the crowdsourced annotations. Furthermore, using β is beneficial only for the expert annotation set. We attribute these observations to the effect of genericized brands.

Table 5: Classification performance when training with crowdsourced labels, for the Electronics taxonomy, for candidates selected from search query keywords. We evaluated setting an annotation consensus threshold and a domain-specific feature, β, which identifies the presence of brand names in the candidate. The last two rows show classification performance when using expert annotations.

Annotations diverge between crowdsourced and expert annotators for categories in which popular brands are closely identified with the product category. Word embedding vectors trained without domain supervision place those words close to common words denoting the type, since they occur in the same context, making the problem inseparable for the classifier. Introducing domain knowledge, in the form of the known brand feature β, is useful only if the annotation set is free of this conflation between brand and type.
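A sketch of the brand feature: β fires when a known brand, product line, or manufacturer appears in the candidate. For simplicity this version matches single-token brand names only; the 120-entry brand list is domain-specific and supplied externally.

```python
# Domain-specific brand-presence feature.
def beta(candidate, brand_names):
    tokens = set(candidate.lower().split())
    return int(any(brand in tokens for brand in brand_names))

# e.g., beta("lg g4 phone case", {"lg", "apple"}) == 1
```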
Entity aliases are an important component of ontology construction, enabling entity recognition in text and the generation of natural language references to entities. We demonstrate a method for identifying synonyms for large taxonomies used for online shopping. Our method consists of two complementary approaches for selecting synonym candidates, and a candidate filtering stage which uses a classifier that includes structural similarity features. We show that using structural similarity features, such as comparing synonym candidates with the parent, children, or root nodes in the taxonomy, improves classification accuracy, and that the method is applicable to transfer learning between taxonomies. We include an additional evaluation on using search queries associated statistically with the taxonomy nodes via user behavior. This method extracts a broader vocabulary for the candidates, including tokens that are not common words, such as proper names, model numbers, or years. We show that using domain knowledge such as brand name definitions improves classification performance for candidates extracted from search queries, which conflate types, brands, and other terms in the same context. In future work we will experiment with taxonomies in languages other than English. We will explore the potential for predicting synonyms in languages other than the training language, similar to the experiments we showed for cross-domain prediction.
We use machine learning to generate synonyms for large shopping taxonomies.
Relational reasoning, the ability to model interactions and relations between objects, is valuable for robust multi-object tracking and pivotal for trajectory prediction. In this paper, we propose MOHART, a class-agnostic, end-to-end multi-object tracking and trajectory prediction algorithm, which explicitly accounts for permutation invariance in its relational reasoning. We explore a number of permutation invariant architectures and show that multi-headed self-attention outperforms the provided baselines and better accounts for complex physical interactions in a challenging toy experiment. We show on three real-world tracking datasets that adding relational reasoning capabilities in this way increases the tracking and trajectory prediction performance, particularly in the presence of ego-motion, occlusions, crowded scenes, and faulty sensor inputs. To the best of our knowledge, MOHART is the first fully end-to-end multi-object tracking from vision approach applied to real-world data reported in the literature.

Real-world environments can be rich and contain countless types of interacting objects. Intelligent autonomous agents need to understand both the objects and the interactions between them if they are to operate in those environments. This motivates the need for class-agnostic algorithms for tracking multiple objects, a capability that is not supported by the popular tracking-by-detection paradigm. In tracking-by-detection, objects are detected in each frame independently, e.g., by a pre-trained deep convolutional neural network (CNN) such as YOLO, and then linked across frames. Algorithms from this family can achieve high accuracy, provided sufficient labelled data to train the object detector, and given that all encountered objects can be associated with known classes, but fail when faced with objects from previously unseen categories. Hierarchical attentive recurrent tracking (HART) is a recently-proposed, alternative method for single-object tracking (SOT), which can track arbitrary objects indicated by the user. This is done by providing an initial bounding-box, which may be placed over any part of the image, regardless of whether it contains an object or what class the object is. HART efficiently processes just the relevant part of an image using spatial attention; it also integrates object detection, feature extraction, and motion modelling into one network, which is trained fully end-to-end. Contrary to tracking-by-detection, where only one video frame is typically processed at any given time to generate bounding box proposals, end-to-end learning in HART allows for discovering complex visual and spatio-temporal patterns in videos, which is conducive to inferring what an object is and how it moves. In the original formulation, HART is limited to the single-object modality, as are other existing end-to-end trackers (; ;). In this work, we present MOHART, a class-agnostic tracker with complex relational reasoning capabilities provided by a multi-headed self-attention module. MOHART infers the latent state of every tracked object in parallel, and uses self-attention to inform per-object states about other tracked objects. This helps to avoid performance loss under self-occlusions of tracked objects or strong camera motion. Moreover, since the model is trained end-to-end, it is able to learn how to manage faulty or missing sensor inputs. See fig. 1 for a high-level illustration of MOHART.
In order to track objects, MOHART estimates their states, which can be naturally used to predict future trajectories over short temporal horizons; this is especially useful for planning in the context of autonomous agents. MOHART can be trained for object tracking and trajectory prediction at the same time, thereby increasing the statistical efficiency of learning. In contrast to prior art, where trajectory prediction and object tracking are usually addressed as separate problems with unrelated solutions, our work shows that trajectory prediction and object tracking are best addressed jointly. Section 2 describes prior art in tracking-by-detection, end-to-end tracking and pedestrian trajectory prediction. In Section 3, we describe our approach, which uses a permutation-invariant self-attention module to enable tracking multiple objects end-to-end with relational reasoning. Section 4 contrasts our approach with multi-object trackers which do not explicitly enforce permutation invariance but have the capacity to learn it, simpler permutation-invariant architectures, as well as multiple single-object trackers running in parallel. We show that multi-headed self-attention significantly outperforms the other approaches. Finally, in Section 5, we apply MOHART to real-world datasets and show that permutation-invariant relational reasoning leads to consistent performance improvements compared to HART, both in tracking and trajectory prediction.

Tracking-by-Detection. Vision-based tracking approaches typically follow a tracking-by-detection paradigm: objects are first detected in each frame independently, and then a tracking algorithm links the detections from different frames to propose a coherent trajectory (; ; ;). Motion models and appearance are often used to improve the association between detected bounding-boxes in a post-processing step. Tracking-by-detection algorithms currently provide the state-of-the-art in multi-object tracking on common benchmark suites, and we fully acknowledge that MOHART is not competitive at this stage in scenarios where high-quality detections are available for each frame. MOHART can in principle be equipped with the ability to use bounding boxes provided by an object detector, but this is beyond the scope of this project.

End-to-End Tracking. A newly established and much less explored stream of work approaches tracking in an end-to-end fashion. A key difficulty here is that extracting an image crop (according to bounding-boxes provided by a detector) is non-differentiable and results in high-variance gradient estimators. propose an end-to-end tracker with soft spatial-attention using a 2D grid of Gaussians instead of a hard bounding-box. HART draws inspiration from this idea, employs an additional attention mechanism, and shows promising performance on the real-world KITTI dataset. HART forms the foundation of this work. It has also been extended to incorporate depth information from RGBD cameras. propose an approach in which the crop corresponds to the scaled-up previous bounding-box. This simplifies the approach, but does not allow the model to learn where to look, i.e., no gradient is backpropagated through crop coordinates. To the best of our knowledge, there are no successful implementations of any such end-to-end approaches for multi-object tracking beyond SQAIR, which works only on datasets with static backgrounds.
On real-world data, the only end-to-end approaches correspond to applying multiple single-object trackers in parallel, a method which does not leverage the potential of scene context or inter-object interactions.

Pedestrian trajectory prediction. Predicting pedestrian trajectories has a long history in computer vision and robotics. Initial research modelled social forces using hand-crafted features (; ; ;) or MDP-based motion transition models, while more recent approaches learn from context information, e.g., positions of other pedestrians or landmarks in the environment. Social-LSTM employs a long short-term memory (LSTM) to predict pedestrian trajectories and uses max-pooling to model global social context. Attention mechanisms have been employed to query the most relevant information, such as neighbouring pedestrians, in a learnable fashion (; ;). Apart from relational learning, context, periodical time information, and constant motion priors (Schöller et al.) have proven effective in predicting long-term trajectories. Our work stands apart from this prior art by not relying on ground truth tracklets. It addresses the more challenging task of working directly with visual input, performing tracking, modelling interactions, and, depending on the application scenario, simultaneously predicting future motions. As such, it can also be compared to Visual Interaction Networks (VIN), which use a CNN to encode three consecutive frames into state vectors, one per object, and feed these into a recurrent neural network (RNN), which has an Interaction Network at its core. More recently, Relational Neural Expectation Maximization (R-NEM) has been proposed as an unsupervised approach which combines scene segmentation and relational reasoning (van). Both VINs and R-NEM make accurate predictions in physical scenarios, but, to the best of our knowledge, have not been applied to real-world data.

This section describes the model architecture in fig. 1. We start by describing the hierarchical attentive recurrent tracking (HART) algorithm, and then follow with an extension of HART to tracking multiple objects, where multiple instances of HART communicate with each other using multi-headed attention to facilitate relational reasoning. We also explain how this method can be extended to trajectory prediction instead of just tracking.

HART is an attention-based recurrent algorithm, which can efficiently track single objects in a video. It uses a spatial attention mechanism to extract a glimpse g_t, which corresponds to a small crop of the image x_t at time-step t, containing the object of interest. This allows it to dispense with the processing of the whole image and can significantly decrease the amount of computation required.

Figure 2: The relational reasoning module in MOHART based on multi-headed self-attention. Here, we show the computation of the interaction of the red object with all other objects. Object representations f_{t,m} are computed using visual features, positional encoding and the hidden state from the recurrent module. These are linearly projected onto keys (k), queries (q), and values (v) to compute a weighted sum of interactions between objects, yielding an interaction vector o_{t,m}. Subscripts t, m are dropped from all variables for clarity of presentation, as is the splitting into multiple heads.

HART uses a CNN to convert the
The hidden state is used to estimate the current bounding-box b t, spatial attention parameters for the next time-step a t+1, as well as object appearance. Importantly, the recurrent core can learn to predict complicated motion conditioned on the past history of the tracked object, which leads to relatively small attention glimpses-contrary to CNN-based approaches , HART does not need to analyse large regions-of-interest to search for tracked objects. In the original paper, HART processes the glimpse with an additional ventral and dorsal stream on top of the feature extractor. Early experiments have shown that this does not improve performance on the MOTChallenge dataset, presumably due to the oftentimes small objects and overall small amount of training data. Further details are provided in Appendix B. The algorithm is initialised with a bounding-box 1 b 1 for the first time-step, and operates on a sequence of raw images x 1:T. For time-steps t ≥ 2, it recursively outputs bounding-box estimates for the current time-step and predicted attention parameters for the next time-step. The performance is measured as intersection-overunion (IoU) averaged over all time steps in which an object is present, excluding the first time step. Although HART can track arbitrary objects, it is limited to tracking one object at a time. While it can be deployed on several objects in parallel, different HART instances have no means of communication. This in performance loss, as it is more difficult to identify occlusions, ego-motion and object interactions. Below, we propose an extension of HART which remedies these shortcomings. Multi-object support in HART requires the following modifications. Firstly, in order to handle a dynamically changing number of objects, we apply HART to multiple objects in parallel, where all parameters between HART instances are shared. We refer to each HART instance as a tracker. Secondly, we introduce a presence variable p t,m for object m. It is used to mark whether an object should interact with other objects, as well as to mask the loss function (described in ) for the given object when it is not present. In this setup, parallel trackers cannot exchange information and are conceptually still single-object trackers, which we use as a baseline, referred to as HART (despite it being an extension of the original algorithm). Finally, to enable communication between trackers, we augment HART with an additional step between feature extraction and the LSTM. For each object, a glimpse is extracted and processed by a CNN (see fig. 1). Furthermore, spatial attention parameters are linearly projected on a vector of the same size and added to this representation, acting as a positional encoding. This is then concatenated with the hidden state of the recurrent module of the respective object (see fig. 2). Let f t,m denote the ing feature vector corresponding to the m th object, and let f t,1:M be the set of such features for all objects. Since different objects can interact with each other, it is necessary to use a method that can inform each object about the effects of their interactions with other objects. Moreover, since features extracted from different objects comprise a set, this method should be permutation-equivariant, i. e., the should not depend on the order in which object features are processed. Therefore, we use the multi-head self-attention block , which is able to account for higher-order interactions between set elements when computing their representations. 
Intuitively, in our case, SAB allows any of the trackers to query other trackers about attributes of their respective objects, e.g., the distance between objects, their direction of movement, or their relation to the camera. This is implemented as follows, where o_m is the output of the relational reasoning module for object m; time-step subscripts are dropped to decrease clutter.

q_m = W_q f_m,   k_m = W_k f_m,   v_m = W_v f_m   (1)

A = softmax(Q K^T / √d_k) V   (2)

o_m = concat(A_m^(1), ..., A_m^(H))   (3)

In Eq. 1, each of the extracted features f_{t,m} is linearly projected into a triplet of key k_{t,m}, query q_{t,m} and value v_{t,m} vectors. Together, they comprise the K, Q and V matrices with M rows and d_k, d_q, d_v columns, respectively. K, Q and V are then split up into multiple heads H ∈ N+, which allows querying different attributes by comparing and aggregating different projections of the features. Multiplying Q and K^T in Eq. 2 allows comparing every query vector q_{t,m,i} to all key vectors k_{t,1:M,i}, where the value of the corresponding dot-products represents the degree of similarity. Similarities are then normalised via a softmax operation and used to aggregate the values V. Finally, the outputs of the different attention heads are concatenated in Eq. 3. SAB produces M output vectors, one for each input, which are then concatenated with the corresponding inputs and fed into separate LSTMs for further processing, as in HART (see fig. 1).
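A minimal numpy sketch of the computation in Eqs. 1-3 over M per-object features follows; random matrices stand in for the learned projections, and the dimensions are illustrative rather than those used by MOHART.

```python
# Multi-headed self-attention over a set of M object features (one per row).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(F, H=4, d_k=16, seed=0):
    """F: (M, d) per-object features; returns (M, H * d_k) interaction vectors."""
    rng = np.random.default_rng(seed)
    M, d = F.shape
    heads = []
    for _ in range(H):                        # one scaled dot-product per head
        Wq, Wk, Wv = (rng.normal(size=(d, d_k)) for _ in range(3))
        Q, K, V = F @ Wq, F @ Wk, F @ Wv      # (M, d_k) each, Eq. 1
        A = softmax(Q @ K.T / np.sqrt(d_k))   # (M, M) attention weights, Eq. 2
        heads.append(A @ V)                   # weighted sum of values
    return np.concatenate(heads, axis=1)      # concatenate heads, Eq. 3

o = self_attention(np.random.randn(5, 32))    # e.g., 5 tracked objects
```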
MOHART is trained fully end-to-end, contrary to other tracking approaches. It maintains a hidden state, which can contain information about the object's motion. One benefit is that in order to predict future trajectories, one can simply feed black frames into the model. Our experiments show that the model learns to fall back on the motion model captured by the LSTM in this case.

We compare self-attention against the following alternatives for exchanging information between trackers. Multilayer perceptron (MLP): the representations of all objects are concatenated and fed into a fully connected layer followed by ELU activations. The output is then again concatenated to the unaltered feature vector of each object, and this concatenated version is fed to the recurrent module of HART. This way of exchanging information allows for universal function approximation (in the limit of infinite layer sizes) but does not impose permutation invariance. DeepSets: the learned representations of the different objects are summed up instead of concatenated and then divided by the total number of objects. This is closely related to DeepSets and allows for universal function approximation of all permutation invariant functions. Max-pooling: similar to DeepSets, but using max-pooling as the permutation invariant operation; this way of exchanging information is used, e.g., in prior trajectory prediction work.

Figure 3: Crucially, each circle is randomly assigned its identity in each time step. Hence, the algorithm cannot infer the forces exerted on one object without knowledge of the state of the other objects in the current time step. The forces in this scenario scale with 1/√r and the algorithm was trained to predict one time step into the future. HART (top) is indeed unable to predict the future location of the objects accurately. The achieved average IoU is 47%, which is only slightly higher than predicting the objects to have the same position in the next time step as in the current one (34%). Using the relational reasoning module, MOHART (bottom) is able to make meaningful predictions (76% IoU). The numbers in the bottom row indicate the self-attention weights from the perspective of the top left tracker (yellow number box). Interestingly, the attention scores have a strong correlation with the interaction strength (which scales with distance), without receiving supervision.

The toy domain consists of circles moving in a two-dimensional environment, with collisions (obeying energy and momentum conservation) between objects and with walls. To investigate how the model understands motion patterns and interactions between objects, we train it to predict future object locations, in contrast to traditional tracking. In the first experiment, each circle exerts repulsive forces on the others, where the force scales with 1/r, r being the distance between them. Accurately predicting the future location using just the previous motion of one object (i.e. without relational reasoning) is therefore challenging. We show that HART as an end-to-end single-object tracker is able to capture complex motion patterns and leverage these to make accurate predictions (see Appendix C). This indicates that HART is able to draw conclusions about the (deterministic, but not static) force field. In the second experiment, we introduce randomness, rendering the scenario not solvable for a single-object tracker, as it requires knowledge about the state of the other objects and relational reasoning (see fig. 3). In each time step, we assign a colour-coded identity to the objects. Objects of the same identity repel each other, objects of different identities attract each other (the objects can be thought of as electrons and protons). The qualitative results in fig. 3 show that MOHART, using self-attention for relational reasoning, is able to capture these interactions with high accuracy.

Figure 4 (left) shows a quantitative comparison of augmenting HART with different relational reasoning modules when identities are re-assigned in every timestep (randomness = 1.0). Exchanging information between trackers of different objects in the latent space with an MLP leads to slightly worse performance than the HART baseline, while simple max-pooling performs significantly better (∆IoU ∼ 17%). This can be explained through the permutation invariance of the problem: the list of latent representations of the different objects has no meaningful order, and the output of the model should therefore be invariant to the ordering of the objects. The MLP is in itself not permutation invariant and therefore prone to overfit to the (meaningless) order of the objects in the training data. Max-pooling, however, is permutation invariant and can in theory, despite its simplicity, be used to approximate any permutation invariant function given a sufficiently large latent space. Max-pooling is often used to exchange information between different tracklets, e.g., in the trajectory prediction domain. However, self-attention, allowing for learned querying and encoding of information, solves the relational reasoning task significantly more accurately.

Figure 4: Left: comparison of relational reasoning modules on the toy domain of fig. 3 (randomness = 1.0). Right: performance depending on how often agents are re-assigned identities randomly (sequence length 15). The higher the randomness, the less static the force field is and the more vital relational reasoning is. For randomness = 0.0, identities still have to be reassigned in some cases in order to prevent deadlocks; this leads to a performance loss for all models, which explains the lower performance of self-attention for randomness = 0.0.

In fig. 4 (right), the frequency with which object identities are reassigned randomly is varied. The results show that, in a deterministic environment, tracking does not necessarily profit from relational reasoning, even in the presence of long-range interactions.
The lower the randomness, the more static the force field is, and a static force field can be inferred from a small number of observations (see fig. 6). This does not mean that all stochastic environments profit from relational reasoning. What these experiments indicate is that tracking cannot be expected to profit from relational reasoning by default in any environment, but instead in environments which feature (potentially non-deterministic) dynamics and predictable interactions.

Having established that MOHART is capable of performing complex relational reasoning, we now test the algorithm on three real-world datasets and analyse the effects of relational reasoning on performance depending on dataset and task. We find consistent improvements of MOHART compared to HART throughout. Relational reasoning yields particularly high gains for scenes with ego-motion, crowded scenes, and simulated faulty sensor inputs. We investigate three qualitatively different datasets: the MOTChallenge dataset, the UA-DETRAC dataset, and the Stanford Drone dataset. To increase scene dynamics and make the tracking/prediction problems more challenging, we sub-sample some of the high-framerate scenes with a stride of two, resulting in scenes with 7-15 frames per second. Training and architecture details are given in Appendices A and B.

FIGURE 5: Camera blackout experiment on a street scene from the MOTChallenge dataset with strong ego-motion. Solid boxes are MOHART predictions (for t ≥ 2), faded bounding boxes indicate object locations in the first frame. As the model is trained end-to-end, MOHART learns to fall back onto its internal motion model if no new observations are available (black frames). As soon as new observations come in, the model 'snaps' back onto the tracked objects.

We conduct experiments in three different modes:

Tracking. The model is initialised with the ground truth bounding boxes for a set of objects in the first frame. It then consecutively sees the following frames and predicts the bounding boxes. The sequence length is 30 time steps and the performance is measured as intersection over union (IoU) averaged over the entire sequence excluding the first frame. This procedure is either applied to the entire dataset or to subsets of it to study the influence of certain properties of the data.

Camera Blackout. This simulates unsteady or faulty sensor inputs. The setup is the same as in Tracking, but sub-sequences of the input are replaced with black images. The algorithm is expected to recognise that no new information is available and that it should resort to its internal motion model.

Prediction. Testing MOHART's ability to capture motion patterns, only the first two frames are shown to the model, followed by three black frames. IoU is measured separately for each time step.

On the MOTChallenge dataset, HART achieves 66.6% intersection over union (see Table 1), which in itself is impressive given the small amount of training data of only 5225 training frames and no pre-training. MOHART achieves 68.5% (both numbers are averaged over 5 runs; an independent-samples t-test resulted in p < 0.0001). The performance gain increases when only considering ego-motion data. This is readily explained: movements of objects in the image space due to ego-motion are correlated and can therefore be better understood when combining information from movements of multiple objects, i.e. performing relational reasoning.
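Performance in all three modes is reported as intersection over union. A minimal sketch of the metric, assuming axis-aligned boxes in (x, y, w, h) format:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def sequence_iou(pred_boxes, gt_boxes):
    """Average IoU over a sequence, excluding the first (given) frame."""
    pairs = list(zip(pred_boxes, gt_boxes))[1:]
    return sum(iou(p, g) for p, g in pairs) / len(pairs)
```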
In another ablation, we filtered for only crowded scenes by requiring five objects to be present for, on average, 90% of the frames in a sub-sequence. For the MOTChallenge dataset, this only leads to a minor increase of the performance gain of MOHART, indicating that the dataset exhibits a sufficient density of objects to learn interactions. The biggest benefit from relational reasoning can be observed in the camera blackout experiments (setup explained in Section 5.1). Both HART and MOHART learn to rely on their internal motion models when confronted with black frames and propagate the bounding boxes according to the previous movement of the objects. It is unsurprising that this scenario profits particularly from relational reasoning. Qualitative tracking and camera blackout results are shown in fig. 5 and Appendix E.

Tracking performance on the UA-DETRAC dataset only profits from relational reasoning when filtering for crowded scenes (see Table 2). The fact that the performance of MOHART is slightly worse on the vanilla dataset (∆ = −0.3%) can be explained with more overfitting: as there is no exchange between trackers, each object constitutes an independent training sample.

The Stanford Drone dataset (see Table 3) is different from the other two: it is filmed from a bird's-eye view. The scenes are more crowded and each object covers a small number of pixels, rendering it a difficult problem for tracking. The dataset was designed for trajectory prediction, a setup where an algorithm is typically provided with ground-truth tracklets in coordinate space and potentially an image as context information. The task is then to extrapolate these tracklets into the future. The tracking performance profits from relational reasoning more than on the UA-DETRAC dataset but less than on the MOTChallenge dataset. The performance gains in the camera blackout experiments are particularly strong when only considering cyclists.

In the prediction experiments (see Appendix D), MOHART consistently outperforms HART. On both datasets, the model outperforms a baseline which uses momentum to linearly extrapolate the bounding boxes from the first two frames. This shows that even from just two frames, the model learns to capture motion models which are more complex than what could be observed from just the bounding boxes (i.e. momentum), suggesting that it uses visual information (HART & MOHART) as well as relational reasoning (MOHART).

With MOHART, we introduce an end-to-end multi-object tracker that is capable of capturing complex interactions and leveraging these for precise predictions, as experiments on both toy and real-world data show. However, the experiments also show that the benefit of relational reasoning strongly depends on the nature of the data. The toy experiments showed that in an entirely deterministic world, relational reasoning was much less important than in a stochastic environment. Amongst the real-world datasets, the highest performance gains from relational reasoning were achieved on the MOTChallenge dataset, which features crowded scenes, ego-motion and occlusions.

The MOTChallenge and UA-DETRAC datasets discussed in this section are intended to be used as a benchmark suite for multi-object tracking in a tracking-by-detection paradigm. Therefore, ground truth bounding boxes are only available for the training datasets.
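The camera blackout corruption itself is simple to reproduce. A minimal sketch, where the blackout window position and length are assumptions:

```python
import numpy as np

def camera_blackout(frames, start, length):
    """Simulate faulty sensor input: replace a sub-sequence of frames
    with black images (arrays of zeros), as in the blackout experiments."""
    frames = frames.copy()
    frames[start:start + length] = 0.0   # black frames carry no observation
    return frames

# e.g. a 30-step sequence of 128x128 RGB frames with steps 10..14 blacked out
seq = np.random.rand(30, 128, 128, 3)
corrupted = camera_blackout(seq, start=10, length=5)
```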
The user is encouraged to upload their model, which performs tracking in a data-association paradigm, leveraging the provided bounding box proposals from an external object detector. As we are interested in a different analysis (IoU given initial bounding boxes), we divide the training data further into training and test sequences. To make up for the smaller training data, we extend the MOTChallenge 2017 dataset with three sequences from the 2015 dataset (ETH-Sunnyday, PETS09-S2L1, ETH-Bahnhof). We use the first 70% of the frames of each of the ten sequences for training and the rest for testing. Sequences with high frame rates (30Hz) are sub-sampled with a stride of two. For the UA-DETRAC dataset, we split the 60 available sequences into 44 training sequences and 16 test sequences. For the considerably larger Stanford Drone dataset we took three videos of the scene deathCircle for training and the remaining two videos from the same scene for testing. The videos of the drone dataset were also sub-sampled with a stride of two to increase scene dynamics.

The architecture details were chosen to optimise HART performance on the MOTChallenge dataset. They deviate from the original HART implementation as follows: A presence variable predicts whether an object is in the scene and successfully tracked. This is trained with a binary cross-entropy loss. The maximum number of objects to be tracked simultaneously was set to 5 for the UA-DETRAC and MOTChallenge datasets. For the more crowded Stanford Drone dataset, this number was set to 10. The feature extractor is a three-layer convolutional network with a kernel size of 5, a stride of 2 in the first and last layers, 32 channels in the first two layers, 64 channels in the last layer, ELU activations, and skip connections. This converts the initial 32 × 32 × 3 glimpse into a 7 × 7 × 64 feature representation. This is followed by a fully connected layer with a 128-dimensional output and an ELU activation. The spatial attention parameters are linearly projected onto 128 dimensions and added to this feature representation, serving as a positional encoding. The LSTM has a hidden state size of 128. The self-attention unit in MOHART comprises linear projections of the inputs to dimensionality 128 for each of the keys, queries and values. For the real-world experiments, in addition to the extracted features from the glimpse, the hidden states from the previous LSTM state are also fed as an input by concatenating them with the features. In all cases, the output of the attention module is concatenated to the input features of the respective object.

As an optimizer, we used RMSProp with momentum set to 0.9 and learning rate $5 \times 10^{-6}$. For the MOTChallenge dataset and the UA-DETRAC dataset, the models were trained for 100,000 iterations with batch size 10, and the reported IoU is exponentially smoothed over iterations to achieve lower variance. For the Stanford Drone dataset, the batch size was increased to 32, reducing time to convergence and hence model training to 50,000 iterations.

In our first experiment in the toy domain (Figure 6), four circles each exert repulsive forces on each other, where the force scales with $1/r$, $r$ being their distance. HART is applied four times in parallel and is trained to predict the location of each circle three time steps into the future. The different forces from different objects lead to a non-trivial force field at each time step.
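The feature extractor described above can be sketched as a small PyTorch module. The paddings (1, 2, 1) and the placement of the skip connection are assumptions chosen to reproduce the stated 32×32×3 to 7×7×64 mapping; they are not necessarily the authors' choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlimpseEncoder(nn.Module):
    """Sketch of the described feature extractor: three 5x5 conv layers
    (strides 2, 1, 2), channels 32/32/64, ELU activations, a skip
    connection, then a 128-d fully connected layer plus a positional
    encoding from the spatial attention parameters."""

    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=1)
        self.c2 = nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=2)
        self.c3 = nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=1)
        self.fc = nn.Linear(7 * 7 * 64, 128)
        self.pos = nn.Linear(4, 128)   # 4 attention params -> encoding (assumed)

    def forward(self, glimpse, attn_params):
        h1 = F.elu(self.c1(glimpse))           # (B, 32, 15, 15)
        h2 = F.elu(self.c2(h1)) + h1           # skip connection (same shape)
        h3 = F.elu(self.c3(h2))                # (B, 64, 7, 7)
        feat = F.elu(self.fc(h3.flatten(1)))   # 128-d feature
        return feat + self.pos(attn_params)    # add positional encoding
```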
Accurately predicting the future location using just the previous motion of one object is therefore challenging (Figure 6 shows that each spatial attention box covers only the current object). Surprisingly, the single-object tracker solves this task with an average of 95% IoU over sequences of 15 time steps. This shows the efficacy of end-to-end tracking to capture complex motion patterns and use them to predict future locations. This, of course, could also be used to generate robust bounding boxes for a tracking task.

FIGURE 7: Peeking into the future. Only the first two frames are shown to the tracking algorithm, followed by three black frames. MOHART learns to fall back on its internal motion model when no observation (i.e. only a black frame) is available. The reported IoU scores show the performance for the respective frames 0, 1, 2, and 3 time steps into the future.

In the results from the prediction experiments (see Figure 7), MOHART consistently outperforms HART. On both datasets, the model outperforms a baseline which uses momentum to linearly extrapolate the bounding boxes from the first two frames. This shows that even from just two frames, the model learns to capture motion models which are more complex than what could be observed from just the bounding boxes (i.e. momentum), suggesting that it uses visual information (HART & MOHART) as well as relational reasoning (MOHART). The strong performance gain of MOHART compared to HART on the UA-DETRAC dataset, despite the small differences for tracking on this dataset, can be explained as follows: this dataset features few interactions but strong correlations in motion. Hence, when only having access to the first two frames, MOHART profits from estimating the velocities of multiple cars simultaneously.

In Section 5, we tested MOHART on three different real-world datasets and in a number of different setups. Figure 8 shows qualitative results both for HART and MOHART on the MOTChallenge dataset. Furthermore, we conducted a set of camera blackout experiments to test MOHART's capability of dealing with faulty sensor inputs. While traditional pipeline methods require careful consideration of different types of corner cases to properly handle erroneous sensor inputs, MOHART is able to capture these automatically, especially when confronted with similar issues in the training scenarios. To simulate this, we replace sub-sequences of the images with black frames. Figure 9 and Figure 5 show two such examples from the test data together with the model's predictions. MOHART learns not to update its internal model when confronted with black frames and instead uses the LSTM to propagate the bounding boxes. When proper sensor input is available again, the model uses this to make a rapid adjustment to its predicted location and 'snap' back onto the object. This works remarkably well in both the presence of occlusion (Figure 9) and ego-motion (Figure 5). Tables 1 to 3 show that the benefit of relational reasoning is particularly high in these scenarios specifically. These experiments can also be seen as a proof of concept of MOHART's capabilities of predicting future trajectories, and of how this profits from relational reasoning.

FIGURE 9: Camera blackout experiment on a pedestrian street scene from the MOTChallenge dataset without ego-motion. Subsequent frames are displayed going from top left to bottom right. Shown are the inputs to the model (some of them being black frames, i.e. arrays of zeroes) and bounding boxes predicted by MOHART (coloured boxes).
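The momentum baseline mentioned above can be sketched as follows; the (x, y, w, h) box format is an assumption:

```python
import numpy as np

def momentum_baseline(box0, box1, n_future):
    """Linearly extrapolate a bounding box from its first two observations.
    The per-step displacement (momentum) observed between frames 0 and 1
    is applied repeatedly to predict n_future boxes."""
    box0, box1 = np.asarray(box0, float), np.asarray(box1, float)
    velocity = box1 - box0                     # change in position and size
    return [box1 + (k + 1) * velocity for k in range(n_future)]

# predict 3 steps ahead from two observed boxes
preds = momentum_baseline((10, 10, 20, 20), (12, 11, 20, 20), n_future=3)
```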
This scene is particularly challenging as occlusion and missing sensor input coincide (fourth row).
MOHART uses a self-attention mechanism to perform relational reasoning in multi-object tracking.
1,101
scitldr
We investigate the combination of actor-critic reinforcement learning algorithms with uniform large-scale experience replay and propose solutions for two challenges: (a) efficient actor-critic learning with experience replay (b) stability of very off-policy learning. We employ those insights to accelerate hyper-parameter sweeps in which all participating agents run concurrently and share their experience via a common replay module. To this end we analyze the bias-variance tradeoffs in V-trace, a form of importance sampling for actor-critic methods. Based on our analysis, we then argue for mixing experience sampled from replay with on-policy experience, and propose a new trust region scheme that scales effectively to data distributions where V-trace becomes unstable. We provide extensive empirical validation of the proposed solution. We further show the benefits of this setup by demonstrating state-of-the-art data efficiency on Atari among agents trained up until 200M environment frames.

Value-based and actor-critic policy gradient methods are the two leading techniques for constructing general and scalable reinforcement learning agents. Both have been combined with non-linear function approximation, and have achieved remarkable successes on multiple challenging domains; yet, these algorithms still require large amounts of data to determine good policies for any new environment. To improve data efficiency, experience replay agents store experience in a memory buffer (replay), and reuse it multiple times to perform reinforcement learning updates. Experience replay allows generalizing prioritized sweeping to the non-tabular setting, and can also be used to simplify exploration by including expert (e.g., human) trajectories. Overall, experience replay can be very effective at reducing the number of interactions with the environment otherwise required by deep reinforcement learning algorithms. Replay is often combined with value-based Q-learning, as it is an off-policy algorithm by construction, and can perform well even if the sampling distribution from replay is not aligned with the latest agent's policy. Combining experience replay with actor-critic algorithms can be harder due to their on-policy nature. Hence, most established actor-critic algorithms with replay employ and maintain Q-functions to learn from the replayed off-policy experience.

In this paper, we demonstrate that off-policy actor-critic learning with experience replay can be achieved without surrogate Q-function approximators using V-trace, by employing the following approaches: a) off-policy replay experience needs to be mixed with a proportion of on-policy experience. We show experimentally (Figure 2) and theoretically that the V-trace policy gradient is otherwise not guaranteed to converge to a locally optimal solution. b) a trust region scheme can mitigate bias and enable efficient learning in a strongly off-policy regime, where distinct agents share experience through a commonly shared replay module. Sharing experience permits the agents to benefit from parallel exploration (Figures 1 and 3).

Our paper is structured as follows: In Section 2 we revisit pure importance sampling for actor-critic agents and V-trace, which is notable for allowing a trade-off between bias and variance in its estimates. We recall that variance reduction is necessary (Figure 4, left) but introduces bias in V-trace.
We derive Proposition 2, stating that off-policy V-trace is not guaranteed to converge to a locally optimal solution, not even in an idealized scenario when provided with the optimal value function. Through theoretical analysis (Section 3) and experimental validation (Figure 2) we determine that mixing on-policy experience into experience replay alleviates the problem. Furthermore, we propose a trust region scheme in Section 4 that enables efficient learning even in a strongly off-policy regime, where distinct agents share the experience replay module and learn from each other's experience. We define the trust region in policy space and prove that the resulting estimator is correct (i.e. estimates an improved return). As a result, we present state-of-the-art data efficiency in Section 5 in terms of median human-normalized performance across 57 Atari games, as well as improved learning efficiency on DMLab-30 (Table 1).

Figure 1: Sharing experience between agents leads to more efficient hyper-parameter sweeps on 57 Atari games. Prior art is presented as horizontal lines. Note that the only previous agent, R2D2, that achieved a score beyond 400% required more than 3,000 million environment steps. We present the pointwise best agent from hyper-parameter sweeps with and without experience replay (shared and not shared). Each sweep contains 9 agents with different learning rate and entropy cost combinations. Replay experiments were repeated twice and ran for 50M steps. To report scores at 200M, we ran the baseline and one shared experience replay agent for 200M steps.

Table 1: Comparison of state-of-the-art agents on 57 Atari games trained up until 200M environment steps (per game) and DMLab-30 trained until 10B steps (multi-task; all games combined). The first two rows are quoted from prior work, the third is our implementation of a pixel control agent, and the last two rows are our proposed LASER (LArge Scale Experience Replay) agent. All agents use hyper-parameter sweeps except for the marked one.

V-trace importance sampling is a popular off-policy correction for actor-critic agents. In this section we revisit how V-trace controls the (potentially infinite) variance that arises from naive importance sampling. We note that this comes at the cost of a biased estimate (see Proposition 1) and creates a failure mode (see Proposition 2) which makes the policy gradient biased. We discuss our solutions for said issues in Section 4.

Figure 2: Left: Learning entirely off-policy from experience replay fails, while combining on-policy data with experience replay leads to improved data efficiency. We present sweeps on DMLab-30 with experience replays of 10M capacity. A ratio of 87.5% implies that there are 7 replayed transitions in the batch for each online transition. Furthermore, we consider an agent identical to "LASER 87.5% replay" which, however, draws all samples from replay. Its batch thus does not contain any online data and we observe a significant performance decrease (see Propositions 2 and 3). The shading represents the point-wise best and worst replica among 3 repetitions. The solid line is the mean. Right: The effect of capacity in experience replay with 87.5% replay data per batch on sweeps on DMLab-30. Data-efficiency improves with larger capacity.

Figure 3: Left: Naively sharing experience between distinct agents in a hyper-parameter sweep fails (green) and is worse than the no-replay baseline (blue). The proposed trust region estimator mitigates the issue (red). Right: Combining population based training with trust region estimation improves performance further. All replay experiments use a capacity of 10 million observations and 87.5% replay data per batch.
We follow standard notation in which an agent interacts with its environment to collect rewards. On each discrete time step t, the agent selects an action $a_t$; it receives in return a reward $r_t$ and an observation $o_{t+1}$, encoding a partial view of the environment's state $s_{t+1}$. In the fully observable case, the RL problem is formalized as a Markov Decision Process: a tuple $(S, A, p, \gamma)$, where $S, A$ denote finite sets of states and actions, $p$ models rewards and state transitions (so that $r_t, s_{t+1} \sim p(s_t, a_t)$), and $\gamma$ is a fixed discount factor. A policy is a mapping $\pi(a|s)$ from states to action probabilities. The agent seeks an optimal policy $\pi^*$ that maximizes the value, defined as the expectation of the cumulative discounted returns $V^\pi(s) = \mathbb{E}_\pi\left[\sum_{t \geq 0} \gamma^t r_t \mid s_0 = s\right]$.

Off-policy learning is the problem of finding, or evaluating, a policy π from data generated by a different policy µ. This arises in several settings. Experience replay mixes data from multiple iterations of policy improvement. In large-scale RL, decoupling acting from learning causes the experience to lag behind the latest agent policy. Finally, it is often useful to learn multiple general value functions or options from a single stream of experience.

On-policy n-step bootstraps give more accurate value estimates in expectation with larger n. They are used in many reinforcement learning agents. Unfortunately, n must be chosen suitably, as the estimate's variance increases with n too. It is desirable to obtain benefits akin to n-step returns in the off-policy case. To this end, multi-step importance sampling can be used. This, however, adds another source of (potentially infinite) variance to the estimate.

Importance sampling can estimate the expected return $V^\pi$ from trajectories sampled from $\mu \neq \pi$, as long as µ is non-zero wherever π is. We employ a previously estimated value function V as a bootstrap to estimate expected returns. A multi-step formulation of the expected return is
$V^\pi(s_t) = \mathbb{E}_\mu\Big[V(s_t) + \sum_{k \geq 0} \gamma^k \Big(\prod_{i=0}^{k} \frac{\pi_{t+i}}{\mu_{t+i}}\Big)\, \delta_{t+k} V\Big]$,
where $\mathbb{E}_\mu$ denotes the expectation under policy µ up to an episode termination, $\delta_t V = r_t + \gamma V(s_{t+1}) - V(s_t)$ is the temporal difference error between consecutive states $s_{t+1}, s_t$, and $\pi_t = \pi(a_t|s_t)$.

Importance sampling estimates can have high variance. Tree Backup and Q(λ) address this, but reduce the number of steps before bootstrapping even when this is undesirable (as in the on-policy case). RETRACE makes use of full returns in the on-policy case, but it introduces a zero-mean random variable at each step, adding variance to empirical estimates in both on- and off-policy cases. V-trace reduces the variance of importance sampling by trading variance for a biased estimate of the return, resulting in a failure mode (see Proposition 2). It uses clipped importance sampling ratios to approximate the return via
$V(s_t) + \sum_{k \geq 0} \gamma^k \Big(\prod_{i=0}^{k-1} c_{t+i}\Big) \rho_{t+k}\, \delta_{t+k} V$,
where V is a learned state-value estimate used to bootstrap, and $\rho_t = \min[\pi_t/\mu_t, \bar\rho]$, $c_t = \min[\pi_t/\mu_t, \bar c]$ are the clipped importance ratios. Note that, differently from RETRACE, V-trace fully recovers the Monte Carlo return when on-policy. It similarly reweights the policy gradient as
$\nabla V^{\tilde\pi}(s_t) = \mathbb{E}_\mu\left[\rho_t \left(r_t + \gamma V(s_{t+1})\right) \nabla \log \pi(a_t|s_t)\right]$.
Note that $\nabla V^{\tilde\pi}(s_t)$ recovers the naively importance sampled policy gradient for $\bar\rho \to \infty$.
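The clipped V-trace target just defined can be computed with a simple backward recursion. A minimal NumPy sketch, with all function and argument names being illustrative:

```python
import numpy as np

def vtrace_target(rewards, values, boot_value, rhos, gamma,
                  rho_bar=1.0, c_bar=1.0):
    """Compute V-trace targets v_t = V(s_t) + sum_k gamma^k (prod c) rho
    delta_k V for one trajectory (minimal sketch).

    rewards, rhos: length-T arrays; rhos[t] = pi(a_t|s_t) / mu(a_t|s_t).
    values: length-T array of V(s_t); boot_value = V(s_T) bootstrap.
    """
    T = len(rewards)
    v_next = np.append(values[1:], boot_value)
    deltas = np.minimum(rhos, rho_bar) * (rewards + gamma * v_next - values)
    cs = np.minimum(rhos, c_bar)
    targets = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):          # backward recursion over the unroll
        acc = deltas[t] + gamma * cs[t] * acc
        targets[t] = values[t] + acc
    return targets
```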
In the literature, it is common to subtract a baseline from the action-value estimate $r_t + \gamma V(s_{t+1})$ to reduce variance; this is omitted here for simplicity. The constants $\bar\rho \geq \bar c \geq 1$ (typically chosen $\bar\rho = \bar c = 1$) define the level of clipping, and improve stability by ensuring a bounded variance. For any given $\bar\rho$, the bias introduced by V-trace in the value and policy gradient estimates increases with the difference between π and µ. We analyze this in the following propositions.

Proposition 1. The V-trace value estimate $V^{\tilde\pi}$ is biased: it does not match the expected return of π but the return of a related implied policy $\tilde\pi$, defined by equation 3, that depends on the behaviour policy µ:
$\tilde\pi_\mu(a|s) = \frac{\min\left[\bar\rho\, \mu(a|s),\, \pi(a|s)\right]}{\sum_b \min\left[\bar\rho\, \mu(b|s),\, \pi(b|s)\right]}$. (3)

Proof. See Appendix C.

Note that the biased policy $\tilde\pi_\mu$ can be very different from π. Hence the V-trace value estimate $V^{\tilde\pi}$ may be very different from $V^\pi$ as well. As an illustrative example, consider two policies over a set of two actions, e.g. "left" and "right", represented as a tuple of probabilities. Let us investigate $\mu = (\phi, 1 - \phi)$ and $\pi = (1 - \phi, \phi)$, defined for any suitably small φ. Observe that π and µ share no trajectories (state-action sequences) in the limit as φ → 0, as each gets more focused on one action. A practical example of this could be two policies, one almost always taking a left turn and one always taking the right. Given sufficient data of either policy it is possible to estimate the value of the other, e.g. with naive importance sampling. However, observe that V-trace with $\bar\rho = 1$ will always estimate a biased value, even given infinite data. Observe that $\min[\mu(a|x), \pi(a|x)] = \min[\phi, 1 - \phi]$ for both actions. Thus $\tilde\pi_\mu$ is uniform rather than resembling the target policy π. The V-trace estimate $V^{\tilde\pi}$ would thus compute the average value of "left" and "right", poorly representing the true $V^\pi$.

Proposition 2. The V-trace policy gradient is biased: given the optimal value function $V^*$, the V-trace policy gradient does not converge to a locally optimal $\pi^*$ for all off-policy behaviour distributions µ.

Proof. See Appendix C.

In Proposition 2 we presented a failure mode in V-trace where the variance reduction biases the value estimate and policy gradient. V-trace computes biased Q-estimates $Q^\omega(s, a) = \omega(s, a)\, Q(s, a)$, with $\omega(s, a) = \min\left[1, \bar\rho\, \mu(a|s)/\pi(a|s)\right] \leq 1$, resulting in a wrong local policy gradient $\mathbb{E}_\pi\left[Q^\omega(s_t, a_t) \nabla \log \pi(a_t|s_t)\right]$ (see the derivation in Appendix C). How biased the resulting policy will be depends on whether the distortion changes the argmax of the Q-function. Little distortions that do not change the argmax will result in the same local fixpoint of the policy improvement. The policy will continue to select the optimal action and it will not be biased at this state. The policy will, however, be biased if the Q-function is distorted too much. For example, consider an ω(s, a) that swaps the argmax for the 2nd-largest value; the regret will then be the difference between the maximum and the 2nd-largest value. Intuitively speaking, the more distorted $Q^\omega$ is, the larger the regret compared to the optimal policy. More precisely, the regret of learning a policy that maximizes the distorted $Q^\omega$ at state s is $Q(s, a^*) - Q(s, \tilde a^*)$, where $a^* = \operatorname{argmax}_b Q(s, b)$ is the optimal action according to the real Q and $\tilde a^* = \operatorname{argmax}_b Q^\omega(s, b)$ is the optimal action according to the distorted $Q^\omega$. For generality, we denote by $A^*$ the set of best actions, covering the case of multiple actions with identical optimal Q-values. Proposition 3 provides a mitigation: clearly, the V-trace policy gradient will converge to the same solution as the true on-policy gradient if the argmax of the Q-function is preserved at all states in a tabular setting.
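The implied policy of equation 3 and the distortion factor ω are easy to verify numerically. A small sketch reproducing the two counterexamples from the text:

```python
import numpy as np

rho_bar = 1.0

def implied_policy(pi, mu):
    """V-trace implied policy (eq. 3): proportional to min(rho_bar*mu, pi)."""
    w = np.minimum(rho_bar * mu, pi)
    return w / w.sum()

# First counterexample: mu and pi concentrate on opposite actions.
phi = 0.01
mu = np.array([phi, 1 - phi])
pi = np.array([1 - phi, phi])
print(implied_policy(pi, mu))       # ~[0.5, 0.5]: uniform, nothing like pi

# Second counterexample: distorted values Q^omega = min(1, mu/pi) * Q.
Q = np.array([2.0, 5.0])
mu2, pi2 = np.array([0.9, 0.1]), np.array([0.5, 0.5])
Q_omega = np.minimum(1.0, rho_bar * mu2 / pi2) * Q
print(Q_omega, Q_omega.argmax())    # [2.0, 1.0] -> argmax flips to action 0
```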
We show that this can be achieved by mixing a sufficient proportion α of on-policy experience into the computation. We show in equation 13 in the Appendix that choosing α large enough to satisfy the condition given there results in a policy that correctly chooses the best action at state s. Intuitively, the larger the action value gap of the real Q-function, $Q(s, a^*) - Q(s, b)$, the lower the right-hand side and the less on-policy data is required. If the right-hand side is negative, then α may be as small as zero, enabling even pure off-policy learning. Finally, note that the right-hand side decreases due to $d_\mu(s)/d_\pi(s)$ if π visits the state s more often than µ. All of those conditions can be computed and checked if an accurate Q-function and state distribution are accessible. How to use imperfect Q-function estimates to adaptively choose such an α remains a question for future research.

We provide experimental evidence for these findings with function approximators in the 3-dimensional simulated environment DMLab-30 with various α ≥ 1/8 in Section 5.3 and Figure 2. We observe that α = 1/8 is sufficient to facilitate stable learning. Furthermore, it results in better data-efficiency than pure on-policy learning, as it utilizes off-policy replay experience.

Proposition 3. Mixing on-policy data into the V-trace policy gradient with the ratio α reduces the bias by providing a regularization to the implied state-action values. In the general function approximation case, it changes the off-policy V-trace policy gradient to a mixture of on-policy and off-policy terms, where $Q^\alpha$ is a regularized state-action estimate and $d_\pi$, $d_\mu$ are the state distributions for π and µ. Note that there exists α ≤ 1 such that $Q^\alpha$ has the same argmax (i.e. best action) as Q.

Proof. See Appendix C.

Mixing online data with replay data has also been argued for in prior work, as a heuristic way of reducing the sensitivity of reinforcement learning algorithms to the size of the replay memory. Proposition 3 grounds this in the theoretical properties of V-trace.

To mitigate the bias and variance problems of V-trace and importance sampling, we propose a trust region scheme that adaptively selects only suitable behaviour distributions when estimating the state-value of π. To this end, we introduce a behaviour relevance function that classifies behaviour as relevant. We then define a trust-region estimator that computes expectations (such as expected returns, or the policy gradient) only on relevant transitions. In Propositions 4 and 5 we show that this trust region estimator indeed computes new state-value estimates that improve over the current value function. While our analysis and proof are general, we propose a suitable behaviour relevance function in Section 4.3 that employs the Kullback-Leibler divergence between the target policy π and the implied policy $\tilde\pi_\mu$: $\mathrm{KL}\left(\pi(\cdot|s)\,\|\,\tilde\pi_\mu(\cdot|s)\right)$. We provide experimental validation in Figure 3.

In off-policy learning we often consider a family of behaviour policies, either indexed by training iteration t: $M_T = \{\mu_t \mid t < T\}$ for experience replay, or by a different agent k: $M_K = \{\mu_k \mid k \in K\}$ when training multiple agents. In the classic experience replay case we then sample a time t and locate the transition τ that was generated earlier via $\mu_t$. This extends naturally to the multiple-agent case, where we sample an agent index k and then obtain a transition for that agent, or tuples of (k, t). Without loss of generality, we simplify this notation and index sampled behaviour policies by a random variable z ∼ Z that represents the selection process.
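The effect of mixing can be illustrated on the same counterexample. The sketch below ignores the state distributions $d_\pi$ and $d_\mu$ for simplicity, so it is only a one-state illustration of Proposition 3, not its general form:

```python
import numpy as np

# Counterexample values from the text: true Q, behaviour mu, target pi.
Q = np.array([2.0, 5.0])
mu, pi = np.array([0.9, 0.1]), np.array([0.5, 0.5])
omega = np.minimum(1.0, mu / pi)          # V-trace distortion factor
Q_omega = omega * Q                       # distorted values: [2.0, 1.0]

# Mixed estimate: alpha parts undistorted (on-policy) Q,
# (1 - alpha) parts distorted off-policy Q^omega.
for alpha in (0.0, 0.125, 0.25, 0.5):
    Q_alpha = alpha * Q + (1 - alpha) * Q_omega
    print(alpha, Q_alpha, Q_alpha.argmax())
# alpha = 0 picks the wrong action 0; in this example any alpha > 0.25
# restores the argmax to the truly optimal action 1.
```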
While online reinforcement learning algorithms process transitions τ ∼ π, off-policy algorithms process τ ∼ $\mu_z$ for z ∼ Z. In this notation, given a bootstrap V, the expectation of importance sampled off-policy returns at state $s_t$ is described by
$V^\pi_{\mathrm{mix}}(s_t) = \mathbb{E}_{z \sim Z}\, \mathbb{E}_{\mu_z|z}\left[G^{\pi,\mu_z}(s_t) \mid z\right]$,
where $\mathbb{E}_{\mu_z|z}$ represents the expectation of sampling from a given $\mu_z$. The conditioning on z is a notational reminder that this expectation does not sample z or $\mu_z$, but experience from $\mu_z$. For any sampled z we obtain a $\mu_z$ and observe that the inner expectation w.r.t. experience of $\mu_z$ recovers the expected on-policy return: $\mathbb{E}_{\mu_z|z}\left[G^{\pi,\mu_z}(s_t)\right] = V^\pi(s_t)$, and thus $V^\pi_{\mathrm{mix}} = V^\pi$. This holds provided that $\mu_z$ is non-zero wherever π is. This fairly standard assumption leads us straight to the core of the problem: it may be that some behaviours $\mu_z$ are ill-suited for estimating the inner expectation. Standard importance sampling applied to very off-policy experience divides by small µ, resulting in high or even infinite variance. Similarly, V-trace attempts to compute an estimate of the return following π, resulting in limited variance at the cost of a biased estimate in turn.

The key idea of our proposed solution is to compute the return estimate for π at each state only from a subset of suitable behaviours $\mu_z$: $M_{\beta,\pi}(s) = \{\mu_z \mid z \in Z \text{ and } \beta(\pi, \mu_z, s) < b\}$, as determined by a behaviour relevance function $\beta(\pi, \mu, s): (M_Z, M_Z, S) \to \mathbb{R}$ and a threshold b. The behaviour relevance function decides if experience from a behaviour is suitable to compute an expected return for π. It can be chosen to control properties of $V^\pi_{\mathrm{mix}}$ by restricting the expectation to subsets of Z. In particular, it can be used to control the variance of an importance sampled estimator. Observe that the inner expectation $\mathbb{E}_{\mu_z}\left[G^{\pi,\mu_z}(s_t) \mid z\right]$ above already matches the expected return $V^\pi$. Thus we can condition the expectation on arbitrary subsets of Z without changing the expected value of $V^\pi_{\mathrm{mix}}$. This allows us to reject high-variance $G^{\pi,\mu_z}$ without introducing a bias in $V^\pi_{\mathrm{mix}}$. The same technique can be applied to V-trace, where we can reject return estimates with high bias.

Using a behaviour relevance function, we can define a trust region estimator for regular importance sampling (IS) and V-trace, and show their correctness. We define the trust region estimator $V^\pi_{\mathrm{trusted}}$ as the conditional expectation of λ-returns G, chosen as $G_{\mathrm{IS}}$ for importance sampling and $G_{\mathrm{Vtrace}}$ for V-trace, where $\lambda^{\pi,\mu}(s_t)$ is designed to constrain Monte-Carlo bootstraps to relevant behaviour: $\lambda^{\pi,\mu}(s_t) = \mathbb{1}_{\beta(\pi,\mu,s_t) < b}$, and $\rho_{z,t+k} = \min\left[\pi_{t+k}/\mu_{z,t+k}, \bar\rho\right]$ and $c_{z,t+k}$ are behaviour-dependent clipped importance ratios. Thus both $G^{\pi,\mu_z}_{\mathrm{IS}}$ and $G^{\pi,\mu_z}_{\mathrm{Vtrace}}$ are multi-step return estimators with adaptive length. Note that only estimators with length ≥ 1 are used in $V^\pi_{\mathrm{trusted}}$. Due to Minkowski's inequality, the trust region estimator thus shows at least the same contraction as a 1-step bootstrap, but can be faster due to its adaptive nature.

Proposition 4. Let $\{G^{\pi,\mu_z}_{\mathrm{IS}}\}$ be a set of importance sampling estimators as defined in equation 7. Note that they all have the same fixed point $V^\pi$ and contract with at least γ. Then the contraction properties carry over to $V^\pi_{\mathrm{trusted}}$.

Proof. See Appendix C.

Proposition 5. Let $\{G^{\pi,\mu_z}_{\mathrm{Vtrace}}\}$ be a set of V-trace estimators (see equation 8) with corresponding fixed points $V^z$ (see equation 3), to which they contract at an algorithm- and behaviour-specific speed. Then the corresponding contraction and fixed-point properties carry over to $V^\pi_{\mathrm{trusted}}$.

Proof. See Appendix C.

Note how the choice of β, and thus $M_{\beta,\pi}$, enables us to discard ill-suited $G^{\pi,\mu_z}_{\mathrm{Vtrace}}$ from the estimation of $V^\pi_{\mathrm{trusted}}$.
Recall that the V-trace fixed points $V^z$ are biased. Thus β allows us to selectively create the V-trace target and control its bias and shrinkage. Similarly, it can control cases where we cannot use the exact importance sampled estimator. The same approach based on nested expectations can be applied to the expectation of the policy gradient estimate, and allows controlling the bias and greediness (see Proposition 2) there as well.

In Proposition 5 we have seen that the quality of the trust region V-trace return estimator depends on β. A suitable choice of β can move the return estimate $V^\beta$ closer to $V^\pi$ and improve the shrinkage. Hence, we employ a behaviour relevance function $\beta_{\mathrm{KL}}$ that rejects high-bias transitions by estimating the Kullback-Leibler divergence between the target policy π and the implied policy $\tilde\pi_{\mu_z}$ for a sampled behaviour $\mu_z$. Recall from Proposition 1 that $\tilde\pi_{\mu_z}$ determines the fixed point of the V-trace estimator for behaviour $\mu_z$, and thus determines the bias in $V^z$. Note that the behaviour probabilities $\mu_z$ can be evaluated and saved to the replay when the agent executes the behaviour; similarly, the target policy π is represented by the agent's neural network. Using both of these and equation 3, $\tilde\pi_\mu$ can be computed. For large or infinite action spaces, a Monte Carlo estimate of the KL divergence can be computed. It is possible to define separate behaviour relevance functions for the policy and value estimates. For simplicity, we reject transitions entirely for all estimates, and do not consider rejected transitions for the policy gradient and value gradient updates or auxiliary tasks. As described above, we stop the Monte-Carlo bootstraps once they reach undesirable state-behaviour pairs. Note that this censoring procedure is computed from the state-dependent β(π, µ, s) and ensures that the choice of bootstrapping does not depend on the sampled actions. Note that rejection by an action-based criterion such as small π(a|s)/µ(a|s) would introduce an additional bias, which we avoid by choosing $\beta_{\mathrm{KL}}$.

We present experiments to support the following claims:
• Section 5.2: Uniform experience replay obtains results comparable to prioritized experience replay, while being simpler to implement and tune.
• Section 5.3: Using fresh experience before inserting it in experience replay is better than learning purely off-policy from experience replay, in line with Proposition 3.
• Section 5.4: Sharing experience without trust region performs poorly, as suggested by Proposition 2. Off-Policy Trust-Region V-trace solves this issue.
• Section 5.5: Sharing experience can take advantage of parallel exploration and obtains state-of-the-art performance on Atari games, while also saving memory through sharing a single experience replay.

We use the V-trace distributed reinforcement learning agent. Updates are computed on mini-batches of 32 (regular) and 128 (replay) trajectories, each corresponding to 19 steps in the environment. In the context of DeepMind Lab, we consider the multi-task suite DMLab-30, as the visuals and the dynamics are more consistent across tasks. Furthermore, the multi-task regime is particularly suitable for the investigation of strongly off-policy data distributions arising from sharing the replay across agents, as concurrently learning agents can easily be stuck in different policy plateaus, generating substantially different data. As in prior work, in the multi-task setting each agent trains simultaneously on a uniform mixture of all tasks rather than individually on each game.
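A minimal sketch of the KL-based behaviour relevance test for a discrete action space follows; the threshold b = 0.1 is an illustrative assumption, not the value used in the paper:

```python
import numpy as np

def implied_policy(pi, mu, rho_bar=1.0):
    """Implied V-trace policy (equation 3): prop. to min(rho_bar*mu, pi)."""
    w = np.minimum(rho_bar * mu, pi)
    return w / w.sum()

def behaviour_is_relevant(pi, mu, b=0.1, rho_bar=1.0):
    """KL-based behaviour relevance: accept behaviour mu at this state
    iff KL(pi || implied policy) is below the trust-region threshold b."""
    tilde_pi = implied_policy(pi, mu, rho_bar)
    kl = np.sum(pi * np.log(pi / tilde_pi))
    return kl < b

pi = np.array([0.7, 0.2, 0.1])
print(behaviour_is_relevant(pi, np.array([0.6, 0.3, 0.1])))    # near on-policy: True
print(behaviour_is_relevant(pi, np.array([0.05, 0.05, 0.9])))  # very off-policy: False
```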
The score of an agent is thus the median across all 30 tasks. Following prior work, we augment our agent with multi-task PopArt normalization and pixel control. We use a PreCo LSTM instead of the vanilla one. Updates are computed on mini-batches of multiple trajectories chosen as above, each corresponding to 79 steps in the environment. In early experiments we found that computing the entropy cost only on the online data provided slightly better results, hence we have done so throughout our experiments.

In all our experiments, experience sampled from memory is mixed with online data within each mini-batch, following Proposition 3. Episodes are removed in first-in-first-out order, so that replay always holds the most recent experience. Unless explicitly stated otherwise, we consider hyper-parameter sweeps, some of which share experience via replay. In this setting, multiple agents start from scratch, run concurrently at identical speed, and add their new experience into a common replay buffer. All agents then draw uniform samples from the replay buffer. On DMLab-30 we consider both regular hyper-parameter sweeps and sweeps with population based training (PBT). On DMLab-30, sweeps contain 10 agents with hyper-parameters sampled similarly to prior work, but with a fixed RMSProp ε = 0.1. On Atari, sweeps contain 9 agents with different constant learning rate and entropy cost combinations $\{3 \cdot 10^{-4}, 6 \cdot 10^{-4}, 1.2 \cdot 10^{-3}\} \times \{5 \cdot 10^{-3}, 1 \cdot 10^{-2}, 2 \cdot 10^{-2}\}$ (distributed by factors {1/2, 1, 2} around the initial parameters reported in prior work). Although our focus is on efficient hyper-parameter sweeps given crude initial parameters, we also present a single-agent LASER experiment using the same tuned schedule as prior work, an 87.5% replay ratio and a 15M replay.

We store entire episodes in the replay buffer and replay each episode from the beginning, using the most recent network parameters to recompute the LSTM states along the way: this is particularly critical when sharing experience between different agents, which may have arbitrarily different state representations.

Prioritized experience replay has the potential to provide more efficient learning compared to uniform experience replay. However, it also introduces a number of new hyper-parameters and design choices: the most critical are the priority metric, how strongly to bias the sampling distribution, and how to correct for the resulting bias. Uniform replay is instead almost parameter-free, requires little tuning, and can be easily shared between multiple agents. Experiments provided in Figure 4 in the appendix showed little benefit of actor-critic prioritized replay on DMLab-30. Furthermore, priorities are typically computed from agent-specific metrics such as the TD-error, which are ill-defined when replay is shared among multiple agents. Hence we used uniform replay for our further investigations.

Figure 2 (left) shows that performance degrades significantly when online data is not present in the batch. This experimentally validates Propositions 2 and 3, which highlight the difficulties of learning purely off-policy. Furthermore, Figure 2 (right) shows that the best results are obtained with experience replay of 10M capacity and an 87.5% ratio. A ratio of 87.5% = 7/8 corresponds to 7 replay samples for each online sample. We have considered ratios of 1/2, 3/4, and 7/8 and observed stable training for all of them. Observe that among those values, larger ratios are more data-efficient, as they take advantage of more replayed experience per training step.
In line with Proposition 2, we observe in Figure 3 (left) that hyper-parameter sweeps without trust region are even surpassed by the baseline without experience replay. State-of-the-art results are obtained in Figure 3 (right) when experience is shared with trust region in a PBT sweep. Observe that this indicates parallel exploration benefits and saves memory at the same time: in our sweep of 10 replay agents, the difference between 10 × 10M (separate replays) and 10M (shared replay) is 10-fold. This effect would be even more pronounced with larger sweeps.

As discussed in Section 2.3, the bias in V-trace occurs due to the clipping of importance ratios. A potential solution for reducing the bias would be to increase the $\bar\rho$ threshold to clip less aggressively and accept increased variance. Figure 4 in the appendix shows that this is not a solution.

We apply our proposed agent to Atari, which has been a long-established suite to evaluate reinforcement learning algorithms. Since we focus on sample-efficient learning, we present our results in comparison to prior work at 200M steps (Figure 1). Shared experience replay obtains even better performance than non-shared experience replay. This confirms the efficient use of parallel exploration. The fastest prior agent to reach 400% required more than 3,000M steps. LASER with shared replay achieves 423% in 60M steps per agent. Given 200M steps, it achieves 448%. We also present a single (no sweep) LASER agent that achieves 431% in 200M steps.

We have presented LASER, an off-policy actor-critic agent which employs a large and shared experience replay to achieve data-efficiency. By sharing experience between concurrently running experiments in a hyper-parameter sweep, it is able to take advantage of parallel exploration. As a result it achieves state-of-the-art data efficiency on 57 Atari games given 200M environment steps. Furthermore, it achieves competitive results on both DMLab-30 and Atari under regular, non-shared experience replay conditions. To facilitate this algorithm we have proposed two approaches: a) mixing replayed experience and on-policy data, and b) a trust region scheme. We have shown theoretically and demonstrated through a series of experiments that they enable learning in strongly off-policy settings, which present a challenge for conventional importance sampling schemes.

Increasing the clipping constant $\bar\rho$ in V-trace reduces bias in favour of increased variance. We investigate whether reducing bias in this manner enables sharing experience replay between multiple agents in a hyper-parameter sweep. Figure 4 (left) shows that this is not the case, thus motivating our trust region scheme. In fact, sharing experience replay in this particular way is worse than pure online learning. On a side note, increased clipping thresholds resulting in worse performance verify the importance of variance reduction through clipping.

FIGURE 4: Right: Median human normalized performance across 30 tasks for the best agent in a sweep, averaged across 2 replicas. All replay experiments use a 50% replay ratio and a capacity of 3 million observations.

We investigate whether uncorrected LSTM states can be used in combination with different replay modes.
We consider uniform sampling and prioritization via the critic's loss, and include both full (β = 1) and partial (β = 0.5) importance corrections.

A.2 PRIORITIZED AND UNIFORM EXPERIENCE REPLAY, LSTM STATES

With prioritized experience replay, each transition τ is sampled with probability $P(\tau) \propto p_\tau^\alpha$, for a suitable unnormalized priority score $p_\tau$ and a global tunable parameter α. It is common to then weight updates computed from that sample by $1/P(\tau)^\beta$ for 0 < β ≤ 1, where β = 1 fully corrects for the bias introduced in the state distribution. In one-step temporal difference methods, typical priorities are based on the immediate TD-error, and are typically recomputed after a transition is sampled from replay. This means low priorities might stay low and get stale, even if the transition suddenly becomes relevant. To alleviate this issue, the sampling distribution is mixed with a uniform distribution, as controlled by a third hyper-parameter. The performance of agents with prioritized experience replay can be quite sensitive to these hyper-parameters.

A critical practical consideration is how to implement random access for recurrent memory agents, such as agents using an LSTM. Prioritized agents sample a presumably interesting transition from the past. This transition may be at any position within the episode. To infer the correct recurrent memory state at this environment state, all earlier environment states within that episode would need to be replayed. A prioritized agent with a random access pattern would thus require costly LSTM refreshes for each sampled transition. If LSTM states are not recomputed, representational mismatch occurs. Sharing experience between multiple agents amplifies the issue of LSTM state representation mismatch: here each agent has its own network parameters, and the state representations between agents may be arbitrarily different. As a mitigation, prior work uses a burn-in window or initializes with a constant starting state. We note that those solutions can only partially mitigate the fundamental issue, and that counterexamples such as arbitrarily long T-mazes can be constructed easily. We thus advocate for uniform sampling. In our implementation we uniformly sample an episode. Then we replay each episode from the beginning, using the most recent network parameters to recompute the LSTM states along the way: this is particularly critical when sharing experience between different agents, which may have arbitrarily different state representations. This solution is exact and cost-efficient, as it only requires one additional forward pass for each learning step (forward + backward pass).

An even more cost-efficient approach would be to not refresh LSTM states at all. Naturally, this comes at the cost of representational mismatch. However, it would allow for an affordable implementation of prioritized experience replay. We investigate this in Figure 4 (right) and observe that it is not viable. We compare a baseline V-trace agent with no experience replay, one with uniform experience replay, and two different prioritized replay agents. We do not refresh LSTM states for any of the agents. The uniform replay agent is more data-efficient than the baseline, and also saturates at a higher level of performance. The best prioritized replay agent uses full importance sampling corrections (β = 1). However, it performs no better than uniform replay. We therefore used uniform replay with full state correction for all our investigations in the paper.
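For reference, proportional prioritized sampling with importance corrections can be sketched as follows; the priorities and hyper-parameter values are illustrative:

```python
import numpy as np

def sample_prioritized(priorities, batch_size, alpha=0.6, beta=1.0, seed=0):
    """Proportional prioritized sampling: P(i) ~ p_i^alpha, with
    importance-correction weights (1/P(i))^beta, normalized by their max."""
    rng = np.random.default_rng(seed)
    p = np.asarray(priorities, dtype=float) ** alpha
    probs = p / p.sum()
    idx = rng.choice(len(probs), size=batch_size, p=probs)
    weights = (1.0 / probs[idx]) ** beta
    weights /= weights.max()          # keep the largest update unscaled
    return idx, weights

idx, w = sample_prioritized([0.1, 2.0, 0.5, 1.2], batch_size=2)
```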
For evaluation, we average episode returns within buckets of 1M (Atari) and 10M (DMLab) environment steps for each agent instance, and normalize scores on each game by using the scores of a human expert and a random agent. In the multi-task setting, we then define the performance of each agent as the median normalized score of all levels that the agent trains on. Given the use of population based training, we need to perform the comparisons between algorithms at the level of sweeps. We do so by selecting the best-performing agent instance within each sweep at any time. Note that for the multi-task setting, our approach of first averaging across many episodes, then taking the median across games, on DMLab further downsampling to 100M environment steps, and only finally selecting the maximum within the sweep, results in substantially lower variance than if we were to compute the maximum before the median and smoothing. All DMLab-30 sweeps are repeated 3× with the exception of $\bar\rho = 2$ and $\bar\rho = 4$ in Figure 4. We then plot a shaded area between the point-wise best and worst replica and a solid line for the mean. Atari sweeps, having 57 games, are summarized and plotted by the median of the human-normalized scores.

We present algorithm pseudocode for LASER with trust region (Algorithm 1). For clarity we present a version without LSTM and focus on the single-agent case. The multi-agent case is a simple extension where all agents save to the same replay database and also sample from the same replay; each agent starts with different network parameters and hyper-parameters. The LSTM state recomputation can be achieved with Replayer Threads (nearly identical to Actor Threads) that sample entire episodes from replay, step through them while re-evaluating the LSTM state, and slice the experience into trajectories of length T. Similar to regular LSTM Actor Threads, the Replayer Threads send each trajectory together with an LSTM state to the learning thread via a queue. The Learner Thread initializes the LSTM with the transmitted state when the LSTM is unrolled over the trajectory.

Algorithm 1:
Initialize parameter vectors θ. Initialize $\pi_1 = \pi_\theta$.
Actor Thread: while training is ongoing, do: sample a trajectory unroll $u = \{\tau_t\}_{t \in \{1,\ldots,T\}}$ of length T by acting in the environment using the latest $\pi_k$, where $\tau_t = (s_t, a_t, r_t, \mu_t = \pi_k(\cdot|s_t))$.
Learner Thread: for k = 1, 2, ..., do: compute the trust-region V-trace return $V_{t,b}$ using equation 8, where ρ is the clipped V-trace importance sampling ratio; perform a gradient update to θ, and denote the resulting $\pi_\theta$ as $\pi_{k+1}$; end for.

We have stated five propositions in our paper, for which we provide proofs below.

Proposition 1. The V-trace value estimate $V^{\tilde\pi}$ is biased: it does not match the expected return of π but the return of a related implied policy $\tilde\pi$, defined by equation 9, that depends on the behaviour policy µ:
$\tilde\pi_\mu(a|s) = \frac{\min\left[\bar\rho\, \mu(a|s),\, \pi(a|s)\right]}{\sum_b \min\left[\bar\rho\, \mu(b|s),\, \pi(b|s)\right]}$. (9)

Proposition 2. The V-trace policy gradient is biased: given the optimal value function $V^*$, the V-trace policy gradient does not converge to a locally optimal $\pi^*$ for all off-policy behaviour distributions µ.

Proof. Consider a tabular counterexample with a single (locally) optimal policy at $s_t$ given by $\pi^*(s_t) = \operatorname{argmax}_\pi \sum_{a \in A} \pi(a|s_t)\, Q^*(a, s_t)$, i.e. the policy that always selects the action $\operatorname{argmax}_a Q^*(a, s_t)$.
Even in this ideal tabular setting, the V-trace policy gradient estimates a different $\tilde\pi^*$ rather than the optimal $\pi^*$, as follows:

$\nabla V^{*,\pi}(s_t) = \mathbb{E}_\mu\left[\rho_t \left(r_t + \gamma V^*(s_{t+1})\right) \nabla \log \pi(a_t|s_t)\right]$
$= \mathbb{E}_\mu\left[\rho_t\, Q^*(s_t, a_t)\, \nabla \log \pi(a_t|s_t)\right]$
$= \mathbb{E}_\mu\left[\min\left(\frac{\pi(a_t|s_t)}{\mu(a_t|s_t)}, \bar\rho\right) Q^*(s_t, a_t)\, \nabla \log \pi(a_t|s_t)\right]$
$= \mathbb{E}_\mu\left[\frac{\pi(a_t|s_t)}{\mu(a_t|s_t)} \min\left(1, \bar\rho\, \frac{\mu(a_t|s_t)}{\pi(a_t|s_t)}\right) Q^*(s_t, a_t)\, \nabla \log \pi(a_t|s_t)\right]$
$= \mathbb{E}_\pi\left[\min\left(1, \bar\rho\, \frac{\mu(a_t|s_t)}{\pi(a_t|s_t)}\right) Q^*(s_t, a_t)\, \nabla \log \pi(a_t|s_t)\right]$
$= \mathbb{E}_\pi\left[\omega(s_t, a_t)\, Q^*(s_t, a_t)\, \nabla \log \pi(a_t|s_t)\right]$
$= \mathbb{E}_\pi\left[Q^{*,\omega}(s_t, a_t)\, \nabla \log \pi(a_t|s_t)\right]$

Observe how the optimal Q-function $Q^*$ is scaled by $\omega(s_t, a_t) = \min\left[1, \bar\rho\, \mu(a_t|s_t)/\pi(a_t|s_t)\right] \leq 1$, resulting in implied state-action values $Q^{*,\omega}$. This penalizes actions where $\mu(a_t|s_t)\,\bar\rho < \pi(a_t|s_t)$ and makes V-trace greedy w.r.t. the remaining ones. Thus µ can be chosen adversarially to corrupt the optimal state-action values. Note that $\bar\rho$ is a constant typically chosen to be 1. To prove the lemma, consider a counterexample such as an MDP with two actions, $Q^* = (2, 5)$, $\mu = (0.9, 0.1)$, and initial $\pi = (0.5, 0.5)$. Here the second action, with expected return 5, is clearly favourable. Abusing notation, $\mu/\pi = (1.8, 0.2)$. Thus $Q^{\pi,\omega} = (2 \cdot 1, 5 \cdot 0.2) = (2, 1)$. Therefore $\tilde\pi^*$ wrongly selects the first action. The same derivation yields the V-trace distortion factor $\omega(s_t, a_t) = \min\left[1, \bar\rho\, \mu(a_t|s_t)/\pi(a_t|s_t)\right] \leq 1$ that can de-emphasize action values, and $Q^\omega(s, a) = \omega(s, a)\, Q(s, a)$.

We display the Atari per-level performance of various agents at 50M and 200M environment steps in Table 2. The scores correspond to the agents presented in Figure 1. The LASER scores are computed by averaging the last 100 episode returns before 50M or, respectively, 200M environment frames have been experienced. Following the standard procedure, we initialize the environment with a random number of no-op actions (up to 37 in our case). Again following common practice, episodes are terminated after 30 minutes of gameplay. Note that some prior work has not published per-level scores. Rainbow scores are obtained from the original publication.
We investigate and propose solutions for two challenges in reinforcement learning: (a) efficient actor-critic learning with experience replay (b) stability of very off-policy learning.
1,102
scitldr
Stochastic gradient descent (SGD) is the workhorse of modern machine learning. Sometimes, there are many different potential gradient estimators that can be used. When so, choosing the one with the best tradeoff between cost and variance is important. This paper analyzes the convergence rates of SGD as a function of time, rather than iterations. This results in a simple rule to select the estimator that leads to the best optimization convergence guarantee. This choice is the same for different variants of SGD, and with different assumptions about the objective (e.g. convexity or smoothness). Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given. Then, we extend this to infinite pools of estimators, where each one is indexed by control variate weights. This is enabled by a reduction to a mixed-integer quadratic program. Empirically, automatically choosing an estimator performs comparably to the best estimator chosen with hindsight.

In stochastic gradient variational inference (SGVI) there are multiple gradient estimators with varying costs and variances. Estimators may be obtained using the reparameterization trick, the score function method, or other techniques (e.g., Titsias and Lázaro-Gredilla). Also, many control variates can be added to an estimator to reduce variance. The cost and variance of an estimator significantly affect optimization convergence speed. The use of different estimators leads to different optimization performance, and the estimator with the optimal cost-variance tradeoff is often situation-dependent (for an example see Fig. 1). In settings where multiple estimators with varying costs and variances are available, selecting the optimal one is important. Rather than rely on the user to manually select one, we propose that estimator selection could be done adaptively. This paper investigates how, given a pool of gradient estimators, to automatically choose one to get the best convergence guarantee for stochastic optimization.

We study cost-variance tradeoffs by analyzing the convergence rates of several variants of SGD. We express convergence rates in terms of time rather than iterations. This leads to what we call the "$G^2T$ principle": a simple rule that predicts, given a pool of gradient estimators, which one results in the best convergence guarantees for optimization. We use the principle to propose two gradient estimator selection algorithms: one for the case in which a finite pool of estimators is given, and another for when the pool contains an infinite number of estimators, each indexed by control variate weights (i.e. control variate selection).

Notation: We use $g(w, \xi)$, where ξ is a random variable, to denote an unbiased estimator of the target's gradient, $G^2(g)$ to denote a bound on g's expected squared norm, and $T(g)$ to denote the computational cost of computing the estimator $g(w, \xi)$, measured in seconds.

The $G^2T$ Principle and Gradient Estimator Selection: Given a set of gradient estimators with varying costs and variances, our goal is to find the one that gives the best convergence guarantee for optimization algorithms. Convergence guarantees for several variants of SGD are shown in Table 1. Given a pool of estimators with known $G^2$ and T, the one with minimum $G^2 T$ should be used. In practice, however, $G^2$ and T are typically not known. We propose to use estimates.
Assuming that the cost of an estimator g(w, ξ) is independent of w, an estimate T̂(g) of T(g) can be obtained for each g ∈ G through a single initial profiling phase. Dealing with G²(g) is more involved. Convergence guarantees assume that E‖g(w, ξ)‖² ≤ G²(g) for all w. Often (e.g. when w is unbounded) this is not true for any finite G. We propose an approach that is justified under two assumptions: (i) optimization starting from a point w₀ will only visit a restricted part of parameter space, so it is sufficient to bound E‖g(w, ξ)‖² for the set of w that may actually be encountered; and (ii) E‖g(w, ξ)‖² tends to decrease slowly over time. When these hold, it makes sense to also form an estimate Ĝ²(g) through an initial profiling phase, and to update these estimates a small number of times as optimization proceeds. The approach is summarized in Alg. 1.

It is possible to use multiple control variates to reduce a gradient estimator's variance. However, some control variates might be computationally expensive but result in only a small reduction in variance. It may be better to remove them and accept a noisier but cheaper estimator. How can we select what control variates to use? When an unbiased gradient estimator g_base and control variates c₁, ..., c_J are given, a gradient estimator can be expressed as

    g_a(w, ξ) = g_base(w, ξ) + Σ_{j=1}^J a_j c_j(w, ξ).

The available estimators are G = {g_a : a ∈ R^J}. The number of estimators g_a ∈ G is infinite, and Alg. 1 cannot be used (one cannot measure T̂ and Ĝ² for each estimator g_a individually). We show that, despite having an infinite number of estimators, when estimators are indexed by control variate weights, finding the one with minimum Ĝ²T̂ can be done efficiently. This is because two properties hold: (i) the estimates T̂(g_a) and Ĝ²(g_a) can be efficiently obtained for all estimators g_a ∈ G through the use of shared statistics (a finite number of evaluations of the base estimator and control variates); and (ii) the resulting (combinatorial) optimization problem a* = arg min_a Ĝ²(g_a) T̂(g_a) can be reduced to a Mixed Integer Quadratically Constrained Program (MIQCP), which can be solved quickly in practice.

Algorithm 1 (SGD with automatic estimator selection). For all g ∈ G measure the time T̂(g). For k = 1, 2, ...: if it is time to re-select the estimator, then for each estimator g estimate Ĝ²(g) and pick the g minimizing Ĝ²(g)T̂(g); take an SGD step with the selected estimator.

Algorithm 2 (SGD with automatically selected control variates). Require: a set of estimators G; times to re-select the estimator; the number of MC samples M. For g_base measure the time t₀; proceed as in Alg. 1, but re-select by solving a* = arg min_a Ĝ²(g_a) T̂(g_a).

This section presents an overview of the experiments. Full details are in the appendix. We tackle inference problems using SGVI. We consider three models: logistic regression, a hierarchical regression model, and a Bayesian neural network. For the simple logistic regression model we use a Gaussian with a full-rank covariance as the variational distribution q_w(z). For the other, more complex models we use a factorized Gaussian. We use SGD with momentum to optimize, and five samples z ∼ q_w(z) to form Monte Carlo gradient estimates. For both Algs. 1 and 2 we update the estimator used (by minimizing Ĝ²T̂) three times during training.

We first present an empirical validation of Alg. 1. We compare the results achieved by using three different gradient estimators: (Rep) the plain reparameterization estimator, (Miller) the estimator proposed by Miller et al., and (STL) the "sticking the landing" estimator. We also run Alg. 1 with the set of estimators G = {Rep, Miller, STL}, which uses the estimator g ∈ G with minimum Ĝ²T̂. We now present an empirical validation of Alg. 2 (control variate selection).
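Before the experiments, here is a sketch of the shared-statistics idea: the base gradient and control variates are evaluated once on M common samples, after which both T̂(g_a) and Ĝ²(g_a) are available for any weight vector a without further evaluations. The callables and `sample_xi` are illustrative assumptions.

```python
import numpy as np

def shared_stats(g_base, cvs, w, M, sample_xi):
    """Evaluate the base gradient and the J control variates on M shared samples.
    The returned arrays determine G2-hat(g_a) for ANY weight vector a."""
    xis = [sample_xi() for _ in range(M)]
    G = np.stack([g_base(w, xi) for xi in xis])             # shape (M, d)
    C = np.stack([[c(w, xi) for c in cvs] for xi in xis])   # shape (M, J, d)
    return G, C

def G2_hat(G, C, a):
    """G2-hat(g_a) = (1/M) sum_k ||g_base(w, xi_k) + sum_j a_j c_j(w, xi_k)||^2."""
    g_a = G + np.einsum('j,mjd->md', np.asarray(a, dtype=float), C)
    return float(np.mean(np.sum(g_a ** 2, axis=1)))

def T_hat(a, t_base, t_cvs):
    """T-hat(g_a): base cost plus the cost of every control variate actually used."""
    return t_base + sum(t for a_j, t in zip(a, t_cvs) if a_j != 0)
```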
We consider the same three models as above. The set of candidate estimators is G_Auto = {g_a : a ∈ R³}, where g_a is as defined above. The base estimator is plain reparameterization, and there are three candidate control variates (c₁, c₂, c₃). The goal is to check whether Alg. 2 successfully navigates cost/variance tradeoffs. We thus compare against using each possible fixed subset of control variates S ⊆ {c₁, c₂, c₃}, with the weights that minimize the estimator's variance (which can be estimated efficiently).

(Figure: logistic regression on a1a.) We compare against using different fixed subsets of control variates with the weights that minimize the estimator's variance. Lines are identified as follows: "Auto" stands for using Alg. 2 to select what control variates to use and their weights, "Base" stands for optimizing using the base gradient alone, "1" stands for using the fixed set of control variates {c₁} with the minimum-variance weights, "12" stands for using the fixed set {c₁, c₂}, and so on.

Appendix A.

A.1. Models. Three different models were considered: a Bayesian neural network, a hierarchical Poisson model, and Bayesian logistic regression.

Bayesian logistic regression: We use the dataset a1a. The training set is {(x_i, y_i)}_{i=1}^N, where y_i is binary.

Hierarchical Poisson model: The model measures the relative stop-and-frisk events in different precincts in New York City, for different ethnicities. Here, e stands for ethnicity and p for precinct, Y_ep for the number of stops in precinct p within ethnicity group e (observed), and N_ep for the total number of arrests in precinct p within ethnicity group e (observed).

BNN: As done by Miller et al., we use a subset of 100 rows from the "Red-wine" dataset (regression). We implement a neural network with one hidden layer with 50 units and ReLU activations. Let D = {(x_i, y_i)}_{i=1}^N be the training set. The model is specified by log α ∼ N(0, 10²), log τ ∼ N(0, 10²), W ∼ N(0, α² I) (weights and biases).

A.2. Details on the simulations.

Control variates used: c₁: the difference between the entropy term computed exactly and estimated using reparameterization, c(w, ξ) = ∇_w log q_w(T_w(ξ)) − ∇_w E_{q_w} log q_w(Z). c₂: the control variate by Miller et al., based on a second-order Taylor expansion of log p(x, z). c₃: the difference between the prior term computed exactly and estimated using reparameterization, c(w, ξ) = ∇_w log p(T_w(ξ)) − ∇_w E_{q_w} log p(Z).

Algorithmic details: For Alg. 2 we use M = 400 to estimate Ĝ² (except for logistic regression, where we use M = 200). We re-select the optimal estimator three times during training: initially, after 10% of training is done, and after 50% of training is done.

Optimization details: We use SGD with momentum (β = 0.9) with 5 samples z ∼ q_w(z) to form the Monte Carlo gradient estimates. For all models we find an initial set of parameters by optimizing with the base gradient for 300 steps and a fixed learning rate of 10⁻⁵. This initialization was helpful in practice because w tends to change rapidly at the beginning of optimization. After this brief initialization, E‖g(w, ξ)‖² tends to change much more slowly, meaning our technique is more helpful. The performance of all algorithms depends on the step size. To give a fair comparison, Figs. 1 and 2 summarize results by showing those obtained with the best step size for each estimator. (12 step sizes between 10⁻⁶ and 10⁻³ were considered.)
Estimators with control variates can be expressed as g_a(w, ξ) = g_base(w, ξ) + Σ_{j=1}^J a_j c_j(w, ξ). An expression for T̂(g_a) can be obtained by noticing that computing g_a only requires computing the base gradient and the control variates with non-zero weights:

    T̂(g_a) = T̂(g_base) + Σ_{j : a_j ≠ 0} T̂(c_j).

Thus, we can compute T̂(g_a) for all g_a ∈ G only by profiling the base gradient and each control variate individually. (Here T_w(ξ) = µ + D^{1/2} ξ, where ξ ∼ N(0, I), µ is the mean of q_w and D^{1/2} is the Cholesky factor of the covariance of q_w.)

Similarly, Ĝ²(g_a, w) is determined by the same set of base-gradient and control-variate evaluations, regardless of the value of a. Suppose that, at iteration k, we sample ξ_{k1}, ..., ξ_{kM}. Then, for all g_a ∈ G,

    Ĝ²(g_a, w_k) = (1/M) Σ_{m=1}^M ‖g_base(w_k, ξ_{km}) + Σ_j a_j c_j(w_k, ξ_{km})‖².

Thus, we can compute Ĝ²(g_a, w_k) for all g_a ∈ G using only M evaluations of the base gradient g_base and of each control variate c_j. The two expressions above characterize the (estimated) cost and variance of the gradient estimator with weights a. We find the weights that result in the optimal cost-variance tradeoff by solving

    a*(w) = arg min_a Ĝ²(g_a, w) T̂(g_a).

The solution a*(w) indicates which control variates to use (those with a*_j ≠ 0) and their weights. Solving this (combinatorial) minimization problem may be challenging. However, Theorem 1 states that it can be reduced to a MIQCP, which can be solved fast using off-the-shelf solvers.

Theorem 1. When different gradient estimators are indexed by a set of J control variate weights, the problem of finding a*(w) as above can be reduced to solving a mixed integer quadratically constrained program with 2J + 2 variables, one quadratic constraint, and one linear constraint.

A mixed integer quadratically constrained program is an optimization problem in which the objective function and constraints are quadratic (or linear), and some (or all) variables are restricted to be integers:

    min_x xᵀQ₀x + q₀ᵀx   s.t.   xᵀQ_i x + q_iᵀx ≤ r_i (i = 1, ..., m),   Ax + b = 0,

where x ∈ Rⁿ, Q₀, ..., Q_m ∈ R^{n×n}, and some components of x are restricted to be integers.

Proof sketch of Theorem 1. Given T̂ and Ĝ², introduce binary variables b_j = 1[a_j ≠ 0] indicating which control variates are used. The final minimization problem has the form of a general MIQCP shown above, with the exception of the last constraint, b_j = 1[a_j ≠ 0]. Despite not being in the original definition of a MIQCP, several solvers accept constraints of this type (including the solver used in our simulations).
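For small J, the MIQCP can be replaced by a brute-force search over subsets, which is a useful sanity check: for each subset of control variates, fit the weights minimizing Ĝ² by least squares on the shared statistics (from `shared_stats` above), then score by Ĝ²·T̂. This is a sketch of an alternative, not the paper's solver-based method.

```python
import itertools
import numpy as np

def best_subset(G, C, t_base, t_cvs):
    """Enumerate subsets S of control variates; for each, pick weights minimizing
    G2-hat(g_a) by least squares, and return the choice minimizing G2-hat * T-hat."""
    M, J, d = C.shape
    g_vec = G.reshape(-1)                                   # flattened base gradients
    best_score, best_a = np.inf, np.zeros(J)
    for r in range(J + 1):
        for S in itertools.combinations(range(J), r):
            a = np.zeros(J)
            if S:
                A = C[:, list(S), :].transpose(0, 2, 1).reshape(-1, len(S))
                a[list(S)] = np.linalg.lstsq(A, -g_vec, rcond=None)[0]
            g_a = G + np.einsum('j,mjd->md', a, C)
            score = np.mean(np.sum(g_a ** 2, axis=1)) * (t_base + sum(t_cvs[j] for j in S))
            if score < best_score:
                best_score, best_a = score, a
    return best_score, best_a
```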
We propose a gradient estimator selection algorithm with the aim of improving optimization efficiency.
1,103
scitldr
We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design. The discrepancy between the minimax and maximin objective values could serve as a proxy for the difficulties that alternating gradient descent encounters in the optimization of GANs. In this work, we give new results on the benefits of multi-generator architectures for GANs. We show that the minimax gap shrinks to \epsilon as the number of generators increases with rate O(1/\epsilon). This improves over the best-known result of O(1/\epsilon^2). At the core of our techniques is a novel application of the Shapley-Folkman lemma to the generic minimax problem, where in the literature the technique was only known to work when the objective function is restricted to the Lagrangian function of a constrained optimization problem. Our proposed Stackelberg GAN performs well experimentally on both synthetic and real-world datasets, improving the Fréchet Inception Distance by 14.61% over previous multi-generator GANs on the benchmark datasets.

Generative Adversarial Nets (GANs) are emerging objects of study in machine learning, computer vision, natural language processing, and many other domains. In machine learning, study of such a framework has led to significant advances in adversarial defenses BID25 BID22 and machine security BID3 BID22. In computer vision and natural language processing, GANs have resulted in improved performance over standard generative models for images and texts BID11, such as the variational autoencoder BID14 and the deep Boltzmann machine BID20. A main technique to achieve this goal is to play a minimax two-player game between generator and discriminator under the design that the generator tries to confuse the discriminator with its generated contents and the discriminator tries to distinguish real images/texts from what the generator creates.

Despite a large number of variants of GANs, many fundamental questions remain unresolved. One of the long-standing challenges is designing universal, easy-to-implement architectures that alleviate the instability issue of GAN training. Ideally, GANs are supposed to solve the minimax optimization problem BID11, but in practice alternating gradient descent methods do not clearly privilege minimax over maximin or vice versa (page 35, BID10), which may lead to instability in training if there exists a large discrepancy between the minimax and maximin objective values. The focus of this work is on improving the stability of this minimax game in the training process of GANs.

To alleviate the issues caused by the large minimax gap, our study is motivated by the so-called Stackelberg competition in the domain of game theory. In the Stackelberg leadership model, the players of this game are one leader and multiple followers, where the leader firm moves first and then the follower firms move sequentially. It is known that the Stackelberg model can be solved to find a subgame perfect Nash equilibrium. We apply this idea of the Stackelberg leadership model to the architecture design of GANs. That is, we design an improved GAN architecture with multiple generators (followers) which team up to play against the discriminator (leader). We therefore name our model Stackelberg GAN. Our theoretical and experimental results establish that GANs with multi-generator architecture have a smaller minimax gap, and enjoy more stable training performance.

Our Contributions. This paper tackles the problem of instability during the GAN training procedure with both theoretical and experimental results.
We study this problem via new architecture design.

Figure 1: Stackelberg GAN stabilizes the training procedure on a toy 2D mixture of 8 Gaussians. Top row: standard GAN training; several modes are dropped. Bottom row: Stackelberg GAN training with an ensemble of 8 generators, each denoted by one color; each generator learns exactly one mode of the distribution without any mode being dropped.

Figure (real images): Stackelberg GAN training with an ensemble of 10 generators on real images without cherry-picking, where each row corresponds to one generator.
• We propose the Stackelberg GAN framework of multiple generators in the GAN architecture. Our framework is general since it can be applied to all variants of GANs, e.g., vanilla GAN, Wasserstein GAN, etc. It is built upon the idea of jointly optimizing an ensemble of GAN losses w.r.t. all pairs of discriminator and generator.

Differences from prior work. Although the idea of having multiple generators in the GAN architecture is not totally new, e.g., MIX+GAN BID1, MGAN BID13, MAD-GAN BID9 and GMAN BID8, there are key differences between Stackelberg GAN and prior work. a) In MGAN BID13 and MAD-GAN BID9, various generators are combined as a mixture of probabilistic models with the assumption that the generators and discriminator have infinite capacity. Also, they require that the generators share common network parameters. In contrast, in the Stackelberg GAN model we allow various sampling schemes beyond the mixture model, e.g., each generator samples a fixed but unequal number of data points independently. Furthermore, each generator has free parameters. We also make no assumption on the model capacity in our analysis. This is an important research question as raised by BID2. b) In MIX+GAN BID1, the losses are ensembled with learned weights and an extra regularization term, which discourages the weights from being too far away from uniform. We find this slightly unnecessary because the expressive power of each generator already allows implicit scaling of each generator. In the Stackelberg GAN, we apply equal weights for all generators and obtain improved guarantees. c) In GMAN BID8, there are multiple discriminators, while it is unclear in theory why the multi-discriminator architecture works well. In this paper, we provide formal guarantees for our model.

• We prove that the minimax duality gap shrinks as the number of generators increases (see Theorem 1 and Corollary 2). Unlike previous work, our result makes no assumption on the expressive power of generators and discriminator, but instead depends on their non-convexity. With an extra condition on the expressive power of generators, we show that Stackelberg GAN is able to achieve an ε-approximate equilibrium with Õ(1/ε) generators (see Theorem 3). This improves over the best-known result in BID1, which requires as many as O(1/ε²) generators. At the core of our techniques is a novel application of the Shapley-Folkman lemma to the generic minimax problem, where in the literature the technique was only known to work when the objective function is restricted to the Lagrangian function of a constrained optimization problem. This results in tighter bounds than those of the covering number argument as in BID1.
We also note that MIX+GAN is a heuristic model which does not exactly match the theoretical analysis in BID1, while this paper provides formal guarantees for the exact model of Stackelberg GAN.

• We empirically study the performance of Stackelberg GAN on various synthetic and real datasets. We observe that, without any human assignment, surprisingly, each generator automatically learns a balanced number of modes without any mode being dropped (see FIG2). Compared with other multi-generator GANs with the same network capacity, our experiments show that Stackelberg GAN achieves 26.76 Fréchet Inception Distance on the CIFAR-10 dataset while prior results achieve 31.34 (smaller is better), an improvement of 14.61%.

Before proceeding, we define some notation and formalize our model setup in this section.

Notation. We use a bold lower-case letter to represent a vector and a lower-case letter to represent a scalar. Specifically, we denote by θ ∈ R^t the parameter vector of the discriminator and by γ ∈ R^g the parameter vector of a generator. Let D_θ(x) be the output probability of the discriminator given input x, and let G_γ(z) represent the generated vector given random input z. For any function f(u), we denote by f*(v) := sup_u {⟨u, v⟩ − f(u)} the conjugate function of f. Let cl f be the convex closure of f, which is defined as the function whose epigraph is the convex closed hull of the epigraph of f. We define the concave closure c̄l f := −cl(−f). We use I to represent the number of generators.

Preliminaries. The key ingredient in the standard GAN is to play a zero-sum two-player game between a discriminator and a generator (often parametrized by deep neural networks in practice) such that the goal of the generator is to map random noise z to some plausible images/texts G_γ(z), and the discriminator D_θ(·) aims at distinguishing the real images/texts from what the generator creates. For every parameter implementation γ and θ of generator and discriminator, respectively, denote the payoff value by

    φ(γ; θ) := E_{x∼P_d} f(D_θ(x)) + E_{z∼P_z} f(1 − D_θ(G_γ(z))),

where f(·) is some concave, increasing function. Hereby, P_d is the distribution of true images/texts and P_z is a noise distribution such as a Gaussian or uniform distribution. The standard GAN thus solves the saddle point problem min_γ max_θ φ(γ; θ). For different choices of the function f, this problem leads to various variants of GAN. For example, when f(t) = log t, it is the classic GAN; when f(t) = t, it reduces to the Wasserstein GAN. We refer interested readers to the paper of BID18 for more variants of GANs.

Stackelberg GAN. Our model of Stackelberg GAN is inspired by the Stackelberg competition in the domain of game theory. Instead of playing a two-player game as in the standard GAN, in Stackelberg GAN there are I + 1 players with two firms: one discriminator and I generators. One can make an analogy between the discriminator (generators) in the Stackelberg GAN and the leader (followers) in the Stackelberg competition.

Stackelberg GAN is a general framework which can be built on top of all variants of standard GANs. The objective function is simply an ensemble of losses w.r.t. all possible pairs of generators and discriminator:

    Φ(γ₁, ..., γ_I; θ) := Σ_{i=1}^I φ(γ_i; θ).

Thus it is very easy to implement. The Stackelberg GAN therefore solves the corresponding saddle point problems; we denote the minimax and maximin values of the ensemble objective by w* and q*, respectively, and term w* − q* the minimax (duality) gap. We note that there are key differences between the naïve ensembling model and ours. In the naïve ensembling model, one trains multiple GAN models independently and averages their outputs. In contrast, our Stackelberg GAN shares a single discriminator among the various generators and thus requires joint training (see the sketch below).
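A minimal PyTorch sketch of the joint training (network sizes and the non-saturating generator loss are our assumptions; the essential point is the equal-weight sum of the I per-generator losses against one shared discriminator):

```python
import torch
import torch.nn as nn

I, noise_dim, data_dim = 8, 2, 2
gens = [nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
        for _ in range(I)]
disc = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam([p for g in gens for p in g.parameters()], lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

def train_step(real):                      # real: (batch, data_dim)
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)
    z = torch.randn(I, real.size(0), noise_dim)
    fakes = [g(z[i]) for i, g in enumerate(gens)]
    # Discriminator step on the ensemble Phi = sum_i phi(gamma_i; theta):
    # each phi contributes the real term once, hence the factor I.
    d_loss = I * bce(disc(real), ones) + \
             sum(bce(disc(f.detach()), zeros) for f in fakes)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # All generators descend jointly (equal weights, non-saturating form).
    g_loss = sum(bce(disc(f), ones) for f in fakes)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```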
FIG3 shows the architecture of our Stackelberg GAN.

How to generate samples from Stackelberg GAN? In the Stackelberg GAN, we expect each generator to learn only a few modes. In order to generate a sample that may come from all modes, we use a mixed model. In particular, we generate a uniformly random value i from 1 to I and use the i-th generator to obtain a new sample. Note that this procedure is independent of the training procedure.

In this section, we develop our theoretical contributions and compare our results with prior work. We begin by studying the minimax gap of Stackelberg GAN. Our main results show that the minimax gap shrinks as the number of generators increases. To proceed, denote by

    h_i(u_i) := inf_{γ_i ∈ R^g} (−φ(γ_i; ·))*(−u_i),

where the conjugate operation is w.r.t. the second argument of φ(γ_i; ·). We clarify here that the subscript i in h_i indicates that the function h_i is derived from the i-th generator. The argument of h_i depends on i, so we denote it by u_i. Intuitively, h_i serves as an approximate convexification of −φ(γ_i, ·) w.r.t. the second argument, due to the conjugate operation. Denote by cl h_i the convex closure of h_i; cl h_i represents the convex relaxation of h_i because the epigraph of cl h_i is exactly the convex hull of the epigraph of h_i, by the definition of cl h_i.

Let Δ^minimax_θ (resp. Δ^maximin_θ) measure the non-convexity of the objective function w.r.t. the argument θ; for example, it is equal to 0 if and only if φ(γ_i; θ) is concave and closed w.r.t. the discriminator parameter θ. We have the following guarantees on the minimax gap of Stackelberg GAN.

Theorem 1. Denote by t the number of parameters of the discriminator, i.e., θ ∈ R^t. Suppose that each h_i(·) is continuous and dom h_i is compact and convex. Then the duality gap can be bounded in terms of the non-convexity measures, provided that the number of generators satisfies I > (t + 1)/Δ^worst_γ.

Remark 1. Theorem 1 makes a mild assumption on the continuity of the loss and no assumption on the model capacity of the discriminator and generators. The analysis instead depends on their non-convexity as parametrized by deep neural networks.

Corollary 2. In particular, when the non-convexity terms vanish, we have 0 ≤ w* − q* ≤ ε. The results of Theorem 1 and Corollary 2 are independent of the model capacity of generators and discriminator.

When we make assumptions on the expressive power of the generator as in BID1, we have the following guarantee on the existence of an ε-approximate equilibrium.

Theorem 3. Under the settings of Theorem 1, suppose that for any ξ > 0, there exists a generator G such that E_{x∼P_d, z∼P_z} ‖G(z) − x‖ ≤ ξ. Let the discriminator and the generators be L-Lipschitz w.r.t. inputs and parameters, respectively. Then for any ε > 0, there exist I = Õ(1/ε) generators G_{γ*₁}, ..., G_{γ*_I} and a discriminator D_{θ*} such that, for some value V ∈ R, the pair forms an ε-approximate equilibrium.

Related Work. While many efforts have been devoted to empirically investigating the performance of multi-generator GANs, little is known about how many generators are needed so as to achieve certain equilibrium guarantees. Probably the most relevant prior work to Theorem 3 is that of BID1. In particular, BID1 showed that there exist O(tΔ²/ε²) generators and one discriminator such that an ε-approximate equilibrium can be achieved, provided that for all x and any ξ > 0, there exists a generator G such that E_{z∼P_z} ‖G(z) − x‖ ≤ ξ. Hereby, Δ is a global upper bound on the function |f|, i.e., f ∈ [−Δ, Δ]. In comparison, Theorem 3 improves over this result in two aspects: a) the assumption on the expressive power of generators is weaker than that in BID1; and b) the number of generators required is Õ(1/ε) rather than O(1/ε²).
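Continuing the sketch above, the mixed-model sampling just described (a uniformly random generator index, then that generator's output) is:

```python
import torch

def sample(n):
    """Draw n samples: pick a uniformly random generator for each sample,
    then pass fresh noise through it (reuses gens, I, noise_dim from above)."""
    idx = torch.randint(I, (n,))
    z = torch.randn(n, noise_dim)
    with torch.no_grad():
        return torch.stack([gens[i](z[k]) for k, i in enumerate(idx.tolist())])
```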
Therefore, Theorem 3 requires many fewer generators than that of BID1.

In this section, we empirically investigate the effect of network architecture and capacity on the mode collapse/dropping issues for various multi-generator architecture designs. Hereby, mode dropping refers to the phenomenon that generative models simply ignore some hard-to-represent modes of real distributions, and mode collapse means that some modes of real distributions are "averaged" by generative models. For GANs, it is widely believed that the two issues are caused by the large gap between the minimax and maximin objective function values (see page 35, BID10).

Our experiments verify that network capacity (change of width and depth) is not very crucial for resolving the mode collapse issue, though it can alleviate mode dropping to some extent. Instead, the choice of architecture of the generators plays a key role. To visualize this discovery, we test the performance of varying architectures of GANs on a synthetic mixture-of-Gaussians dataset with 8 modes and 0.01 standard deviation (a generator for this dataset is sketched below). We observe the following phenomena:

Naïvely increasing the capacity of a one-generator architecture does not alleviate mode collapse. The results show that the multi-generator architecture in the Stackelberg GAN effectively alleviates the mode collapse issue. Though naïvely increasing the capacity of a one-generator architecture alleviates the mode dropping issue, for the more challenging mode collapse issue the effect is not obvious (see FIG9): increasing the model capacity can alleviate mode dropping, though it does not alleviate mode collapse, while a multi-generator architecture with even small capacity resolves the mode collapse issue.

Stackelberg GAN outperforms multi-branch models. We compare the performance of a multi-branch GAN and Stackelberg GAN with their respective objective functions. Hereby, the multi-branch GAN makes use of the extra information that the real distribution is a Gaussian mixture model, so that each γ_i tries to fit one component. However, even so, we observe that, with the same model capacity, Stackelberg GAN significantly outperforms the multi-branch GAN (see FIG11 (a)(c)), even without access to the extra information. The performance of Stackelberg GAN is also better than that of a multi-branch GAN of much larger capacity (see FIG11).

Generators tend to learn a balanced number of modes when they have the same capacity. We observe that, for a varying number of generators, each generator in the Stackelberg GAN tends to learn an equal number of modes when the modes are symmetric and every generator has the same capacity (see FIG13).

In this section, we verify our theoretical contributions by experimental validation. We first show that Stackelberg GAN generates more diverse images on the MNIST dataset than the classic GAN. We follow the standard preprocessing step in which each pixel is normalized by subtracting 0.5 and dividing by 0.5. The detailed network setups of the discriminator and generators are in TAB4.

Figure 6 shows the diversity of digits generated by Stackelberg GAN with a varying number of generators. When there is only one generator, the digits are not very diverse, with many repeated "1"s and far fewer "2"s. As the number of generators increases, the generated images tend to be more diverse. In particular, for the 10-generator Stackelberg GAN, each generator is associated with one or two digits without any digit being missed.
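For reference, the synthetic dataset used in the experiments above (8 modes, standard deviation 0.01) can be generated as follows; placing the symmetric modes on a unit ring is our assumption:

```python
import numpy as np

def eight_gaussians(n, radius=1.0, std=0.01, seed=0):
    """2D mixture of 8 Gaussians: uniformly chosen mode centers on a ring,
    isotropic noise with standard deviation 0.01 around each center."""
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * rng.integers(8, size=n) / 8
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return centers + std * rng.standard_normal((n, 2))
```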
We also observe better performance by the Stackelberg GAN on the Fashion-MNIST dataset. Fashion-MNIST is a dataset which consists of 60,000 examples. Each example is a 28 × 28 grayscale image associated with a label from 10 classes. We follow the standard preprocessing step in which each pixel is normalized by subtracting 0.5 and dividing by 0.5. We specify the detailed network setups of the discriminator and generators in TAB4. Figure 7 shows the diversity of fashion items generated by Stackelberg GAN with a varying number of generators. When there is only one generator, the generated images are not very diverse, with no bags being found. As the number of generators increases, the generated images tend to be more diverse. In particular, for the 10-generator Stackelberg GAN, each generator is associated with one class without any class being missed.

Figure 6: Standard GAN vs. Stackelberg GAN on the MNIST dataset without cherry-picking. Left: digits generated by the standard GAN; it generates many "1"s which are not very diverse. Middle: digits generated by the Stackelberg GAN with 5 generators, where every two rows correspond to one generator. Right: digits generated by the Stackelberg GAN with 10 generators, where each row corresponds to one generator.

Figure 7: Generated samples by Stackelberg GAN on the Fashion-MNIST dataset without cherry-picking. Left: examples generated by the standard GAN; it fails to generate bags. Middle: examples generated by the Stackelberg GAN with 5 generators, where every two rows correspond to one generator. Right: examples generated by the Stackelberg GAN with 10 generators, where each row corresponds to one generator.

We then implement Stackelberg GAN on the CIFAR-10 dataset. CIFAR-10 includes 60,000 32 × 32 color images in 10 classes BID15. The architecture of the generators and discriminator follows the design of DCGAN in BID19. We train models with 5, 10, and 20 fixed-size generators. The results show that the model with 10 generators performs the best. We also train 10-generator models where each generator has 2, 3 and 4 convolution layers. We find that the generator with 2 convolution layers, which is the shallowest one, performs the best. So we report the results obtained from the model with 10 generators containing 2 convolution layers. FIG15 shows the samples produced by the different generators. The samples are randomly drawn instead of being cherry-picked, to demonstrate the quality of images generated by our model.

For quantitative evaluation, we use the Inception score and the Fréchet Inception Distance (FID) to measure the difference between images generated by the models and real images.

Results for the Inception score. The Inception score measures the quality of a generated image and correlates well with human judgment BID21. We report the Inception scores obtained by our Stackelberg GAN and other baseline methods in TAB1. For a fair comparison, we only consider baseline models which are completely unsupervised and do not need any label information. Instead of directly using the Inception scores reported in the original papers, we replicate the experiment of MGAN using the code, architectures and parameters reported by their original papers, and evaluate the scores based on the new experimental results. TAB1 shows that our model achieves a score of 7.62 on the CIFAR-10 dataset, which outperforms the state-of-the-art models.
For fairness, we configure our Stackelberg GAN with the same capacity as MGAN, that is, the two models have comparable numbers of total parameters. When the capacity of our Stackelberg GAN is as small as DCGAN's, our model still improves over DCGAN significantly.

Results for the Fréchet Inception Distance. We then evaluate the performance of the models on the CIFAR-10 dataset using the Fréchet Inception Distance (FID), which better captures the similarity between generated images and real ones BID12. As TAB1 shows, under the same capacity as DCGAN, our model reduces the FID by 20.74%. Meanwhile, under the same capacity as MGAN, our model reduces the FID by 14.61%. This improvement further indicates that our Stackelberg GAN with multiple light-weight generators helps improve the quality of the generated images.

TAB1. Model / Inception score (higher is better) / FID (lower is better):
  Real data                      11.24 ± 0.16    -
  WGAN                            3.82 ± 0.06    -
  MIX+WGAN BID1                   4.04 ± 0.07    -
  Improved-GAN BID21              4.36 ± 0.04    -
  ALI BID7                        5.34 ± 0.05    -
  BEGAN BID4                      5.62            -
  MAGAN BID24                     5.67            -
  GMAN BID8                       6.00 ± 0.19    -
  DCGAN BID19                     6.40 ± 0.05    37.7
  Ours (capacity as DCGAN)        7.02 ± 0.07    29.88
  D2GAN BID17                     7.15 ± 0.07    -
  MAD-GAN (our run) BID9          6.67 ± 0.07    34.10
  MGAN (our run) BID13            7.52 ± 0.1     31.34
  Ours (capacity as MGAN, FIG2)   7.62 ± 0.07    26.76

We also evaluate the performance of Stackelberg GAN on the Tiny ImageNet dataset. Tiny ImageNet is a large image dataset, where each image is labelled to indicate the class of the object inside the image. We resize the figures down to 32 × 32 following the procedure described in BID6. FIG15 shows randomly picked samples generated by a 10-generator Stackelberg GAN. Each row has samples generated from one generator. Since the types of some images in Tiny ImageNet are also included in CIFAR-10, we order the rows in a similar way as in FIG15.

In this work, we tackle the problem of instability during the GAN training procedure, which is caused by the huge gap between the minimax and maximin objective values. The core of our techniques is a multi-generator architecture. We show that the minimax gap shrinks to ε as the number of generators increases with rate O(1/ε), when the maximization problem w.r.t. the discriminator is concave. This improves over the best-known result of O(1/ε²). Experiments verify the effectiveness of our proposed methods.

Appendix: proof of Theorem 1. The inequality w* ≥ q* follows from weak duality. Thus it suffices to prove the other side of the inequality. All notations in this section are defined in Section 3.1. We first bound the duality gap via the perturbation function p(u). We have the following lemma.

Lemma 4. We have (cl p)(0) ≤ q* ≤ w* = p(0).

Proof. By the definition of p, we have p(0) = inf_{γ₁,...,γ_I ∈ R^g} sup_{θ ∈ R^t} Φ(γ₁, ..., γ_I; θ). Since (cl p)(·) is the convex closure of the function p(·) (a.k.a. the weak duality theorem), we have (cl p) ≤ p. We then relate q* to (cl p)(0). Note that p(u) = inf_{γ₁,...,γ_I ∈ R^g} p_{γ₁,...,γ_I}(u), where p_{γ₁,...,γ_I}(u) = sup_{θ ∈ R^t} {Φ(γ₁, ..., γ_I; θ) − uᵀθ} = (−Φ(γ₁, ..., γ_I; ·))*(−u).

We have the following lemma.

Lemma 5. Under the assumptions of Theorem 1, the gap p(0) − (cl p)(0) is bounded by the non-convexity terms.

Proof. We note that p can be written in terms of the functions h_i, where u₁, ..., u_I, u ∈ R^t. Consider the subsets of R^{t+1} associated with the graphs of the h_i, and define their vector summation Y. Since each h_i is continuous and dom h_i is compact, these sets are compact. We apply Lemma 6 (the Shapley-Folkman lemma) to prove Lemma 5 with m = t + 1. Let (r, w) ∈ conv(Y) be such that r = 0 and w = (cl p)(0).
Representing elements of the convex hull via the Carathéodory theorem, we have that for each i ∈ I there are vectors {u_i^(j)} and convex combination weights realizing the points of the convex hulls. Recalling the definitions of h̆_i and cl h_i, and combining these representations with the Shapley-Folkman lemma (Lemma 6), we obtain the desired bound, as claimed. By Lemmas 4 and 5, we have proved the upper bound on the duality gap.

To prove Theorem 1, we note that when φ(γ_i; θ) is concave and closed w.r.t. the discriminator parameter θ, we have cl φ = φ. Thus Δ^minimax_θ = Δ^maximin_θ = 0 and 0 ≤ w* − q* ≤ ε.

Proof of Theorem 3. We first show that the equilibrium value V is 2f(1/2). For the discriminator D_θ which only outputs 1/2, the payoff is 2f(1/2) for all possible implementations of the generators G_{γ₁}, ..., G_{γ_I}. Therefore, V ≥ 2f(1/2). We now show that V ≤ 2f(1/2). Note that, by assumption, for any ξ > 0 there exists a closed neighbourhood of implementations of a generator G_ξ such that E_{x∼P_d, z∼P_z} ‖G_ξ(z) − x‖ ≤ ξ for all G_ξ in the neighbourhood. Such a neighbourhood exists because the generator is Lipschitz w.r.t. its parameters. Let the set of parameter implementations of such a neighbourhood of G_ξ be Γ. The Wasserstein distance between the distribution of G_ξ and P_d is at most ξ. Since the function f and the discriminator are L_f-Lipschitz and L-Lipschitz, respectively, for any fixed γ we obtain sup_{θ∈R^t} Φ(γ₁, ..., γ_I; θ) = 2f(1/2) for all γ₁, ..., γ_I ∈ Γ. So V = 2f(1/2). This means that the discriminator cannot do much better than a random guess.

The above analysis implies that the equilibrium is achieved when D_{θ*} only outputs 1/2. Denote by Θ a small closed neighbourhood of such a θ* such that Φ(γ₁, ..., γ_I; θ) is concave w.r.t. θ ∈ Θ for any fixed γ₁, ..., γ_I ∈ Γ. We thus focus on the loss restricted to Θ ⊆ R^t and Γ ⊆ R^g. Since Φ(γ₁, ..., γ_I; θ) is concave w.r.t. θ ∈ Θ for all γ₁, ..., γ_I ∈ Γ, Corollary 2 bounds the duality gap on this restriction. The optimal implementations of γ₁, ..., γ_I are achieved by argmin_{γ₁,...,γ_I ∈ Γ} sup_{θ∈Θ} Φ(γ₁, ..., γ_I; θ).

Proof (projection property). Define L(P₁, P₂) as the GAN objective between distributions P₁ and P₂. Clearly, the vanilla GAN optimization can be understood as projecting under L. In the Stackelberg GAN setting, we are projecting under a different distance L̃, defined analogously for the ensemble of generators. Since f is strictly concave and the discriminator has large enough capacity, the following holds: L(P₁, P₂), as a function of P₂, achieves its global minimum if and only if P₂ = P₁. The theorem then follows from this fact.

E. Network Setup. Optimizer: Adam (β₁ = 0.5, β₂ = 0.999). Weight and bias initialization: N(µ = 0, σ = 0.01) and 0, respectively.
We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design, with theoretical guarantees.
1,104
scitldr
Nesterov SGD is widely used for training modern neural networks and other machine learning models. Yet, its advantages over SGD have not been theoretically clarified. Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD. Furthermore, Nesterov SGD may diverge for step sizes that ensure convergence of ordinary SGD. This is in contrast to the classical results in the deterministic setting, where the same step size ensures accelerated convergence of Nesterov's method over optimal gradient descent. To address the non-acceleration issue, we introduce a compensation term into Nesterov SGD. The resulting algorithm, which we call MaSS, converges for the same step sizes as SGD. We prove that MaSS obtains an accelerated convergence rate over SGD for any mini-batch size in the linear setting. For full batch, the convergence rate of MaSS matches the well-known accelerated rate of Nesterov's method. We also analyze the practically important question of the dependence of the convergence rate and optimal hyper-parameters on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation. Experimental evaluation of MaSS for several standard architectures of deep networks, including ResNet and convolutional networks, shows improved performance over SGD, Nesterov SGD and Adam.

Many modern neural networks and other machine learning models are over-parametrized. These models are typically trained to have near-zero training loss, known as interpolation, and often have strong generalization performance, as indicated by a range of empirical evidence including (23; 3). Due to a key property of interpolation, automatic variance reduction (discussed in Section 2.1), SGD with constant step size is shown to converge to the optimum of a convex loss function for a wide range of step sizes. Moreover, the optimal choice of step size η* for SGD in that setting can be derived analytically.

The goal of this paper is to take a step toward understanding momentum-based SGD in the interpolating setting. Among these methods, the stochastic version of Nesterov's acceleration method (SGD+Nesterov) is arguably the most widely used to train modern machine learning models in practice. The popularity of SGD+Nesterov is tied to the well-known acceleration of the deterministic Nesterov method over gradient descent. Yet, it has not been theoretically clear whether Nesterov SGD accelerates over SGD. As we show in this work, both theoretically and empirically, SGD+Nesterov with any parameter selection does not in general provide acceleration over ordinary SGD. Furthermore, SGD+Nesterov may diverge, even in the linear setting, for step sizes that guarantee convergence of ordinary SGD. Intuitively, the lack of acceleration stems from the fact that, to ensure convergence, the step size of SGD+Nesterov has to be much smaller than the optimal step size for SGD. This is in contrast to the deterministic Nesterov method, which accelerates using the same step size as optimal gradient descent. As we prove rigorously in this paper, the slow-down of convergence caused by the small step size negates the benefit brought by the momentum term. We note that a similar lack of acceleration for the stochastic Heavy Ball method was analyzed in prior work.
To address the non-acceleration of SGD+Nesterov, we introduce an additional compensation term to allow convergence for the same range of step sizes as SGD. The resulting algorithm, MaSS (Momentum-added Stochastic Solver), updates the weights w and u using the following rules (the compensation term is η₂∇̃f(u_t) in the second rule):

    w_{t+1} ← u_t − η₁ ∇̃f(u_t),
    u_{t+1} ← (1 + γ) w_{t+1} − γ w_t + η₂ ∇̃f(u_t).

(Here, ∇̃ represents the stochastic gradient.) The step size η₁, the momentum parameter γ ∈ (0, 1) and the compensation parameter η₂ are independent of t.

Figure 1: Non-acceleration of Nesterov SGD and fast convergence of MaSS.

We proceed to analyze the theoretical convergence properties of MaSS in the interpolated regime. Specifically, we show that in the linear setting MaSS converges exponentially for the same range of step sizes as plain SGD, and the optimal choice of step size for MaSS is exactly the η* which is optimal for SGD. Our key theoretical result shows that MaSS has an accelerated convergence rate over SGD. Furthermore, in the full-batch (deterministic) scenario, our analysis selects η₂ = 0, thus reducing MaSS to the classical Nesterov method. In this case our convergence rate also matches the well-known accelerated convergence rate of Nesterov's method (15; 4). This acceleration is illustrated in Figure 1. Note that SGD+Nesterov (as well as the Stochastic Heavy Ball method) does not converge faster than SGD, in line with our theoretical analysis. We also prove exponential convergence of MaSS in a more general convex setting under additional conditions.

We further analyze the dependence of the convergence rate e^{−s(m)t} and the optimal hyper-parameters on the mini-batch size m. We identify three distinct regimes of dependence, defined by two critical values m*₁ and m*₂: linear scaling, diminishing returns and saturation, as illustrated in Figure 2. The convergence speed per iteration s(m), as well as the optimal hyper-parameters, increase linearly in m in the linear scaling regime, sub-linearly in the diminishing returns regime, and can only increase by a small constant factor in the saturation regime. The critical values m*₁ and m*₂ are derived analytically. We note that the intermediate "diminishing returns" regime is new and is not found in SGD. To the best of our knowledge, this is the first analysis of mini-batch dependence for accelerated stochastic gradient methods.

We also experimentally evaluate MaSS on deep neural networks, which are non-convex. We show that MaSS outperforms SGD, SGD+Nesterov and Adam both in optimization and generalization, on different architectures of deep neural networks, including convolutional networks and ResNet.

The paper is organized as follows: in Section 2, we introduce notation and preliminary results. In Section 3, we discuss the non-acceleration of SGD+Nesterov. In Section 4 we introduce MaSS and analyze its convergence and optimal hyper-parameter selection. In Section 5, we analyze mini-batch MaSS. In Section 6, we show experimental results.

Over-parametrized models have drawn increasing attention in the literature, as many modern machine learning models, especially neural networks, are over-parametrized and show strong generalization performance (16; 23; 2). Over-parametrized models usually result in a nearly perfect fit (or interpolation) of the training data (23; 18; 3). Exponential convergence of SGD with constant step size under interpolation, and its dependence on the batch size, has been analyzed in prior work. There are a few works that show or indicate the non-acceleration of existing stochastic momentum methods.
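For concreteness, a minimal sketch of the MaSS update in Eq. 1, with `grad` a stochastic gradient oracle (setting eta2 = 0 recovers SGD+Nesterov, and gamma = eta2 = 0 recovers SGD):

```python
import numpy as np

def mass(grad, w0, eta1, eta2, gamma, steps):
    """MaSS (Eq. 1): w_{t+1} = u_t - eta1 * g(u_t),
    u_{t+1} = (1 + gamma) * w_{t+1} - gamma * w_t + eta2 * g(u_t)."""
    w = np.array(w0, dtype=float)
    u = w.copy()                      # w_0 and u_0 start from the same vector
    for _ in range(steps):
        g = grad(u)
        w_next = u - eta1 * g
        u = (1 + gamma) * w_next - gamma * w + eta2 * g
        w = w_next
    return w
```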
First of all, prior work theoretically proves the non-acceleration of the stochastic Heavy Ball method (SGD+HB) over SGD on certain synthetic data. Furthermore, those authors provide experimental evidence that SGD+Nesterov also converges at the same rate as SGD on the same data. Other work theoretically shows that, for sufficiently small step sizes, SGD+Nesterov and SGD+HB are equivalent to SGD with a larger step size. However, those results do not exclude the possibility that acceleration is possible when the step size is larger. Yet other work concludes that "momentum hurts the convergence within the neighborhood of global optima", based on a theoretical analysis of SGD+HB. These results are consistent with our analysis of the standard SGD+Nesterov. However, this does not apply to all momentum methods. Indeed, we will show that MaSS provably improves convergence over SGD.

There is a large body of work, both practical and theoretical, on SGD with momentum, including (10; 8; 1). Adam, and its variant AMSGrad, are among the most practically used SGD methods with momentum. Unlike our method, Adam adaptively adjusts the step size according to a weight-decayed accumulation of gradient history. One prior work proposed an accelerated SGD algorithm (ASGD), which can be written in the form shown on the right-hand side of Eq. 8, but with a different hyper-parameter selection; the ASGD algorithm also has a tail-averaging step at the final stage. In the interpolated setting (no additive noise), their analysis yields a convergence rate of O(Poly(κ, κ̃) exp(−t/√(κ₁κ̃))) for our algorithm with batch size 1. We provide some experimental comparisons between their ASGD algorithm and MaSS in Fig. 4. Another work proposes and analyzes a different first-order momentum algorithm and derives convergence rates under a different set of conditions: the strong growth condition for the loss function, in addition to convexity. As shown in Appendix F.3, on the example of Gaussian distributed data, the rates obtained there can be slower than those for SGD. In contrast, our algorithm is guaranteed to never have a slower convergence rate than SGD. Furthermore, in the same Gaussian setting, MaSS matches the optimal accelerated full-gradient Nesterov rate. Additionally, in our work we consider the practically important dependence of the convergence rate and optimal parameter selection on the mini-batch size, which, to the best of our knowledge, has not been analyzed for momentum methods.

We consider the empirical risk f(w) = (1/n) Σ_{i=1}^n f_i(w), where f_i only depends on a single data point (x_i, y_i). Let ∇f denote the exact gradient, and let ∇̃_m f denote the unbiased stochastic gradient evaluated on a mini-batch of size m. For simplicity, we also denote ∇̃f(w) := ∇̃₁f(w). We use the concepts of strong convexity and smoothness of functions; see the definitions in Appendix B.1. For a loss function with µ-strong convexity and L-smoothness, the condition number κ is defined as κ = L/µ. In the case of the square loss, f(w) = (1/2n) Σ_{i=1}^n (wᵀx_i − y_i)², and the Hessian matrix is H := (1/n) Σ_{i=1}^n x_i x_iᵀ. Let L and µ be the largest and the smallest non-zero eigenvalues of the Hessian, respectively. Then the condition number is κ = L/µ (note that zero eigenvalues can be ignored in our setting; see Section 4). Given a mini-batch size m, we define L_m as in Eq. 2 (with L₁ the smallest positive number such that E[‖x‖² x xᵀ] ⪯ L₁ H), and the m-stochastic condition number as κ_m := L_m/µ. Following prior work, we introduce the quantity κ̃ (called the statistical condition number), which is the smallest positive real number such that E[‖x‖² x xᵀ] ⪯ κ̃ L H. Hence, the quadratic loss function is also L_m-smooth, for all m ≥ 1. By the definition of κ_m, we also have κ_m ≤ κ₁ for all m ≥ 1.

Remark 2. It is important to note that κ̃ ≤ κ₁, since E[‖x‖² x xᵀ] ⪯ L₁ H and L ≥ µ.
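Under the definitions sketched above (with L₁ taken as the smallest number such that E[‖x‖²xxᵀ] ⪯ L₁H, an assumption where the extracted text is incomplete), the empirical condition numbers for a design matrix X can be computed as:

```python
import numpy as np

def condition_numbers(X):
    """Empirical kappa_1 = L_1/mu and kappa_tilde = L_1/L for the square loss on
    the rows of X, with H = E[x x^T] and L_1 the smallest number satisfying
    E[||x||^2 x x^T] <= L_1 H (restricted to H's non-zero eigenspace)."""
    n = X.shape[0]
    H = X.T @ X / n
    M4 = (X * (X ** 2).sum(axis=1, keepdims=True)).T @ X / n  # E ||x||^2 x x^T
    evals, vecs = np.linalg.eigh(H)
    keep = evals > 1e-10 * evals.max()        # ignore zero eigenvalues of H
    mu, L = evals[keep].min(), evals[keep].max()
    H_inv_sqrt = vecs[:, keep] @ np.diag(evals[keep] ** -0.5) @ vecs[:, keep].T
    L1 = np.linalg.eigvalsh(H_inv_sqrt @ M4 @ H_inv_sqrt).max()
    return L1 / mu, L1 / L                    # (kappa_1, kappa_tilde)
```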
We consider over-parametrized models that have zero-training-loss solutions on the training data. A solution w which fits the training data perfectly, i.e., f_i(w) = 0 for all i = 1, 2, ..., n, is known as interpolating. In the linear setting, interpolation implies that the linear system {x_iᵀw = y_i}_{i=1}^n has at least one solution.

A key property of interpolation is Automatic Variance Reduction (AVR), under which the variance of the stochastic gradient decreases to zero as the weight w approaches the optimal w*. For a detailed discussion of AVR, see Appendix B.2. Thanks to AVR, plain SGD with constant step size can be shown to converge exponentially for strongly convex loss functions (13; 19; 14; 12). The set of acceptable step sizes is (0, 2/L_m), where L_m is defined in Eq. 2 and m is the mini-batch size. Moreover, the optimal step size η*(m) of SGD that induces the fastest convergence guarantee is proven to be 1/L_m.

In this section, we prove that SGD+Nesterov, with any constant hyper-parameter setting, does not generally improve convergence over optimal SGD. Specifically, we demonstrate a setting where SGD+Nesterov can be proved to have a convergence rate of (1 − O(1/κ))^t, which is the same (up to a constant factor) as SGD. In contrast, the classical accelerated rate for the deterministic Nesterov method is (1 − O(1/√κ))^t.

We will consider the following two-dimensional data-generating component decoupled model. Fix an arbitrary w* ∈ R² and randomly sample z from a normal distribution. The data points (x, y) are constructed from z and w*, where e₁, e₂ ∈ R² are the canonical basis vectors. The following theorem gives a lower bound on the convergence of SGD+Nesterov for the linear regression problem on the component decoupled data model. See Appendix C for the proof.

Theorem 1. Let {(x_i, y_i)} be a dataset generated according to the component decoupled model, and consider minimizing the corresponding quadratic loss. For any step size η > 0 and momentum parameter γ ∈ (0, 1) of SGD+Nesterov with random initialization, with probability one, there exists a T ∈ N such that for all t > T the expected error is lower bounded by C(1 − O(1/κ))^t, where C > 0 is a constant.

Compared with the convergence rate (1 − 1/κ)^t of SGD, this theorem shows that SGD+Nesterov does not accelerate over SGD. This is very different from the result in the deterministic gradient scenario, where the classical Nesterov method has a strictly faster convergence guarantee than gradient descent. Intuitively, the key reason for the non-acceleration of SGD+Nesterov is a condition on the step size η required for non-divergence of the algorithm. Specifically, when the momentum parameter γ is close to 1, η is required to be much smaller than the optimal SGD step size (the precise formulation is given in Lemma 1 in Appendix C). The slow-down resulting from the small step size necessary to satisfy that condition cannot be compensated for by the benefit of the momentum term. In particular, the condition on the step size of SGD+Nesterov excludes the η* that achieves the fastest convergence for SGD. We show in the following corollary that, with the step size η*, SGD+Nesterov diverges. This is different from the deterministic scenario, where the Nesterov method accelerates using the same step size as gradient descent.

Corollary 1. Consider the same optimization problem as in Theorem 1. Let the step size be η = η* and the acceleration parameter γ ∈ [0.6, 1]. Then SGD+Nesterov, with random initialization, diverges with probability 1.

We empirically verify the non-acceleration of SGD+Nesterov, as well as Corollary 1, in Section 6 and Appendix F.2.
In this section, we propose MaSS, which adds a compensation term (see Eq. 1) to SGD+Nesterov. We show that MaSS converges exponentially for all step sizes that result in convergence of SGD, i.e., η ∈ (0, 2/L_m). Importantly, we derive a convergence rate exp(−t/√(κ₁κ̃)), where κ̃ ≤ κ₁, for MaSS, which is faster than the convergence rate exp(−t/κ₁) of SGD. Moreover, we give an analytical expression for the optimal hyper-parameter setting.

For ease of analysis, we rewrite the update rules of MaSS in Eq. 1 in an equivalent form that introduces an additional variable v and hyper-parameters (η, α, δ); we refer to this form as Eq. 8. There is a bijection between the hyper-parameters (η₁, η₂, γ) and (η, α, δ), which we refer to as Eq. 9.

Remark 3 (SGD+Nesterov). In the literature, Nesterov's method is sometimes written in a form similar to the right-hand side of Eq. 8. Since SGD+Nesterov has no compensation term, δ has to be fixed as η/α, which is consistent with the parameter setting in prior work.

Assumptions. We first assume a square loss function, and later extend the analysis to general convex loss functions under additional conditions. For the square loss function, the solution set W* := {w ∈ R^d | f(w) = 0} is an affine subspace of the parameter space R^d. Given any w, we denote its closest solution by w* := arg min_{v∈W*} ‖w − v‖, and define the error as w − w*. Be aware that different w may correspond to different w*, and that errors and (stochastic) gradients are always perpendicular to W* (see the discussion in Appendix B.3). Hence, no actual update happens along W*. For this reason, we can ignore zero eigenvalues of H and restrict our analysis to the span of the eigenvectors of the Hessian with non-zero eigenvalues.

Based on the equivalent form of MaSS in Eq. 8, the following theorem shows that, for the square loss function in the interpolation setting, MaSS is guaranteed to converge exponentially when the hyper-parameters satisfy certain conditions.

Theorem 2 (Convergence of MaSS). Consider minimizing a quadratic loss function in the interpolation setting. Let µ be the smallest non-zero eigenvalue of the Hessian matrix H, and let L_m be as defined in Eq. 2. Denote κ̃_m := κ̃/m + (m − 1)/m. In MaSS with mini-batches of size m, if the positive hyper-parameters η, α, δ satisfy the two conditions (Eq. 10)

    α ≤ δµ   and   α δ κ̃_m ≤ η(2 − ηL_m),

then, after t iterations, E‖w_t − w*‖² ≤ C(1 − α)^t for some constant C > 0 which depends on the initialization.

Remark 4. By the conditions in Eq. 10, the admissible step sizes are η ∈ (0, 2/L_m), exactly the same as for SGD in the interpolated setting.

Remark 5. One can easily check that the hyper-parameter setting of SGD+Nesterov does not satisfy the conditions in Eq. 10.

Proof sketch for Theorem 2. Denote F_t := E‖v_{t+1} − w*‖². We show that, under the update rules of MaSS in Eq. 8, F_t ≤ (1 − α)F_{t−1} plus two additional terms with coefficients c₁ and c₂. By the conditions in Eq. 10, c₁ ≤ 0 and c₂ ≤ 0, so the last two terms are non-positive. Hence F_t ≤ (1 − α)F_{t−1}. Using that E‖w_t − w*‖² ≤ (α/δ)F_t, we get the final result. See the detailed proof in Appendix D.

Hyper-parameter Selection. From Theorem 2, we observe that the convergence rate is determined by (1 − α)^t. Therefore, larger α is preferred for faster convergence. Combining the conditions in Eq. 10, we have α² ≤ µη(2 − ηL_m)/κ̃_m. By setting η* = 1/L_m, which maximizes the right-hand side of this inequality, we obtain the optimal selection α* = 1/√(κ_m κ̃_m). Note that this setting of η* and α* determines a unique δ* = α*/µ by the conditions in Eq. 10. In summary (Eq. 13),

    η* = 1/L_m,   α* = 1/√(κ_m κ̃_m),   δ* = α*/µ.

By Eq. 9, the optimal selection of (η₁, η₂, γ) follows (Eq. 14). Since κ̃_m is usually larger than 1, the coefficient η₂* of the compensation term is non-negative. The non-negative coefficient η₂ indicates that the weight u_t is "over-descended" in SGD+Nesterov and needs to be compensated along the gradient direction. It is important to note that the optimal step size for MaSS, as in Eq. 13, is exactly the same as the optimal one for SGD.
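A small helper implementing the selection in Eq. 13 (the mapping to (η₁, η₂, γ) would go through the bijection of Eq. 9, which is not reproduced here):

```python
import numpy as np

def mass_optimal_params(L_m, mu, kappa_tilde, m):
    """Eq. 13: eta* = 1/L_m, alpha* = 1/sqrt(kappa_m * kappa_tilde_m),
    delta* = alpha*/mu, with kappa_m = L_m/mu and
    kappa_tilde_m = kappa_tilde/m + (m - 1)/m."""
    kappa_m = L_m / mu
    kappa_tilde_m = kappa_tilde / m + (m - 1) / m
    eta = 1.0 / L_m
    alpha = 1.0 / np.sqrt(kappa_m * kappa_tilde_m)
    delta = alpha / mu
    return eta, alpha, delta
```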
The non-negative coefficient η₂ indicates that the weight u_t is "over-descended" in SGD+Nesterov and needs to be compensated along the gradient direction. It is important to note that the optimal step size for MaSS, as in Eq. 13, is exactly the same as the optimal one for SGD. With the hyper-parameter selection given in Eq. 14, we have the following theorem for optimal convergence:

Theorem 3 (Acceleration of MaSS). Under the same assumptions as in Theorem 2, if we set the hyper-parameters in MaSS as in Eq. 13, then after t iterations of MaSS with mini-batches of size m, E f(w_t) ≤ C exp(−t/√(κ_m κ̃_m)) for some constant C > 0 which depends on the initialization.

Remark 6. With the optimal hyper-parameters in Eq. 13, the asymptotic convergence rate of MaSS is O(exp(−t/√(κ_m κ̃_m))), which is faster than the rate O(exp(−t/κ_m)) of SGD, since κ_m ≥ κ̃_m. Remark 7 (MaSS reduces to the Nesterov's method for full batch). In the limit of full batch, m → ∞, we have κ_m → κ and κ̃_m → 1, and the optimal parameter selection in Eq. 14 reduces to the parameter setting of the Nesterov's method. It is interesting to observe that, in the full-batch (deterministic) scenario, the compensation term vanishes and η₁* and γ* are the same as those in Nesterov's method. Hence MaSS with the optimal hyper-parameter selection reduces to Nesterov's method in the limit of full batch. Moreover, the convergence rate in Theorem 3 reduces to O(exp(−t/√κ)), which is exactly the well-known convergence rate of Nesterov's method (15; 4).

Extension to the Convex Case. We first extend the definition of L₁ to convex functions, for some ε > 0. If the hyper-parameters of MaSS are set accordingly, then after t iterations the expected suboptimality is bounded by a constant times a rate that decays in t; the precise conditions and rate are given in the appendix. Based on our analysis, we discuss the effect of the selection of the mini-batch size m. We show that the domain of mini-batch sizes m can be partitioned into three intervals by two critical points (given in Appendix G). The three intervals/regimes are depicted in Figure 2, and the detailed analysis is in Appendix G. The optimal selection of hyper-parameters, and the resulting convergence rate, can be approximated within each regime. In the linear scaling regime, the hyper-parameter selections follow a Linear Scaling Rule (LSR): when the mini-batch size is multiplied by k, multiply all hyper-parameters (η, α, δ) by k. This parallels the linear scaling rule for SGD, which is an accepted practice for training neural networks. This three-regime partition is different from that of SGD, where only the linear scaling and saturation regimes are present. An empirical verification of the dependence of the convergence speed on m is shown in Figure 3; see the setup in Appendix G.

Synthetic Data. We empirically verify the non-acceleration of SGD+Nesterov and the fast convergence of MaSS on synthetic data. Specifically, we optimize the quadratic loss function on data generated by the component decoupled model described in Section 3. We compare the convergence behavior of SGD+Nesterov with SGD, as well as with our proposed method, MaSS, and several other methods: SGD+HB and ASGD. We select the best hyper-parameters from a dense grid search for SGD+Nesterov (step size and momentum parameter), SGD+HB (step size and momentum parameter) and SGD (step size). For MaSS, we do not tune the hyper-parameters but use the setting suggested by our theoretical analysis in Section 4; for ASGD, we use the setting provided by its authors. In summary: we use the optimal parameters for SGD+Nesterov, SGD+HB and SGD; the authors' recommended setting for ASGD; and Eq. 13 for MaSS. We observe that the fastest convergence of SGD+Nesterov is almost identical to that of SGD, indicating the non-acceleration of SGD+Nesterov.
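The synthetic comparison above can be reproduced in a few lines. The driver below reuses the hypothetical component_decoupled_data and mass_step sketches from earlier (mini-batch size 1, as in the experiments); γ = η₂ = 0 gives SGD, η₂ = 0 gives SGD+Nesterov, and a positive η₂ gives MaSS.

```python
import numpy as np

def run(x, y, eta1, eta2, gamma, steps=5000, seed=0):
    """Batch-size-1 optimization of the interpolated least-squares loss."""
    rng = np.random.default_rng(seed)
    w = u = np.zeros(x.shape[1])
    losses = []
    for _ in range(steps):
        i = rng.integers(len(x))
        grad_at = lambda v: (v @ x[i] - y[i]) * x[i]   # per-sample gradient
        w, u = mass_step(w, u, grad_at, eta1, eta2, gamma)
        losses.append(0.5 * np.mean((x @ w - y) ** 2))
    return losses

# e.g. run(x, y, eta1, 0.0, 0.0)  -> SGD
#      run(x, y, eta1, 0.0, 0.9)  -> SGD+Nesterov
#      run(x, y, eta1, eta2, g)   -> MaSS with the Eq. 13/14 setting
```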
We also observe that our proposed method, MaSS, clearly outperforms the others. In Appendix F.2, we provide additional experiments on more settings of the component decoupled data, as well as on Gaussian distributed data. We also show the divergence of SGD+Nesterov with the same step size as SGD and MaSS in Appendix F.2.

Real data: MNIST and CIFAR-10. We compare the optimization performance of SGD, SGD+Nesterov and MaSS on the following tasks: classification of MNIST with a fully-connected network (FCN), classification of CIFAR-10 with a convolutional neural network (CNN), and Gaussian kernel regression on MNIST. See the detailed description of the architectures in Appendix H.1. In all the tasks and for all the algorithms, we select the best hyper-parameter setting over a dense grid search, except that we fix the momentum parameter γ = 0.9 for both SGD+Nesterov and MaSS, as is typically used in practice. All algorithms are implemented with mini-batches of size 64 for neural network training.

Test Performance. We show that the solutions found by MaSS have good generalization performance. We evaluate the classification accuracy of MaSS, and compare it with SGD, SGD+Nesterov and Adam, on different modern neural networks: CNN and ResNet. See the description of the architectures in Appendix H.1. In the training processes, we follow the standard protocol of data augmentation and learning rate reduction, which is typically used to achieve state-of-the-art results with neural networks. In each task, we use the same initial learning rate for MaSS, SGD and SGD+Nesterov, and run the same number of epochs (150 epochs for the CNN and 300 epochs for ResNet-32). Detailed experimental settings are deferred to Appendix H.2. Table 6 compares the classification accuracy of these algorithms on the test set of CIFAR-10 (average of 3 runs that converge; some runs diverge; † indicates that Adam uses an initial step size of 0.001). We observe that MaSS produces the best test performance. We also note that increasing the initial learning rate may improve the performance of MaSS and SGD, but degrades that of SGD+Nesterov. Moreover, in our experiments, SGD+Nesterov with the large step size η = 0.3 diverges in 5 out of 8 runs on the CNN and in 2 out of 5 runs on ResNet-32 (under random initialization), while MaSS and SGD converge on every run.

Algorithm 1 (MaSS) takes as input the step size η₁, the secondary step size η₂, and the acceleration parameter γ ∈ [0, 1); it repeats the update rules of Eq. 1 until the stopping criterion is met, and outputs the weight w_t. Note that the proposed algorithm initializes the variables w₀ and u₀ with the same vector, which could be randomly generated. As discussed in Section 4, MaSS can be equivalently implemented using the update rules of Eq. 8; in this case, the variables u₀, v₀ and w₀ should be initialized with the same vector. There is a bijection between the hyper-parameters (η₁, η₂, γ) and (η, α, δ), which is given by Eq. 9.

B ADDITIONAL PRELIMINARIES. Definition 1 (Strong convexity). A differentiable function f: R^d → R is μ-strongly convex (μ > 0) if f(x) ≥ f(z) + ⟨∇f(z), x − z⟩ + (μ/2)‖x − z‖² for all x, z ∈ R^d. Definition 2 (Smoothness). A differentiable function f: R^d → R is L-smooth (L > 0) if f(x) ≤ f(z) + ⟨∇f(z), x − z⟩ + (L/2)‖x − z‖² for all x, z ∈ R^d. In the interpolation setting, one can write the square loss as f(w) = (1/2n) Σᵢ ⟨w − w*, xᵢ⟩². A key property of interpolation is that the variance of the stochastic gradient decreases to zero as the weight w approaches an optimal solution w*. Proposition 1 (Automatic Variance Reduction).
For the square loss function f in the interpolation setting, the stochastic gradient at an arbitrary point w can be written as the mini-batch Hessian applied to the error, H_m(w − w*). Moreover, the variance of the stochastic gradient is controlled by E[(H_m − H)²] applied to the error. Since E[(H_m − H)²] is independent of w, the above proposition unveils a linear dependence of the variance of the stochastic gradient on the squared norm of the error. This observation underlies the exponential convergence of SGD in certain convex settings (20; 13; 19; 14; 12). Consider the square loss function f(w) = (1/2n) Σᵢ (⟨w, xᵢ⟩ − yᵢ)², where the stochastic gradient is computed based on a randomly sampled batch of size m. Recall that the solution set W* := {w ∈ R^d | f(w) = 0} is an affine subspace of the parameter space, and that w* is the solution in W* that is closest to w. Hence, w − w* is perpendicular to W*, and consequently the (stochastic) gradient is also perpendicular to W*.

The key proof technique is to consider the asymptotic behavior of SGD+Nesterov on the decoupled data model as the condition number becomes large. Notations and proof setup. Recall that the square loss function based on the component decoupled data D, defined in Eq. 6, is in the interpolation regime; for SGD+Nesterov, we then have the recurrence relation in Eq. 26. It is important to note that each component of w_t evolves independently, due to the fact that H̃ is diagonal. With ε := w − w*, we define for each component j = 1, 2 the second-moment quantities Φ_t^[j] of the iterates, where ε^[j] is the j-th component of the vector ε. The recurrence relation in Eq. 26 can be rewritten as Φ_{t+1}^[j] = B^[j] Φ_t^[j] (Eq. 28), with transition matrices B^[j]. For ease of analysis, we define u := 1 − γ ∈ (0, 1] and t_j := ησ_j², j = 1, 2. Without loss of generality, we assume σ₁² = 1 in this section; in this case, t₁ = η and t₂ = η/κ, where κ is the condition number.

Proof idea. For the two-dimensional component decoupled data, the expected squared error decomposes over the two components. By the definition of Φ in Eq. 27, we see that the convergence rate is lower bounded by the convergence rates of the sequences {Φ_t^[j]}_t. By the relation in Eq. 28, the convergence rate of the sequence {Φ_t^[j]}_t is controlled by the magnitude of the top eigenvalue λ_max of B^[j], provided Φ_t^[j] has a non-zero component along the eigenvector of B^[j] with eigenvalue λ_max(B^[j]). Specifically, if |λ_max| > 1, Φ_t^[j] grows at a rate of |λ_max|^t, indicating the divergence of SGD+Nesterov; if |λ_max| < 1, then Φ_t^[j] converges at a rate of |λ_max|^t. In the following, we use the eigen-systems of the matrices B^[j], especially the top eigenvalues, to analyze the convergence behavior of SGD+Nesterov with any hyper-parameter setting. We show that, for any choice of hyper-parameters (i.e., step size and momentum parameter), at least one of the following statements must hold: • B^[1] has an eigenvalue larger than 1. • B^[2] has an eigenvalue of magnitude 1 − O(1/κ). This is formalized in the following two lemmas. Lemma 1. For any u ∈ (0, 1], if the step size satisfies the condition in Eq. 31, then B^[1] has an eigenvalue larger than 1. We analyze the dependence of the eigenvalues on κ, when κ is large, to obtain: Lemma 2. For any u ∈ (0, 1], if the step size satisfies the condition in Eq. 32, then B^[2] has an eigenvalue of magnitude 1 − O(1/κ). Finally, we show that Φ_t has a non-zero component along the eigenvector of B with eigenvalue λ_max; hence the convergence of SGD+Nesterov is controlled by the eigenvalue of B^[j] with the largest magnitude. Lemma 3. Assume SGD+Nesterov is initialized with w₀ such that both components ⟨w₀ − w*, e₁⟩ and ⟨w₀ − w*, e₂⟩ are non-zero. Then, for all t > 2, Φ_t^[j] has a non-zero component in the eigen-direction of B^[j] that corresponds to the eigenvalue with largest magnitude. Remark 8.
When w₀ is randomly initialized, the conditions ⟨w₀ − w*, e₁⟩ ≠ 0 and ⟨w₀ − w*, e₂⟩ ≠ 0 are satisfied with probability 1, since the complementary cases form a lower-dimensional manifold of measure 0. By combining Lemmas 1, 2 and 3, we have that SGD+Nesterov either diverges or converges at a rate of (1 − O(1/κ))^t, and hence we conclude the non-acceleration of SGD+Nesterov. In addition, Corollary 1 is a special case of Theorem 1 and is proven by combining Lemmas 1 and 3. At a high level, the proof ideas of Lemmas 1 and 3 are analogous to those used to prove the non-acceleration of the stochastic Heavy Ball method over SGD, but the proof idea of Lemma 2 is unique to this work.

Proof of Lemma 1. Let D_j(λ) denote the characteristic polynomial of B^[j], j = 1, 2. First note that lim_{λ→∞} D_j(λ) = +∞ > 0. In order to show that B^[1] has an eigenvalue larger than 1, it therefore suffices to verify that D₁(1) < 0. Replacing γ by 1 − u and ησ₁² by t₁, and solving the inequality D₁(1) < 0 for positive step size η, we obtain the condition in Eq. 31.

Proof of Lemma 2. We show that at least one of the eigenvalues of B^[2] is 1 − O(1/κ), under the condition in Eq. 32. First, note that t₂ = t₁/κ = η/κ, which is O(1/κ). We consider the following cases separately: 1) u is Θ(t₂^{1/2}); 2) u is o(t₂^{1/2}) and ω(t₂); 3) u is O(t₂); 4) u is Ω(t₂^{1/2}) and o(1); and 5) u is Θ(1), the last of which includes the case where the momentum parameter is a constant. Note that, for cases 1-4, u is o(1). In such cases, the step size condition Eq. 32 forces η to be o(1). It is interesting to note that η must be o(1) for the iteration not to diverge when u is o(1). This is very different from SGD, where a constant step size results in convergence.

Case 1: u is Θ(t₂^{1/2}). In this case, the terms u⁶, u⁴t₂, u²t₂² and t₂³ are of the same order. Write t₂ = cu² asymptotically for some constant c. If 4t₂ ≥ u², i.e., 4c − 1 ≥ 0, or if 4t₂ ≤ u², i.e., 4c − 1 ≤ 0, then in either case the first-order term of the eigenvalue expansion is of order u. Recalling that t₂ = η/κ and using the step size condition, this yields an eigenvalue of magnitude 1 − O(1/κ). Case 2: u is o(t₂^{1/2}) and ω(t₂). In this case, the expansion together with the step size condition implies that t₂^{1/2} = o(1/κ) and u = o(1/κ). Therefore, all the eigenvalues λᵢ are of order 1 − O(1/κ). Case 3: u is O(t₂). This case is forbidden by the assumption of this lemma, since the step size condition would then force κ to be bounded, which is contradictory to u being O(t₂). Case 4: u is Ω(t₂^{1/2}) and o(1). In this case, we first consider the terms independent of t₂, i.e., the constant term and the u-only terms; these can be obtained by setting t₂ = 0, in which case the eigenvalues simplify. Note that the u-only terms cancel in λ₂, so the first-order term after the constant must involve t₂ (it could be t₂/u², t₂/u, etc.). In the following we analyze the t₂ terms. Since u is Ω(t₂^{1/2}), t₂ is of lower order than u², and t₂/u² is o(1). This allows us to perform a Taylor expansion in which f(u) and g(u) are u-only terms which, by the analysis above (Eq. 37), contribute nothing to λ₂. Hence, we use the first terms of T₁ and T₂ above to analyze the first-order term of λ₂. Plugging these terms into the expression for λ₂ and keeping the lowest order in t₂, we find a zero coefficient for the lowest-order t₂-term. Hence, λ₂ can be written as 1 minus a higher-order t₂ term with coefficient c. On the other hand, by Eq. 35 and t₂ = η/κ, we can write Eq. 38 as λ₂ = 1 − O(1/κ). Case 5: u is Θ(1). This is the case where the momentum parameter is κ-independent. Using the same argument as in Case 4, the u-only terms vanish. Then, directly taking a Taylor expansion with respect to t₂ yields the same conclusion: λ₂ = 1 − O(1/κ).

Proof of Lemma 3. This proof follows the idea of the corresponding proof for the stochastic Heavy Ball method.
The idea is to examine the subspace spanned by Φ_t^[j], t = 0, 1, 2, · · ·, for j = 1, 2, and to prove that the eigenvector of B^[j] corresponding to the top eigenvalue (i.e., the eigenvalue with largest magnitude) is not orthogonal to this spanned subspace. This in turn implies that there exists a non-zero component of Φ_t^[j] in the eigen-direction of B^[j] corresponding to the top eigenvalue, and this component decays or grows at a rate of λ_max^t(B^[j]). If some Φ_t^[j] with t ≤ 3 has a non-zero component in the eigen-direction of B^[j] with top eigenvalue, then all subsequent Φ_t^[j] also have a non-zero component in the same direction. Thus, it suffices to show that at least one of Φ₀^[j], Φ₁^[j], Φ₂^[j], Φ₃^[j] has a non-zero component in the eigen-direction with top eigenvalue.

Since H̃ is diagonal for this two-dimensional decoupled data, w^[1] and w^[2] evolve independently, and we can analyze each component separately. In addition, it can be seen that each of the initial values w₀^[j] − (w*)^[j], which are non-zero by the assumption of this lemma, simply acts as a scale factor during training. Hence, without loss of generality, we can assume w₀^[j] − (w*)^[j] = 1 for each j, and expand the iterates according to the recurrence relation of SGD+Nesterov in Eq. 26. Denote the vectorized form of Φ_t^[j] by a 4 × 1 column vector. We stack the vectorized forms of Φ₀^[j], . . ., Φ₃^[j] to make a 4 × 4 matrix, denoted M^[j]. Note that Φ_t^[j], t = 0, 1, 2, 3, are symmetric tensors, which implies that M^[j] contains two identical rows; specifically, the second and third rows of M^[j] are identical. Therefore, the vector v = (0, 1, −1, 0)^T satisfies v^T M^[j] = 0, i.e., v is an eigenvector of (M^[j])^T with eigenvalue 0. In fact, v is also an eigenvector of B^[j], with eigenvalue γ(1 − ησ_j²). Hence, v is not the eigenvector along the top eigenvalue, and is therefore orthogonal to the eigen-space with top eigenvalue. In order to prove that at least one of Φ_t^[j], t = 0, 1, 2, 3, has a non-zero component along the eigen-direction of the top eigenvalue, it suffices to verify that M^[j] has rank 3, i.e., spans a three-dimensional space. Equivalently, we consider the matrix M̃^[j] obtained by removing the duplicated row, where we omit the superscript [j] for simplicity of the expressions. If the determinant of M̃^[j] is not zero, then it is full rank, and hence M^[j] spans a three-dimensional space. Plugging in the expressions in Eq. 41, we obtain det(M̃^[j]) as a function of u and t_j = ησ_j². We note that, for all u ∈ (0, 1] and positive t_j, none of its factors vanish. This means that, for all such u and positive t_j, the determinant det(M̃^[j]) can never be zero. Therefore, for each j = 1, 2, M̃^[j] is full rank, and M^[j] spans a three-dimensional space, which includes the eigenvector with the top eigenvalue of B^[j]. Hence, at least one of Φ_t^[j], t ∈ {0, 1, 2, 3}, has a non-zero component in the eigen-direction with top eigenvalue, and by the argument above all later Φ_t^[j] also have a non-zero component in the eigen-direction with the top eigenvalue of B^[j].

Proof of Theorem 1. Lemmas 1 and 2 show that, for any hyper-parameter setting (η, γ) with η > 0 and γ ∈ [0, 1), either the top eigenvalue of B^[1] is larger than 1 or the top eigenvalue of B^[2] is 1 − O(1/κ). Hence, |λ_max| is either greater than 1 or is 1 − O(1/κ). Lemma 3 shows that Φ_t has a non-zero component along the eigenvector of B with eigenvalue λ_max(B). By Eq. 28 and Lemmas 1 and 2, the sequence {Φ_t}_t either diverges or converges at a rate of (1 − O(1/κ))^t. By the definition of Φ in Eq. 27, we then have that the error sequence {ε_t}_t either diverges or converges at a rate of (1 − O(1/κ))^t. Note that, for the two-dimensional component decoupled data, the expected squared error decomposes over the two components. Therefore, the convergence rate of SGD+Nesterov is lower bounded by (1 − O(1/κ))^t.
Note that the convergence rate of SGD on this data is (1 − O(1/κ))^t; hence SGD+Nesterov does not accelerate over SGD on the two-dimensional component decoupled dataset.

Proof of Corollary 1. When η = 1/L₁ and γ ∈ [0.6, 1], the condition in Eq. 31 is satisfied. By Lemma 1, the top eigenvalue λ_max of B^[1] is larger than 1. By Lemma 3, Φ has a non-zero component along the eigenvector with this top eigenvalue λ_max. Hence, |⟨w_t − w*, e₁⟩| grows exponentially in t, and SGD+Nesterov diverges.

We first give a lemma that is useful for dealing with the mini-batch scenario: Lemma 4. If the square loss f is in the interpolation setting, i.e., there exists w* such that f(w*) = 0, then the mini-batch bounds used below hold. Proof. Projecting onto Eq. 53 and adding it to Eq. 52 yields Eq. 54. If the hyper-parameters are selected as in the conditions of the theorem, then the last two terms in Eq. 54 are non-positive. Hence F_{t+1} ≤ (1 − α)F_t, which implies F_t ≤ (1 − α)^t F₀. Since f(w_t) ≤ (L/2)·‖w_t − w*‖² (by smoothness), we obtain the final result with C being a constant.

Fix an arbitrary w* ∈ R² and let z be randomly drawn from the zero-mean Gaussian distribution with variance E[z²] = 2, i.e., z ∼ N(0, 2). The data points (x, y) ∈ D are constructed as follows: x = σ₁ z e₁ with probability 1/2 and x = σ₂ z e₂ with probability 1/2, and y = ⟨w*, x⟩, where e₁, e₂ ∈ R² are the canonical basis vectors and σ₁ > σ₂ > 0. Note that the corresponding square loss function on D is in the interpolation regime, since f(w*) = 0. The Hessian and stochastic Hessian matrices turn out to be diagonal. Since H̃ is diagonal, stochastic-gradient-based algorithms applied to this data evolve independently in each coordinate, which allows a simplified per-direction analysis of the algorithms. Here we list some results useful for our analysis, such as the fourth moment of a Gaussian variable. Gaussian Data. Suppose the data feature vectors {xᵢ} are zero-mean Gaussian distributed, and yᵢ = ⟨w*, xᵢ⟩ for all i, where w* is fixed but unknown. Then, using the fact that for zero-mean Gaussian random variables z₁, z₂, z₃ and z₄ we have E[z₁z₂z₃z₄] = E[z₁z₂]E[z₃z₄] + E[z₁z₃]E[z₂z₄] + E[z₁z₄]E[z₂z₃], the relevant fourth-moment quantities can be computed in closed form.

In this subsection, we show additional empirical verification of the fast convergence of MaSS, as well as of the non-acceleration of SGD+Nesterov, on synthetic data. In addition, we show the divergence of SGD+Nesterov when using the same step size as SGD and MaSS, as indicated by Corollary 1. We consider two families of synthetic datasets: • Component decoupled (as defined in Section 3): fix an arbitrary w* ∈ R² with all components non-zero; xᵢ lies along e₁ or e₂ with probability 0.5 each, with a Gaussian magnitude as above, and yᵢ = ⟨w*, xᵢ⟩ for all i. • 3-d Gaussian: fix an arbitrary w* ∈ R³ with all components non-zero; the xᵢ are independently drawn from N(0, diag(σ₁², σ₂², σ₃²)), and yᵢ = ⟨w*, xᵢ⟩. (Figure captions: step size η* = 1/L₁ = 1/6 and momentum parameter γ ∈ {0.9, 0.99}; experiments performed on either 3-d Gaussian or component decoupled data with fixed σ₁ and σ₂.) For each setting of (σ₁, σ₂), we randomly select w* and generate 2000 samples for the dataset. Batch sizes for all algorithms are set to 1. We report the performances of SGD, SGD+Nesterov and SGD+HB using their best hyper-parameter settings, selected via dense grid search. On the other hand, we do not tune the hyper-parameters of MaSS, but use the setting suggested by our theoretical analysis, Eq. 14, specialized to the component decoupled and 3-d Gaussian data respectively. For ASGD, we use the setting suggested by its authors. Figure 6 (in addition to Fig. 4) and Figure 7 show the curves of the compared algorithms under various data settings. We observe that: 1) SGD+Nesterov with its best hyper-parameters is almost identical to the optimal SGD; 2) MaSS, with the suggested hyper-parameter selections, converges faster than all of the other algorithms, especially SGD.
These observations are consistent with our theoretical results: the non-acceleration of SGD+Nesterov (Theorem 1) and the accelerated convergence of MaSS (Theorem 3). Recall that MaSS differs from SGD+Nesterov by only a compensation term; this experiment illustrates the importance of that term. Note that the vertical axis is log-scaled, so the linear decrease of the log losses in the plots implies an exponential loss decrease, and the slopes correspond to the coefficients in the exponents. Divergence of SGD+Nesterov with large step size. As discussed in Corollary 1, SGD+Nesterov diverges with step size η* = 1/L₁ (when γ ∈ [0.6, 1]), which is the optimal choice of step size for both SGD and MaSS. We run SGD+Nesterov with step size η* = 1/L₁ to optimize the square loss function on the component decoupled data mentioned above. Figure 8 shows the divergence of SGD+Nesterov for two common choices of the momentum parameter γ: 0.9 and 0.99.

A related method, under an assumption parameterized by ρ on the loss function, is proven by its authors to have a convergence rate of (1 − 1/(ρ²κ))^t (the method is called SGD with Nesterov acceleration in their paper), where t is the iteration number. In the following, we show that, on simple (zero-mean) Gaussian distributed data, this rate is slower than that of SGD, which has the rate (1 − 1/κ)^t. On the other hand, MaSS achieves the accelerated rate (1 − 1/√((2 + d)κ))^t. We empirically verify the three-regime partition observed in Section 5 using zero-mean Gaussian data. In this evaluation, we set the covariance matrix of the (zero-mean) Gaussian to be a fixed diagonal matrix. In the experiments, we run MaSS with a variety of mini-batch sizes m, ranging from 1 to 160, on this Gaussian dataset. For each training process, we compute the convergence speed s(m), defined as the inverse of the number of iterations needed to achieve a training error of ε.

Fully-connected Network. The fully-connected neural network has 3 hidden layers, with 100 ReLU-activated neurons in each layer. After each hidden layer there is a dropout layer with keep probability 0.5. This network takes 784-dimensional vectors as input and has 10 softmax-activated output neurons. It has ≈99k trainable parameters in total. Convolutional Neural Network (CNN). The CNN we consider has three convolutional layers with kernel size 5 × 5 and without padding. The first two convolutional layers have 64 channels each, while the last one has 128 channels. Each convolutional layer is followed by a 2 × 2 max pooling layer with stride 2. On top of the last max pooling layer, there is a fully-connected ReLU-activated layer of size 128, followed by the output layer of size 10 with softmax non-linearity. A dropout layer with keep probability 0.5 is applied after the fully-connected layer. The CNN has ≈576k trainable parameters in total. Residual Network (ResNet). We train a ResNet with 32 convolutional layers. The ResNet-32 has a sequence of 15 residual blocks: the first 5 blocks have an output of shape 32 × 32 × 16, the following 5 blocks have an output of shape 16 × 16 × 32, and the last 5 blocks have an output of shape 8 × 8 × 64. On top of these blocks, there is a 2 × 2 average pooling layer with stride 2, followed by an output layer of size 10 with softmax non-linearity. The ResNet-32 has ≈467k trainable parameters in total. We use the fully-connected network to classify the MNIST dataset, and use the CNN and ResNet to classify the CIFAR-10 dataset.
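For concreteness, below is a PyTorch sketch of the CIFAR-10 CNN described above. The layer sizes follow the text; note that with 32 × 32 inputs the third convolution already produces a 1 × 1 map, so the pooling after it is omitted here, and the exact variant used in the paper (and hence its parameter count) may differ from this sketch.

```python
import torch.nn as nn

class SketchCNN(nn.Module):
    """CIFAR-10 CNN per the description: 5x5 convs without padding,
    64/64/128 channels, 2x2 max pooling, FC-128 with dropout, 10 outputs."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 5), nn.MaxPool2d(2, 2),    # 32 -> 28 -> 14
            nn.Conv2d(64, 64, 5), nn.MaxPool2d(2, 2),   # 14 -> 10 -> 5
            nn.Conv2d(64, 128, 5),                      # 5 -> 1 (pool omitted)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128, 128), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(128, 10),        # softmax applied in loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```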
This work proves the non-acceleration of Nesterov SGD with any hyper-parameters, and proposes a new algorithm which provably accelerates SGD in the over-parameterized setting.
1,105
scitldr
We propose the Fixed Grouping Layer (FGL); a novel feedforward layer designed to incorporate the inductive bias of structured smoothness into a deep learning model. FGL achieves this goal by connecting nodes across layers based on spatial similarity. The use of structured smoothness, as implemented by FGL, is motivated by applications to structured spatial data, which is, in turn, motivated by domain knowledge. The proposed model architecture outperforms conventional neural network architectures across a variety of simulated and real datasets with structured smoothness.

The effectiveness of predictive models often depends on the choice of inductive bias, and the extent to which this inductive bias captures real-world structure. For instance, one example of such bias encoding leading to improved performance is convolution. In principle, convolutional weights could be learned directly from data. However, in practice, imposing this structure leads to improved performance when compared to fully connected models, and as a result, convolutional neural networks (CNNs) have enjoyed wide use for computer vision tasks. Similarly, recurrent neural networks such as LSTMs are effective for text, and certain graphical models are ideal for sentence segmentation and labeling. Our work follows this philosophy. Specifically, we propose a feedforward layer for deep neural networks that is suitable for neuroimaging and potentially useful for other data where variables can be grouped due to underlying structure. Data with multiple input variables often exhibit some structure. For example, the El Nino dataset consists of measurements by weather buoys in the ocean, and one expects that nearby buoys can be grouped together. Similarly, socio-economic data can often be grouped together by geographic proximity. Financial market data of individual stocks can be grouped together based on the industrial sector to which a company belongs. Along similar lines, brain parcellations are a well-studied paradigm for capturing the structure of brain activity, often via statistical parcellation based on ward clustering. The result of ward clustering is a tree where leaf nodes represent voxels of the brain and interior nodes represent groupings of voxels into spatial clusters. Figure 1 visualizes the output of ward clustering at various granularities when applied to the Human Connectome Project resting state brain data.

Contributions: Our primary technical contribution is the Fixed Grouping Layer (FGL). FGL is designed to extract features within each group, and additionally guarantees that each output vector is only affected by the input vectors related to it by the specified grouping. We demonstrate the benefit of using FGL in simulated experiments and on real neuroimaging data. We compare FGL against fully connected networks, convolutional neural networks, CoordConv, and a closely related previously proposed method. We extensively evaluate the performance of FGL on simulated and real brain imaging data, showing improved performance. Functional Magnetic Resonance Imaging (fMRI) is a popular brain imaging technique which measures a physiological correlate of neuronal activity. Brain imaging scans are generally of two kinds: resting state and task data. Resting state data (rfMRI) is collected while the subject is at rest, i.e., while the subject is not actively engaged in a task. Task data (tfMRI) is collected while the subject is engaged in a predefined task, for example a motor task such as moving their fingers.
fMRI data can be represented as 3-dimensional images, and have rich structure that has been studied extensively, including in the ML literature. Importantly, coarse correspondences have been discovered between brain regions and specific functions or behaviors. We particularly focus on brain decoding -- a standard task in fMRI brain data analysis where the brain image is used to predict the associated task or stimulus. Broadly, there are two types of brain decoding models: end-to-end models, and models which perform dimensionality reduction followed by a low-dimensional prediction. On one hand, dimension reduction directly captures the notion of grouping variables together. On the other hand, end-to-end models rarely employ the brain's spatial structure. This observation motivates our work. In recent years, decoding from fMRI studies has been attempted using a variety of methods, including factored logistic regression, convolutional neural networks, and factored models applied after dimensionality reduction. Inspired by similar motivations, prior work constructs a regularizer using feature groupings obtained from a fast clustering method, and demonstrates that such a regularizer outperforms dropout and L2 regularization. However, that work does not consider employing this structure in a deep model, opting instead for a wide, shallow approach. Compared to it, our results illustrate the benefits of depth combined with spatial sparsity for brain image decoding.

Next, we formally define our idea of groups. Given a set of input variables X = {x_i : 0 ≤ i < n_in, i ∈ Z}, a grouping of variables, denoted by G, is a subset of the power set of X such that each x_i is in at least one set in G. That is, G ⊂ 2^X such that ∀x_i ∈ X, ∃g ∈ G with x_i ∈ g. Each set in G is a group of variables. For example, in the case of a colored image, each pixel can be considered a variable with an associated feature vector of length 3, i.e., each x_i represents a pixel and x_i ∈ R³. A spatially smooth grouping of these variables corresponds to a segmentation of the image. Optionally, the groups can be mutually exclusive.

An FGL layer takes as input n_in vectors of length c_in each, where the n_in vectors are grouped into n_out groups. Further, let c_out be the length of the vector associated with each group. Note that these groups do not have to be mutually exclusive, but mutually exclusive groups offer benefits that we describe in the supplementary material. The model architecture is allowed to use multiple channels (analogous to the number of filters in standard convolutional networks); c_in and c_out are the numbers of input and output channels. Mathematically, the Fixed Grouping Layer is given by O = A((xv) ⊙ u) + b, where O ∈ R^{n_out × c_out} is the matrix representing the output, with each row representing one group; x ∈ R^{n_in × c_in} is a matrix representing the input, with one row for each input vector; and A is a binary matrix that represents the grouping: A_{j,i} = 1 if and only if x_i is in group j. ⊙ represents the Hadamard product (elementwise multiplication). u, v, b are parameters of the model: v is used for a linear transformation from R^{c_in} to R^{c_out}, i.e., v ∈ R^{c_in × c_out}; u is a matrix of size n_in × c_out; and b, the bias, is a matrix of size n_out × c_out. We denote the i-th input vector, i.e., the i-th row of x, by x_i. Observe that FGL is a fully connected layer when there is only one group which contains all variables.
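The formula above translates directly into a small PyTorch module. The sketch below is our own minimal implementation of O = A((xv) ⊙ u) + b; the class name and the simple initialization are ours (the paper's initialization scheme is discussed in its supplement).

```python
import torch
import torch.nn as nn

class FGL(nn.Module):
    """Fixed Grouping Layer: O = A((x v) ⊙ u) + b.

    A is a fixed binary (n_out, n_in) grouping matrix; u, v, b are learned.
    """
    def __init__(self, A, c_in, c_out):
        super().__init__()
        self.register_buffer("A", A.float())          # fixed grouping matrix
        n_out, n_in = A.shape
        self.v = nn.Parameter(torch.randn(c_in, c_out) * c_in ** -0.5)
        self.u = nn.Parameter(torch.ones(n_in, c_out))
        self.b = nn.Parameter(torch.zeros(n_out, c_out))

    def forward(self, x):                             # x: (batch, n_in, c_in)
        h = (x @ self.v) * self.u                     # (batch, n_in, c_out)
        return torch.einsum("oi,bic->boc", self.A, h) + self.b
```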
We construct a deep neural network for classification using repeated FGL layers (and activation functions), followed by either a fully connected network or an FGL layer that groups all inputs into a single group. This is inspired by traditional CNN-based classification models. We provide a visualization of a simplified model in Figure S2. Regularization: We use weight normalization, which is a reparameterization of the weights of a neural network that decouples the norm and the direction of the weights. That is, for a single dimension of a fully connected layer, weight normalization reparameterizes w as w = g(θ/||θ||), where θ is a vector of the same length as w and g is a scalar. The network then learns g and θ instead of w. For FGL we apply weight normalization to both u and v: it is applied by treating u as c_out different vectors and v as the weights of a fully connected layer. We use PyTorch to implement models, and nilearn to preprocess and visualize fMRI images. Training was done using Adam on 4 K80 GPUs. Code is available at https://www.github.com/anon/repo and a minimal version is provided in the supplementary material.

We consider brain decoding as a classification task, and use two common types of models as baselines: fully connected networks and convolution-based models such as standard convolutional neural networks (CNNs) and their CoordConv variant. To the best of our knowledge, our baselines include state-of-the-art approaches on the considered datasets. Multinomial logistic regression is a standard model, given by ŷ = softmax(Wx + b), with weights W and bias b ∈ R^k, where k is the number of possible labels. Here, ŷ is the vector of predicted probabilities for each class or label. Clearly, multinomial logistic regression by itself uses no spatial information. Feedforward Neural Networks (FNN): FNNs are ideal for tasks where the input has neither a grid-like structure nor a sequential structure. The architecture is an alternating sequence of linear transformations and activation functions. Convolutional Neural Networks (CNNs): CNNs are a popular tool in deep learning. Many problems where the inputs have a spatial representation lend themselves to convolution. CNNs are popular not only for their flexibility but also because of the assumptions they make about the nature of the input -- one of them being that dependencies between pixels are local and spatially invariant. These assumptions are usually appropriate for natural images. However, in the case of brain fMRI, since features are also dependent on position, i.e., features are not position-invariant, CNNs might not work as well. The authors of CoordConv demonstrate that CNNs are sometimes unable to capture location-dependent spatial representations, e.g., when transforming a pair of coordinates to a one-hot representation. One reason for this failure could be that the coordinate transformation problem directly conflicts with the underlying assumptions of CNNs. They propose the CoordConv layer as a solution. CoordConv is essentially a convolutional layer, except that it includes coordinates as additional input channels. Thus, CoordConv enables CNNs where local dependencies change across the image.

First, we discuss grouping via Voronoi diagrams. Given a set of points, called sites, a Voronoi diagram is the division of a plane into regions based on distance to the sites; usually, positions whose closest site is the same are grouped together.
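The nearest-site assignment can be computed in a few lines of numpy. This helper (name and signature are ours) is reused in the dataset sketch further below:

```python
import numpy as np

def voronoi_grouping(sites, s):
    """Assign each pixel of an s x s grid to its nearest site.

    sites: (m, 2) array of site coordinates; returns an (s*s,) array of
    group ids -- the Voronoi regions used in the simulated dataset.
    """
    ys, xs = np.mgrid[0:s, 0:s]
    pixels = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d = np.linalg.norm(pixels[:, None, :] - sites[None, :, :], axis=-1)
    return d.argmin(axis=1)    # nearest-site index per pixel
```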
Consider the grouping induced by a set of m sites P = {p_i ∈ [0, s)² : 0 ≤ i < m}, for some s indicating the size of the plane: each position is assigned to the group of its nearest site. We use Voronoi diagrams because they create regions which are spatially connected. To increase the complexity of the task, we use groupings which are unions of arbitrarily chosen Voronoi regions, resulting in groups comprised of multiple spatially connected regions which may not be connected to each other. We provide an example in Figure 2b. Consider input data sampled from a Gaussian prior: x ∼ N(0, S), where 0 is a zero vector and S is an arbitrary covariance matrix of appropriate size. We let x ∈ R^{s²} for an integer s -- the idea being that x is a flattened version of a grayscale image of size s × s. Next, suppose that datapoints are labelled based on a linear function. That is, z | x ∼ N(Fx, Σ) for a fixed covariance matrix Σ and a matrix F of size k × s², where k is the number of labels. The label y is assigned based on z as the most likely class. We briefly analyze this simple generative model in the context of FGL. Using conjugate priors, it is straightforward to derive the corresponding posterior. To explore its implications, consider an F that is sparse, such that the non-zero positions in each row of F correspond to a segmentation of the input image (a grouping of pixels) -- for example, circular patches on the image or, in our case, Voronoi diagrams. Additionally, if Σ is an identity matrix, then each dimension of z corresponds to the sum of values of a group of pixels.

We create a dataset which samples x from the Gaussian prior with S being an identity matrix. We use s = 128 so that each x can be interpreted as a square image. We create F by first creating a Voronoi diagram of 512 randomly selected points, and then merging these regions into k = 32 groups. We sample z from the corresponding conditional distribution. We fix a random W and then assign the label y with the highest likelihood to each datapoint x. We sample 50000 points to create the simulated dataset. A visualization of the process is provided in Figure 2a. To ensure that the dataset wasn't too noisy, we plot a histogram of the probability of the assigned label in Figure 2c.

[Figure 3: (a) Test accuracy (with error bars) on a held-out 20% of the simulated dataset vs. the fraction of data used for training. The graph indicates that FGL has better empirical sample complexity; the small magnitude of the error in the performance estimates indicates that models are well trained and the difference is due to the models themselves. (b) Minimum (across classes) F1 score on the held-out test set vs. the fraction of data used for training. The difference in performance is not due to performance on a single class/region, but holds across all labels. (c) Histogram of the ground-truth probability of labels for points where FGL is correct but the CNN misclassifies. This demonstrates that the misclassification by the CNN occurs not only on noisy datapoints but also on datapoints where the label should be clear.]

The histogram shows that only a small number of datapoints are noisy -- in most cases, the assigned label has a probability of at least 0.5. To demonstrate the benefit of using the Voronoi diagram during classification, we train 4 models: logistic regression (LR), a convolutional neural network (Conv), a CoordConv variant (CC) of the same CNN, and a model using our proposed layer -- FGL followed by a fully connected network. Our FGL model is provided with the Voronoi regions. The number of parameters in each model is roughly the same.
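The full generative pipeline can be sketched as follows, building on the voronoi_grouping helper above. The exact role of the random W is not fully specified in the text, so the softmax labeling step below is our assumption of one plausible reading of "assign the label y with the highest likelihood":

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def simulated_dataset(n, s=128, m=512, k=32, rng=None):
    """Gaussian images -> group sums z -> most-likely labels y (a sketch)."""
    rng = rng or np.random.default_rng(0)
    sites = rng.uniform(0, s, size=(m, 2))
    region = voronoi_grouping(sites, s)          # pixel -> one of m regions
    group = rng.integers(0, k, size=m)[region]   # merge m regions into k groups
    F = np.zeros((k, s * s))
    F[group, np.arange(s * s)] = 1.0             # row j sums pixels of group j
    x = rng.standard_normal((n, s * s))          # x ~ N(0, I)
    z = x @ F.T + rng.standard_normal((n, k))    # z | x ~ N(Fx, I)
    W = rng.standard_normal((k, k))              # assumed role of the random W
    y = softmax(z @ W.T).argmax(axis=1)          # label with highest likelihood
    return x, y
```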
Since the dataset uses labels that are linear in terms of x, we use no non-linear activations in any of our models. We found that using max pooling in the CNN and CoordConv hurt performance. We do not report results with an FNN because it performs similarly to LR. We create a test set using 20% of the simulated dataset; the remaining points are used for training. For each model, we train using varying quantities of the available data, and test on the held-out set. The results are aggregated over 10 runs, with a randomly sampled test set for each run. A plot of the test accuracy vs. the fraction of data used for training is given in Figure 3. We find that the standard deviation of the accuracies of these models is small, indicating that the failures are not due to poor initialization or poor training but rather to a difference in models. This experiment was designed to demonstrate a failure of convolution-based models and also of fully connected methods. Although this satisfies our intuition that using spatial structure should drastically improve performance, we investigate the datapoints at which the CNN failed but FGL did not. The first thing to check is the probability of the assigned labels for these points; a histogram of the same for a random subset of the test set is provided in Figure 3c. The next sanity check is to ensure that the drop in performance isn't just for one set of regions or one class. To that end, we plot the lowest F1 score (across classes) in Figure 3b. We see the same trend: FGL performs better than CNNs, CoordConv and logistic regression. These plots indicate the validity of the gain in performance -- it is due to neither noisy labels nor failure on any one label. Hence, using a grouping of variables provides a significant benefit.

We evaluate our models on 5 datasets used in prior work: Archi, Brainomics, Cam-CAN, LA5c, and an aggregation of fMRI datasets. Our only required preprocessing for the task contrast data was to upsample Cam-CAN and Brainomics to the same resolution as HCP. The assessment of fMRI decoding is an important topic in its own right due to unique spatial characteristics and typical sample sizes. Prior work shows that leave-one-out strategies for cross-validation can be unstable, and suggests that using reasonable defaults is a good strategy. Additionally, it is well known that having common subjects between the train and test datasets can lead to misleading results, because such a test set does not measure how well a model can generalize from one subject to another. Hence, we evaluate models via out-of-sample accuracy, i.e., we hold out some subjects (30%) for the testing dataset in each run. Further, we train all models with reasonable defaults that we found were not critical to model performance. Since prior work showed that ward clustering provides good parcellations of the brain, we perform ward clustering on a fraction of the HCP resting state data (total size of 4 TB). We downsample the resting state time series data to about 1% of the original frequency, and use the brain activations at each position as the feature vectors for clustering. The downsampling is needed due to hardware constraints. We did not use the task datasets for clustering, since we would have had to hold out more data for clustering, exacerbating data scarcity. Additionally, resting state data is more easily acquired, and there are strong correlations between tfMRI and rfMRI. Thus, using rfMRI should provide a good, if not better, parcellation of the brain.
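The parcellation step can be approximated with standard tooling. The sketch below uses scikit-learn's ward clustering with a spatial connectivity constraint; the actual pipeline in the paper uses nilearn, and mask handling plus the 1% temporal downsampling are omitted here. Cutting the resulting ward tree (ward.children_) at 1024, 256 and 32 clusters yields the nested groupings described next.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

def ward_parcellation(timeseries, grid_shape, n_regions):
    """Ward-cluster voxels using their time series as features.

    timeseries: (n_voxels, n_timepoints), with n_voxels = prod(grid_shape);
    the connectivity graph restricts merges to spatially adjacent voxels,
    so clusters stay spatially connected.
    """
    connectivity = grid_to_graph(*grid_shape)          # 3-D voxel adjacency
    ward = AgglomerativeClustering(
        n_clusters=n_regions, linkage="ward", connectivity=connectivity
    )
    return ward.fit(timeseries)                        # .labels_, .children_
```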
To make a deep network using FGL, we require a hierarchical clustering and not just a parcellation. Hence, instead of using the segmentation produced by the parcellation algorithm provided by nilearn, we use the computed ward tree. We then slice into the ward clustering to produce parcellations with 32, 256 and 1024 regions; these are visualized in Figure 1. Clearly, these groups are spatially connected. We need groupings of voxels into 1024 groups, then a grouping of these 1024 groups into 256 groups, and finally a grouping of the 256 groups into 32 groups. Since ward clustering is a hierarchical clustering scheme and outputs a tree structure, we extract these groupings by making appropriate cuts into the tree.

Fully Connected Models: We experimented with fully connected neural networks (FNN) and multinomial logistic regression (LR), with and without dimension reduction using our parcellation. We found that using dimension reduction reduced performance and hence do not report it. For FNNs, we tried 2- and 3-layer versions with intermediate sizes chosen from 64, 128, 256 and 512; the model with intermediate layer sizes of 512 and 128 worked best. The aforementioned models take a masked fMRI image as input, and we used the MNI152 mask provided by nilearn. Empirically, we find that, for brain decoding, using a linear activation performs better than using non-linear activations. For this reason, we use a linear activation in our models. Unfortunately, we are unclear about why this occurs in this domain. We also evaluate Feature Grouping, a previously suggested approach which is likewise designed to exploit structured sparsity but does so using a wide model (implemented using multiple randomized clusterings), unlike our deep FGL approach. We used code provided by the authors. We experimented with a variety of architectures and found no improvement from using residual connections or batch normalization. We also report results using CoordConv. We found that using non-linear activations hurt the model's performance, similar to our finding with FNNs; max pooling also reduced performance. The architecture is 5 3-D convolution layers of stride 2 and kernel size 4. The input volumes have size 91 × 109 × 91, and convolution reduces the volume to 2 × 3 × 2 with 128 channels. We flatten this volume and pass it through a fully connected network to get the score for each label. The architecture for CoordConv is identical to the CNN, since CoordConv only concatenates a few input channels to the input image. We use Conv to refer to the convolutional neural network and CC to refer to the CoordConv variant.

We use 3 layers of FGL, each of which uses the parcellation described earlier. The input images have 212455 voxels after masking. We treat every voxel as a variable with a single feature. These voxels are then reduced to 1024 groups with feature vectors of length 8 each. Next, these groups are reduced to 256 variables with 64 features, and finally to 32 variables with 128 features. The final prediction is made by flattening the output of the last FGL layer and passing it through a fully connected layer. The resulting number of parameters is roughly 2 million, which is also roughly the number used for the CNN and CC. While this is a lot of parameters, we found that reducing the number of parameters by changing the number of features for each intermediate variable decreases performance for both convolution and FGL. We split each dataset multiple times into a train and test set.
The split is done such that no subject in the test set appears in the training set. In each split, 30% of subjects are used for testing, and all or a part of the remaining subjects are used for training. Convolution-based models were trained for 50 epochs, feedforward neural networks for 30, and FGL for 20. These hyper-parameters were selected by monitoring for overfitting on the training set (using a further validation split). We perform experiments to study: the benefit of FGL given a lot of data, the benefits of FGL at small sample sizes, the effect of depth, and the effect of intermediate vector length.

Large sample setting: The first experiment uses all of the training data (70% of the total data). We report out-of-sample accuracy in Table 1. We also report the p-values of one-sided Wilcoxon rank-sum tests between the performance of each model and 3-layer FGL on the HCP dataset. Small sample setting: The second set of experiments varies the fraction of data used for training on the smaller datasets, namely Archi, Brainomics and Cam-CAN. We explore small-sample performance because limited sample sizes are typical of fMRI studies. Figure 4 plots test accuracy against the fraction of data used for training; it demonstrates that FGL outperforms the baselines even when only a small amount of training data is used. Effect of depth, FGL depth vs. width: We train two additional models to study the effect of depth: one which uses only the first layer of FGL, and another that uses the first two layers; in Table 1 they are referred to as "FGL (1 Layer)" and "FGL (2 Layers)" respectively. The results show that increasing FGL depth provides a statistically significant improvement in test accuracy on the HCP dataset. When compared to Feature Grouping, the results in Table 1 show that even after extensive tuning (we used the reported best-performing settings and attempted additional tuning), this approach is not competitive, suggesting a significant representational benefit of exploiting FGL depth over width for this task. Unfortunately, the provided code did not scale to the largest datasets, hence the missing results on HCP. To study the effect of intermediate vector length, we train five single-layer FGL models with intermediate vector lengths (c_out) of 1, 2, 4, 8 and 16. We plot the test accuracy against c_out in Figures 5a and 5b. On 4 out of 5 datasets, increasing c_out improves test accuracy. This is expected because the classification task is not binary, and each region of the brain could contribute to multiple cognitive processes in different ways. However, the effect is not as pronounced on the Cam-CAN dataset; we suspect that this is because Cam-CAN has fewer labels than the other datasets. We note that c_out can be interpreted as the width of our model, similar to the channel size of CNNs, which is known to be important.

These experiments demonstrate the clear benefit of using FGL compared to other models with roughly the same number of parameters. When using 70% of the data for training, FGL provides a 2-6% improvement in test accuracy on 4 of the 5 datasets; a similar trend exists even when smaller amounts of data are used. On the 5th dataset, Brainomics, FGL is on par with CNN-based methods but better than fully connected networks. While the effect of depth is not clear on the smaller datasets, we note that on the HCP dataset deeper models show a statistically significant improvement in performance. Further, an increase in c_out also improves performance.
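Putting the pieces together, the 3-layer FGL decoder described above can be sketched with the hypothetical FGL module from earlier (linear activations throughout, per the finding that non-linearities hurt on this task; A1, A2, A3 are the nested ward groupings):

```python
import torch.nn as nn

def fgl_classifier(A1, A2, A3, n_classes):
    """Sketch of the 3-layer FGL decoder: voxels -> 1024 -> 256 -> 32 -> FC."""
    return nn.Sequential(
        FGL(A1, c_in=1, c_out=8),       # 212455 voxels -> 1024 groups x 8
        FGL(A2, c_in=8, c_out=64),      # 1024 groups   -> 256 groups x 64
        FGL(A3, c_in=64, c_out=128),    # 256 groups    -> 32 groups x 128
        nn.Flatten(),                   # 32 * 128 = 4096 features
        nn.Linear(32 * 128, n_classes),
    )
```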
Our main argument is that current methods discard important information about spatial smoothness as encoded by hierarchical spatial clustering. As pointed out, the main cost of our method is in constructing these hierarchies. For brain imaging, most datasets include readily available resting state data. From the larger view, we plan to encourage application communities to develop application-specific FGL architectures, which can be shared across several related tasks. In this work we propose a new layer architecture, the Fixed Grouping Layer (FGL), parameterized by a grouping of input variables. FGL explicitly extracts features within each input group. This is in contrast to convolution, which extracts local features across the input, and to fully connected networks, which extract both global and local features. We demonstrate the benefit of using FGL on 5 real fMRI datasets of different sizes. Future work will involve the application of FGL to other tasks and application domains.

Input Specification: In this work we deal with image-like data, either in 2D or 3D. Consider an input with s pixels or voxels in c channels; for example, a 64 × 64 image with RGB colors has s = 4096 and c = 3. Such an input is treated as s variables with feature vectors of length c. Groupings: Since the output of the first FGL layer consists of feature vectors for each group, the grouping used by the second FGL layer must group together the outputs of the first layer. Hence, we need a hierarchical structure with the input variables at the leaf nodes. In this work, we use a ward clustering of the brain; however, other clusterings may be more appropriate in other settings.

We ran an experiment using the simulated dataset to estimate how robust FGL is to an imperfect A. Apart from providing FGL the true Voronoi diagrams, we also ran FGL using the clusters from k-means clustering; that is, A_{ji} = 1 if pixel i is in cluster j according to k-means. We did this using 16, 32 and 48 clusters obtained by clustering the pixels in the training dataset. The results are reported in Table S1. We see that although there is a drop when using an imperfect A, FGL still outperforms logistic regression and convolution. This emphasizes the benefit of capturing structure. We do see a larger drop when we use a clustering that is not representative enough (16 clusters), but we do not see much gain from using a more representative one (48 clusters).

While the FGL model is straightforward, multiple variants of it are possible. First, notice that FGL is essentially the following three operations (ignoring the bias): • Linear transformation: the multiplication xv transforms the data x from one basis to another using the linear transform v. • Rescaling: the Hadamard product with u rescales each vector along each dimension independently. • Aggregation: the multiplication by A aggregates the vectors (xv) ⊙ u within each group using summation. Performing these operations in a different order creates some basic variants: for example, we could aggregate within groups, then rescale, and finally perform a linear transformation. These changes to the operation order require the parameters to be defined differently; for example, if the Hadamard product with u is done after aggregation, then u needs to have n_out rows. Another interesting variant is to replace the aggregation with a max operation within each group along each dimension.
We think this is similar to a max-pool operation in convolutional neural networks, while the summation by A is similar to a weighted-sum pool, depending on the values of A_{ji}. Early results showed worse performance when using the max-reduction variant, and hence we did not investigate it further. However, it might prove effective in cases where a signal being present in one variable within a group is equivalent to the group exhibiting that signal. Another possible benefit that we do not investigate is the use of multiple variable groupings: we can concatenate the A matrices that represent each grouping to make FGL extract features within each group from the union of both groupings. That is, if A and A' are the matrices representing two groupings, we could use their row-wise concatenation [A; A']. This would allow one to make use of multiple types of groupings. For example, we could create parcellations at different points of the accuracy-reproducibility tradeoff and make use of both. Similarly, one could create parcellations from different datasets and use them at once. However, using a single parcellation was sufficient to create a significant gain in performance, hence we don't go deeper in this direction. We mention a few other variants that did not perform as well in supplement S3.

Our implementation of FGL using PyTorch is available at https://github.com/anonymous/link. In this section, we discuss some challenges in implementation. If the number of input variables is large, performing A((xv) ⊙ u) as a dense matrix multiplication is expensive. There are some ways to work around this: • Since A is a binary matrix, we can treat (xv) ⊙ u as a matrix of embeddings, look up the indices at which A is non-zero, and then perform the necessary aggregation. • If the variable groups are mutually exclusive -- that is, each input variable belongs to only one group -- then A((xv) ⊙ u) can be performed by scattering (xv) ⊙ u according to the indices at which A is non-zero.

Prior literature has shown that the initialization of deep networks matters. Generally, a layer's weights are randomly initialized by sampling from a uniform distribution, denoted by U[−m, m] for some m based on the number of inputs, the number of outputs and the activation function. Hence, after minor modifications for different activation functions, we use the following initialization (the analogous rule is used for u): v_{ij} ∼ U[−1/(1 + 5c_in), 1/(1 + 5c_in)].

Figure S2: Illustration of FGL. Given the numbered hierarchical segmentation, FGL extracts features for each segment. The inputs are 9 variables corresponding to segments of a square, which are first grouped as {{1, 2}, {3, 4, 5}, {6, 7}, {8, 9}}. The resulting 4 groups are then grouped into 2 groups using the grouping {{1, 2}, {3, 4}}. The output is passed to a fully connected network to predict labels. Note that intermediate layers can use feature vectors of length greater than 1.

One of the major benefits of using convolution is that it performs parameter sharing, which comes with its own benefits. Adapting FGL to perform parameter sharing is much harder. Typically, a fully connected map from n_in × c_in numbers to n_out × c_out numbers would require n_in × n_out × c_in × c_out parameters, but this number is astronomical. To avoid using as many parameters, we decompose the operation into a multiplication by v followed by a Hadamard product with u. Doing so reduces the number of parameters to c_in × c_out + n_in × c_out.
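For the mutually exclusive case, the scatter approach mentioned above avoids materializing A at all. A minimal sketch (function name and signature are ours):

```python
import torch

def fgl_forward_scatter(x, group_of, u, v, b, n_out):
    """Efficient FGL forward for mutually exclusive groups.

    group_of: (n_in,) long tensor mapping each input variable to its group.
    Equivalent to multiplying by the binary A, without building A.
    """
    h = (x @ v) * u                                   # (batch, n_in, c_out)
    out = h.new_zeros(x.shape[0], n_out, h.shape[-1])
    idx = group_of.view(1, -1, 1).expand_as(h)        # scatter target groups
    return out.scatter_add_(1, idx, h) + b            # sum within each group
```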
This is much more tractable, but further reduction might be possible: sharing parameters between groups seems attractive. Unfortunately, different groups can have different sizes and an arbitrary ordering of variables, preventing us from sharing parameters further. If group sizes were constant and an ordering of variables were fixed, it would be possible to further reduce the number of parameters from O(n_in) to O(group size). We use a straightforward architecture for convolution: repeated convolutional layers of stride 2 and kernel size 4, with appropriate padding, followed by a fully connected network. We found that using max pooling for downsampling reduced performance, as did using non-linear activation functions. A visualization is provided in Figure S1.
A feedforward layer to incorporate structured smoothness into a deep learning model
1,106
scitldr
Convolutional networks are not aware of an object's geometric variations, which leads to inefficient utilization of model and data capacity. To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from deformation. This is typically done by augmenting static operators with learned free-form sampling grids in the image space, dynamically tuned to the data and task for adapting the receptive field. Yet adapting the receptive field does not quite reach the actual goal -- what really matters to the network is the *effective* receptive field (ERF), which reflects how much each pixel contributes. It is thus natural to design other approaches to adapt the ERF directly during runtime. In this work, we instantiate one possible solution as Deformable Kernels (DKs), a family of novel and generic convolutional operators for handling object deformations by directly adapting the ERF while leaving the receptive field untouched. At the heart of our method is the ability to resample the original kernel space towards recovering the deformation of objects. This approach is justified with theoretical insights that the ERF is strictly determined by data sampling locations and kernel values. We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories. Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime. In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works.

The rich diversity of object appearance in images arises from variations in object semantics and deformation. Semantics describe the high-level abstraction of what we perceive, and deformation defines the geometric transformation tied to specific data. Humans are remarkably adept at making abstractions of the world; we take in raw visual signals, abstract semantics away from deformation, and form concepts. Interestingly, modern convolutional networks follow an analogous process by making abstractions through local connectivity and weight sharing. However, such a mechanism is an inefficient one, as the emergent representations encode semantics and deformation together, instead of as disjoint notions. Though a convolution responds accordingly to each input, how it responds is primarily programmed by its rigid kernels, as in Figure 1(a, b). In effect, this consumes large amounts of model capacity and many data modes. We argue that the awareness of deformations emerges from adaptivity -- the ability to adapt at runtime. Modeling of geometric transformations has been a constant pursuit for vision researchers over decades. A basic idea is to spatially recompose data towards a common mode such that semantic recognition suffers less from deformation. A recent work that is representative of this direction is Deformable Convolution. As shown in Figure 1(c), it augments the convolutions with free-form sampling grids in the data space. It was previously justified as adapting the receptive field, or what we phrase as the "theoretical receptive field", which defines which input pixels can contribute to the final output. However, the theoretical receptive field does not measure how much impact an input pixel actually has.
(d) Our Deformable Kernels (DKs) instead resample kernels and, in effect, adapt kernel spaces while leaving the data untouched. Note that (b) and (c) share kernel values but sample different data locations, while (b) and (d) share data locations but sample different kernel values. Luo et al. propose to measure the effective receptive field (ERF), i.e. the partial derivative of the output with respect to the input data, to quantify the exact contribution of each raw pixel to the convolution. Since adapting the theoretical receptive field is not the goal but a means to adapt the ERF, why not directly tune the ERF to specific data and tasks at runtime? Toward this end, we introduce Deformable Kernels (DKs), a family of novel and generic convolutional operators for deformation modeling. We aim to augment rigid kernels with the expressiveness to directly interact with the ERF of the computation during inference. Illustrated in Figure 1(d), DKs learn free-form offsets on kernel coordinates to deform the original kernel space towards a specific data modality, rather than recomposing the data. This can directly adapt the ERF while leaving the receptive field untouched. The design of DKs, agnostic to data coordinates, naturally leads to two variants: the global DK and the local DK, which behave differently in practice, as we later investigate. We justify our approach with theoretical results which show that the ERF is strictly determined by data sampling locations and kernel values. Used as a generic drop-in replacement of rigid kernels, DKs achieve empirical results coherent with our developed theory. Concretely, we evaluate our operator with standard base models on image classification and object detection. DKs perform favorably against prior works that adapt during runtime. With both quantitative and qualitative analysis, we further show that DKs can work orthogonally and complementarily with previous techniques. We distinguish our work within the context of deformation modeling as our goal, and dynamic inference as our means. Deformation Modeling: We refer to deformation modeling as learning geometric transformations in 2D image space without regard to 3D. One angle of attack on deformation modeling is to craft certain geometric invariances into networks. However, this usually requires designs specific to certain kinds of deformation, such as shift, rotation, reflection and scaling. Another line of work on this topic learns to recompose data by either semi-parameterized or completely free-form sampling in image space: Spatial Transformers learn 2D affine transformations, Deep Geometric Matchers learn thin-plate spline transformations, and Deformable Convolutions learn free-form transformations. We interpret sampling the data space as an effective approach to adapting the effective receptive field (ERF) by directly changing the theoretical receptive field. At a high level, our Deformable Kernels (DKs) share intuitions with this line of work for learning geometric transformations, yet are instantiated by learning to sample in kernel space, which directly adapts the ERF while leaving the theoretical receptive field untouched. Kernel space sampling is also studied in Deformable Filters and KPConv, but in their contexts, sampling grids are computed from input point clouds rather than learned from data corpora. Dynamic Inference: Dynamic inference adapts the model or individual operators to the observed data.
The computation of our approach differs from self-attention, in which linear or convolution modules are augmented with subsequent queries that extract from the same input. We consider our closest related works in terms of implementation to be those approaches that adapt convolutional kernels at run time, including but not limited to Dynamic Filters, Selective Kernels and Conditional Convolutions. All of these approaches can learn and infer customized kernel spaces with respect to the data, but are either less efficient or loosely formulated. Dynamic Filters generate new filters from scratch, while Conditional Convolutions extend this idea to linear combinations of a set of synthesized filters. Selective Kernels are, on the other hand, comparably lightweight, but aggregating activations from kernels of different size is not as compact as directly sampling the original kernel space. Another line of work contemporary to ours composes free-form filters with structured Gaussian filters, which essentially transforms kernel spaces by data. Our DKs also differ from these works in their emphasis on directly adapting the ERF rather than the theoretical receptive field. As mentioned previously, the true goal should be to adapt the ERF, and to our knowledge, our work is the first to study dynamic inference of ERFs. We start by covering preliminaries on convolutions, including the definition of the effective receptive field (ERF). We then formulate a theoretical framework for analyzing ERFs, from which we gain insights to motivate our Deformable Kernels (DKs). We then elaborate two different instantiations of DKs, namely the global and the local DK. Finally, we distinguish DKs from Deformable Convolutions and present a unified approach combining them. Our analysis suggests compatibility between DKs and the prior work. 2D Convolution: Let us first consider an input image I ∈ R^{D×D}. By convolving it with a kernel W ∈ R^{K×K} of stride 1, we have an output image O whose pixel values at each coordinate j ∈ R^2 can be expressed, by enumerating discrete kernel positions k within the support K, as O_j = Σ_{k ∈ K} W_k · I_{j+k}. (1) This defines a rigid grid for sampling data and kernels. Theoretical Receptive Field: The same kernel W can be stacked repeatedly to form a linear convolutional network with n layers. The theoretical receptive field can then be imagined as the "accumulative coverage" of kernels at each given output unit on the input image by deconvolving back through the network. This property characterizes a set of input fields that could fire percepts onto corresponding output pixels. The size of a theoretical receptive field scales linearly with respect to the network depth n and the kernel size K. Effective Receptive Field: Intuitively, not all pixels within a theoretical receptive field contribute equally. The influence of different fields varies from region to region, thanks to the central emphasis of stacked convolutions and also to the non-linearity induced by activations. The notion of the effective receptive field (ERF) was thus introduced to measure the impact of each input pixel on the output at given locations. It is defined as a partial derivative field of the output with respect to the input data. With numerical approximations in linear convolutional networks, the ERF was previously identified as a Gaussian-like soft attention map over input images whose size grows fractionally with respect to the network depth n and linearly with the kernel size K.
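Since the ERF is a derivative field, it can be probed directly with automatic differentiation. The following is a minimal PyTorch sketch of such a probe, written by us for illustration (depth, kernel size, and image size are arbitrary choices, not settings from the paper): it backpropagates from a single output unit of a stack of rigid convolutions and reads the ERF off the input gradient.

```python
import torch
import torch.nn as nn

# A linear stack of n rigid 3x3 convolutions (no activations) -- illustrative sizes.
n_layers, k = 5, 3
net = nn.Sequential(*[
    nn.Conv2d(1, 1, kernel_size=k, padding=k // 2, bias=False)
    for _ in range(n_layers)
])

x = torch.randn(1, 1, 64, 64, requires_grad=True)
out = net(x)

# ERF of the central output unit: d out[j] / d x[i] for every input pixel i.
out[0, 0, 32, 32].backward()
erf = x.grad[0, 0].abs()
# For random kernels this is a Gaussian-like map concentrated around the center,
# much smaller than the (2 * n_layers + 1)-wide theoretical receptive field.
print(erf.shape, (erf > 1e-3 * erf.max()).sum().item())
```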
Empirical results validate this idea under more complex and realistic cases where networks exploit non-linearities, striding, padding, skip connections, and subsampling. We aim to revisit and complement this previous analysis of ERFs. While the previous analysis concentrates on studying the expectation of an ERF, i.e. when the network depth n approaches infinity or all kernels are randomly distributed without learning, our analysis focuses on how we can perturb the computation such that the change in the ERF is predictable, given an input and a set of kernel spaces. We start our analysis by considering a linear convolutional network, without any unit activations, as defined in Section 3.1. For consistency, superscripts are introduced to the image I and kernel W, and subscripts to kernel positions k, to denote the index s ∈ [1, n] of each layer. Formally, given an input image I^(0) and a set of K × K kernels {W^(s)}_{s=1}^{n} of stride 1, we can roll out the final output O ≡ I^(n) by unfolding Equation 1 as O_j = Σ_{k_1 ∈ K} ⋯ Σ_{k_n ∈ K} ( Π_{s=1}^{n} W^(s)_{k_s} ) · I^(0)_{j + Σ_{s=1}^{n} k_s}. (2) By definition (see footnote 1), the effective receptive field value of output coordinate j with respect to input coordinate i can be computed as R(i, j) = ∂O_j / ∂I^(0)_i = Σ_{k_1, …, k_n} 1[ i = j + Σ_{s=1}^{n} k_s ] · Π_{s=1}^{n} W^(s)_{k_s}, (3) where 1[·] denotes the indicator function. This indicates that the ERF is related only to the data sampling location j, the kernel sampling locations k, and the kernel matrices {W^(s)}. If we replace the m-th kernel W^(m) with a 1 × 1 kernel of a single parameter W^(m)_{k_m} sampled from it, the value of the ERF becomes R_{k_m}(i, j) = W^(m)_{k_m} · Σ_{{k_s}, s ∈ S} 1[ i = j + k_m + Σ_{s ∈ S} k_s ] · Π_{s ∈ S} W^(s)_{k_s}, (4) where S = [1, n] \ {m}. Since a K × K kernel can be deemed a composition of K^2 1 × 1 kernels distributed on a square grid, Equation 3 can thus be reformulated as R(i, j) = Σ_{k_m ∈ K} R_{k_m}(i, j). (5) For the case of complex non-linearities, where we here consider post-ReLU activations in Equation 1, we can follow a similar analysis and derive the corresponding ERF as R(i, j) = Σ_{k_1, …, k_n} C · 1[ i = j + Σ_{s=1}^{n} k_s ] · Π_{s=1}^{n} W^(s)_{k_s}, (6) where the coefficient C gates each path according to whether its intermediate units are activated. (Footnote 1: The original definition of the ERF focuses on the central coordinate of the output to partially avoid the effects of zero padding. In this work, we keep j general while explicitly assuming input size D → ∞.) Here we can see that the ERF becomes data-dependent due to the coefficient C, which is tied to input coordinates, kernel sampling locations, and the input data I. A more detailed analysis of this coefficient is beyond the scope of this paper. However, it should be noted that this coefficient only "gates" the contribution of the input pixels to the output. So in practice, the ERF is "porous": there are inactive (or gated) pixel units irregularly distributed around the ones that fire. This phenomenon also appeared in previous studies. The maximal size of an ERF is still controlled by the data sampling locations and kernel values, as in the linear case of Equation 5. The key property of Equation 5 is that all computations are linear, making it compatible with any linear sampling operator for querying kernel values at fractional coordinates. In other words, sampling kernels in effect samples the ERF on the data in the linear case, and this roughly generalizes to non-linear cases as well. This finding motivates our design of Deformable Kernels (DKs) in Section 3.3. In the context of Equation 1, we resample the kernel W with a group of learned kernel offsets, denoted {∆k}, that correspond to each discrete kernel position k. This defines our DK as O_j = Σ_{k ∈ K} W_{k + ∆k} · I_{j+k}, (7) and the value of its ERF follows Equation 5 with each kernel value W^(m)_{k_m} replaced by the resampled value W^(m)_{k_m + ∆k_m}. (8) Note that this operation leads to sub-pixel sampling in the kernel space. In practice, we use bilinear sampling to interpolate within the discrete kernel grid. Intuitively, the size (resolution) of the original kernel space can affect sampling performance.
Concretely, suppose we want to sample a 3 × 3 kernel. DKs do not place any constraint on the size of the original kernel space, which we call the "scope size" of DKs. That said, we can use a W of any scope size even though the number of sampling locations is fixed at K^2. We can thus exploit large kernels -- the largest ones reach 9 × 9 in our experiments -- with nearly no overhead in computation, since bilinear interpolations are extremely lightweight compared to the cost of convolutions. This can also increase the number of learnable parameters, which in practice might become intractable if not handled properly. In our implementation, we exploit depthwise convolutions such that increasing the scope size induces a negligible amount of extra parameters. As previously discussed, sampling the kernel space in effect transforms into sampling the ERF. Depending on the locality and spatial granularity of the learned offsets, DK naturally delivers two variants: the global DK and the local DK, as illustrated in Figure 2. In both operators, we learn a kernel offset generator G that maps an input patch into a set of kernel offsets that are later applied to rigid kernels. In practice, we implement G_global as a stack of one global average pooling layer, which reduces feature maps into a vector, and a fully-connected layer without non-linearities, which projects the reduced vector into an offset vector of 2K^2 dimensions. Then, we apply these offsets to all convolutions for the input image following Equation 7. For local DKs, we implement G_local as an extra convolution that has the same configuration as the target kernel, except that it has 2K^2 output channels. This produces kernel sampling offsets {∆k} that are additionally indexed by output locations j. It should be noted that similar designs were also discussed in prior work in which filters are generated from scratch, given either an image or individual patches, rather than by resampling. Intuitively, we expect the global DK to adapt the kernel space between different images but not within a single input. The local DK can further adapt to specific image patches: for smaller objects, it is better to have sharper kernels and thus a denser ERF; for larger objects, flatter kernels can be more beneficial for accumulating a wider ERF. At a high level, local DKs preserve better locality and have larger freedom to adapt kernel spaces compared to their global counterpart. We later compare these operators in our experiments. The core idea of DKs is to learn adaptive offsets to sample the kernel space for modeling deformation, which makes them similar to Deformable Convolutions at both the conceptual and implementation levels. Here, we distinguish DKs from Deformable Convolutions and show how they can be unified. Deformable Convolutions instead augment the data sampling grid as O_j = Σ_{k ∈ K} W_k · I_{j + k + ∆j}, (9) where they aim to learn a group of data offsets {∆j} with respect to discrete data positions j. For consistency of analysis, the value of the effective receptive field then follows Equation 3 with the data sampling locations shifted by the learned offsets {∆j}. This approach essentially recomposes the input image towards common modes such that semantic recognition suffers less from deformation. Moreover, according to our previous analysis in Equation 5, sampling data is another way of sampling the ERF. This, to a certain extent, also explains why Deformable Convolutions are well suited for learning deformation-agnostic representations. Moreover, we can learn both data and kernel offsets in one convolutional operator.
Conceptually, this can be done by merging Equation 7 with Equation 9, which leads to an operator that samples both data and kernel positions, O_j = Σ_{k ∈ K} W_{k + ∆k} · I_{j + k + ∆j}. (10) We also investigate this operator in our experiments. Although the two techniques may be viewed as serving a similar purpose, we find the collaboration between Deformable Kernels and Deformable Convolutions to be powerful in practice, suggesting strong compatibility. We evaluate our Deformable Kernels (DKs) on image classification using ILSVRC and object detection using the COCO benchmark. Necessary details are provided to reproduce our results, together with descriptions of base models and strong baselines for all experiments and ablations. For task-specific considerations, we refer to each corresponding section. Implementation Details: We implement our operators in PyTorch and CUDA. We exploit depthwise convolutions when designing our operator for better computational efficiency. We initialize kernel grids to be uniformly distributed within the scope size. For the kernel offset generator, we set its learning rate to be a fraction of that of the main network, which we cross-validate for each base model. We also find it important to clip sampling locations inside the original kernel space, such that k + ∆k ∈ K in Equation 7. Base Models: We choose our base models to be ResNet-50 and MobileNet-V2, following the standard practice for most vision applications. As mentioned, we exploit depthwise convolutions and thus make changes to the ResNet model. Concretely, we define our ResNet-50-DW base model by replacing all 3 × 3 convolutions with their depthwise counterparts while doubling the dimension of intermediate channels in all residual blocks. We find it to be a reasonable base model compared to the original ResNet-50, with comparable performance on both tasks. During training, we set the weight decay to 4 × 10^−5 rather than the common 10^−4 for both models, since depthwise models usually underfit rather than overfit. We set the learning rate multiplier of DK operators to 10^−2 for ResNet-50-DW and 10^−1 for MobileNet-V2 in all of our experiments. Strong Baselines: We develop our comparison with two previous works: Conditional Convolutions for dynamic inference, and Deformable Convolutions for deformation modeling. We choose Conditional Convolutions due to their similar computation form: sampling can be deemed an element-wise "expert voting" mechanism. For fair comparisons, we reimplement both methods and reproduce their results. We also combine our operator with these previous approaches to show both quantitative evidence and qualitative insight that the working mechanisms are compatible. We first train our networks on the ImageNet 2012 training set. Following standard practice, training is performed by SGD for 90 epochs with momentum 0.9 and batch size 256. We set the learning rate to 10^−1, linearly warming it up from zero within the first 5 epochs. A cosine training schedule is applied over the training epochs. We use scale and aspect ratio augmentation with color perturbation as standard data augmentations. We evaluate the performance of trained models on the ImageNet 2012 validation set. The images are resized so that the shorter side is 256 pixels. We then centrally crop 224 × 224 windows from the images as input to measure recognition accuracy. We first ablate the scope size of kernels for our DKs and study how it affects model performance using ResNet-50-DW. As shown in Table 1, our DKs are sensitive to the choice of the scope size.
We show that when applied only to the 3 × 3 convolutions inside residual bottlenecks, local DKs induce a +0.7 performance gain within the original scope. By further enlarging the scope size, performance increases yet quickly plateaus at scope 4 × 4, yielding the largest gain of +1.4 top-1 accuracy. Our speculation is that, although increasing the scope size theoretically means better interpolation, it also makes the optimization space exponentially larger for each convolutional layer. And since the number of entries being updated is fixed, this also leads to relatively sparse gradient flow. We therefore set the default scope size of our DKs to 4 × 4. We next ablate our designs by comparing the global DK with the local DK, as shown in the table. Both operators help, while the local variants consistently perform better than their global counterparts, bringing a +0.5 gap on both base models. We also study the effect of using more DKs in the models -- the 1 × 1 convolutions are replaced by global DKs with scope 2 × 2. Note that the 1 × 1 convolutions are not depthwise, and therefore this operation induces nearly 4 times as many parameters. We report these results only for ablation and show that adding more DKs still helps, especially for MobileNet-V2, since it is under-parameterized. This finding also holds for previous models as well. We further compare and combine DKs with Conditional Convolutions and Deformable Convolutions. Results are recorded in Table 2. We can see that DKs perform comparably on the ResNet base model and compare favorably on MobileNet-V2: they improve +0.9 over Deformable Convolutions and achieve comparable results with less than a quarter of the parameters of Conditional Convolutions. Remarkably, we also show that even larger performance gains are within reach when the operators are combined. We see a consistent boost in top-1 accuracy compared to strong baselines: +1.3/+1.0 on ResNet-50-DW, and +1.2/+1.2 on MobileNet-V2. These gaps are bigger than those from our own ablation, suggesting the working mechanisms across the operators are orthogonal and compatible. We examine DKs on the COCO benchmark. For all experiments, we use Faster R-CNN with FPN as the base detector, plugging in the backbones we previously trained on ImageNet. For MobileNet-V2, we take the last feature maps of each resolution for FPN aggregation. Following the standard protocol, training and evaluation are performed on the 120k images in the train-val split and the 20k images in the test-dev split, respectively. For evaluation, we measure the standard mean average precision (mAP) together with the breakdown scores for small, medium, and large objects. Table 4: Comparisons to strong baselines for object detection. DKs fall short of Deformable Convolutions, but the combination still improves performance. Table 3 and Table 4 follow the same style of analysis as in image classification. The ResNet baseline achieves 36.6 mAP, indicating a strong baseline detector; applying local DKs brings a +1.2 mAP improvement when replacing 3x3 rigid kernels alone and a +1.8 mAP improvement when replacing both 1x1 and 3x3 rigid kernels. This trend magnifies on MobileNet-V2 models, where we see improvements of +1.6 mAP and +2.4 mAP, respectively. Results also confirm the effectiveness of local DKs against global DKs, which is again in line with our expectation that local DKs can model locality better.
For the comparisons with strong baselines, an interesting phenomenon worth noting is that though DKs perform better than Deformable Convolutions on image classification, they fall noticeably short for object detection as measured by mAP. We speculate that even though both techniques can adapt the ERF in theory (as justified in Section 3.2), directly shifting sampling locations on data is easier to optimize. Yet after combining DKs with previous approaches, we can consistently boost performance for all the methods: +0.7/+1.2 for Deformable Convolutions on the two base models, and +1.7/+1.1 for Conditional Convolutions. These findings align with the results from image classification. We next investigate what DKs learn and why they are compatible with previous methods in general. Awareness of Object Scale: Since deformation is hard to quantify, we use object scale as a rough proxy to understand what DKs learn. In Figure 3, we show the t-SNE of the model dynamics learned by the last convolutional layers in MobileNet-V2 using Conditional Convolutions and our DKs. We validate the claim that the experts of Conditional Convolutions correlate better with object semantics than with scale (in reference to Figure 6 of their paper). Instead, our DKs learn kernel sampling offsets that strongly correlate with scale rather than semantics. This sheds light on why the two operators are complementary in our previous experiments. We also visualize ERFs for objects with different deformations, comparing the results of rigid kernels, Deformable Convolutions, our DKs, and the combination of the two operators. For all examples, note that the theoretical receptive field covers every pixel in the image but the ERFs contain only a central portion of it. Deformable Convolutions and DKs perform similarly in terms of adapting ERFs, but Deformable Convolutions tend to spread out and have sparse responses while DKs tend to concentrate and densely activate within an object region. Combining both operators yields more consistent ERFs that exploit both of their merits. In this paper, we introduced Deformable Kernels (DKs) to adapt effective receptive fields (ERFs) of convolutional networks for object deformation. We proposed to sample kernel values from the original kernel space. This in effect samples the ERF in linear networks and also roughly generalizes to non-linear cases. We instantiated two variants of DKs and validated our designs, showing connections to previous works. Consistent improvements over them and compatibility with them were found, as illustrated in visualizations. Figure 5: Illustration of feed-forwarding through a 3×3 local Deformable Kernel from a 4×4 scope. For each input patch, the local DK first generates a group of kernel offsets {∆k} from the input feature patch using the lightweight generator G (a 3×3 convolution with a rigid kernel). Given the original kernel weights W and the offset group {∆k}, DK samples a new kernel W′ using a bilinear sampler B. Finally, DK convolves the input feature map with the sampled kernels to complete the whole computation. We now cover more details on implementing DKs by elaborating the computation flow of their forward and backward passes. We will focus on the local DK given its superior performance in practice; the extension to the global DK implementation is straightforward. In Section 3.3, we introduced a kernel offset generator G and a bilinear sampler B. Figure 5 illustrates an example of the forward pass.
Concretely, given a kernel W and a learned group of kernel offsets {∆k} on top of a regular 2D grid {k}, we can resample a new kernel W′ by a bilinear operator B as W′_{k+∆k} = Σ_{k′ ∈ K} B(k + ∆k, k′) · W_{k′}, where B(k + ∆k, k′) = max(0, 1 − |k_x + ∆k_x − k′_x|) · max(0, 1 − |k_y + ∆k_y − k′_y|). Given this resampled kernel, DK convolves it with the input image just as in normal convolutions with rigid kernels, characterized by Equation 1. The backward pass of the local DK consists of three types of gradients: the gradient to the data of the previous layer, the gradient to the full-scope kernel of the current layer, and an additional gradient to the kernel offset generator of the current layer. The first two types of gradients share the same form of computation as in normal convolutions. We now cover the third flow of gradient, which directs where to sample kernel values. In the context of Equation 7, the partial derivative of an output item O_j w.r.t. the x component of the kernel offset ∆k at position k (and similarly for its y component ∆k_y) can be computed as ∂O_j/∂∆k_x = I_{j+k} · Σ_{k′ ∈ K} W_{k′} · ∂B(k + ∆k, k′)/∂∆k_x, where ∂B(k + ∆k, k′)/∂∆k_x = max(0, 1 − |k_y + ∆k_y − k′_y|) · [ 0 if |k_x + ∆k_x − k′_x| ≥ 1; 1 if k_x + ∆k_x < k′_x; −1 if k_x + ∆k_x ≥ k′_x ]. Table 5: Network architecture of our ResNet-50-DW compared to the original ResNet-50. Inside the brackets are the general shape of a residual block, including filter sizes and feature dimensionalities. The number of stacked blocks on each stage is presented outside the brackets. "G = 128" denotes a depthwise convolution with 128 input channels. The two models have similar numbers of parameters and FLOPs. At the same time, depthwise convolutions facilitate the computational efficiency of our Deformable Kernels. B NETWORK ARCHITECTURES Table 5 shows the comparison between the original ResNet-50 and our modified ResNet-50-DW. The motivation for introducing depthwise convolutions to ResNet is to accelerate the computation of local DKs under our current implementation. The ResNet-50-DW model has similar capacity/complexity and performance (see Table 1) compared to its non-depthwise counterpart, making it an ideal base architecture for our experiments. In all of our experiments, the MobileNet-V2 base model is left untouched. We show an additional comparison of ERFs when objects have different kinds of deformations in Figure 6. Compared to the baseline, our method adapts ERFs to be more consistent with an object's semantics rather than its geometric configuration.
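To make the bilinear sampler B and its offset gradient concrete, here is a small self-contained PyTorch sketch, our own illustration with a 4×4 scope and a 3×3 sampling grid (it omits the clipping of sampling locations used in the paper, and the offsets are random rather than learned). Autograd recovers exactly the piecewise gradient derived above through the clamp operations.

```python
import torch

def bilinear_resample(W, pos):
    """Resample kernel W (S x S) at fractional positions pos (P, 2), columns (x, y),
    using B(p, k') = max(0, 1 - |p_x - k'_x|) * max(0, 1 - |p_y - k'_y|)."""
    S = W.shape[0]
    ky, kx = torch.meshgrid(torch.arange(S, dtype=W.dtype),
                            torch.arange(S, dtype=W.dtype), indexing="ij")
    wx = (1 - (pos[:, 0, None, None] - kx).abs()).clamp(min=0)  # (P, S, S)
    wy = (1 - (pos[:, 1, None, None] - ky).abs()).clamp(min=0)
    return (wx * wy * W).sum(dim=(1, 2))  # one resampled kernel value per position

W = torch.randn(4, 4)                                            # 4x4 scope kernel
base = torch.cartesian_prod(torch.arange(3.), torch.arange(3.))  # rigid 3x3 grid
offsets = 0.3 * torch.randn(9, 2, requires_grad=True)            # the group {dk}
sampled = bilinear_resample(W, base + offsets)                   # 9 kernel values
sampled.sum().backward()            # piecewise gradient w.r.t. the offsets
print(sampled.view(3, 3), offsets.grad.shape)
```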
Don't deform your convolutions -- deform your kernels.
1,107
scitldr
Inferring the structural properties of a protein from its amino acid sequence is a challenging yet important problem in biology. Structures are not known for the vast majority of protein sequences, but structure is critical for understanding function. Existing approaches for detecting structural similarity between proteins from sequence are unable to recognize and exploit structural patterns when sequences have diverged too far, limiting our ability to transfer knowledge between structurally related proteins. We newly approach this problem through the lens of representation learning. We introduce a framework that maps any protein sequence to a sequence of vector embeddings --- one per amino acid position --- that encode structural information. We train bidirectional long short-term memory (LSTM) models on protein sequences with a two-part feedback mechanism that incorporates information from (i) global structural similarity between proteins and (ii) pairwise residue contact maps for individual proteins. To enable learning from structural similarity information, we define a novel similarity measure between arbitrary-length sequences of vector embeddings based on a soft symmetric alignment (SSA) between them. Our method is able to learn useful position-specific embeddings despite lacking direct observations of position-level correspondence between sequences. We show empirically that our multi-task framework outperforms other sequence-based methods and even a top-performing structure-based alignment method when predicting structural similarity, our goal. Finally, we demonstrate that our learned embeddings can be transferred to other protein sequence problems, improving the state-of-the-art in transmembrane domain prediction. Proteins are linear chains of amino acid residues that fold into specific 3D conformations as a result of the physical properties of the amino acid sequence. These structures, in turn, determine the wide array of protein functions, from binding specificity to catalytic activity to localization within the cell. Information about structure is vital for studying the mechanisms of these molecular machines in health and disease, and for development of new therapeutics. However, experimental structure determination is costly and atomic structures have only been determined for a tiny fraction of known proteins. Methods for finding proteins with related structure directly from sequence are of considerable interest, but the problem is challenging, because sequence similarity and structural similarity are only loosely related BID0 BID1 BID2 BID3, e.g. similar structural folds can be formed by diverse sequences. As a result, our ability to transfer knowledge between proteins with similar structures is limited. In this work, we address this problem by learning protein sequence embeddings using weak supervision from global structural similarity for the first time. Specifically, we aim to learn a bidirectional LSTM (biLSTM) embedding model, mapping sequences of amino acids to sequences of vector representations, such that residues occurring in similar structural contexts will be close in embedding space. This is difficult, because we have not observed position-level correspondences between sequences, only global sequence similarity. We solve this by defining a whole-sequence similarity measure from sequences of vector embeddings. The measure decomposes into an alignment of the sequences and a pairwise comparison of the aligned positions in embedding space.
For the alignment, we propose a soft symmetric alignment (SSA) mechanism, a symmetrization of the directional alignment commonly used in attention mechanisms. Furthermore, in order to take advantage of information about local structural context within proteins, we extend this framework to include position-level supervision from contacts between residues in the individual protein structures. This multitask framework (FIG0) allows us to newly leverage both global structural similarity between proteins and residue-residue contacts within proteins for training embedding models. The similarity prediction module takes pairs of proteins represented by their sequences of vector embeddings and predicts their shared SCOP level. Sequences are first aligned based on L1 distance between their vector embeddings using SSA. From the alignment, a similarity score is calculated and related to shared SCOP levels by ordinal regression. The contact prediction module uses the sequence of vector embeddings to predict contacts between amino acid positions within each protein. The contact loss is calculated by comparing these predictions with contacts observed in the 3D structure of the protein. Error signal from both tasks is used to fit the parameters of the encoder. We first benchmark our model's ability to correctly predict structural similarity between pairs of sequences using the SCOPe ASTRAL dataset BID4. This dataset contains protein domains manually classified into a hierarchy of structural categories (Appendix Figure 3). We show that our model dramatically outperforms other sequence-based protein comparison methods when predicting co-membership in the SCOP hierarchy. Remarkably, our model even outperforms TMalign BID5, which requires structures as input and therefore requires structures to be known a priori. In contrast, our model uses only sequence as input. Next, we perform an ablation study to evaluate the importance of our modeling components for structural similarity prediction. We also consider an additional task, secondary structure prediction, to assess the model's ability to capture local structure features. We demonstrate that SSA outperforms alternative alignment methods for both of these tasks and that inclusion of the contact prediction training task further improves performance. Finally, we demonstrate that the embeddings learned by our model are generally applicable to other protein sequence problems by leveraging our embeddings to improve the state-of-the-art in transmembrane prediction. This work presents the first attempt at learning protein sequence embeddings from structure and takes a step towards bridging the sequence-structure divide with representation learning. Current work in protein sequence embeddings has primarily been focused on unsupervised k-mer co-occurrence approaches to learn fixed-size vector representations BID6 BID7, based on similar methods in NLP BID8 BID9 BID10 BID11. Melvin et al. BID12 also learn fixed-size semantic embeddings by projecting alignment scores into a low dimensional vector to recapitulate rankings given by existing alignment tools and shared superfamily membership. However, fixed-size vector representations are limited, because they are not usable for any sequence labeling problems (e.g. active site prediction, transmembrane region prediction, etc.). Other methods have focused on manual feature engineering based on biophysical and sequence attributes BID13.
These methods rely on expert knowledge and do not capture properties that emerge from interactions between amino acids. Instead, we seek to learn embeddings that encode the full structural context in which each amino acid occurs. This is inspired partly by the recent success of unsupervised contextual embedding models using bidirectional recurrent neural network language models BID14 BID15, where word embeddings, learned as a function of their context, have been successfully transferred to other tasks. In particular, we apply a similar language model for the first time on protein sequences as part of our supervised framework. Supervised embedding models have also been trained for natural language inference (NLI) but produce only fixed-size embeddings BID16 BID17 BID18. At the same time, problems involving word alignment given matched sequences, such as cross-lingual word embeddings and document similarity, have also been explored. Cross-lingual word embeddings are learned from unaligned parallel text, where sentences are matched between languages but words are not. Kočiský et al. BID19 learn bilingual word embeddings jointly with a FastAlign BID20 word alignment model using expectation maximization. BilBOWA BID21 learns cross-lingual word embeddings using parallel sentences without word-level alignments by assuming a uniform alignment between words. However, Gouws et al. BID21 assume all pairings between words are equally likely and do not infer them from current values of the embeddings. Related methods have been developed for measuring similarity between documents based on their words. Word Mover's Distance (WMD) and its supervised variant align words between pairs of documents by solving an optimal transport problem given by the distance between word vectors. However, these methods are not designed for learning neural network embedding models, and the embeddings are not contextual. Furthermore, WMD alignments are prohibitively expensive when alignments must be computed at every optimization step, scaling as O(p^3 log p), where p is the number of unique words. Our SSA solves these problems via an alignment mechanism inspired by previous work using soft alignments and attention mechanisms for sequence modeling BID22 BID23. More elaborate directional alignments have been used for question answering and reading comprehension models BID24 BID25 BID26 and for natural language inference BID27 BID28. Unlike these methods, however, our SSA method is both symmetric and memoryless. Furthermore, it is designed for learning interpretable embeddings based on a similarity measure between individual sequence elements. It is also fast and memory efficient, scaling with the product of the sequence lengths. Protein fold recognition is the problem of classifying proteins into folds (for example, as defined by the SCOP database) based on their sequences. Approaches to this problem have largely been based on sequence homology, using sequence similarity to classify structures based on close sequence matches BID4 BID29. These methods are either direct sequence alignment tools BID30 BID31 or based on profile HMMs, in which multiple sequence alignments are first built by iterative search against a large sequence database, the multiple sequence alignments are converted into profile HMMs, and then sequences are compared using HMM-sequence or HMM-HMM alignments BID32 BID33. However, these methods are only appropriate for matching proteins with high sequence similarity BID1 BID2 BID29.
In contrast, we focus on learning protein sequence representations that directly capture structure information in an easily transferable manner. We hypothesize that this approach will improve our ability to detect structural similarity from sequence while also producing useful features for other learning tasks. In this section, we describe the three components of our framework (FIG0) in detail: the specific choice of embedding model, a multi-layer bidirectional LSTM with additional inputs from a pretrained LSTM language model; the soft symmetric alignment and ordinal regression components for relating sequences of vector representations for pairs of proteins to their global structural similarity; and the pairwise feature vectors and convolutional neural network design for residue-residue contact prediction. BiLSTM encoder. The encoder takes a sequence of amino acids representing a protein and encodes it into a sequence of vector representations of the same length. To allow the vector representations at each position to be functions of all surrounding amino acids, we structure the encoder as a stack of bidirectional LSTMs followed by a linear layer projecting the outputs of the last biLSTM layer into the final embedding space (Appendix Figure 2). Pretrained language model. The concept of feeding LSTM language model representations as inputs for supervised learning problems as part of a larger neural network model has shown recent success in NLP but has not yet been tried for biological sequences. Inspired partially by the success of ELMo BID15, we consider, in addition to 1-hot representations of the amino acids, the inclusion of the hidden layers of a pretrained bidirectional LSTM language model as inputs to the encoder described above. The language model is pretrained on the raw protein sequences in the protein families database (Pfam) BID34 to predict the amino acid at each position of each protein given the previous amino acids and the following amino acids (see Appendix section A.1 for details). Specifically, given the language model hidden states at each position i, denoted h^LM_i, and the 1-hot representation of the amino acid at that position, x_i, we introduce a learned linear transformation of these representations with a ReLU non-linearity, v_i = ReLU(W^LM h^LM_i + W^x x_i + b), which is passed as input to the biLSTM sequence encoder (a minimal sketch of this encoder appears below). The parameters W^LM, W^x, and b are trained together with the parameters of the biLSTM encoder. The parameters of the language model itself are frozen during training. In experiments without the language model, h^LM_i is set to zero for all positions. The primary task we consider for training the sequence embedding model with structural information is the prediction of global structural similarity between protein sequences as defined by shared membership in the SCOP hierarchy. SCOP is an expertly curated database of protein domain structures in which protein domains are assigned into a hierarchy of structures (Appendix Figure 3). Specifically, we define this as a multiclass classification problem in which a pair of proteins is classified into no similarity, class level similarity, fold level similarity, superfamily level similarity, or family level similarity based on the most specific level of the SCOP hierarchy shared by those proteins. We encode these labels as y ∈ {0, 1, 2, 3, 4} based on the number of levels shared (i.e. y=0 encodes no similarity, y=1 encodes class similarity, etc.).
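To ground the encoder description above, here is a minimal PyTorch sketch of the input projection and biLSTM stack. This is our own illustration: the LM feature dimension, and the folding of W^LM, W^x, and b into a single linear layer over the concatenated inputs (mathematically equivalent), are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    """Sketch: project [LM hidden states, 1-hot amino acids] with a ReLU,
    then a 3-layer biLSTM and a linear output layer into 100-d embeddings."""
    def __init__(self, lm_dim=2048, n_aa=20, proj=512, hidden=512, out=100):
        super().__init__()
        # Folds W_LM, W_x, and b into one linear layer on the concatenation.
        self.proj = nn.Linear(lm_dim + n_aa, proj)
        self.lstm = nn.LSTM(proj, hidden, num_layers=3,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, out)

    def forward(self, h_lm, x_onehot):          # (B, L, lm_dim), (B, L, n_aa)
        v = torch.relu(self.proj(torch.cat([h_lm, x_onehot], dim=-1)))
        z, _ = self.lstm(v)
        return self.out(z)                      # (B, L, 100) position embeddings

z = SequenceEncoder()(torch.randn(1, 80, 2048), torch.zeros(1, 80, 20))
print(z.shape)  # torch.Size([1, 80, 100])
```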
In the following two sections, we describe how protein sequences are compared based on their sequences of vector embeddings using soft symmetric alignment, and then how this alignment score is used to predict the specific similarity class by taking advantage of the natural ordering of these classes in an ordinal regression framework. In order to calculate the similarity of two amino acid sequences, given that each has been encoded into a sequence of vector representations, z_1...z_n and z′_1...z′_m, we develop a soft symmetric alignment mechanism in which the similarity between the two sequences is calculated based on their vector embeddings as ŝ = −(1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} a_ij · ||z_i − z′_j||_1, (1) where the a_ij are entries of the alignment matrix given by a_ij = α_ij + β_ij − α_ij·β_ij, with α_ij the softmax over positions j of −||z_i − z′_j||_1 and β_ij the analogous softmax over positions i, and where A = Σ_{i=1}^{n} Σ_{j=1}^{m} a_ij is the length of the alignment. Next, to relate this scalar similarity to the ordinal structural similarities defined using SCOP (y ∈ {0, 1, 2, 3, 4}), we adopt an ordinal regression framework. Specifically, we learn a series of binary classifiers to predict whether the structural similarity level is greater than or equal to each level t, given the alignment score (Equation 1). Given parameters θ_1...θ_4 and b_1...b_4, the probability that two sequences share similarity greater than or equal to t is defined by p̂(y ≥ t) = sigmoid(θ_t·ŝ + b_t), with the constraint that θ_t ≥ 0 to enforce that p̂ increases monotonically with ŝ. The structural similarity loss is then given by the sum of the binary cross entropies of these classifiers, L_similarity = −Σ_{t=1}^{4} ( 1[y ≥ t] log p̂(y ≥ t) + 1[y < t] log(1 − p̂(y ≥ t)) ). These parameters are fit jointly with the parameters of the sequence encoder by backpropagating through the SSA, which is fully differentiable. Furthermore, given these classifiers, the predicted probability that two sequences belong to structural similarity level t is p̂(y = t) = p̂(y ≥ t)·(1 − p̂(y ≥ t + 1)), with p̂(y ≥ 0) = 1 by definition. We can augment our SSA framework, in which position-level correspondence is inferred between sequences, with position-level supervision directly in the form of within-protein contacts between residues. We introduce a secondary task of within-protein residue-residue contact prediction with the hypothesis that the fine-grained structural supervision provided by the observed contacts will improve the quality of the embeddings. Contact prediction is a binary classification problem in which we seek to predict whether the residues at positions i and j within an amino acid sequence make contact in the 3D structure. Following common practice in the protein structure field, we define two positions as making contact if the Cα atoms of those residues occur within 8 Å in the 3D structure. In order to predict contacts from the sequence of embedding vectors given by the encoder for an arbitrary protein of length N, we define a pairwise feature tensor of size (N×N×2D), where D is the dimension of the embedding vectors, containing pairwise features given by the concatenation of the absolute element-wise differences and the element-wise products of the vector representations for each pair of positions, v_ij = [ |z_i − z_j| ; z_i ⊙ z_j ]. We choose this featurization because it is symmetric, v_ij = v_ji, and has shown widespread utility for pairwise comparison models in NLP BID35. These vectors are then transformed through a single hidden layer of dimension H (implemented as a width-1 convolutional layer) with ReLU activation, giving h_ij = ReLU(W·v_ij + b).
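Stepping back to the SSA similarity defined at the start of this passage, Equation 1 as reconstructed above fits in a few lines of PyTorch. This is our own sketch, assuming softmax alignment weights over the negative L1 distances:

```python
import torch

def ssa_similarity(z1, z2):
    """Soft symmetric alignment similarity: z1 (n, d), z2 (m, d)."""
    dist = torch.cdist(z1, z2, p=1)          # pairwise L1 distances (n, m)
    alpha = torch.softmax(-dist, dim=1)      # align each position of z1 to z2
    beta = torch.softmax(-dist, dim=0)       # align each position of z2 to z1
    a = alpha + beta - alpha * beta          # symmetrized alignment matrix
    return -(a * dist).sum() / a.sum()       # negative mean aligned distance

z1, z2 = torch.randn(50, 100), torch.randn(64, 100)
print(ssa_similarity(z1, z2))                # fully differentiable scalar score
```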
Contact predictions are then made by convolving a single 7x7 filter over the resulting N×N×H tensor, with padding and a sigmoid activation, to give an N×N matrix containing the predicted probability of each pair of residues forming a contact. Given the observed contacts, we define the contact prediction loss, L_contact, to be the expectation of the cross entropy between the observed labels and the predicted contact probabilities, taken over all pairs of residues within each protein in the dataset. Complete multitask loss. We define the full multitask objective by L = λ·L_similarity + (1 − λ)·L_contact, where λ is a parameter that interpolates between the structural similarity and contact prediction losses. This error signal is backpropagated through the contact prediction specific parameters defined in Section 3.3, the similarity prediction specific parameters defined in Section 3.2, and the parameters of the sequence encoder defined in Section 3.1 to train the entire model end-to-end. Our encoder consists of 3 biLSTM layers with 512 hidden units each and a final output embedding dimension of 100 (Appendix Figure 2). Language model hidden states are projected into a 512-dimensional vector before being fed into the encoder. In the contact prediction module, we use a hidden layer with dimension 50. These hyperparameters were chosen to be as large as possible while fitting on a single GPU with a reasonable minibatch size. While we compare performance with simpler encoder architectures in Section 4.2, it is possible that performance could be improved further with a careful architecture search; however, that is beyond the scope of this work. Sequence embedding models are trained for 100 epochs using ADAM with a learning rate of 0.001 and otherwise default parameters provided by PyTorch. Each epoch consists of 100,000 examples sampled from the SCOP structural similarity training set with smoothing of the similarity level distribution of 0.5. In other words, the probability of sampling a pair of sequences with similarity level t is proportional to N_t^{0.5}, where N_t is the number of sequence pairs with similarity t in the training set. This slightly upweights the sampling of highly similar pairs of sequences that would otherwise be rare. We choose 0.5 specifically such that a minibatch of size 64 is expected to contain two pairs of sequences with family level similarity. The structural similarity component of the loss is estimated with minibatches of 64 pairs of sequences. When using the full multitask objective, the contact prediction component uses minibatches of 10 sequences and λ = 0.1. Furthermore, during training we apply a small perturbation to the sequences by resampling the amino acid at each position from the uniform distribution with probability 0.05. These hyperparameters were selected using a validation set as described in Appendix section A.2. All models were implemented in PyTorch and trained on a single NVIDIA Tesla V100 GPU. Each model took roughly 3 days to train and required 16 GB of GPU RAM. Additional runtime and memory details can be found in Appendix A.3. In the following sections, we refer to the 3-layer biLSTM encoder trained with the full framework as "SSA (full)" and the framework without contact prediction (i.e. λ = 1) as "SSA (no contact prediction)." We first evaluate the performance of our full SSA embedding model for predicting structural similarity between amino acid sequences using the SCOP dataset.
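The contact module just described can also be sketched compactly. This is our own illustration, with D = 100 and H = 50 following the stated hyperparameters; the multitask combination appears as a comment rather than a full training loop.

```python
import torch
import torch.nn as nn

class ContactHead(nn.Module):
    """Sketch: pairwise features -> width-1 hidden layer (ReLU) -> a single
    7x7 conv filter with sigmoid -> N x N contact probabilities."""
    def __init__(self, d=100, h=50):
        super().__init__()
        self.hidden = nn.Conv2d(2 * d, h, kernel_size=1)
        self.out = nn.Conv2d(h, 1, kernel_size=7, padding=3)

    def forward(self, z):                        # z: (N, d), one protein
        diff = (z[:, None, :] - z[None, :, :]).abs()
        prod = z[:, None, :] * z[None, :, :]
        v = torch.cat([diff, prod], dim=-1)      # (N, N, 2d); note v_ij == v_ji
        v = v.permute(2, 0, 1).unsqueeze(0)      # (1, 2d, N, N)
        h = torch.relu(self.hidden(v))
        return torch.sigmoid(self.out(h))[0, 0]  # (N, N) contact probabilities

p_contact = ContactHead()(torch.randn(120, 100))
# Multitask objective sketch: loss = lam * L_similarity + (1 - lam) * L_contact,
# where L_contact is the binary cross entropy against the observed contact map.
```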
We benchmark our embedding model against several widely used sequence-based protein comparison methods: Needleman-Wunsch alignment (NW-align), phmmer BID32, an HMM-to-sequence comparison method, and HHalign BID33, an HMM-to-HMM comparison method. For HHalign, profile HMMs for each sequence were constructed by iterative search against the uniref30 database. We also benchmark our method against TMalign, a method for evaluating protein similarity based on alignments of protein structures. Table 1: Comparison of the full SSA model with three protein sequence alignment methods (NW-align, phmmer, and HHalign) and the structure alignment method TMalign. We measure accuracy, Pearson's correlation (r), Spearman's rank correlation (ρ), and average precision scores for retrieving protein pairs with structural similarity of at least class, fold, superfamily, and family levels. Methods are compared based on accuracy of classifying the shared SCOP level, correlation between the similarity score and the shared SCOP level, and average precision scores when considering correct matches at each level of the SCOP hierarchy (i.e. proteins that are in the same class, proteins in the same fold, etc.). The SCOP benchmark datasets are formed by splitting the SCOPe ASTRAL 2.06 dataset, filtered to a maximum sequence identity of 95%, into 22,408 training and 5,602 held-out sequences. From the held-out sequences, we randomly sample 100,000 pairs as the ASTRAL 2.06 structural similarity test set. Furthermore, we define a second test set using the newest release of SCOPe (2.07) by collecting all protein sequences added between the 2.06 and 2.07 ASTRAL releases. This gives a set of 688 protein sequences, all pairs of which define the ASTRAL 2.07 new test set. The average percent identity between pairs of sequences within all three datasets is 13%. Sequence length statistics can be found in Appendix TAB4. We find that our full SSA embedding model outperforms all other methods on all metrics on both datasets. On the 2.06 test set, we improve overall prediction accuracy from 0.79 to 0.95, Pearson's correlation from 0.37 to 0.91, and Spearman's rank correlation from 0.23 to 0.69 over the next best sequence comparison method, HHalign, without requiring any database search to construct sequence profiles. Furthermore, our full SSA model is much better for retrieving proteins sharing the same fold -- the structural level of most interest for finding distant protein homologues -- improving the average precision score by 0.28 on the 2.06 test set and 0.10 on the 2.07 test set over HHalign. We find that our full SSA embedding model even outperforms TMalign, a method for comparing proteins based on their 3D structures, when predicting shared SCOP membership. This is remarkable considering that our model uses only sequence information when making predictions, whereas TMalign is provided with the known protein structures. The largest improvement comes at the SCOP class level, where TMalign achieves a much lower average precision score for retrieving these weak structural matches. We next evaluate the individual model components on two tasks: structural similarity prediction on the ASTRAL 2.06 test set and 8-class secondary structure prediction on a 40% sequence identity filtered dataset containing 22,086 protein sequences from the protein data bank (PDB) BID36, a repository of experimentally determined protein structures.
Secondary structure prediction is a sequence labeling problem in which we attempt to classify every position of a protein sequence into one of eight classes describing the local 3D structure at that residue. We use this task to measure the utility of our embeddings for position-specific prediction problems. For this problem, we split the secondary structure dataset into 15,461 training and 6,625 testing sequences. Table 2: Study of individual model components. Results of structural similarity prediction on the ASTRAL 2.06 test set and secondary structure prediction are provided for embedding models trained with various components of our multitask framework: the SSA model trained without the language model component of the encoder and without contact prediction (SSA (without language model)); the ME, UA, and SSA models trained without contact prediction; and the full SSA embedding model. We then treat each position of each sequence as an independent datapoint with features given either by the 100-d embedding vector or by a 1-hot encoding of the k-mer at that position, and train a fully connected neural network (2 hidden layers, 1024 units each, ReLU activations) to predict the secondary structure class from the feature vector. These models are trained with cross entropy loss for 10 epochs using ADAM with learning rate 0.001 and a minibatch size of 256. SSA outperforms alternative comparison methods. We first demonstrate the importance of our SSA mechanism when training the contextual embedding model by comparing the performance of biLSTM encoders trained with SSA versus the same encoders trained with uniform alignment and mean embedding comparison approaches BID21. In uniform alignment (UA), we consider a uniform prior over possible alignments giving the similarity score. For the mean embedding method (ME), we instead calculate the similarity score based on the difference between the average embeddings of the two sequences. For these baselines, we substitute ŝ_UA = −(1/(nm)) Σ_{i=1}^{n} Σ_{j=1}^{m} ||z_i − z′_j||_1 and ŝ_ME = −|| z̄ − z̄′ ||_1, respectively, in place of the SSA similarity (Equation 1) during model training and prediction. These models are trained without contact prediction (λ = 1) to compare the alignment component in isolation. We find not only that the SSA embeddings are better predictors of secondary structure than k-mer features (accuracy 0.487 vs. 0.444 for 3-mers), but also that the SSA mechanism is necessary for achieving the best performance on both the structural similarity and local structure prediction tasks. As seen in Table 2, the ME model achieves close to SSA performance on secondary structure prediction, but is significantly worse for SCOP similarity prediction. The UA model, on the other hand, is close to SSA on SCOP similarity but much worse when predicting secondary structure. This suggests that our SSA mechanism captures the best of both methods, allowing embeddings to be position specific as in the ME model while also being better predictors of SCOP similarity as in the UA model. Contact prediction improves embeddings. Although the SSA mechanism allows our embedding model to capture position-specific information, we wanted to explore whether positional information within sequences, in the form of contact prediction, could be used to improve the embeddings. We train models with and without the contact prediction task and find that including contact prediction improves both the structural similarity prediction and secondary structure prediction results.
The accuracy of secondary structure prediction improves from 0.487 without to 0.630 with contact prediction (Table 2). This suggests that the contact prediction task dramatically improves the quality of the local embeddings on top of the weak supervision provided by whole-structure comparison. Table 3: Accuracy of transmembrane prediction using structural embeddings in 10-fold cross validation and comparison with other transmembrane prediction methods. BiLSTM+CRF models using either our full SSA model embeddings or 1-hot encodings of the amino acids as features are displayed below the dotted line. We compare with results for a variety of transmembrane prediction methods previously reported on the TOPCONS dataset. For reference, we also report contact prediction performance for our full SSA model in Appendix A.5. Encoder architecture and pretrained language model are important. Finally, we show, for the first time with biological sequences, that a language model pretrained on a large unsupervised protein sequence database can be used to transfer information to supervised sequence modeling problems. SCOP similarity classification for SSA embedding models trained with and without LM hidden layer inputs shows that including the LM substantially improves performance, increasing accuracy from 0.898 to 0.938. Furthermore, we examine the extent to which the LM hidden states capture all useful structural information by training SSA embedding models with less expressive power than our 3-layer biLSTM architecture (Appendix TAB5). We find that the LM hidden states are not sufficient for high performance on the structural similarity task, with linear, fully connected (i.e. width-1 convolution), and single-layer biLSTM embedding models having lower accuracy, Pearson, and Spearman correlations than the 3-layer biLSTM on the ASTRAL 2.06 test set. We demonstrate the potential utility of our protein sequence embedding model for transferring structural information to other sequence prediction problems by leveraging our embedding model for transmembrane prediction. In transmembrane prediction, we wish to detect which, if any, segments of the amino acid sequence cross the lipid bilayer for proteins integrated into the cell membrane. This is a well studied problem in protein biology, with methods generally consisting of HMMs with sophisticated, manually designed hidden state transition distributions and emission distributions including information about residue identity, amino acid frequencies from multiple sequence alignments, local structure, and chemical properties. Newer methods are also interested in the detection of signal peptides, which are short amino acid stretches at the beginning of a protein sequence signaling for the protein to be inserted into the cell membrane. To benchmark our embedding vectors for this problem, we develop a conditional random field (CRF) model in which the propensity of each hidden state given the sequence of embedding vectors is defined by a single-layer biLSTM with 150 units (a minimal sketch of this biLSTM + CRF scoring follows below). As a baseline, we include an identical biLSTM + CRF model using only 1-hot encodings of the amino acids as features. For the transition probabilities between states, we adopt the same structure as used in TOPCONS BID37 and perform 10-fold cross validation on the TOPCONS transmembrane benchmark dataset.
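As referenced above, the biLSTM + CRF scoring can be sketched as follows. This is our own minimal implementation with a dense transition matrix; the TOPCONS-style constrained transition structure and Viterbi decoding are omitted for brevity, and the number of states is an illustrative assumption.

```python
import torch
import torch.nn as nn

class BiLSTMCRF(nn.Module):
    """Sketch: biLSTM emissions + linear-chain CRF log-likelihood."""
    def __init__(self, in_dim=100, hidden=150, n_states=5):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, n_states)
        self.trans = nn.Parameter(torch.zeros(n_states, n_states))  # i -> j scores

    def forward(self, x, tags):              # x: (1, L, in_dim), tags: (L,)
        e, _ = self.lstm(x)
        e = self.emit(e)[0]                  # (L, n_states) emission scores
        # Score of the given tag path: emissions at gold tags + transitions.
        path = e[torch.arange(len(tags)), tags].sum()
        path = path + self.trans[tags[:-1], tags[1:]].sum()
        # Log partition function via the forward algorithm.
        alpha = e[0]
        for t in range(1, e.size(0)):
            alpha = e[t] + torch.logsumexp(alpha.unsqueeze(1) + self.trans, dim=0)
        return path - torch.logsumexp(alpha, dim=0)   # log p(tags | x)

crf = BiLSTMCRF()
x, tags = torch.randn(1, 30, 100), torch.randint(0, 5, (30,))
print(crf(x, tags))   # maximize this log-likelihood during training
```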
We report results for correctly predicting regions in proteins with only transmembrane domains (TM), transmembrane domains and a signal peptide (SP+TM), neither transmembrane nor signal peptide domains (Globular), or a signal peptide but no transmembrane regions (Globular+SP). Transmembrane state labels are predicted with Viterbi decoding. Again following TOPCONS, predictions are counted as correct if, for TM proteins, our model predicts no signal peptide, the same number of transmembrane regions, and those regions overlap with real regions by at least five positions. Correct SP+TM predictions are defined in the same way, except that proteins must be predicted to start with a signal peptide. Globular protein predictions are correct if no transmembrane or signal peptides are predicted, and Globular+SP predictions are correct if only a leading signal peptide is predicted. We find that our transmembrane predictions rank first or tied for first in 3 out of the 4 categories (SP+TM, Globular, and Globular+SP) and second for the TM category. Overall, our transmembrane predictions are best, with prediction accuracy of 0.89 vs 0.87 for TOPCONS. Remarkably, this result is achieved by simply replacing the potential function in the CRF with a function of our embedding vectors; the hidden state grammar is the same as that of TOPCONS. Furthermore, the performance cannot be attributed solely to the biLSTM+CRF structure, as the biLSTM+CRF with 1-hot encoding of the amino acids performs poorly, tying MEMSAT-SVM for worst performance. This is particularly noteworthy because TOPCONS is a meta-predictor: it uses outputs from a wide variety of other transmembrane prediction methods to define the transmembrane state potentials.

In this work, we proposed a novel alignment approach to learning contextual sequence embeddings with weak supervision from a global similarity measure. Our SSA model is fully differentiable, fast to compute, and can be augmented with position-level structural information. It outperforms the competition in predicting protein structural similarity, including, remarkably, structure alignment with TMalign. One consideration of training using SCOP, however, is that we focus exclusively on single-domain protein sequences. This means that the highly contextual embeddings given by the biLSTM encoder to single domains may differ from embeddings for the same domain in a multi-domain sequence. One interesting extension would thus be to modify the encoder architecture or training procedure to better model domains in multi-domain contexts. Nonetheless, the resulting embeddings are widely useful, allowing us to improve over the state-of-the-art in transmembrane region prediction, and can easily be applied to other protein prediction tasks such as predicting functional properties, active site locations, protein-protein interactions, etc. Most methods that use HMM sequence profiles or position-specific scoring matrices could be augmented with our embeddings. The broader framework extends to other related (non-biological) tasks.

A APPENDIX

The bidirectional LSTM language model was trained on the full set of protein domain sequences in the Pfam database, 21,827,419 total sequences.
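Referring back to the region-level evaluation criteria described above (before the appendix), the TM-protein correctness check can be sketched in a few lines. The per-residue label letters ('M' for membrane states, 'S' for signal peptide states) and the helper names are hypothetical conventions for illustration, not the benchmark's actual encoding.

```python
def regions(labels, kind):
    """Contiguous index spans of `kind` in a per-residue label sequence."""
    spans, start = [], None
    for i, label in enumerate(labels + ["<end>"]):
        if label == kind and start is None:
            start = i
        elif label != kind and start is not None:
            spans.append((start, i))
            start = None
    return spans

def overlap(a, b):
    """Number of positions shared by two (start, end) spans."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def tm_prediction_correct(pred, true):
    """TOPCONS-style criterion for a TM protein: no predicted signal
    peptide, the same number of transmembrane regions, and each matched
    pair of regions overlapping by at least five residues."""
    if regions(pred, "S"):
        return False
    p, t = regions(pred, "M"), regions(true, "M")
    return len(p) == len(t) and all(overlap(a, b) >= 5 for a, b in zip(p, t))
```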
The language model was trained to predict the amino acid at position i given observations of all amino acids before i and all amino acids after i, by minimizing the cross entropy loss with predicted log probabilities given by the sum of the forward and reverse LM direction predictions, log p̂(x_i) = log(p_F(x_i) + p_R(x_i)), where p_F(x_i) is the probability given by the forward direction LSTM and p_R(x_i) is the probability given by the reverse direction LSTM.

The language model architecture consisted of a 2-layer LSTM with 1024 units in each layer, followed by a linear transformation into the 20-d amino acid prediction. All parameters were shared between the forward and reverse direction components. The model was trained for a single epoch using ADAM with a learning rate of 0.001 and a minibatch size of 32.

We select the resampling probability and λ hyperparameters based on structural similarity prediction accuracy on a validation set held out from the SCOP ASTRAL 2.06 training set (Section 4.1). For these experiments, we hold out 2,240 random sequences from the 22,408 sequences of the training set. From these held-out sequences, we randomly sample 100,000 pairs as the validation set.

Table 6: Evaluation of the amino acid resampling probability and the contact prediction loss weight, λ, for structural similarity prediction on the validation set. (Top) Resampling probability of 0.05 is compared with no resampling for 3-layer biLSTM encoders with and without language model components that are trained using SSA without contact prediction. (Bottom) Comparison of models trained using the full framework with λ = 0.5, λ = 0.33, and λ = 0.1.

Resampling probability. We consider models trained with amino acid resampling probability 0.05 and without amino acid resampling in the simplified framework (SSA without contact prediction), using the 3-layer biLSTM encoder with and without the language model component. We find that the structural similarity results are slightly improved when using amino acid resampling, regardless of whether the LM component of the encoder is included (Appendix Table 6). Based on this result, all models were trained with amino acid resampling probability 0.05.

Multitask loss weight, λ. We evaluate three values of λ, interpolating between the structural similarity loss and the contact prediction loss, for predicting structural similarity on the validation set. Prediction accuracy increased progressively as λ decreased, and all models trained with contact prediction outperformed those trained without contact prediction (Appendix Table 6). Because it gave the best prediction accuracy on the validation set, λ = 0.1 was selected for training models with our full framework.

For the NW-align method, similarity between protein sequences was computed using the BLOSUM62 substitution matrix with gap open and extend penalties of -11 and -1, respectively. For phmmer, each pair of sequences was compared in both directions (i.e. query->target and target->query) using the '-max' option. The similarity score for each pair was treated as the average of the query->target and target->query scores. For HHalign, multiple sequence alignments were first built for each sequence by using HHblits to search for similar sequences in the uniclust30 database with a maximum of 2 rounds of iteration (-n 2). Sequence pairs were then scored by using HHalign to score the target->query and query->target HMM-HMM alignments. Again, the average of the two scores was treated as the overall HHalign score.
Finally, for TMalign, the structures for each pair of proteins were aligned and the scores for the query->target and target->query alignments were averaged to give the overall TMalign score for each pair of proteins. To calculate the classification accuracy from the above scores, thresholds were found to maximize prediction accuracy when binning scores into similarity levels, using 100,000 pairs of sequences sampled from the ASTRAL 2.06 training set.

Although we use contact prediction as an auxiliary task to provide position-level structural supervision for the purpose of improving embedding quality and structure similarity prediction, we include results for predicting contacts using the trained contact prediction module here. We report results for contact prediction using the full SSA model on the SCOP ASTRAL 2.06 test set and the 2.07 new test set in Appendix Table 7. Precision, recall, and F1 score are calculated using a probability threshold of 0.5 for assigning predicted contacts. We consider performance for predicting all contacts (i.e. contacts between all amino acid positions, excluding neighbors, |i − j| ≥ 2), our training objective, and for distant contacts (|i − j| ≥ 12), which are the focus of co-evolution based methods. We also report the precision of the top L, L/2, and L/5 contact predictions, where L is the length of the protein sequence.

Table 7: Contact prediction performance of the full SSA model on the SCOP ASTRAL 2.06 test set and the 2.07 new test set. We report precision, recall, F1 score, and the area under the precision-recall curve (AUPR) for predicting all contacts (|i − j| ≥ 2) and distant contacts (|i − j| ≥ 12) in the test set proteins. We also report precision of the top L, L/2, and L/5 predicted contacts.

In order to facilitate some comparison with state-of-the-art co-evolution based contact prediction methods, we also report results for contact prediction using our full SSA model on the publicly released set of free modelling targets from CASP12 BID39. This dataset consists of 21 protein domains; we include the complete list at the end of this section. We compare with deep convolutional neural network models using co-evolution features, RaptorX-Contact BID40, iFold & Deepfold-Contact BID41, and MetaPSICOV BID42, and with GREMLIN BID43, an entirely co-evolution based approach. We find that our model dramatically outperforms these methods when predicting all contacts but performs worse when predicting only distant contacts (Appendix Table 8). This is unsurprising, as our model is trained to predict all contacts, of which local contacts are much more abundant than distant contacts, whereas the co-evolution methods are designed to predict distant contacts. Furthermore, we wish to emphasize that our model is tuned for structural similarity prediction and that our contact prediction module is extremely simple, being only a single fully connected layer followed by a single convolutional layer. It is possible that much better performance could be achieved using our embeddings with a more sophisticated contact prediction architecture. That said, our model does outperform the pure co-evolution method, GREMLIN, based on AUPR for predicting distant contacts. These results suggest that our embeddings may be useful as features, in combination with co-evolution based features, for improving dedicated contact prediction models on both local and distant contacts.
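As a sketch of how the top-L precision numbers in Tables 7 and 8 can be computed from a predicted contact map, assuming a symmetric (L, L) score matrix and a binary ground-truth matrix; the function and argument names are our own.

```python
import numpy as np

def topk_contact_precision(scores, truth, L, frac=1.0, min_sep=12):
    """Precision of the top (L * frac) predicted contacts among residue
    pairs with sequence separation |i - j| >= min_sep (12 for 'distant'
    contacts, 2 for 'all' contacts). scores, truth: (L, L) arrays."""
    i, j = np.triu_indices(L, k=min_sep)        # candidate pairs, i < j
    order = np.argsort(scores[i, j])[::-1]      # highest scores first
    k = max(1, int(round(L * frac)))            # top L, L/2, L/5, ...
    top = order[:k]
    return truth[i[top], j[top]].mean()
```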
Table 8: Contact prediction performance of the full SSA model on the CASP12 free modelling targets, with results of state-of-the-art co-evolution based methods for comparison. We report precision, recall, F1 score, and the area under the precision-recall curve (AUPR) for predicting all contacts (|i − j| ≥ 2) and distant contacts (|i − j| ≥ 12). We also report precision of the top L, L/2, and L/5 predicted contacts.
We present a method for learning protein sequence embedding models using structural information in the form of global structural similarity between proteins and within-protein residue-residue contacts.
1,108
scitldr
Inspired by the adaptation phenomenon of biological neuronal firing, we propose regularity normalization: a reparameterization of the activation in the neural network that takes into account the statistical regularity in the implicit space. By considering the neural network optimization process as a model selection problem, the implicit space is constrained by the normalizing factor, the minimum description length of the optimal universal code. We introduce an incremental version of computing this universal code as the normalized maximum likelihood, demonstrate its flexibility to include a data prior such as top-down attention and other oracle information, and show its compatibility with batch normalization and layer normalization. The preliminary results showed that the proposed method outperforms existing normalization methods in tackling limited and imbalanced data from a non-stationary distribution, benchmarked on a computer vision task. As an unsupervised attention mechanism given input data, this biologically plausible normalization has the potential to deal with other complicated real-world scenarios as well as the reinforcement learning setting, where the rewards are sparse and non-uniform. Further research is proposed to discover these scenarios and explore the behaviors among different variants.

The Minimum Description Length (MDL) principle asserts that the best model given some data minimizes the combined cost of describing the model and describing the misfit between the model and the data BID9, with the goal of maximizing regularity extraction for optimal data compression, prediction, and communication BID4. Most unsupervised learning algorithms can be understood using the MDL principle BID10, treating the neural network as a system communicating the input to a receiver. If we consider neural network training as the optimization process of a communication system, each input at each layer of the system can be described as a point in a low-dimensional continuous constraint space BID13. If we consider the neural networks as population codes, the constraint space can be subdivided into the input-vector space, the hidden-vector space, and the implicit space, which represents the underlying dimensions of variability in the other two spaces, i.e., a reduced representation of the constraint space. For instance, given an image of an object, a rotated or scaled version still refers to the same object; thus each image instance of the same object can be represented by a position on a 2D implicit space with one dimension as orientation and the other as size BID13. The relevant information about the implicit space can be constrained to ensure a minimized description length of the system. This type of constraint can also be found in the biological brains of primates: high-level brain areas are known to send top-down feedback connections to lower-level areas to select the most relevant information in the current input given the current task BID2, a process similar to the communication system. This type of modulation is performed by collecting statistical regularity in a hierarchical encoding process among brain areas. One feature of the neural coding during hierarchical processing is adaptation: in vision neuroscience, neurons tuned to a vertical orientation reduce their firing rates to that orientation after adaptation BID1, while the cell responses to other orientations may increase BID3.
These behaviors match well with the information-theoretic point of view that the most relevant information (saliency), which depends on the statistical regularity, carries higher "information", just as with the firing of the neurons. The more regular the input features are, the lower the activation they should yield. We introduce the minimum description length (MDL), such that the activation of neurons can be made analogous to the code length of the model (a specific neuron or neuronal population): a shorter code length would be assigned to a more regular input (such as after adaptation), and a longer code length to a rarer input or event.

In this paper, we adopt a similar definition of the implicit space as in BID13, but extend it beyond unsupervised learning, to a generic neural network optimization problem in both supervised and unsupervised settings. Given the neuroscience inspiration described above, we consider the formulation and computation of the description length differently. Instead of considering neural networks as population codes, we formulate each layer of the neural network during training as a state of model selection. In our setup, the description length is computed not at the scale of the entire neural network, but per layer of the network. In addition, the optimization objective is not to minimize the description length; instead, we take the minimum description length into account as part of a normalization procedure that reparameterizes the activation of each neuron in each layer. The computation of the description length (or model cost as in BID13) usually aims to minimize it, while we directly compute the minimum description length in each layer not to minimize anything, but to reassign the weights based on statistical regularities. Finally, we compute the description length via an optimal universal code obtained from the batch input distribution in an online, incremental fashion.

We begin our presentation in Section 2, formulating the problem setting in neural network training as a layer-specific model selection process under the MDL principle. We then introduce the proposed regularity normalization (RN) method, its formulation, and the incremental implementation. We also present several variants of regularity normalization: by incorporating batch and layer normalization, termed regularity batch normalization (RBN) and regularity layer normalization (RLN), and by including a data prior as a top-down attention mechanism during training, termed saliency normalization (SN). In Appendix A, we present preliminary results on the imbalanced MNIST dataset and demonstrate that our approach is advantageous over existing normalization methods in different imbalanced scenarios. In the last section, we conclude our methods and point out several future directions for this research.

Figure 1: Normalized maximum likelihood. Data samples x_i are drawn from the data distribution X, and the model θ̂_i is the optimal model that describes data x_i with the shortest code length. θ_j is an arbitrary model that is not θ̂_3, so P(x_3|θ_j) is not considered when computing the optimal universal code according to the NML formulation.

Consider a model class Θ consisting of a finite number of models parameterized by the parameter set θ. Given a data sample x, each model in the model class describes a probability P(x|θ) with the code length computed as −log P(x|θ).
The minimum code length given any arbitrary θ would be L(x|θ̂(x)) = −log P(x|θ̂(x)), with the model θ̂(x) that compresses data x most efficiently and offers the maximum likelihood P(x|θ̂(x)) BID4. However, this compressibility is unattainable for multiple inputs, as their probability distributions are different. The solution relies on a universal code, P̄(x), defined for a model class Θ such that for any data sample x, the shortest code for x is always L(x|θ̂(x)) BID11. Among universal codes, the normalized maximum likelihood (NML) probability minimizes the worst-case regret, with the minimax optimal solution given by BID8:

P_NML(x) = P(x|θ̂(x)) / Σ_{x'} P(x'|θ̂(x')),

where the summation is over the entire data sample space. Fig 1 describes the optimization problem of finding the optimal model P(x_i|θ̂_i) given data sample x_i among the model class Θ. The models in the class, P(x|θ), are parameterized by the parameter set θ, and the x_i are data samples from data X. With this distribution, the regret is the same for all data samples x, given by Grünwald as

COMP(Θ) = log Σ_{x'} P(x'|θ̂(x')),

which defines the model class complexity, i.e. how many data samples can be well explained by Θ.

Figure 2: Model selection in a neural network. If we consider each time step of the optimization (drawn here to be batch-dependent) as the process of choosing the optimal model from the model class Θ^i for the ith layer of the network, the optimized parameter θ̂^i_j, with subscript j as time step t = j and superscript i as layer i, can be assumed to be the optimal model among all models in the model class Θ^i. The normalized maximum likelihood can be computed by choosing P(x^i_j|θ̂^i_j), the "optimal" model with the shortest code length given data x^i_j, as the summand in the normalization.

In the neural network setting, where the optimization process is performed in batches (with incremental data samples x_j, j denoting batch j), the model selection process is formulated as a partially observable problem (as in Fig 2). Here, to illustrate our approach, we consider a feedforward neural network as an example, without loss of generality to other architectures (such as convolutional layers or recurrent modules). x^i_j refers to the activation at layer i at time point j (batch j), and θ^i_j denotes the parameters that describe x^i_j (i.e. the weights for layer i − 1) optimized after j − 1 steps (having seen batches 0 through j − 1). Because one cannot exhaust the search among all possible θ, we assume that the optimized parameter θ̂^i_j at time step j is the optimal model P(x^i_j|θ̂^i_j) for the data sample x^i_j. Therefore, we generalize the optimal universal code with the NML formulation as:

P_NML(x^i_j) = P(x^i_j|θ̂^i_j) / Σ_{k=0}^{j} P(x^i_k|θ̂^i_k),

where θ̂^i_k (as in Fig 2) refers to the parameter already optimized for k steps, having seen the sequential data samples x_0 through x_{k−1}. This distribution is updated every time a new data sample is given, and can therefore be computed incrementally.

3 REGULARITY NORMALIZATION

Regularity normalization is outlined in Algorithm 1, where the input is the activation of each neuron in a certain layer and batch. The parameters COMP and θ are updated after each batch, through the incrementation in the normalization and the optimization in the training, respectively. The incrementation step involves computing the log sum of two values, which can easily be stabilized with the log-sum-exp trick. The normalization factor is then computed as the shortest code length L given the NML.
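To make the incremental computation concrete, here is one plausible instantiation in NumPy/SciPy. The paper leaves the per-layer model P(x|θ̂) implicit at this point, so modeling each unit's activation with a running Gaussian, the exponential moving average for the variance, and the final reweighting by relative code length are all our assumptions rather than the paper's prescription.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

class RegularityNorm:
    """Sketch of the incremental NML reparameterization, with a running
    Gaussian (mu, var) playing the role of the optimized model theta_hat."""
    def __init__(self):
        self.comp = -np.inf          # running log normalizer (model complexity)
        self.mu, self.var, self.n = 0.0, 1.0, 0

    def __call__(self, x):           # x: 1-D numpy array (one unit, one batch)
        loglik = norm.logpdf(x, self.mu, np.sqrt(self.var))
        # Incremental update of the universal-code normalizer (log-sum-exp trick).
        self.comp = logsumexp([self.comp, logsumexp(loglik)])
        code_len = self.comp - loglik            # -log P_NML(x): code length
        # "Optimize theta": pooled-mean update plus an EMA variance (assumption).
        self.n += x.size
        self.mu += (x.mean() - self.mu) * x.size / self.n
        self.var = 0.9 * self.var + 0.1 * x.var() + 1e-6
        # Assumed reweighting: scale activations by their relative code length.
        return x * code_len / (code_len.mean() + 1e-8)
```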
The NML distribution can be modified to also include a data prior function, s(x), given by BID14:

P_NML(x) = s(x) P(x|θ̂(x)) / Σ_{x'} s(x') P(x'|θ̂(x')),

where the data prior function s(x) can be anything, ranging from an emphasis on certain inputs, to the cost of certain data, or even top-down attention. For instance, we can introduce prior knowledge of the fraction of labels (say, in an imbalanced data problem where an oracle informs the model of the distribution of each label in the training phase); or in a scenario where we wish the model to focus specifically on certain features of the input, say a certain texture or color (just like a convolution filter); or in the case where the regularity drifts (such as user preferences over the years): in all these applications, the procedure can be more strategic given this additional information. Thus, we formulate this additional functionality into our regularity normalization as saliency normalization (SN), where P_NML is computed with the addition of a pre-specified s(x).

Algorithm 1: Regularity Normalization (RN)
Input: values of x over a mini-batch, B = {x_1, ..., x_m}
Parameters: COMP_t, θ̂_t
Output: y_i = RN(x_i)

In our current setup, the normalization is computed elementwise, considering the implicit space of the model parameters to be one-dimensional (i.e. all activations across the batch and layer are considered to be represented by the same implicit space). Instead, the definition of the implicit space can be more than one-dimensional to increase the expressibility of the method, and can also be user-defined. For instance, we can perform regularity normalization over the layer dimension such that the implicit space has the dimension of the layer, giving regularity layer normalization (RLN); similarly, normalizing over the dimension of the batch gives regularity batch normalization (RBN). These variants have the potential to inherit the innate advantages of BN and LN.

Inspired by the neural code adaptation of biological brains, we propose a biologically plausible normalization method that takes into account the regularity (or saliency) of the activation distribution in the implicit space, and normalizes it to upweight activation for rarely seen scenarios and downweight activation for commonly seen ones. We introduce concepts from the MDL principle and propose to consider the neural network training process as a model selection problem. We compute the optimal universal code length via the normalized maximum likelihood in an incremental fashion, and show that this implementation can easily be incorporated with established methods like batch normalization and layer normalization. In addition, we propose saliency normalization, which can introduce top-down attention and a data prior to facilitate representation learning. Fundamentally, we implement an incremental update of the normalized maximum likelihood, constraining the implicit space to have a low model complexity and a short universal code length. Preliminary results offered a proof of concept for the proposed method. Given the limited experiments at the current state, our approach empirically outperforms existing normalization methods in the imbalanced or limited data scenarios, as hypothesized. Next steps of this research include experiments with variants of the regularity normalization (SN, RLN, RBN, etc.), as well as the inclusion of top-down attention given by a data prior (such as features extracted by signal processing, or task-dependent information). In concept, regularity-based normalization can also be considered as an unsupervised attention mechanism imposed on the input data.
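A batch-level sketch of the prior-weighted code length used by the SN formulation above, under the assumption that log s(x) is supplied per sample (e.g., log label fractions or a top-down attention score); the function and argument names are our own.

```python
from scipy.special import logsumexp

def saliency_nml_code_length(loglik, log_prior):
    """Per-sample description length under the prior-weighted NML:
    -log P_NML(x) = comp - (log s(x) + log p(x|theta_hat)), where comp
    accumulates logsumexp(log s + loglik) over the observed samples.
    Here comp is computed over a single batch for simplicity."""
    comp = logsumexp(log_prior + loglik)
    return comp - (log_prior + loglik)
```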
As the next step, we are currently extending this method to convolutional and recurrent neural networks, applying it to popular state-of-the-art neural network architectures on datasets of multiple modalities, as well as to the reinforcement learning setting, where the rewards can be very sparse and non-uniform.

Table 1: Test errors of the imbalanced permutation-invariant MNIST 784-1000-1000-10 task, across the "Balanced" (n = 0), "Rare minority" (n = 1 to 3), "Highly imbalanced" (n = 4 to 8), and "Dominant oligarchy" (n = 9) scenarios, for the baseline and each normalization method (baseline test error at n = 0: 4.80 ± 0.34).

As a proof of concept, we evaluated our approach on the MNIST dataset BID7 and computed the total number of classification errors as a performance metric. As we specifically wish to understand the behavior where the data inputs are non-stationary and highly imbalanced, we created an imbalanced MNIST benchmark to test seven methods: batch normalization (BN), layer normalization (LN), weight normalization (WN), and regularity normalization (RN), as well as three variants: saliency normalization (SN) with the data prior as the class distribution, regularity layer normalization (RLN) where the implicit space is defined to be layer-specific, and a combined approach where RN is applied after LN (LN+RN).

Given the nature of regularity normalization, it should adapt to the regularity of the data distribution better than the other methods, tackling the imbalanced data issue by up-weighting the activation of rare sample features and down-weighting those of dominant sample features. To simulate changes in the context (input) distribution, in each epoch we randomly choose n classes out of the ten and set their sampling probability to 0.01 (only 1% of those n classes are used in the training). In this way, the training data may trick the models into preferring to classify into the dominant classes. For simplicity, we consider the classical 784-1000-1000-10 feedforward neural network with ReLU activation functions for all seven normalization methods, as well as the baseline neural network without normalization. As we are looking into the short-term sensitivity of the normalization methods on neural network training, one epoch of training is recorded (all models face the same randomized imbalanced distribution). The training, validation, and testing sets are shuffled into 55000, 5000, and 10000 cases. In the testing phase, the data distribution is restored to be balanced, and no models have access to the other testing cases or the data distribution. Stochastic gradient descent is used with learning rate 0.01 and momentum set to 0.9.

When n = 0, no classes are downweighted, so we term it the "fully balanced" scenario. When n = 1 to 3, a few classes are extremely rare, so we term it the "rare minority" scenario. When n = 4 to 8, the multi-class distributions are very different, so we term it the "highly imbalanced" scenario. When n = 9, there are one or two dominant classes that are 100 times more prevalent than the other classes, so we term it the "dominant oligarchy" scenario. In real life, the rare minority and highly imbalanced scenarios are very common, such as predicting the clinical outcomes of a patient when the therapeutic prognosis data are mostly tested on one gender rather than the others, or in the reinforcement learning setting where certain or most types of rewards are very sparse. Table 1 reports the test errors (in %) of the eight methods in 10 training conditions.
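A small sketch of the epoch-resampling protocol described above, choosing n of the ten classes and keeping their examples with probability 0.01; the function name and RNG handling are our own.

```python
import numpy as np

def imbalanced_epoch_indices(labels, n_rare, keep_prob=0.01, rng=None):
    """Subsample one epoch so that `n_rare` randomly chosen classes are
    kept with probability `keep_prob` (1% in the paper's protocol)."""
    rng = rng or np.random.default_rng()
    rare = rng.choice(10, size=n_rare, replace=False)
    keep = np.ones(len(labels), dtype=bool)
    mask = np.isin(labels, rare)
    keep[mask] = rng.random(mask.sum()) < keep_prob
    return np.flatnonzero(keep)
```

Resampling the kept indices anew at every epoch reproduces the non-stationary context shift the benchmark is designed to probe.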
In the balanced scenario, the proposed regularity-based method doesn't show clear advantages over existing methods, but it still manages to perform the classification tasks without major deficits. In both the "rare minority" and "highly imbalanced" scenarios, the regularity-based methods perform the best in all groups, suggesting that the proposed method successfully constrained the model to allocate learning resources to the "special cases" which are rare and out of the normal range, while BN and WN failed to learn them completely (as seen in the confusion matrices, not shown here). In the "dominant oligarchy" scenario, LN performs the best, dwarfing all other normalization methods. However, as in the case of n = 8, LN+RN performs considerably well, with performance within error bounds of that of LN, beating the other normalization methods by over 30%. It is noted that LN also managed to capture the features of the rare classes reasonably well in the other imbalanced scenarios, compared to BN, WN, and the baseline. The hybrid methods RLN and LN+RN both display excellent performance in the imbalanced scenarios, suggesting that combining regularity-based normalization with other methods is advantageous. These results are mainly in the short-term domain, as a proof of concept. Further analysis is needed to fully understand these behaviors in the long term (the converging performance over 100 epochs). However, the major test accuracy differences in the highly imbalanced scenario (RN over BN/WN/baseline by around 20%) in the short term are promising for its ability to learn from extreme regularities.

Normalization. Batch normalization (BN) performs global normalization along the batch dimension such that, for each neuron in a layer, the activation over all the mini-batch training cases follows a standard normal distribution, reducing the internal covariate shift BID6. Similarly, layer normalization (LN) performs global normalization over all the neurons in a layer, and has shown an effective stabilizing effect on the hidden state dynamics in recurrent networks BID0. Weight normalization (WN) applies normalization over the incoming weights, offering computational advantages for reinforcement learning and generative modeling BID12. Like BN and LN, we apply the normalization on the activation of the neurons, but as an element-wise reparameterization (over both the layer and batch dimensions). In Section 3.2, we also proposed variant methods based on our approach with batch-wise and layer-wise reparameterization, the regularity batch normalization (RBN) and regularity layer normalization (RLN).

Description length in neural networks. BID5 first introduced the description length to quantify neural network simplicity and developed an optimization method to minimize the amount of information required to communicate the weights of the neural network. BID13 considered neural networks as population codes and used MDL to develop highly redundant population codes. They showed that, by assuming the hidden units reside in low-dimensional implicit spaces, an optimization process can be applied to minimize the model cost under the MDL principle. Our proposed method adopts a similar definition of the implicit space, but considers the implicit space as a data-dependent encoding of statistical regularities. Unlike BID13 and BID5, we consider the description length as an indicator of the data input and assume that the implicit space is constrained when we normalize the activation of each neuron given its statistical regularity.
Unlike their implicit approach to computing the model cost, we directly compute the minimum description length with the optimal universal code, obtained in an incremental fashion.
Considering the neural network optimization process as a model selection problem, we introduce a biologically plausible normalization method that extracts statistical regularity under the MDL principle to tackle imbalanced and limited data issues.
1,109
scitldr
In this paper we study generative modeling via autoencoders while using the elegant geometric properties of the optimal transport (OT) problem and the Wasserstein distances. We introduce Sliced-Wasserstein Autoencoders (SWAE), which are generative models that enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or defining a closed-form for the distribution. In short, we regularize the autoencoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a predefined samplable distribution. We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Autoencoders (WAE) and Variational Autoencoders (VAE), while benefiting from an embarrassingly simple implementation. Scalable generative models that capture the rich and often nonlinear distribution of highdimensional data, (i.e., image, video, and audio), play a central role in various applications of machine learning, including transfer learning BID13 BID24, super-resolution BID15 BID20, image inpainting and completion BID34, and image retrieval BID6, among many others. The recent generative models, including Generative Adversarial Networks (GANs) BID0 BID1 BID10 BID29 and Variational Autoencoders (VAE) BID4 BID14 BID23 enable an unsupervised and end-to-end modeling of the high-dimensional distribution of the training data. Learning such generative models boils down to minimizing a dissimilarity measure between the data distribution and the output distribution of the generative model. To this end, and following the work of Arjovsky et al. BID0 and Bousquet et al. BID4 we approach the problem of generative modeling from the optimal transport point of view. The optimal transport problem BID17 BID33 provides a way to measure the distances between probability distributions by transporting (i.e., morphing) one distribution into another. Moreover, and as opposed to the common information theoretic dissimilarity measures (e.g., f -divergences), the p-Wasserstein dissimilarity measures that arise from the optimal transport problem: 1) are true distances, and 2) metrize a weak convergence of probability measures (at least on compact spaces). Wasserstein distances have recently attracted a lot of interest in the learning community BID0 BID4 BID8 BID11 BID17 due to their exquisite geometric characteristics BID30. See the supplementary material for an intuitive example showing the benefit of the Wasserstein distance over commonly used f -divergences. In this paper, we introduce a new type of autoencoders for generative modeling (Algorithm 1), which we call Sliced-Wasserstein Autoencoders (SWAE), that minimize the sliced-Wasserstein distance between the distribution of the encoded samples and a predefined samplable distribution. Our work is most closely related to the recent work by Bousquet et al. BID4 and the followup work by Tolstikhin et al. BID32. However, our approach avoids the need to perform costly adversarial training in the encoding space and is not restricted to closed-form distributions, while still benefiting from a Wasserstein-like distance measure in the encoding space that permits a simple numerical solution to the problem. In what follows we first provide an extensive review of the preliminary concepts that are needed for our formulation. In Section 3 we formulate our proposed method. 
The proposed numerical scheme to solve the problem is presented in Section 4. Our experiments are summarized in Section 5. Finally, our work is concluded in Section 6.

Let X denote the compact domain of a manifold in Euclidean space and let x_n ∈ X denote an individual input data point. Furthermore, let ρ_X be a Borel probability measure defined on X. We define the probability density function p_X(x) for input data x by

dρ_X(x) = p_X(x) dx.

Let φ: X → Z denote a deterministic parametric mapping from the input space to a latent space Z (e.g., a neural network encoder). Utilizing a technique often used in the theoretical physics community (see BID9), known as the Random Variable Transformation (RVT), the probability density function of the encoded samples z can be expressed in terms of φ and p_X by:

p_Z(z) = ∫_X p_X(x) δ(z − φ(x)) dx,    (1)

where δ denotes the Dirac distribution function. The main objective of Variational Auto-Encoders (VAEs) is to encode the input data points x ∈ X into latent codes z ∈ Z such that: 1) x can be recovered/approximated from z, and 2) the probability density function of the encoded samples, p_Z, follows a prior distribution q_Z. Similar to classic auto-encoders, a decoder ψ: Z → X is required to map the latent codes back to the original space such that

p_Y(y) = ∫_X p_X(x) δ(y − ψ(φ(x))) dx,    (2)

where y denotes the decoded samples. It is straightforward to see that when ψ = φ^{-1} (i.e. ψ(φ(·)) = id(·)), the distribution of the decoder p_Y and the input distribution p_X are identical. Hence, the objective of a variational auto-encoder simplifies to learning φ and ψ such that they minimize a dissimilarity measure between p_Y and p_X, and between p_Z and q_Z. Defining and implementing the dissimilarity measures is a key design decision and one of the main contributions of this work; we thus dedicate the next section to describing existing methods for measuring these dissimilarities. We first emphasize that the VAE work in the literature often assumes stochastic encoders and decoders BID14, while we consider the case of only deterministic mappings.

Different dissimilarity measures between p_X and p_Y have been used in various works in the literature. Most notably, Nowozin et al. BID25 showed that for the general family of f-divergences, D_f(p_X, p_Y) (including the KL-divergence, Jensen-Shannon, etc.), using the Fenchel conjugate of the convex function f and minimizing D_f(p_X, p_Y) leads to a min-max problem that is equivalent to the adversarial training widely used in the generative modeling literature BID10 BID22 BID23.

Others have utilized the rich mathematical foundation of the OT problem and Wasserstein distances BID0 BID4 BID11 BID32. In Wasserstein-GAN, BID0 utilized the Kantorovich-Rubinstein duality for the 1-Wasserstein distance, W_1(p_X, p_Y), and reformulated the problem as a min-max optimization that is solved through an adversarial training scheme. In a different approach, BID4 utilized the autoencoding nature of the problem and showed that W_c(p_X, p_Y) can be simplified as:

W_c(p_X, p_Y) = ∫_X c(x, ψ(φ(x))) p_X(x) dx.    (3)

Note that Eq. (3) is equivalent to Theorem 1 in BID4 for a deterministic encoder-decoder pair, and also note that φ and ψ are parametric differentiable models (e.g. neural networks). Furthermore, Eq. (3) supports a simple implementation where, for i.i.d. samples {x_n}_{n=1}^N of the input distribution, the minimization can be written as:

min_{φ,ψ} (1/N) Σ_{n=1}^N c(x_n, ψ(φ(x_n))).    (4)
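For the common choice c(x, y) = ||x − y||^2 (an assumption on our part; c can be any transport cost), the empirical objective of Eq. (4) is a plain reconstruction loss, sketched here in PyTorch:

```python
import torch

def reconstruction_loss(x, encoder, decoder):
    """Empirical W_c(p_X, p_Y) of Eq. (4) for paired samples, assuming the
    squared Euclidean cost c(x, y) = ||x - y||^2."""
    y = decoder(encoder(x))
    return ((x - y) ** 2).flatten(1).sum(dim=1).mean()
```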
We emphasize that Eq. (3) (and consequently Eq. (4)) takes advantage of the fact that the pairs x_n and y_n = ψ(φ(x_n)) are available, hence calculating the transport distance coincides with summing the transportation costs between all pairs (x_n, y_n). For example, the total transport distance may be defined as the sum of Euclidean distances between all pairs of points. In this paper, we also use W_c(p_X, p_Y) following Eq. (3) to measure the discrepancy between p_X and p_Y. Next, we review the methods used for measuring the discrepancy between p_Z and q_Z.

If q_Z is a known distribution with an explicit formulation (e.g. a Normal distribution), the most straightforward approach for measuring the (dis)similarity between p_Z and q_Z is the log-likelihood of z = φ(x) with respect to q_Z, formally:

max_φ E_{x∼p_X}[ log q_Z(φ(x)) ];    (5)

maximizing this log-likelihood is equivalent to minimizing the KL-divergence between p_Z and q_Z, D_KL(p_Z, q_Z) (see the supplementary material for more details and the derivation of Equation (5)). This approach has two major limitations: 1) the KL-divergence, and f-divergences in general, do not provide meaningful dissimilarity measures for distributions supported on non-overlapping low-dimensional manifolds BID0 BID18 (see the supplementary material), which is common in the hidden layers of neural networks, and therefore they do not provide informative gradients for training φ; and 2) we are limited to distributions q_Z that have known explicit formulations, which is very restrictive because it eliminates the ability to use the much broader class of distributions from which we know how to sample but whose explicit forms are unknown.

Various alternatives exist in the literature to address the above-mentioned limitations. These methods often sample Z̃ = {z̃_j}_{j=1}^N from q_Z and Z = {z_n = φ(x_n)}_{n=1}^N from p_X and measure the discrepancy between these sets (i.e. point clouds). Note that there are no one-to-one correspondences between the z̃_j s and the z_n s. Tolstikhin et al. BID32, for instance, proposed two different approaches for measuring the discrepancy between Z̃ and Z, namely the GAN-based and the maximum mean discrepancy (MMD)-based approaches. The GAN-based approach proposed in BID32 defines a discriminator network, D_Z(p_Z, q_Z), to classify the z̃_j s and z_n s as coming from the 'true' and 'fake' distributions correspondingly, and proposes a min-max adversarial optimization for learning φ and D_Z. This approach can be thought of as a Fenchel conjugate of some f-divergence between p_Z and q_Z. The MMD-based approach, on the other hand, utilizes a positive-definite reproducing kernel k: Z × Z → R to measure the discrepancy between Z̃ and Z; however, the choice of the kernel remains a data-dependent design parameter. An interesting alternative approach is to use the Wasserstein distance between p_Z and q_Z, the reason being that Wasserstein metrics have been shown to be particularly beneficial for measuring the distance between distributions supported on non-overlapping low-dimensional manifolds. Following the work of Arjovsky et al. BID0, this can be accomplished utilizing the Kantorovich-Rubinstein duality and by introducing a min-max problem, which leads to yet another adversarial training scheme similar to the GAN-based method in BID32. Note that, since the elements of Z̃ and Z are not paired, an approach similar to Eq. (4) cannot be used to calculate the Wasserstein distance. In this paper, we propose to use the sliced-Wasserstein metric BID3 BID5 BID16 BID18 BID27 BID28 to measure the discrepancy between p_Z and q_Z.
We show that using the sliced-Wasserstein distance removes the need for training an adversary network, and provides an efficient yet simple numerical implementation. Before explaining our proposed approach, it is worthwhile to point out the benefits of learning autoencoders as generative models over GANs. In GANs, one needs to minimize a distance between {ψ(z_j) | z_j ∼ q_Z}_{j=1}^M and {x_n}_{n=1}^M, which are high-dimensional point clouds for which there are no correspondences between the ψ(z_j)s and the x_n s. For autoencoders, on the other hand, there exist correspondences between the high-dimensional point clouds {x_n}_{n=1}^M and {y_n = ψ(φ(x_n))}_{n=1}^M, and the problem simplifies to matching the lower-dimensional point clouds {φ(x_n)}_{n=1}^M and {z_j ∼ q_Z}_{j=1}^M. In other words, the encoder performs a nonlinear dimensionality reduction that enables us to solve a much simpler problem compared to GANs. Next, we introduce the details of our approach.

In what follows, we first provide a brief review of the equations necessary to understand the Wasserstein and sliced-Wasserstein distances, and then present our Sliced-Wasserstein Autoencoders (SWAE). The Wasserstein distance between probability measures ρ_X and ρ_Y, with corresponding densities dρ_X = p_X(x)dx and dρ_Y = p_Y(y)dy, is defined as:

W_c(p_X, p_Y) = inf_{γ ∈ Γ(ρ_X, ρ_Y)} ∫_{X×Y} c(x, y) dγ(x, y),    (6)

where Γ(ρ_X, ρ_Y) is the set of all transportation plans (i.e. joint measures) with marginal densities p_X and p_Y, and c: X × Y → R^+ is the transportation cost. Eq. (6) is known as the Kantorovich formulation of the optimal mass transportation problem, which seeks the optimal transportation plan between p_X and p_Y. If there exist diffeomorphic mappings f: X → Y (i.e. transport maps) such that y = f(x) and, consequently,

p_Y(y) = p_X(f^{-1}(y)) |det(Df^{-1}(y))|,    (7)

where det(D·) is the determinant of the Jacobian, then the Wasserstein distance can be defined based on the Monge formulation of the problem (see BID33 and BID17) as:

W_c(p_X, p_Y) = inf_{f ∈ MP} ∫_X c(x, f(x)) p_X(x) dx,    (8)

where MP is the set of all diffeomorphisms that satisfy Eq. (7). As can be seen from Eqs. (6) and (8), obtaining the Wasserstein distance requires solving an optimization problem; various efficient optimization techniques have been proposed in the past (e.g. BID7 BID26 BID31). The case of one-dimensional probability densities p_X and p_Y is especially interesting, as the Wasserstein distance has a closed-form solution. Let P_X and P_Y be the cumulative distributions of the one-dimensional probability distributions p_X and p_Y, correspondingly. The Wasserstein distance can then be calculated as:

W_c(p_X, p_Y) = ∫_0^1 c(P_X^{-1}(t), P_Y^{-1}(t)) dt.    (9)

The closed-form solution of the Wasserstein distance for one-dimensional probability densities motivates the definition of sliced-Wasserstein distances. The interest in the sliced-Wasserstein distance is due to the fact that it has qualitative properties very similar to those of the Wasserstein distance, but it is much easier to compute, since it only depends on one-dimensional computations. The sliced-Wasserstein distance was used in BID27 BID28 to calculate barycenters of distributions and point clouds. Bonneel et al. provided a nice theoretical overview of barycentric calculations using the sliced-Wasserstein distance. Kolouri et al. BID16 used this distance to define positive definite kernels for distributions, and Carriere et al. BID5 used it as a distance for persistence diagrams. Sliced-Wasserstein was also recently used for learning Gaussian mixture models BID18.
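Eq. (9) suggests a simple Monte Carlo estimator: with equally many samples from each distribution, sorting plays the role of the inverse CDFs. A sketch, assuming c(x, y) = |x − y|^p:

```python
import numpy as np

def wasserstein_1d(samples_p, samples_q, p=2):
    """Estimate of the closed-form 1-D Wasserstein distance of Eq. (9):
    sorted samples act as evaluations of the empirical inverse CDFs, which
    are then compared pointwise. Assumes equal sample counts."""
    xs, ys = np.sort(samples_p), np.sort(samples_q)
    return np.mean(np.abs(xs - ys) ** p)
```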
The main idea behind the sliced-Wasserstein distance is to slice (i.e. project) higher-dimensional probability densities into sets of one-dimensional distributions and compare their one-dimensional representations via the Wasserstein distance. The slicing/projection process is related to the field of Integral Geometry, and specifically to the Radon transform BID12. The point relevant to our discussion is that a d-dimensional probability density p_X can be uniquely represented as the set of its one-dimensional marginal distributions, following the Radon transform and the Fourier slice theorem BID12. These one-dimensional marginal distributions of p_X are defined as:

Rp_X(t; θ) = ∫_X p_X(x) δ(t − θ · x) dx,  ∀θ ∈ S^{d−1}, ∀t ∈ R,    (10)

where S^{d−1} is the d-dimensional unit sphere. Note that for any fixed θ ∈ S^{d−1}, Rp_X(·; θ) is a one-dimensional slice of the distribution p_X. In other words, Rp_X(·; θ) is a marginal distribution of p_X obtained by integrating p_X over the hyperplane orthogonal to θ (see FIG0). Utilizing the one-dimensional marginal distributions in Eq. (10), the sliced-Wasserstein distance can be defined as:

SW_c(p_X, p_Y) = ∫_{S^{d−1}} W_c(Rp_X(·; θ), Rp_Y(·; θ)) dθ.    (11)

Given that Rp_X(·; θ) and Rp_Y(·; θ) are one-dimensional, the Wasserstein distance in the integrand has a closed-form solution, as demonstrated in BID8. The fact that SW_c is a distance comes from W_c being a distance; moreover, the two distances also induce the same topology, at least on compact sets BID30.

A natural transportation cost that has been extensively studied in the past is c(x, y) = ||x − y||_2^2, for which there are theoretical guarantees on the existence and uniqueness of transportation plans and maps (see BID30 and BID33). When c(x, y) = ||x − y||_2^2, the following inequality bounds hold for the SW distance:

SW_2(p_X, p_Y) ≤ W_2(p_X, p_Y) ≤ α SW_2^β(p_X, p_Y),    (12)

where α is a constant; Chapter 5 in BID3 proves this inequality with β = (2(d + 1))^{-1} (see BID30 for more details). The inequalities in Eq. (12) are the main reason we can use the sliced-Wasserstein distance, SW_2, as an approximation for W_2.

Our proposed formulation for the SWAE is as follows:

argmin_{φ,ψ} W_c(p_X, p_Y) + λ SW_c(p_Z, q_Z),    (13)

where φ is the encoder, ψ is the decoder, p_X is the data distribution, p_Y is the data distribution after encoding and decoding (Eq. (2)), p_Z is the distribution of the encoded data (Eq. (1)), q_Z is the predefined distribution (or a distribution we know how to sample from), and λ is a hyperparameter that identifies the relative importance of the two loss terms. To further clarify why we use the Wasserstein distance to measure the difference between p_X and p_Y, but the sliced-Wasserstein distance to measure the difference between p_Z and q_Z, we reiterate that the Wasserstein distance for the first term can be computed via Eq. (4) due to the existence of correspondences between y_n and x_n (i.e., we desire x_n = y_n); however, for p_Z and q_Z, analogous correspondences between the z̃_j s and z_n s do not exist, and therefore calculating the Wasserstein distance requires an additional optimization step (e.g., in the form of an adversarial network). To avoid this additional optimization, while maintaining the favorable characteristics of the Wasserstein distance, we use the sliced-Wasserstein distance to measure the discrepancy between p_Z and q_Z.

The Wasserstein distance between two one-dimensional distributions p_X and p_Y is obtained from Eq. (9). In scenarios where only samples of the two distributions are available, the integral in Eq. (9) can be numerically estimated from the empirical cumulative distributions. Therefore, the Wasserstein distance can be approximated by first sorting the x_m s and y_m s and then calculating:

W_c(p_X, p_Y) ≈ (1/M) Σ_{m=1}^M c(x_{i[m]}, y_{j[m]}),    (14)

where i[m] and j[m] are the indices of the sorted samples.
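Combining the projection of Eqs. (10)-(11) with the sorting-based estimate of Eq. (14) gives the following sketch of the random-projection approximation discussed next; the number of projections and the Gaussian-then-normalize sampling of S^{d−1} are standard choices on our part rather than prescriptions from the text.

```python
import numpy as np

def sliced_wasserstein(X, Y, num_projections=50, p=2, rng=None):
    """Approximate SW_p^p between two d-dimensional point clouds of equal
    size: project onto random directions on the unit sphere, then apply
    the sorting-based 1-D Wasserstein estimate to each slice."""
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    theta = rng.normal(size=(num_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # points on S^{d-1}
    xp, yp = X @ theta.T, Y @ theta.T                      # (M, L) projections
    xp, yp = np.sort(xp, axis=0), np.sort(yp, axis=0)      # empirical quantiles
    return np.mean(np.abs(xp - yp) ** p)
```

Sorting both projected sets matches their empirical quantiles slice by slice, which is exactly the one-dimensional closed form of Eq. (9).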
Eq. (14) turns the problem of calculating the Wasserstein distance between two one-dimensional probability densities from their samples into a sorting problem, which can be solved efficiently (O(M) best case and O(M log(M)) worst case). In scenarios where only samples from the d-dimensional distribution p_X are available, x_m ∼ p_X, the empirical distribution can be estimated as p̂_X(x) = (1/M) Σ_{m=1}^M δ(x − x_m), and it is straightforward to show that the marginal distributions (i.e. slices) of this empirical distribution are obtained from Rp̂_X(t; θ) = (1/M) Σ_{m=1}^M δ(t − θ · x_m), for all θ ∈ S^{d−1} and all t ∈ R (see the supplementary material for a proof).

Minimizing the sliced-Wasserstein distance (i.e. the second term of Eq. (13)) requires an integration over the unit sphere in R^d, i.e., S^{d−1}. In practice, this integration is substituted by a summation over a finite set {θ_l}_{l=1}^L ⊂ S^{d−1}. A fine sampling of S^{d−1} is required for a good approximation of SW_c(p_Z, q_Z); such sampling, however, becomes prohibitively expensive as the dimension of the embedding space grows. Alternatively, following the approach presented by Rabin and Peyré BID27, and later by Bonneel et al. and subsequently by Kolouri et al. BID18, we utilize random samples of S^{d−1} at each minimization step to approximate the sliced-Wasserstein distance. Intuitively, if p_Z and q_Z are similar, then their projections with respect to any finite subset of S^{d−1} will also be similar. This leads to a stochastic gradient descent scheme in which, in addition to randomly sampling the input data, we also randomly sample the projection angles from S^{d−1}.

Algorithm 1: Sliced-Wasserstein Autoencoder (SWAE)
Require: regularization coefficient λ; number of random projections, L.
Initialize the parameters of the encoder, φ, and decoder, ψ
while φ and ψ have not converged do
  Sample {x_1, ..., x_M} from the training set (i.e. p_X)
  Sample {z̃_1, ..., z̃_M} from q_Z
  Sample {θ_1, ..., θ_L} from S^{d−1}
  Update φ and ψ by descending the gradient of the loss in Eq. (15)
end while

To optimize the proposed SWAE objective function in Eq. (13), we use a stochastic gradient descent scheme as described here. In each iteration, let {x_m ∼ p_X}_{m=1}^M and {z̃_m ∼ q_Z}_{m=1}^M be i.i.d. random samples from the input data and the predefined distribution q_Z, correspondingly, and let {θ_l}_{l=1}^L be randomly sampled from a uniform distribution on S^{d−1}. Then, using the numerical approximations described in this section, the loss function in Eq. (13) can be rewritten as:

(1/M) Σ_{m=1}^M c(x_m, ψ(φ(x_m))) + (λ/(LM)) Σ_{l=1}^L Σ_{m=1}^M c(θ_l · z̃_{i_l[m]}, θ_l · φ(x_{j_l[m]})),    (15)

where i_l[m] and j_l[m] index the samples sorted along each projection θ_l. It is worth pointing out that sorting is by itself an optimization problem (which can be solved very efficiently), and therefore the sorting followed by the gradient descent update on φ and ψ is in essence a min-max problem, being solved in an alternating fashion.

Here we show the results of SWAE for two mid-size image datasets, namely the MNIST dataset BID19 and the CelebFaces Attributes Dataset (CelebA) BID21. For the encoder and the decoder we used mirrored classic deep convolutional neural networks with 2D average pooling and leaky rectified linear units (Leaky-ReLU) as the activation functions. The implementation details are included in the supplementary material. For the MNIST dataset, we designed a deep convolutional encoder that embeds the handwritten digits into a two-dimensional embedding space (for visualization). To demonstrate the capability of SWAE in matching the distributions p_Z and q_Z in the embedding/encoder space, we chose four different q_Z s, namely a ring distribution, a uniform distribution, a circle distribution, and a bowl distribution. FIG4 shows the results of our experiment on the MNIST dataset.
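Putting Algorithm 1 and Eq. (15) together, one SGD step might look like the following PyTorch sketch; λ = 10 and L = 50 are placeholder values, and the squared Euclidean cost in both terms is an assumption.

```python
import torch

def swae_step(x, encoder, decoder, sample_qz, opt, lam=10.0, L=50):
    """One update of Algorithm 1: reconstruction cost plus lambda times the
    sliced Wasserstein distance between encoded samples and q_Z."""
    z = encoder(x)                          # (M, K) encoded batch
    y = decoder(z)
    recon = ((x - y) ** 2).flatten(1).sum(1).mean()
    z_prior = sample_qz(z.size(0)).to(z)    # {z_j ~ q_Z}
    theta = torch.randn(L, z.size(1), device=z.device)
    theta = theta / theta.norm(dim=1, keepdim=True)   # random slices
    pz = torch.sort(z @ theta.T, dim=0).values        # project + sort
    pq = torch.sort(z_prior @ theta.T, dim=0).values
    sw = ((pz - pq) ** 2).mean()
    loss = recon + lam * sw
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```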
In FIG4, the left column shows samples from q_Z; the middle column shows the φ(x_n)s for the trained φ, where the color represents the label (note that labels were only used for visualization); and the right column depicts the decodings, through the trained decoder ψ, of a 25 × 25 grid in [−1, 1]^2. As can be seen, the embedding/encoder space closely follows the predefined q_Z, while the space remains decodable. The implementation details are included in the supplementary material.

The CelebA face dataset contains a higher degree of variation than the MNIST dataset, and therefore a two-dimensional embedding space does not suffice to capture the variations in this dataset. In that case, while the SWAE loss function still goes down and the network achieves a good match between p_Z and q_Z, the decoder is unable to match p_X and p_Y; a higher-dimensional embedding/encoder space is needed. In our experiments for this dataset we chose a (K = 128)-dimensional embedding space. FIG5 demonstrates the outputs of trained SWAEs with K = 2 and K = 128 for sample input images. The input images were resized to 64 × 64 and then fed to our autoencoder structure.

For the CelebA dataset we set q_Z to be a (K = 128)-dimensional uniform distribution and trained our SWAE on the CelebA dataset. Given the convex nature of q_Z, any linear combination of the encoded faces should also result in a new face. With that in mind, we ran two experiments in the embedding space to check that the embedding space does in fact satisfy this convexity assumption. First, we calculated linear interpolations of sampled pairs of faces in the embedding space and fed the interpolations to the decoder network to visualize the corresponding faces. FIG6, left column, shows the interpolations for random pairs of encoded faces. It is clear that the interpolations remain faithful, as expected from a uniform q_Z. Second, we performed Principal Component Analysis (PCA) of the encoded faces and visualized the faces corresponding to these principal components via ψ. The PCA components are shown in the right column of FIG6. Various interesting modes, including hair color, skin color, gender, pose, etc., can be observed in the PC components.

We introduced Sliced-Wasserstein Autoencoders (SWAE), which enable one to shape the distribution of the encoded samples into any samplable distribution. We theoretically showed that utilizing the sliced-Wasserstein distance as a dissimilarity measure between the distribution of the encoded samples and a predefined distribution removes the need for training an adversarial network in the embedding space. In addition, we provided a simple and efficient numerical scheme for this problem, which relies only on a few inner products and sorting operations in each SGD iteration. We further demonstrated the capability of our method on two mid-size image datasets, namely the MNIST dataset and the CelebA face dataset, and showed results comparable to techniques that rely on additional adversarial training. Our implementation is publicly available BID0.

This work was partially supported by NSF (CCF 1421502). The authors would like to thank Drs. Dejan Slepćev and Heiko Hoffmann for their invaluable inputs and many hours of constructive conversations.

FIG0: W_1(p, q_τ) and JS(p, q_τ), where p is a uniform distribution around zero and q_τ(x) = p(x − τ). It is clear that the JS divergence does not provide a usable gradient when the distributions are supported on non-overlapping domains.
Following the example by Arjovsky et al. BID0, and later Kolouri et al. BID18, here we show a simple example comparing the Jensen-Shannon divergence with the Wasserstein distance. First note that the Jensen-Shannon divergence is defined as

JS(p, q) = (1/2) KL(p, (p + q)/2) + (1/2) KL(q, (p + q)/2),

where KL(p, q) = ∫_X p(x) log(p(x)/q(x)) dx is the Kullback-Leibler divergence. Now consider the following densities: let p(x) be a uniform distribution around zero and let q_τ(x) = p(x − τ) be a shifted version of p. FIG7 shows W_1(p, q_τ) and JS(p, q_τ) as functions of τ. As can be seen, the JS divergence fails to provide a useful gradient when the distributions are supported on non-overlapping domains.

To maximize (minimize) the similarity (dissimilarity) between p_Z and q_Z, we can write:

argmax_φ ∫_Z p_Z(z) log(q_Z(z)) dz = argmax_φ ∫_Z ∫_X p_X(x) δ(z − φ(x)) log(q_Z(z)) dx dz = argmax_φ ∫_X p_X(x) log(q_Z(φ(x))) dx,

where we replaced p_Z with Eq. (1). Furthermore, it is straightforward to show that maximizing this log-likelihood is closely tied to minimizing D_KL(p_Z, q_Z), as the two objectives differ by the entropy of p_Z.

Here we calculate a Radon slice of the empirical distribution p̂_X(x) = (1/M) Σ_{m=1}^M δ(x − x_m). Substituting into Eq. (10), we have Rp̂_X(t; θ) = (1/M) Σ_{m=1}^M δ(t − θ · x_m).

Simple manifold learning experiment. Figure 7 demonstrates the results of SWAE with random initializations to embed a 2D manifold in R^3 into a 2D uniform distribution.

The accompanying notebook walks through the implementation of our Sliced-Wasserstein Autoencoder (SWAE), lists the required packages, and visualizes the z samples.
"Generative modeling with no need for adversarial training"
1,110
scitldr
The goal of imitation learning (IL) is to learn a good policy from high-quality demonstrations. However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs. IL in such situations can be challenging, especially when the level of demonstrators' expertise is unknown. We propose a new IL paradigm called Variational Imitation Learning with Diverse-quality demonstrations (VILD), where we explicitly model the level of demonstrators' expertise with a probabilistic graphical model and estimate it along with a reward function. We show that a naive estimation approach is not suitable for large state and action spaces, and fix this issue by using a variational approach that can be easily implemented with existing reinforcement learning methods. Experiments on continuous-control benchmarks demonstrate that VILD outperforms state-of-the-art methods. Our work enables scalable and data-efficient IL under more realistic settings than before. The goal of sequential decision making is to learn a policy that makes good decisions. As an important branch of sequential decision making, imitation learning (IL) aims to learn such a policy from demonstrations (i.e., sequences of decisions) collected from experts. However, high-quality demonstrations can be difficult to obtain in reality, since such experts may not always be available and are sometimes too costly. This is especially true when the quality of decisions depends on specific domain knowledge not typically available to amateurs, e.g., in applications such as robot control and autonomous driving. In practice, demonstrations are often diverse in quality, since it is cheaper to collect demonstrations from mixed demonstrators containing both experts and amateurs. Unfortunately, IL in such settings tends to perform poorly, since low-quality demonstrations often negatively affect the performance of IL methods. For example, amateurs' demonstrations for robotics can be collected cheaply via a robot simulation, but such demonstrations may cause damage to the robot, which is catastrophic in the real world. Similarly, demonstrations for autonomous driving can be collected from drivers on public roads and may contain traffic-accident demonstrations; learning a self-driving car from these low-quality demonstrations may cause traffic accidents. When the level of demonstrators' expertise is known, multi-modal IL (MM-IL) can be used to learn a good policy with diverse-quality demonstrations. Specifically, MM-IL aims to learn a multi-modal policy, where each mode of the policy represents the decision making of one demonstrator. Knowing the level of demonstrators' expertise, good policies can be obtained by selecting the modes that correspond to the decision making of high-expertise demonstrators. In practice, however, it is difficult to truly determine the level of demonstrators' expertise beforehand. Without knowing the level of expertise, it is difficult to distinguish the decision making of experts from that of amateurs, and learning a good policy is challenging. To overcome this issue of MM-IL, pioneering works have proposed to estimate the quality of each demonstration using auxiliary information from experts. One approach infers the demonstration quality from similarities between diverse-quality demonstrations and high-quality demonstrations, where the latter are collected in small numbers from experts.
In contrast, another approach estimates the demonstration quality using a small number of demonstrations with confidence scores, where the score value given by an expert is proportional to the demonstration quality. Similarly, the demonstration quality can be estimated from ranked demonstrations, where an expert's ranking reflects the relative quality of demonstrations. To sum up, these methods rely on auxiliary information from experts, namely high-quality demonstrations, confidence scores, or rankings. In practice, these pieces of information can be scarce or noisy, which leads to poor performance of these methods. In this paper, we consider a novel but realistic setting of IL where only diverse-quality demonstrations are available, while the level of demonstrators' expertise and auxiliary information from experts are fully absent. To tackle this challenging setting, we propose a new learning paradigm called variational imitation learning with diverse-quality demonstrations (VILD). The central idea of VILD is to model the level of demonstrators' expertise via a probabilistic graphical model, and to learn it along with a reward function that represents the intention behind the expert's decision making. To scale up our model to large state and action spaces, we leverage the variational approach, which can be implemented using reinforcement learning (RL) for flexibility. To further improve the data-efficiency of VILD when learning the reward function, we use importance sampling (IS) to re-weight the sampling distribution according to the estimated level of demonstrators' expertise. Experiments on continuous-control benchmarks and real-world crowdsourced demonstrations show that: 1) VILD is robust against diverse-quality demonstrations and significantly outperforms existing methods; 2) VILD with IS is data-efficient, since it learns the policy using fewer transition samples. Before delving into our main contribution, we first give the minimum background on RL and IL. Then, we formulate a new setting in IL called diverse-quality demonstrations, discuss its challenges, and reveal the deficiencies of existing methods. Reinforcement learning. Reinforcement learning (RL) aims to learn an optimal policy for a sequential decision making problem, which is often formulated mathematically as a Markov decision process (MDP). We consider a finite-horizon MDP with continuous state and action spaces defined by a tuple M = (S, A, p(s_{t+1}|s_t, a_t), p_1(s_1), r(s_t, a_t)) with a state s_t ∈ S ⊆ R^{d_s}, an action a_t ∈ A ⊆ R^{d_a}, an initial state density p_1(s_1), a transition probability density p(s_{t+1}|s_t, a_t), and a reward function r: S × A → R, where the subscript t ∈ {1, ..., T} denotes the time step. A sequence of states and actions, (s_{1:T}, a_{1:T}), is called a trajectory. The decision making of an agent is determined by a policy π(a_t|s_t), which is a conditional probability density of an action given a state. RL seeks an optimal policy π*(a_t|s_t) that maximizes the expected cumulative reward E_{p_π(s_{1:T}, a_{1:T})}[Σ_{t=1}^T r(s_t, a_t)], where p_π(s_{1:T}, a_{1:T}) = p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, a_t) π(a_t|s_t) is the trajectory probability density induced by π. RL has shown great successes recently, especially when combined with deep neural networks. However, a major limitation of RL is that it relies on the reward function, which may be unavailable in practice. Imitation learning. To address the above limitation of RL, imitation learning (IL) was proposed.
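Before turning to IL, the following toy Monte Carlo sketch illustrates the RL objective defined above: estimating E[Σ_{t=1}^T r(s_t, a_t)] for a fixed policy by sampling trajectories. The 1D linear-Gaussian MDP here is purely illustrative and not a task from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50

def policy(s):                 # pi(a_t | s_t): noisy proportional controller
    return -0.5 * s + 0.1 * rng.normal()

def transition(s, a):          # p(s_{t+1} | s_t, a_t)
    return s + a + 0.05 * rng.normal()

def reward(s, a):              # r(s_t, a_t): keep the state near zero
    return -(s ** 2) - 0.01 * (a ** 2)

def rollout():
    s, ret = rng.normal(), 0.0      # s_1 ~ p_1(s_1)
    for _ in range(T):
        a = policy(s)
        ret += reward(s, a)
        s = transition(s, a)
    return ret

# Monte Carlo estimate of the RL objective E[sum_t r(s_t, a_t)].
print(np.mean([rollout() for _ in range(1000)]))
```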
Without using the reward function, IL aims to learn the optimal policy from demonstrations that encode information about the optimal policy. A common assumption in most IL methods is that demonstrations are collected by K ≥ 1 demonstrators who execute actions a_t drawn from π*(a_t|s_t) in every state s_t. A graphical model describing this data collection process is depicted in Figure 1(a), where a random variable k ∈ {1, ..., K} denotes each demonstrator's identification number and p(k) denotes the probability of collecting a demonstration from the k-th demonstrator. Under this assumption, demonstrations {(s_{1:T}, a_{1:T}, k)_n}_{n=1}^N (i.e., the observed random variables in Figure 1(a)) are called expert demonstrations and are regarded as drawn independently from the probability density p*(s_{1:T}, a_{1:T})p(k) = p(k) p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, a_t) π*(a_t|s_t). We note that k does not affect the trajectory density p*(s_{1:T}, a_{1:T}) and can be omitted. As is common, we assume that p_1(s_1) and p(s_{t+1}|s_t, a_t) are unknown but that we can sample states from them. IL has shown great successes in benchmark settings, yet practical applications of IL in the real world are relatively few. One of the main reasons is that most IL methods aim to learn from expert demonstrations: in practice, such demonstrations are often too costly to obtain due to the limited number of experts, and even when we obtain them, the number of demonstrations is often too small to accurately learn the optimal policy. Figure 1: Graphical models describing expert demonstrations and diverse-quality demonstrations. Shaded and unshaded nodes indicate observed and unobserved random variables, respectively. Plate notation indicates that the sampling process is repeated N times. s_t ∈ S is a state with transition density p(s_{t+1}|s_t, a_t), a_t ∈ A is an action with density π*(a_t|s_t), u_t ∈ A is a noisy action with density p(u_t|s_t, a_t, k), and k ∈ {1, ..., K} is an identification number with distribution p(k). New setting in IL: Diverse-quality demonstrations. To improve the practicality of IL, we consider a new learning paradigm called IL with diverse-quality demonstrations, where demonstrations are collected from demonstrators with different levels of expertise. Compared to expert demonstrations, diverse-quality demonstrations can be collected more cheaply, e.g., via crowdsourcing. The graphical model in Figure 1(b) depicts the process of collecting such demonstrations from K > 1 demonstrators. Formally, we select the k-th demonstrator according to a distribution p(k). After selecting k, at each time step t, the k-th demonstrator observes state s_t and samples action a_t from π*(a_t|s_t). However, the demonstrator may not execute a_t in the MDP if he or she lacks expertise; instead, he or she may sample an action u_t ∈ A from another probability density p(u_t|s_t, a_t, k) and execute it. Then, the next state s_{t+1} is observed with probability density p(s_{t+1}|s_t, u_t), and the demonstrator continues making decisions until time step T. We repeat this process N times to collect diverse-quality demonstrations D_d = {(s_{1:T}, u_{1:T}, k)_n}_{n=1}^N. These demonstrations are regarded as drawn independently from the probability density p_d(s_{1:T}, u_{1:T}|k)p(k) = p(k) p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, u_t) ∫_A π*(a_t|s_t) p(u_t|s_t, a_t, k) da_t. We refer to p(u_t|s_t, a_t, k) as the noisy policy of the k-th demonstrator, since it is used to execute a noisy action u_t. Our goal is to learn the optimal policy π* from diverse-quality demonstrations D_d.
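The following minimal sketch simulates this data-collection process from Figure 1(b): pick a demonstrator k ~ p(k), let them sample the intended action a_t ~ π*(a_t|s_t), but execute a noisy action u_t ~ p(u_t|s_t, a_t, k). Here the noisy policy is a Gaussian N(u_t|a_t, σ_k² I) as one concrete instance; the toy dynamics and names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 100, 5
sigma = np.linspace(0.01, 1.0, K)          # per-demonstrator noise level (unknown to the learner)

def pi_star(s):                            # stand-in optimal policy
    return -0.5 * s

def collect_demo():
    k = rng.integers(K)                    # k ~ p(k), uniform here
    s, states, actions = rng.normal(), [], []
    for _ in range(T):
        a = pi_star(s)                     # a_t ~ pi*(a_t | s_t), never observed
        u = a + sigma[k] * rng.normal()    # u_t ~ p(u_t | s_t, a_t, k), observed
        states.append(s); actions.append(u)
        s = s + u + 0.05 * rng.normal()    # s_{t+1} ~ p(. | s_t, u_t)
    return np.array(states), np.array(actions), k

demos = [collect_demo() for _ in range(10)]   # D_d = {(s_{1:T}, u_{1:T}, k)_n}
print([d[2] for d in demos])                  # demonstrator ids; expertise levels stay hidden
```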
Note that this density can be described equivalently by using the marginal density π(u_t|s_t, k) = ∫_A π*(a_t|s_t) p(u_t|s_t, a_t, k) da_t and removing a_t from the graphical model. However, we write a_t explicitly as above to emphasize the dependency between π*(a_t|s_t) and p(u_t|s_t, a_t, k); this emphasis will be made clearer in Section 3.1 when we describe our choice of model. The deficiency of existing methods. We conjecture that existing IL methods are not suitable for learning from diverse-quality demonstrations drawn according to p_d. Specifically, these methods always treat observed demonstrations as if they were drawn from p*. By comparing p* and p_d, we can see that existing methods would learn π(u_t|s_t) such that π(u_t|s_t) ≈ Σ_k p(k) ∫_A π*(a_t|s_t) p(u_t|s_t, a_t, k) da_t. In other words, they learn a policy that averages over the decisions of all demonstrators. This is problematic when amateurs are present, since the averaged decisions of all demonstrators can be highly different from those of experts. Worse yet, the state distributions of amateurs and experts tend to be highly different, which often leads to unstable learning: the learned policy oscillates between a well-performing policy and a poorly-performing policy. For these reasons, we believe that existing methods tend to learn policies that achieve only average performance, and are not suitable for handling the setting of diverse-quality demonstrations. This section presents VILD, a robust method for tackling the challenge of diverse-quality demonstrations. Specifically, we build a probabilistic model that explicitly describes the level of demonstrators' expertise and a reward function (Section 3.1), estimate its parameters by a variational approach (Section 3.2) that can be implemented easily with RL (Section 3.3), and improve data-efficiency by using importance sampling (Section 3.4). Mathematical derivations are provided in Appendix A. This section describes a model which enables estimating the level of demonstrators' expertise. We first describe a naive model, whose parameters can be estimated trivially via supervised learning but which suffers from the issue of compounding error. Then, we describe our proposed model, which avoids this issue by learning a reward function. Naive model. Based on p_d, one of the simplest models for handling diverse-quality demonstrations is p_{θ,ω}(s_{1:T}, u_{1:T}, k) = p(k) p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, u_t) ∫_A π_θ(a_t|s_t) p_ω(u_t|s_t, a_t, k) da_t, where π_θ and p_ω are learned to estimate the optimal policy and the noisy policy, respectively. The parameters θ and ω can be learned by minimizing the Kullback-Leibler (KL) divergence from the data distribution to the model. This naive model can be regarded as an extension of a model previously proposed for handling diverse-quality data in supervised learning. Its main advantage is that its parameters can be estimated trivially via supervised learning. However, this naive model suffers from the issue of compounding error and tends to perform poorly. Specifically, supervised-learning methods assume that the data distributions during training and testing are identical. In IL, however, data distributions depend on policies, so the distributions during training and testing differ. This discrepancy causes compounding errors during testing, where prediction errors feed into and enlarge future predictions' errors. Due to this issue, supervised-learning methods often perform poorly in IL.
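To make the naive model concrete, the following sketch (our illustration, not the paper's code) trains it by maximizing the marginal likelihood of noisy actions, p(u_t|s_t, k) = ∫_A π_θ(a_t|s_t) p_ω(u_t|s_t, a_t, k) da_t. With Gaussian π_θ and Gaussian p_ω the integral is closed form since the variances add, which is what makes this model trainable by plain supervised learning.

```python
import torch

class NaiveModel(torch.nn.Module):
    def __init__(self, ds, da, K):
        super().__init__()
        self.mean = torch.nn.Sequential(
            torch.nn.Linear(ds, 64), torch.nn.Tanh(), torch.nn.Linear(64, da))
        self.log_std = torch.nn.Parameter(torch.zeros(da))   # scale of pi_theta
        self.log_c = torch.nn.Parameter(torch.zeros(K, da))  # per-demonstrator noise scale

    def nll(self, s, u, k):
        # Marginal p(u|s,k) is Gaussian: variances of pi_theta and p_omega add.
        var = self.log_std.exp() ** 2 + self.log_c[k].exp() ** 2
        dist = torch.distributions.Normal(self.mean(s), var.sqrt())
        return -dist.log_prob(u).sum(-1).mean()

model = NaiveModel(ds=4, da=2, K=10)
s, u = torch.randn(32, 4), torch.randn(32, 2)
k = torch.randint(10, (32,))
loss = model.nll(s, u, k)    # one supervised-learning step minimizes this
loss.backward()
print(float(loss))
```

Note that nothing in this objective accounts for the mismatch between training-state and test-state distributions, which is exactly the compounding-error weakness discussed above.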
The issue becomes even worse with diverse-quality demonstrations, since the data distributions of different demonstrators tend to be highly different. For these reasons, the naive model is not suitable for our setting. Proposed model. To avoid the issue of compounding error, our method takes the inverse RL (IRL) approach, where we aim to learn a reward function from diverse-quality demonstrations. IL problems can be solved by a combination of IRL and RL: we learn a reward function by IRL and then learn a policy from the reward function by RL. This combination avoids the issue of compounding error, since the policy is learned by RL, which generalizes to states not present in the demonstrations. Specifically, our proposed model is based on the model of maximum entropy IRL (MaxEnt-IRL). Briefly speaking, MaxEnt-IRL learns a reward function from expert demonstrations by using a model p_φ(s_{1:T}, a_{1:T}) ∝ p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, a_t) exp(r_φ(s_t, a_t)). Based on this model, we propose to learn the reward function and the level of expertise jointly with the model p_{φ,ω}(s_{1:T}, u_{1:T}, k) ∝ p(k) p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, u_t) ∫_A exp(r_φ(s_t, a_t)) p_ω(u_t|s_t, a_t, k) da_t, where φ and ω are parameters. We denote the normalization term of this model by Z_{φ,ω}. By comparing the proposed model p_{φ,ω} to the data distribution p_d, we see that the reward parameter φ should be learned so that the cumulative reward is proportional to the joint probability density of actions given by the optimal policy, i.e., exp(Σ_{t=1}^T r_φ(s_t, a_t)) ∝ Π_{t=1}^T π*(a_t|s_t). In other words, the cumulative reward is large for trajectories induced by the optimal policy, so the optimal policy can be learned by maximizing the cumulative reward. Meanwhile, the density p_ω(u_t|s_t, a_t, k) is learned to estimate the noisy policy p(u_t|s_t, a_t, k). In the remainder, we refer to ω as the expertise parameter. To learn the parameters of this model, we propose to minimize the KL divergence from the data distribution to the model: min_{φ,ω} KL(p_d(s_{1:T}, u_{1:T}|k)p(k) || p_{φ,ω}(s_{1:T}, u_{1:T}, k)). By rearranging and ignoring constant terms, minimizing this KL divergence is equivalent to solving the optimization problem max_{φ,ω} f(φ, ω) − g(φ, ω), where f(φ, ω) = E_{p_d(s_{1:T}, u_{1:T}|k)p(k)}[Σ_{t=1}^T log(∫_A exp(r_φ(s_t, a_t)) p_ω(u_t|s_t, a_t, k) da_t)] and g(φ, ω) = log Z_{φ,ω}. Solving this optimization requires computing integrals over both the state space S and the action space A. Computing these integrals is feasible for small state and action spaces, but infeasible for large ones. To scale our model to MDPs with large state and action spaces, we leverage a variational approach, described next. The central idea of the variational approach is to lower-bound an integral by the Jensen inequality and a variational distribution. The main benefit is that the integral can be computed indirectly via the lower-bound, given an optimal variational distribution; however, finding the optimal distribution requires solving a sub-optimization problem. Before we proceed, notice that f(φ, ω) − g(φ, ω) is not a jointly concave function of the integrals, which prohibits applying the Jensen inequality directly. However, we can lower-bound f and g separately, since each is a concave function of its corresponding integral. Specifically, let l_{φ,ω}(s_t, a_t, u_t, k) = r_φ(s_t, a_t) + log p_ω(u_t|s_t, a_t, k).
By using a variational distribution q_ψ(a_t|s_t, u_t, k) with parameter ψ, we obtain the inequality f(φ, ω) ≥ F(φ, ω, ψ), where F(φ, ω, ψ) = E_{p_d(s_{1:T}, u_{1:T}|k)p(k)}[Σ_{t=1}^T E_{q_ψ(a_t|s_t, u_t, k)}[l_{φ,ω}(s_t, a_t, u_t, k)] + H_t(q_ψ)] and H_t(q_ψ) = −E_{q_ψ(a_t|s_t, u_t, k)}[log q_ψ(a_t|s_t, u_t, k)] is the entropy of q_ψ. It is trivial to verify that the equality f(φ, ω) = max_ψ F(φ, ω, ψ) holds, where the maximizer ψ* of the lower-bound yields q_{ψ*}(a_t|s_t, u_t, k) ∝ exp(l_{φ,ω}(s_t, a_t, u_t, k)). Therefore, the function f(φ, ω) can be substituted by max_ψ F(φ, ω, ψ). Meanwhile, by using a variational distribution q_θ(a_t, u_t|s_t, k) with parameter θ, we obtain the inequality g(φ, ω) ≥ G(φ, ω, θ), where G(φ, ω, θ) = E_{r̃_{q_θ}(s_{1:T}, u_{1:T}, a_{1:T}, k)}[Σ_{t=1}^T l_{φ,ω}(s_t, a_t, u_t, k) − log q_θ(a_t, u_t|s_t, k)] and r̃_{q_θ}(s_{1:T}, u_{1:T}, a_{1:T}, k) = p(k) p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, u_t) q_θ(a_t, u_t|s_t, k). The lower-bound G resembles the objective function of maximum entropy RL (MaxEnt-RL). By the optimality of MaxEnt-RL, we have the equality g(φ, ω) = max_θ G(φ, ω, θ), so the function g(φ, ω) can be substituted by max_θ G(φ, ω, θ). Using these lower-bounds, we have max_{φ,ω} f(φ, ω) − g(φ, ω) = max_{φ,ω,ψ} F(φ, ω, ψ) − max_θ G(φ, ω, θ) = max_{φ,ω,ψ} min_θ F(φ, ω, ψ) − G(φ, ω, θ). Solving this max-min problem is often feasible even for large state and action spaces, since F(φ, ω, ψ) and G(φ, ω, θ) are defined as expectations and can be optimized straightforwardly. In practice, we represent the variational distributions by parameterized functions and iteratively solve the sub-optimizations (w.r.t. ψ and θ) by stochastic optimization methods. In this scenario, the equalities f(φ, ω) = max_ψ F(φ, ω, ψ) and g(φ, ω) = max_θ G(φ, ω, θ) may not hold exactly, for two reasons: first, the optimal variational distributions may not lie in the space of our parameterized functions; second, stochastic optimization methods may yield local solutions. Nonetheless, when the variational distributions are represented by deep neural networks, the obtained variational distributions are often reasonably accurate and the equalities approximately hold. In practice, we must also specify models for q_θ and p_ω. We propose to use q_θ(a_t, u_t|s_t, k) = q_θ(a_t|s_t) N(u_t|a_t, Σ) and p_ω(u_t|s_t, a_t, k) = N(u_t|a_t, C_ω(k)). As shown below, the choice of q_θ(a_t, u_t|s_t, k) enables us to solve the sub-optimization w.r.t. θ by RL with reward function r_φ. Meanwhile, the choice of p_ω(u_t|s_t, a_t, k) incorporates our prior assumption that the noisy policy tends to be Gaussian, which is a reasonable assumption for actual human motor behavior. Under these model specifications, solving max_{φ,ω,ψ} min_θ F(φ, ω, ψ) − G(φ, ω, θ) is equivalent to solving max_{φ,ω,ψ} min_θ H(φ, ω, ψ, θ), whose explicit form is derived in Appendix A. Here, r̃_{q_θ}(s_{1:T}, a_{1:T}) = p_1(s_1) Π_{t=1}^T ∫_A p(s_{t+1}|s_t, u_t) N(u_t|a_t, Σ) du_t q_θ(a_t|s_t) is the noisy trajectory density induced by the policy q_θ(a_t|s_t), where N(u_t|a_t, Σ) can be regarded as an approximation of the noisy policy in Figure 1(b). Minimizing H w.r.t. θ resembles solving a MaxEnt-RL problem with reward function r_φ(s_t, a_t), except that trajectories are collected according to the noisy trajectory density. In other words, this minimization can be solved by RL, and q_θ(a_t|s_t) can be regarded as an approximation of the optimal policy. The hyper-parameter Σ determines the quality of this approximation: a smaller Σ gives a better approximation. Therefore, by choosing a reasonably small Σ, solving the max-min problem yields a reward function r_φ(s_t, a_t) and a policy q_θ(a_t|s_t) that imitates the optimal policy, which is the goal of IL.
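As a minimal sketch of the model pieces just specified (with our own names and a diagonal-covariance assumption), the following computes l_{φ,ω}(s_t, a_t, u_t, k) = r_φ(s_t, a_t) + log p_ω(u_t|s_t, a_t, k) with p_ω(u|s, a, k) = N(u|a, C_ω(k)), using one learnable diagonal covariance per demonstrator.

```python
import math
import torch

class RewardNet(torch.nn.Module):
    def __init__(self, ds, da):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(ds + da, 100), torch.nn.Tanh(), torch.nn.Linear(100, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

K, ds, da = 10, 4, 2
r_phi = RewardNet(ds, da)
log_c = torch.zeros(K, da, requires_grad=True)   # omega: log of diag(C_omega(k))

def l_phi_omega(s, a, u, k):
    var = log_c[k].exp()                         # diag of C_omega(k), one row per sample
    log_p_omega = -0.5 * (((u - a) ** 2) / var + var.log()
                          + math.log(2 * math.pi)).sum(-1)  # log N(u | a, C_omega(k))
    return r_phi(s, a) + log_p_omega

s, a, u = torch.randn(32, ds), torch.randn(32, da), torch.randn(32, da)
k = torch.randint(K, (32,))
print(l_phi_omega(s, a, u, k).shape)             # one value per (s, a, u, k) sample
```

Since q_{ψ*} ∝ exp(l_{φ,ω}), this quantity is exactly what the variational distribution q_ψ trades off: the reward term against the expertise-weighted squared error.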
The model specification for p_ω incorporates our prior assumption about the noisy policy: p_ω(u_t|s_t, a_t, k) = N(u_t|a_t, C_ω(k)) assumes that the noisy policy tends to be Gaussian, where C_ω(k) gives the estimated expertise of the k-th demonstrator; high-expertise demonstrators have small C_ω(k), and vice versa for low-expertise demonstrators. Note that VILD is not restricted to this choice: different choices of p_ω incorporate different prior assumptions. For example, a Laplace distribution incorporates a prior assumption about demonstrators who tend to execute outlier actions; in that case, the squared error in H is replaced by the absolute error (see Appendix A.3). It should be mentioned that q_ψ(a_t|s_t, u_t, k) maximizes the immediate reward while minimizing the weighted squared error between u_t and a_t, with the trade-off determined by C_ω(k). Specifically, for demonstrators with small C_ω(k) (i.e., high-expertise demonstrators), the squared error has a large magnitude and q_ψ tends to minimize the squared error; for demonstrators with large C_ω(k) (i.e., low-expertise demonstrators), the squared error has a small magnitude and q_ψ tends to maximize the immediate reward. We implement VILD with deep neural networks, where we iteratively update φ, ω, and ψ by stochastic gradient methods and update θ by policy gradient methods. A pseudo-code of VILD and implementation details are given in Appendix B. In our implementation, we include a regularization term L(ω) = T E_{p(k)}[log |C_ω(k)^{-1}|]/2 to penalize large values of C_ω(k); without this regularization, C_ω(k) can become overly large, which makes learning degenerate. We note that H already includes such a penalty via the trace term E_{p(k)}[Tr(C_ω(k)^{-1} Σ)], but the strength of this penalty tends to be too small, since we choose Σ to be small. VILD requires the variable k to be given along with demonstrations, but this variable need not be provided by experts. When k is not given, a simple strategy is to set k = n and K = N; in other words, this strategy assumes a one-to-one mapping between demonstrations and demonstrators. We apply this strategy in our experiments with real-world demonstrations. To improve the convergence rate of VILD when updating φ, we use importance sampling (IS). Specifically, analyzing the gradient ∇_φ H = ∇_φ {E_{p_d(s_{1:T}, u_{1:T}|k)p(k)}[Σ_{t=1}^T E_{q_ψ(a_t|s_t, u_t, k)}[r_φ(s_t, a_t)]] − E_{r̃_{q_θ}(s_{1:T}, a_{1:T})}[Σ_{t=1}^T r_φ(s_t, a_t)]}, we can see that the reward function is updated to maximize the expected cumulative reward obtained by the demonstrators and q_ψ, while minimizing the expected cumulative reward obtained by q_θ. However, low-quality demonstrations often yield low reward values, so stochastic gradients estimated from these demonstrations tend to be uninformative, which leads to slow convergence and poor data-efficiency. To avoid estimating such uninformative gradients, we use IS to estimate the gradients using high-quality demonstrations, which are sampled with high probability. Briefly, IS is a technique for estimating an expectation over one distribution by using samples from a different distribution. For VILD, we propose to sample k from a distribution p̃(k) ∝ ||vec(C_ω(k)^{-1})||_1. This distribution assigns high probabilities to demonstrators with a high estimated level of expertise (i.e., demonstrators with small C_ω(k)), so the estimated gradients tend to be more informative, which leads to faster convergence. To reduce the sampling bias, we use a truncated importance weight w(k) = min(p(k)/p̃(k), 1), which leads to the IS gradient ∇_φ H_IS = ∇_φ {E_{p_d(s_{1:T}, u_{1:T}|k)p̃(k)}[w(k) Σ_{t=1}^T E_{q_ψ(a_t|s_t, u_t, k)}[r_φ(s_t, a_t)]] − E_{r̃_{q_θ}(s_{1:T}, a_{1:T})}[Σ_{t=1}^T r_φ(s_t, a_t)]}. Computing w(k) requires p(k), which can be estimated accurately since k is a discrete random variable; for simplicity, we assume that p(k) is a uniform distribution.
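The following is a small sketch of this IS scheme (the variable names and the toy values of C_ω(k) are ours): sample demonstrators from p̃(k) ∝ ||vec(C_ω(k)^{-1})||_1, so high-expertise demonstrators are drawn more often, and correct with truncated weights, assuming a uniform p(k).

```python
import torch

K, da = 10, 2
# Toy expertise estimates: small C_omega(k) for low k (high expertise).
log_c = torch.linspace(-4.0, 0.0, K).unsqueeze(-1).expand(K, da)

inv_c = (-log_c).exp()                         # diag of C_omega(k)^{-1}
p_tilde = inv_c.sum(-1) / inv_c.sum()          # p~(k) proportional to ||vec(C^{-1})||_1
p_uniform = torch.full((K,), 1.0 / K)          # assumed p(k)
w = torch.minimum(p_uniform / p_tilde, torch.ones(K))   # w(k) = min(p(k)/p~(k), 1)

# Sample a minibatch of demonstrator ids according to p~(k).
k_batch = torch.multinomial(p_tilde, num_samples=32, replacement=True)
print(p_tilde.round(decimals=3), w.round(decimals=3), k_batch[:8])
```

Truncating the weights at 1 keeps oversampled high-expertise demonstrators properly downweighted while preventing large weights on rarely sampled low-expertise demonstrators, at the cost of a small bias.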
In this section, we discuss the related area of supervised learning with diverse-quality data, as well as existing IL methods that use the variational approach. Supervised learning with diverse-quality data. In supervised learning, diverse-quality data has been studied extensively, e.g., in learning with noisy labels. This task assumes that human labelers may assign incorrect labels to training inputs; with such labelers, the obtained dataset consists of high-quality data with correct labels and low-quality data with incorrect labels. Many methods were proposed to handle this setting. The most related are probabilistic models that aim to infer the correct labels and the level of labelers' expertise: one method is based on a two-coin model which enables estimating the correct labels and the level of expertise, while another is based on weighted loss functions, where the weights are determined by the estimated labels and level of expertise. Methods for supervised learning with diverse-quality data can be leveraged to learn a policy in our setting. However, they tend to perform poorly due to the issue of compounding error, as discussed in Section 3.1. Variational approach in IL. The variational approach has previously been utilized in IL to perform MM-IL and to reduce over-fitting. Specifically, MM-IL aims to learn a multi-modal policy from diverse demonstrations collected by many experts, where each mode of the policy represents the decision making of one expert. A multi-modal policy is commonly represented by a context-dependent policy, where each context represents one mode of the policy. The variational approach has been used to learn such contexts, i.e., by learning a variational auto-encoder and maximizing a variational lower-bound of mutual information. Meanwhile, the variational information bottleneck (VIB) has been used to reduce over-fitting in IL: VIB compresses the information flow by minimizing a variational bound of mutual information, and this compression filters out irrelevant signals, which leads to less over-fitting. Unlike these existing works, we utilize the variational approach to aid in computing integrals over large state-action spaces, not for learning a variational auto-encoder or optimizing a variational bound of mutual information. In this section, we experimentally evaluate the performance of VILD (with and without IS) on continuous-control benchmarks and real-world crowdsourced demonstrations. For benchmarks, we use four continuous-control tasks from the OpenAI gym with demonstrations from a pre-trained RL agent. For real-world demonstrations, we use a robosuite reaching task with demonstrations from a real-world crowdsourcing platform. Performance is evaluated using the cumulative ground-truth reward along trajectories (i.e., higher is better), computed from test trajectories generated by the learned policies (i.e., q_θ(a_t|s_t)).
We use 10 test trajectories for the benchmark tasks and 100 test trajectories for the robosuite reaching task; we use a larger number of test trajectories in the latter due to the high variability of its initial states. We repeat the experiments for 5 trials with different random seeds and report the mean and standard error. Baselines. We compare VILD against GAIL, AIRL, VAIL, MaxEnt-IRL, and InfoGAIL. These are online IL methods which collect transition samples to learn policies. We use trust region policy optimization (TRPO) to update policies, except for the Humanoid task, where we use soft actor-critic (SAC). For InfoGAIL, we report the performance averaged over uniformly sampled contexts, as well as the performance with the best context chosen during testing. Data generation. To generate demonstrations from π* (pre-trained by TRPO) according to Figure 1(b), we use two types of noisy policy p(u_t|a_t, s_t, k): a Gaussian noisy policy N(u_t|a_t, σ_k² I) and a time-signal-dependent (TSD) noisy policy N(u_t|a_t, diag(b_k(t) ⊙ ||a_t||_1 / d_a)), where b_k(t) is sampled from a noise process. We use K = 10 demonstrators with different σ_k and different noise processes for b_k(t). Each demonstrator generates trajectories of approximately T = 1000 time steps, and the number of state-action pairs in each dataset is approximately 10000. Notice that for TSD, the noise variance depends on time and on the magnitude of actions; this characteristic has been observed in human motor control. More details on data generation are given in Appendix C. Results against online IL methods. Figure 2 shows the learning curves of VILD and existing methods against the number of transition samples on HalfCheetah and Ant, whereas Table 1 reports the performance achieved in the last 100 iterations. Clearly, VILD with IS overall outperforms existing methods in terms of both data-efficiency and final performance, i.e., VILD with IS learns better policies using fewer transition samples. VILD without IS tends to outperform existing methods in terms of final performance, but it is less data-efficient than VILD with IS, except on Humanoid with the Gaussian noisy policy, where VILD without IS tends to perform better. We conjecture that this is because IS slightly biases gradient estimation, which may have a negative effect on performance. Nonetheless, the overall good performance of VILD with IS suggests that it is an effective method for handling diverse-quality demonstrations. In contrast, existing methods perform poorly as expected, except on the Humanoid task. For the Humanoid task, VILD tends to perform best in terms of mean performance; nonetheless, all methods except GAIL achieve statistically comparable performance according to a t-test. This is perhaps because amateurs in this task perform relatively well compared to amateurs in the other tasks, as seen from the demonstrators' performance given in Tables 2 and 3 (Appendix C); demonstrations from these amateurs therefore do not affect the performance of IL methods in this task as severely as in the other tasks. We found that InfoGAIL, which learns a context-dependent policy, may achieve good performance when the policy is conditioned on specific contexts. For instance, InfoGAIL (best context) performs quite well in the Walker2d task with the TSD noisy policy (learning curves are provided in Figure 7(b)).
However, as shown in Figure 10, its performance varies across contexts and is quite poor on average when contexts are drawn from a uniform distribution. This supports our conjecture that MM-IL methods are not suitable for our setting, where the level of demonstrators' expertise is absent. It can also be seen that VILD without IS performs better under the Gaussian noisy policy than under the TSD noisy policy. This is because the model of VILD is correctly specified for the Gaussian noisy policy but misspecified for the TSD noisy policy, and a misspecified model indeed reduces performance. Nonetheless, VILD with IS still performs well for both types of noisy policy, perhaps because the negative effects of a misspecified model are not too severe for learning the expertise parameters, which are required to compute p̃(k). We also conducted the following evaluations; due to space limitations, the figures are given in Appendix D. Results against offline IL methods. We compare VILD against offline IL methods based on supervised learning, namely behavior cloning (BC), Co-teaching, which builds on a method for learning with noisy labels, and BC from diverse-quality demonstrations (BC-D), which optimizes the naive model described in Section 3.1. Results in Figure 8 show that these methods perform worse than VILD overall; BC performs the worst, since it suffers severely from both the compounding error and low-quality demonstrations. Compared to BC, BC-D and Co-teaching are quite robust against low-quality demonstrations, but they still perform worse than VILD with IS. Accuracy of the estimated expertise parameter. To evaluate the accuracy of the estimated expertise parameter, we compare the ground-truth value of σ_k under the Gaussian noisy policy against the learned covariance C_ω(k). Figure 9 shows that VILD learns an accurate ranking of demonstrators' expertise. The values of these parameters are also quite accurate compared to the ground truth, except for demonstrators with a low level of expertise. A reason for this phenomenon is that low-quality demonstrations are highly dissimilar, which makes learning the expertise more challenging. In this experiment, we evaluate the robustness of VILD against real-world demonstrations. Specifically, we conduct an experiment using real-world demonstrations collected by a robotic crowdsourcing platform. The public datasets were collected in the robosuite environment for object-manipulation tasks such as assembly tasks. In our experiment, we consider a reaching task, where demonstrations come from assembly-task demonstrations clipped at the moment the robot's end-effector contacts the target object. We use N = 10 demonstrations whose lengths are approximately T = 500 and set K = 10. The number of state-action pairs in the demonstration dataset is approximately 5000. For VILD, we apply the log-sigmoid function to the reward function, which improves performance in this task. More details of the experimental setting are provided in Appendix C.2. Figure 3 shows the performance of all methods except VILD without IS and VAIL; we do not evaluate these two, since IS improves the performance of VILD and VAIL is comparable to GAIL. VILD with IS performs better than GAIL, AIRL, and MaxEnt-IRL. VILD also performs better than InfoGAIL in terms of final performance; InfoGAIL learns faster in the early stage of learning, but its performance saturates and VILD eventually outperforms it.
These experimental results show that VILD is more robust against real-world demonstrations of diverse quality than existing state-of-the-art methods. An example of a trajectory generated by VILD's policy is shown in Figure 5. Figure 4 shows the performance of InfoGAIL with different context variables z. We can see that InfoGAIL performs well when the policy is conditioned on specific contexts, e.g., z = 7; indeed, choosing the best context during testing can improve the performance of InfoGAIL, as demonstrated in Figure 3, where InfoGAIL (best context) performs very well. However, InfoGAIL (best context) is less practical than VILD, since choosing the best context requires an expert to evaluate the performance of all contexts. In contrast, the performance of VILD does not depend on contexts, since VILD does not learn a context-dependent policy. Moreover, the performance of InfoGAIL (best context) is quite unstable, and it is still outperformed by VILD in terms of final performance. In this paper, we explored a practical setting of IL where demonstrations have diverse quality. We showed the deficiency of existing methods and proposed a robust method called VILD, which learns both the reward function and the level of demonstrators' expertise using the variational approach. Empirical results demonstrated that our work enables scalable and data-efficient IL under this practical setting. In the future, we will explore other approaches to efficiently estimate the parameters of the proposed model besides the variational approach. We will also explore approaches to handle model misspecification, i.e., scenarios where the noisy policy differs from the model p_ω. Specifically, we will explore more flexible models of p_ω such as neural networks, as well as the tempered posterior approach (Grünwald & van) to improve the robustness of our model. This section derives the lower-bounds of f(φ, ω) and g(φ, ω) presented in the paper, as well as the objective function H(φ, ω, ψ, θ) of VILD. Let l_{φ,ω}(s_t, a_t, u_t, k) = r_φ(s_t, a_t) + log p_ω(u_t|s_t, a_t, k). We have f(φ, ω) = E_{p_d(s_{1:T}, u_{1:T}|k)p(k)}[Σ_{t=1}^T f_t(φ, ω)], where f_t(φ, ω) = log ∫_A exp(l_{φ,ω}(s_t, a_t, u_t, k)) da_t. By using a variational distribution q_ψ(a_t|s_t, u_t, k) with parameter ψ, we can bound f_t(φ, ω) from below via the Jensen inequality: f_t(φ, ω) = log ∫_A q_ψ(a_t|s_t, u_t, k) [exp(l_{φ,ω}(s_t, a_t, u_t, k)) / q_ψ(a_t|s_t, u_t, k)] da_t ≥ E_{q_ψ(a_t|s_t, u_t, k)}[l_{φ,ω}(s_t, a_t, u_t, k) − log q_ψ(a_t|s_t, u_t, k)] =: F_t(φ, ω, ψ). Then, by the linearity of expectation, we obtain the lower-bound F(φ, ω, ψ) = E_{p_d(s_{1:T}, u_{1:T}|k)p(k)}[Σ_{t=1}^T F_t(φ, ω, ψ)] ≤ f(φ, ω). To verify that f(φ, ω) = max_ψ F(φ, ω, ψ), we maximize F_t(φ, ω, ψ) w.r.t. q_ψ under the constraint that q_ψ is a valid probability density, i.e., q_ψ(a_t|s_t, u_t, k) > 0 and ∫_A q_ψ(a_t|s_t, u_t, k) da_t = 1. Setting the derivative of F_t(φ, ω, ψ) w.r.t. q_ψ to zero yields q_ψ(a_t|s_t, u_t, k) ∝ exp(l_{φ,ω}(s_t, a_t, u_t, k) − 1), and normalizing under the constraint ∫_A q_ψ(a_t|s_t, u_t, k) da_t = 1 gives q_ψ(a_t|s_t, u_t, k) = exp(l_{φ,ω}(s_t, a_t, u_t, k)) / ∫_A exp(l_{φ,ω}(s_t, a_t, u_t, k)) da_t. To show that this is indeed the maximizer, we substitute q_{ψ*}(a_t|s_t, u_t, k) = exp(l_{φ,ω}(s_t, a_t, u_t, k)) / ∫_A exp(l_{φ,ω}(s_t, a_t, u_t, k)) da_t into F_t(φ, ω, ψ), which gives F_t(φ, ω, ψ*) = log ∫_A exp(l_{φ,ω}(s_t, a_t, u_t, k)) da_t = f_t(φ, ω). This equality verifies that f_t(φ, ω) = max_ψ F_t(φ, ω, ψ). Finally, by the linearity of expectation, we have f(φ, ω) = max_ψ F(φ, ω, ψ). Next, we derive the lower-bound of g(φ, ω) presented in the paper. We first derive a trivial lower-bound using a general variational distribution over trajectories and reveal its issues.
Then, we derive the lower-bound presented in the paper by using a structured variational distribution. Recall that g(φ, ω) = log Z_{φ,ω}, i.e., g(φ, ω) = log(Σ_{k=1}^K ∫∫∫ p(k) p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, u_t) exp(l_{φ,ω}(s_t, a_t, u_t, k)) ds_{1:T} du_{1:T} da_{1:T}). Lower-bound via a variational distribution. A lower-bound of g can be obtained by using a variational distribution q̄_β(s_{1:T}, u_{1:T}, a_{1:T}, k) with parameter β. We note that this variational distribution allows any dependency between the random variables s_{1:T}, u_{1:T}, a_{1:T}, and k. By using this distribution together with the Jensen inequality, we obtain the lower-bound g(φ, ω) ≥ E_{q̄_β(s_{1:T}, u_{1:T}, a_{1:T}, k)}[log p(k)p_1(s_1) + Σ_{t=1}^T {log p(s_{t+1}|s_t, u_t) + l_{φ,ω}(s_t, a_t, u_t, k)} − log q̄_β(s_{1:T}, u_{1:T}, a_{1:T}, k)] =: Ḡ(φ, ω, β). The main issue with this lower-bound is that Ḡ(φ, ω, β) can be computed or approximated only when we have access to the transition probability p(s_{t+1}|s_t, u_t). In many practical tasks, the transition probability is unknown and would need to be estimated, which is known to be highly challenging for large state and action spaces. For these reasons, this lower-bound is not suitable for our method. Lower-bound via a structured variational distribution. To avoid the above issue, we use the structured variational approach, where the key idea is to pre-define conditional dependencies to ease computation. Specifically, we use a variational distribution q_θ(a_t, u_t|s_t, k) with parameter θ and define the dependencies between states according to the transition probability of the MDP. With this variational distribution, we lower-bound g as follows: g(φ, ω) ≥ E_{r̃_{q_θ}(s_{1:T}, u_{1:T}, a_{1:T}, k)}[Σ_{t=1}^T l_{φ,ω}(s_t, a_t, u_t, k) − log q_θ(a_t, u_t|s_t, k)] =: G(φ, ω, θ), where r̃_{q_θ}(s_{1:T}, u_{1:T}, a_{1:T}, k) = p(k) p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, u_t) q_θ(a_t, u_t|s_t, k). Crucially, the transition terms inside the logarithm cancel between the model and r̃_{q_θ}, so G can be estimated from sampled trajectories without knowing p(s_{t+1}|s_t, u_t). The optimal variational distribution q_{θ*}(a_t, u_t|s_t, k) can be found by maximizing G(φ, ω, θ) w.r.t. q_θ. Solving this maximization is identical to solving a maximum entropy RL (MaxEnt-RL) problem for the MDP defined by the tuple M̄ = (S × N, A × A, p(s_{t+1}|s_t, u_t) I_{k_t = k_{t+1}}, p_1(s_1)p(k_1), l_{φ,ω}(s, a, u, k)). Specifically, this MDP is defined with a state variable (s_t, k_t) ∈ S × N, an action variable (a_t, u_t) ∈ A × A, a transition probability density p(s_{t+1}|s_t, u_t) I_{k_t = k_{t+1}}, an initial state density p_1(s_1)p(k_1), and a reward function l_{φ,ω}(s_t, a_t, u_t, k). Here, I_{a=b} is the indicator function, which equals 1 if a = b and 0 otherwise. By adopting the optimality of MaxEnt-RL, we have g(φ, ω) = max_θ G(φ, ω, θ), where the optimal variational distribution is q_{θ*}(a_t, u_t|s_t, k) = exp(Q(s_t, k, a_t, u_t) − V(s_t, k)). The functions Q and V are soft value functions defined as Q(s_t, k, a_t, u_t) = l_{φ,ω}(s_t, a_t, u_t, k) + E_{p(s_{t+1}|s_t, u_t)}[V(s_{t+1}, k)] and V(s_t, k) = log ∫_A ∫_A exp(Q(s_t, k, a_t, u_t)) da_t du_t. This section derives the objective function H(φ, ω, ψ, θ) from F(φ, ω, ψ) − G(φ, ω, θ). Specifically, we substitute the models p_ω(u_t|s_t, a_t, k) = N(u_t|a_t, C_ω(k)) and q_θ(a_t, u_t|s_t, k) = q_θ(a_t|s_t) N(u_t|a_t, Σ); we also give an example using a Laplace distribution for p_ω(u_t|s_t, a_t, k) instead of the Gaussian. First, we substitute q_θ(a_t, u_t|s_t, k) = q_θ(a_t|s_t) N(u_t|a_t, Σ) into G: G(φ, ω, θ) = E_{r̃_{q_θ}(s_{1:T}, u_{1:T}, a_{1:T}, k)}[Σ_{t=1}^T l_{φ,ω}(s_t, a_t, u_t, k) − log q_θ(a_t|s_t) + (1/2)||u_t − a_t||²_{Σ^{-1}}] + T c_1, where c_1 is a constant corresponding to the log-normalization term of the Gaussian distribution.
Next, using the reparameterization trick, we rewrite r̃_{q_θ}(s_{1:T}, u_{1:T}, a_{1:T}, k) with u_t = a_t + Σ^{1/2} ε_t, where ε_t ~ N(ε_t|0, I). With this substitution, the expectation of Σ_{t=1}^T ||u_t − a_t||²_{Σ^{-1}} over r̃_{q_θ}(s_{1:T}, u_{1:T}, a_{1:T}, k) becomes E[Σ_{t=1}^T ||Σ^{1/2} ε_t||²_{Σ^{-1}}] = E[Σ_{t=1}^T ε_t^T ε_t] = T d_a, which is a constant. The quantity G can therefore be expressed as G(φ, ω, θ) = E_{r̃_{q_θ}(s_{1:T}, u_{1:T}, a_{1:T}, k)}[Σ_{t=1}^T l_{φ,ω}(s_t, a_t, u_t, k) − log q_θ(a_t|s_t)] + constant. Ignoring the constant, the optimization problem max_{φ,ω,ψ} min_θ F(φ, ω, ψ) − G(φ, ω, θ) is equivalent to max_{φ,ω,ψ} min_θ E_{p_d(s_{1:T}, u_{1:T}|k)p(k)}[Σ_{t=1}^T E_{q_ψ(a_t|s_t, u_t, k)}[l_{φ,ω}(s_t, a_t, u_t, k) − log q_ψ(a_t|s_t, u_t, k)]] − E_{r̃_{q_θ}(s_{1:T}, u_{1:T}, a_{1:T}, k)}[Σ_{t=1}^T l_{φ,ω}(s_t, a_t, u_t, k) − log q_θ(a_t|s_t)]. Our next step is to substitute our choice of model for p_ω. First, consider a Gaussian distribution p_ω(u_t|s_t, a_t, k) = N(u_t|a_t, C_ω(s_t, k)) whose covariance depends on the state. With this model, the second term above contains E_{r̃_{q_θ}}[Σ_{t=1}^T log N(u_t|a_t, C_ω(s_t, k))] = −(1/2) E_{r̃_{q_θ}}[Σ_{t=1}^T ||u_t − a_t||²_{C_ω(s_t,k)^{-1}} + log |C_ω(s_t, k)|] + T c_2, where c_2 = −(d_a/2) log 2π is a constant; using the reparameterization trick again, E_{r̃_{q_θ}}[Σ_{t=1}^T ||u_t − a_t||²_{C_ω(s_t,k)^{-1}}] = E_{r̃_{q_θ}}[Σ_{t=1}^T Tr(C_ω(s_t, k)^{-1} Σ)]. Maximizing this quantity w.r.t. θ has the following implication: q_θ(a_t|s_t) maximizes the expected cumulative reward while avoiding states that are difficult for demonstrators. Specifically, a large value of E_{p(k)}[log |C_ω(s_t, k)|] indicates that demonstrators on average have a low level of expertise in state s_t under our estimated covariance; in other words, it is difficult for all demonstrators on average to execute optimal actions accurately in this state. Since the policy q_θ(a_t|s_t) should minimize E_{p(k)}[log |C_ω(s_t, k)|], the policy should avoid states that are difficult for demonstrators. We expect that this property may improve the exploration-exploitation trade-off in IL; nonetheless, we leave an investigation of this property for future work, since it is not in the scope of the paper. In this paper, we specify that the covariance does not depend on the state: C_ω(s_t, k) = C_ω(k). This specification, together with the quadratic-form identity E[||x − μ||²_A] = Tr(A Cov[x]) + ||E[x] − μ||²_A, simplifies the second term so that the expectation reduces to one over the noisy trajectory density r̃_{q_θ}(s_{1:T}, a_{1:T}) = p_1(s_1) Π_{t=1}^T ∫_A p(s_{t+1}|s_t, u_t) N(u_t|a_t, Σ) du_t q_θ(a_t|s_t). Next, we substitute p_ω(u_t|s_t, a_t, k) = N(u_t|a_t, C_ω(k)) into the first term. Finally, ignoring constants, the problem is equivalent to max_{φ,ω,ψ} min_θ H(φ, ω, ψ, θ), where H(φ, ω, ψ, θ) = E_{p_d(s_{1:T}, u_{1:T}|k)p(k)}[Σ_{t=1}^T E_{q_ψ(a_t|s_t, u_t, k)}[r_φ(s_t, a_t) − (1/2)||u_t − a_t||²_{C_ω(k)^{-1}} − log q_ψ(a_t|s_t, u_t, k)]] − E_{r̃_{q_θ}(s_{1:T}, a_{1:T})}[Σ_{t=1}^T r_φ(s_t, a_t) − log q_θ(a_t|s_t)] + (T/2) E_{p(k)}[Tr(C_ω(k)^{-1} Σ)]. This concludes the derivation of VILD. As mentioned, distributions other than the Gaussian can be used for p_ω. For instance, consider a multivariate independent Laplace distribution p_ω(u_t|s_t, a_t, k) = Π_{i=1}^{d_a} (2 C_{ω,i}(k))^{-1} exp(−|u_{t,i} − a_{t,i}| / C_{ω,i}(k)), where the division of a vector by a vector denotes element-wise division. The Laplace distribution has heavier tails than the Gaussian, which makes it more suitable for modeling demonstrators who tend to execute outlier actions. Using the Laplace distribution for p_ω(u_t|s_t, a_t, k) yields an objective H_Lap; the differences between H_Lap and H are the absolute error in place of the squared error and the scaling of the trace term. We implement VILD using the PyTorch deep learning framework. For all function approximators, we use neural networks with 2 hidden layers of 100 tanh units, except for the Humanoid task and the robosuite reaching task, where we use neural networks with 2 hidden layers of 100 relu units. Each iteration of Algorithm 1 performs the following updates: 1) update q_ψ by an estimate of ∇_ψ H(φ, ω, ψ, θ); 2) update p_ω by an estimate of ∇_ω H(φ, ω, ψ, θ) + ∇_ω L(ω); 3) update r_φ by an estimate of ∇_φ H_IS(φ, ω, ψ, θ); 4) update q_θ by an RL method (e.g., TRPO or SAC) with reward function r_φ.
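The following is a runnable toy sketch of this update structure on synthetic 1D data: alternating stochastic updates of q_ψ, ω, and r_φ, followed by a crude gradient-ascent policy step standing in for TRPO/SAC. It only illustrates the structure of Algorithm 1; all names, network sizes, and the simplified losses are ours, not the paper's implementation.

```python
import torch

torch.manual_seed(0)
K, N = 5, 2000
sigma_k = torch.linspace(0.05, 1.0, K)
k = torch.randint(K, (N,))
s = torch.randn(N, 1)
u = -0.5 * s + sigma_k[k].unsqueeze(-1) * torch.randn(N, 1)   # diverse-quality demos

r_phi = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
q_psi = torch.nn.Sequential(torch.nn.Linear(2 + K, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
q_theta = torch.nn.Linear(1, 1)                # mean of the Gaussian policy
log_c = torch.zeros(K, requires_grad=True)     # expertise parameter omega

opt_model = torch.optim.Adam(
    list(r_phi.parameters()) + list(q_psi.parameters()) + [log_c], lr=3e-4)
opt_policy = torch.optim.Adam(q_theta.parameters(), lr=3e-4)

for it in range(500):
    idx = torch.randint(N, (256,))
    sb, ub, kb = s[idx], u[idx], k[idx]
    kb1 = torch.nn.functional.one_hot(kb, K).float()

    # Steps 1-3: ascend the demonstrator term (reward plus log p_omega via
    # q_psi's action), descend the agent term, and apply the L(omega) penalty.
    a = q_psi(torch.cat([sb, ub, kb1], -1))    # mean action of q_psi
    var = log_c[kb].exp().unsqueeze(-1)
    log_p_omega = -0.5 * ((ub - a) ** 2 / var + var.log())
    a_pi = (q_theta(sb) + 0.1 * torch.randn_like(sb)).detach()
    loss = -(r_phi(torch.cat([sb, a], -1)) + log_p_omega).mean() \
           + r_phi(torch.cat([sb, a_pi], -1)).mean() + 0.5 * log_c.mean()
    opt_model.zero_grad(); loss.backward(); opt_model.step()

    # Step 4: crude policy-gradient stand-in for a TRPO/SAC step on r_phi
    # (stray gradients reaching r_phi are cleared by zero_grad above).
    loss_pi = -r_phi(torch.cat([sb, q_theta(sb)], -1)).mean()
    opt_policy.zero_grad(); loss_pi.backward(); opt_policy.step()

print(log_c.exp().sqrt().detach())   # rough per-demonstrator noise scales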
We optimize the parameters φ, ω, and ψ by Adam with step-size 3 × 10^{-4}, β_1 = 0.9, β_2 = 0.999, and mini-batch size 256. To optimize the policy parameter θ, we use trust region policy optimization (TRPO) with batch size 1000, except on the Humanoid task, where we use soft actor-critic (SAC) with mini-batch size 256. Note that TRPO is an on-policy RL method that uses only trajectories collected by the current policy, while SAC is an off-policy RL method that also uses trajectories collected by previous policies. On-policy methods are generally more stable than off-policy methods, while off-policy methods are generally more data-efficient; we use SAC for Humanoid mainly due to its high data-efficiency. When SAC is used, we also use trajectories collected by previous policies to approximate the expectation over the trajectory density r̃_{q_θ}(s_{1:T}, a_{1:T}). For the distribution p_ω(u_t|s_t, a_t, k) = N(u_t|a_t, C_ω(k)), we use diagonal covariances C_ω(k) = diag(c_k), where ω = {c_k}_{k=1}^K and c_k ∈ R^{d_a} are parameter vectors to be learned. For the distribution q_ψ(a_t|s_t, u_t, k), we use a Gaussian distribution with diagonal covariance, where the mean and the logarithm of the standard deviation are outputs of neural networks. Since k is a discrete variable, we represent q_ψ(a_t|s_t, u_t, k) by neural networks that have K output heads and take input vectors (s_t, u_t); the k-th output head corresponds to (the mean and log standard deviation of) q_ψ(a_t|s_t, u_t, k). We also pre-train the mean function of q_ψ(a_t|s_t, u_t, k) by performing least-squares regression for 1000 gradient steps with target value u_t; this pre-training provides reasonable initial predictions. For the policy q_θ(a_t|s_t), we use a Gaussian policy with diagonal covariance, where the mean and the logarithm of the standard deviation are outputs of neural networks. We use Σ = 10^{-8} I in the experiments. To control the exploration-exploitation trade-off, we use an entropy coefficient α = 0.0001 in TRPO; in SAC, the value of α is optimized so that the policy has a certain entropy. Note that including α in VILD is equivalent to rescaling the quantities in the model by α, i.e., exp(r_φ(s_t, a_t)/α) and (p_ω(u_t|s_t, a_t, k))^{1/α}. A discount factor 0 < γ < 1 may be included similarly, and we use γ = 0.99 in the experiments. For all methods, we regularize the reward/discriminator function by a gradient penalty with coefficient 10, since this was previously shown to improve the performance of generative adversarial learning methods. For methods that learn a reward function, namely VILD, AIRL, and MaxEnt-IRL, we apply a sigmoid function to the output of the reward network to bound the reward values. We found that, without the bounds, the reward values of the agent can be highly negative in the early stage of learning, which makes RL methods converge prematurely to poor policies. An explanation of this phenomenon is that, in MDPs with large state and action spaces, the distribution of demonstrations and the distribution of the agent's trajectories do not overlap in the early stage of learning. In such a scenario, it is trivial to learn a reward function that tends to positive-infinity values for demonstrations and negative-infinity values for the agent's trajectories. While the gradient-penalty regularizer slightly remedies this issue, we found that the regularizer alone is insufficient to prevent this scenario. Moreover, for VILD, it is beneficial to bound the reward function to control the trade-off between the immediate reward and the squared error when optimizing ψ. A pseudo-code of VILD with IS is given in Algorithm 1, where the reward parameter is updated by the IS gradient ∇_φ H_IS; for VILD without IS, the reward parameter is instead updated by an estimate of ∇_φ H(φ, ω, ψ, θ). The regularizer L(ω) = T E_{p(k)}[log |C_ω(k)^{-1}|]/2 penalizes large values of C_ω(k). A source code of our implementation will be publicly available.
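As a concrete illustration of the q_ψ parameterization described above (our reading of it, with illustrative names), the following sketch builds a network that takes (s_t, u_t) and has K output heads, where head k yields the mean and log standard deviation of the Gaussian q_ψ(a_t|s_t, u_t, k).

```python
import torch

class QPsi(torch.nn.Module):
    def __init__(self, ds, da, K, hidden=100):
        super().__init__()
        self.da, self.K = da, K
        self.body = torch.nn.Sequential(
            torch.nn.Linear(ds + da, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh())
        self.heads = torch.nn.Linear(hidden, K * 2 * da)  # K heads: (mean, log-std)

    def forward(self, s, u, k):
        out = self.heads(self.body(torch.cat([s, u], -1)))
        out = out.view(-1, self.K, 2 * self.da)
        head = out[torch.arange(len(k)), k]               # pick head k per sample
        mean, log_std = head.split(self.da, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

q_psi = QPsi(ds=4, da=2, K=10)
s, u = torch.randn(32, 4), torch.randn(32, 2)
k = torch.randint(10, (32,))
dist = q_psi(s, u, k)
a = dist.rsample()                                        # reparameterized sample
print(a.shape, dist.log_prob(a).sum(-1).shape)
```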
In this section, we describe the experimental settings and data generation, and give brief reviews of the methods compared against VILD. For the benchmark experiments in Section 5.1, we evaluate VILD on four continuous-control benchmark tasks from the OpenAI gym platform with the MuJoCo physics simulator: HalfCheetah, Ant, Walker2d, and Humanoid. To obtain the optimal policy for generating demonstrations, we use the ground-truth reward function of each task to pre-train π* with TRPO. We generate diverse-quality demonstrations using K = 10 demonstrators according to the graphical model in Figure 1(b), and consider two types of noisy policy p(u_t|s_t, a_t, k): a Gaussian noisy policy and a time-signal-dependent (TSD) noisy policy. Gaussian noisy policy. We use a Gaussian noisy policy N(u_t|a_t, σ_k² I) with a constant covariance. The values of σ_k for the 10 demonstrators are 0.01, 0.05, 0.1, 0.25, 0.4, 0.6, 0.7, 0.8, 0.9, and 1.0, respectively. Note that our model assumption on p_ω corresponds to this Gaussian noisy policy. Table 2 shows the performance of the demonstrators (in terms of cumulative ground-truth rewards) under this Gaussian noisy policy. A random policy π_0 is an initial policy network for learning, whose weights are initialized such that the magnitude of actions is small; note that this initialization scheme is common practice in deep RL. TSD noisy policy. To make learning more challenging, we also generate demonstrations according to a noise characteristic of human motor control, where the magnitude of the noise is proportional to the magnitude of the actions and increases with execution time. Specifically, we generate demonstrations using a Gaussian distribution N(u_t|a_t, diag(b_k(t) ⊙ (||a_t||_1 / d_a))), where the covariance is proportional to the magnitude of the action and depends on the time step, and ⊙ denotes an element-wise product. We call this policy the time-signal-dependent (TSD) noisy policy. Here, b_k(t) is a sample of a noise process whose variance increases over time, as shown in Figure 6; we obtain this noise process for the k-th demonstrator by reversing Ornstein-Uhlenbeck (OU) processes with parameters θ = 0.15 and σ = σ_k. The values of σ_k for the demonstrators are 0.01, 0.05, 0.1, 0.25, 0.4, 0.6, 0.7, 0.8, 0.9, and 1.0, respectively. Table 3 shows the performance of the demonstrators under this TSD noisy policy. Learning from demonstrations generated by TSD is challenging: the Gaussian model of p_ω cannot perfectly model the TSD noisy policy, since the ground-truth variance is a function of actions and time steps.
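The following sketch gives one reading of the two noisy policies above, with the σ_k values just listed. For TSD, a time-varying noise sample b_k(t) scales with the action magnitude ||a_t||_1 / d_a; the simple OU-style process below, whose variance grows from zero toward its stationary level, is our stand-in for the paper's reversed-OU noise process.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmas = [0.01, 0.05, 0.1, 0.25, 0.4, 0.6, 0.7, 0.8, 0.9, 1.0]

def gaussian_noisy(a, k):
    """u_t ~ N(a_t, sigma_k^2 I)."""
    return a + sigmas[k] * rng.normal(size=a.shape)

def tsd_noisy(a_seq, k, theta=0.15):
    """a_seq: (T, d_a) optimal actions; returns noisy actions u_{1:T}."""
    T, da = a_seq.shape
    b = np.zeros(T)
    for t in range(1, T):                       # OU-style noise process b_k(t)
        b[t] = (1.0 - theta) * b[t - 1] + sigmas[k] * rng.normal()
    # Per-step variance scales |b_k(t)| by the action magnitude ||a_t||_1 / d_a.
    var = np.abs(b)[:, None] * np.abs(a_seq).sum(-1, keepdims=True) / da
    return a_seq + np.sqrt(var) * rng.normal(size=a_seq.shape)

a_seq = 0.5 * rng.normal(size=(1000, 6))
u_seq = tsd_noisy(a_seq, k=9)                   # the lowest-expertise demonstrator
print(np.sqrt(np.mean((u_seq - a_seq) ** 2)))   # average executed-action noise
```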
For the real-world data experiment in Section 5.2, we use a robot control task from the robosuite environment and a public crowdsourced demonstration dataset collected in robosuite for object-manipulation tasks such as assembly; these tasks require the agent to perform three subtasks: reaching, picking, and placing. In our preliminary experiments, none of the IL methods successfully learned object-manipulation policies, since the agent often fails at picking the object. We expect that a hierarchical policy is necessary to perform these manipulation tasks, due to their hierarchical (i.e., subtask) structure. Since hierarchical IL is not in the scope of this paper, we consider the subtask of reaching, where non-hierarchical policies suffice, and leave an extension of VILD to hierarchical policies for future work. The reaching subtask is still challenging for IL due to the diverse quality of the crowdsourced demonstrations. To obtain reaching demonstrations from the original object-manipulation demonstrations (we use the SawyerNutAssemblyRound dataset), we terminate each demonstration once the robot's end-effector contacts the target object. After applying this termination procedure, the dataset used in this experiment consists of 10 randomly chosen demonstrations (N = 10) whose length T is approximately 500 time steps; the number of state-action pairs in the dataset is approximately 5000. Since we do not know the actual number of demonstrators who collected these N = 10 demonstrations, we use the strategy described in Section 3.3 and set K = N and k = n. We use the true states of the robot and do not use visual observations. Since the reaching task does not require picking the object, we disable the gripper control command of the robot. The state space of this task is S ⊆ R^44 and the action space is A ⊆ R^7. Figure 11 shows three examples of the demonstrations used in this experiment; the differences in quality are noticeable, e.g., demonstration 3 is better than demonstration 2 since the robot reaches the object faster. The performance of the learned policies is evaluated using a reward function whose values are inversely proportional to the distance between the object and the end-effector (i.e., a small distance yields a high reward). We repeat the experiment for 5 trials using the same dataset and report the average performance (undiscounted cumulative rewards); for each trial, we generate 100 test trajectories for evaluation. Note that the number of test trajectories in this experiment is larger than in the benchmark experiments, because the initial states of this reaching task are much more varied than those of the benchmark tasks. We do not evaluate VILD without IS or VAIL, since in the benchmarks VILD with IS performs better than VILD without IS, and VAIL is comparable to GAIL. For all methods, we use neural networks with 2 hidden layers of 100 relu units, and we update policy parameters by TRPO with the same hyper-parameters as in the benchmark experiments. We pre-train the mean of the Gaussian policies for all methods by behavior cloning (i.e., 1000 gradient-descent steps of least-squares regression); to pre-train InfoGAIL, which learns a context-dependent policy, we use the variable k as the context. For VILD, we apply the log-sigmoid function to the reward function: we parameterize the reward as r_φ(s, a) = log D_φ(s, a), where D_φ(s, a) = exp(d_φ(s, a)) / (exp(d_φ(s, a)) + 1) and d_φ: S × A → R. We also apply the substitution −log D_φ(s, a) → log(1 − D_φ(s, a)), which is common practice in the GAN literature.
By doing so, we obtain an objective for VILD that closely resembles the objective of GAIL. We use this variant of VILD in this experiment, since it performs better than VILD with the standard reward function; although we omit the IS distribution from this formulation for clarity, we do use IS in this experiment. Here, we briefly review the methods compared against VILD in our experiments, beginning with the online IL methods, which learn a policy by RL and require additional transition samples from the MDP. MaxEnt-IRL. Maximum (causal) entropy IRL (MaxEnt-IRL) is a well-known IRL method. The original derivation of the method is based on the maximum entropy principle for causal interactions and uses a linear-in-parameter reward function r_φ(s_t, a_t) = φ^T b(s_t, a_t) with a basis function b. Here, we consider an alternative derivation which is applicable to nonlinear reward functions. Briefly speaking, MaxEnt-IRL learns a reward parameter by minimizing the KL divergence from the data distribution p*(s_{1:T}, a_{1:T}) to the model p_φ(s_{1:T}, a_{1:T}) = p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, a_t) exp(r_φ(s_t, a_t)/α) / Z_φ, where Z_φ is the normalization term. Minimizing this KL divergence is equivalent to solving max_φ E_{p*(s_{1:T}, a_{1:T})}[Σ_{t=1}^T r_φ(s_t, a_t)] − log Z_φ. To compute log Z_φ, we can use the importance sampling approach or the variational approach, as done in VILD. The latter leads to the max-min problem max_φ min_θ E_{p*(s_{1:T}, a_{1:T})}[Σ_{t=1}^T r_φ(s_t, a_t)] − E_{q_θ(s_{1:T}, a_{1:T})}[Σ_{t=1}^T r_φ(s_t, a_t) − log q_θ(a_t|s_t)], where q_θ(s_{1:T}, a_{1:T}) = p_1(s_1) Π_{t=1}^T p(s_{t+1}|s_t, a_t) q_θ(a_t|s_t). The policy q_θ(a_t|s_t) maximizes the learned reward function and is the solution of IL. As mentioned, the proposed model of VILD is based on the model of MaxEnt-IRL. Comparing the max-min problem of MaxEnt-IRL with that of VILD, the main differences are the variational distribution q_ψ and the noisy policy model p_ω. If we assume that q_ψ and p_ω are Dirac delta functions, q_ψ(a_t|s_t, u_t, k) = δ_{a_t = u_t} and p_ω(u_t|a_t, s_t, k) = δ_{u_t = a_t}, then the max-min problem of VILD reduces to that of MaxEnt-IRL. In other words, if we assume that all demonstrators execute the optimal policy and have an equal level of expertise, then VILD reduces to MaxEnt-IRL. GAIL. Generative adversarial IL (GAIL) performs occupancy-measure matching via generative adversarial networks (GAN) to learn the optimal policy from expert demonstrations. Specifically, GAIL finds a parameterized policy π_θ such that the occupancy measure ρ_{π_θ}(s, a) of π_θ is similar to the occupancy measure ρ_{π*}(s, a) of π*. Here, ρ_π(s, a) = E_{p_π(s_{1:T}, a_{1:T})}[Σ_{t=0}^T δ(s_t − s, a_t − a)] is the state-action occupancy measure of π, which satisfies E_{p_π(s_{1:T}, a_{1:T})}[Σ_{t=1}^T r(s_t, a_t)] = ∫∫_{S×A} ρ_π(s, a) r(s, a) ds da = E_{ρ_π}[r(s, a)]. To measure the similarity, GAIL uses the Jensen-Shannon divergence, which is estimated and minimized by the generative-adversarial training objective min_θ max_φ E_{ρ_{π*}}[log D_φ(s, a)] + E_{ρ_{π_θ}}[log(1 − D_φ(s, a))], where D_φ(s, a) = exp(d_φ(s, a)) / (exp(d_φ(s, a)) + 1) is called the discriminator. The minimization problem w.r.t. θ is solved using RL with the reward function −log(1 − D_φ(s, a)).
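The following is a toy sketch of GAIL's discriminator update (our stand-in, not the authors' code): D_φ(s, a) = sigmoid(d_φ(s, a)) is trained to separate demonstrator state-action pairs from the agent's, and −log(1 − D_φ) serves as the policy's reward.

```python
import torch

ds, da = 4, 2
d_phi = torch.nn.Sequential(torch.nn.Linear(ds + da, 100), torch.nn.Tanh(),
                            torch.nn.Linear(100, 1))
opt = torch.optim.Adam(d_phi.parameters(), lr=3e-4)
bce = torch.nn.BCEWithLogitsLoss()

def discriminator_step(demo_sa, agent_sa):
    # Label demonstrator pairs 1 and agent pairs 0, then one BCE step.
    logits = d_phi(torch.cat([demo_sa, agent_sa], 0)).squeeze(-1)
    labels = torch.cat([torch.ones(len(demo_sa)), torch.zeros(len(agent_sa))])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss

def policy_reward(agent_sa):
    # -log(1 - sigmoid(x)) = softplus(x), a convenient closed form.
    with torch.no_grad():
        return torch.nn.functional.softplus(d_phi(agent_sa)).squeeze(-1)

demo_sa, agent_sa = torch.randn(64, ds + da), torch.randn(64, ds + da)
print(float(discriminator_step(demo_sa, agent_sa)), policy_reward(agent_sa).shape)
```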
The authors of AIRL showed that the gradient of this objective w.r.t. φ is equivalent to the gradient of MaxEnt-IRL w.r.t. φ. They also proposed an approach to disentangle the reward function, which leads to better performance in transfer-learning settings. This disentangling approach is general and can be applied to other IRL methods, including MaxEnt-IRL and VILD. We do not evaluate AIRL with the disentangled reward function. We note that, based on the relation between MaxEnt-IRL and VILD, we can extend VILD to use the training procedure of AIRL. Specifically, by applying the same derivation from MaxEnt-IRL to AIRL, we can derive a variant of VILD which learns a reward parameter by solving max_φ E_{p_d(s_{1:T}, u_{1:T}|k) p(k)}[Σ_{t=1}^T E_{q_ψ(a_t|s_t, u_t, k)}[log D_φ(s, a)]] + E_{q̃_θ(s_{1:T}, a_{1:T})}[Σ_{t=1}^T log(1 − D_φ(s, a))]. We do not evaluate this variant of VILD in our experiments.

VAIL. Variational adversarial IL (VAIL) improves upon GAIL by using a variational information bottleneck (VIB). VIB aims to compress the information flow by minimizing a variational bound on mutual information. This compression filters out irrelevant signals, which leads to less over-fitting. To achieve this in GAIL, VAIL learns the discriminator D_φ by a constrained optimization problem of the form min E[−log D_φ(z)] subject to E[KL(E(z|s, a) ‖ p(z))] ≤ I_c, handled via a Lagrange multiplier β > 0, where z is an encoded vector, E(z|s, a) is an encoder, p(z) is a prior distribution over z, and I_c is the target value of mutual information. With this discriminator, the policy π_θ(a_t|s_t) is learned by RL with a reward function −log(1 − D_φ(E_{E(z|s,a)}[z])). It might be expected that this compression would make VAIL robust against diverse-quality demonstrations, since irrelevant signals in low-quality demonstrations are filtered out via the encoder. However, we find that this is not the case, and VAIL does not improve much upon GAIL in our experiments. This is perhaps because VAIL compresses information from both the demonstrators' and the agent's trajectories, whereas in our setting irrelevant signals are generated only by demonstrators. Therefore, the information bottleneck may also filter out relevant signals in the agent's trajectories, which leads to poor performance.

InfoGAIL. Information maximizing GAIL (InfoGAIL) is an extension of GAIL for learning a multi-modal policy in MM-IL. The key idea of InfoGAIL is to introduce a context variable z into the GAIL formulation and learn a context-dependent policy π_θ(a|s, z), where each context represents one mode of the multi-modal policy. To ensure that the context is not ignored during learning, InfoGAIL regularizes GAIL's objective so that the mutual information between contexts and state-action variables is maximized. This mutual information is indirectly maximized via a variational lower bound. By doing so, InfoGAIL solves the min-max problem min_{θ,Q} max_φ E_{ρ_{π*}}[log D_φ(s, a)] + E_{ρ_{π_θ}}[log(1 − D_φ(s, a)) + α log π_θ(a|s, z)] − λ L(π_θ, Q), where L(π_θ, Q) = E_{p(z) π_θ(a|s,z)}[log Q(z|s, a) − log p(z)] is a lower bound on the mutual information, Q(z|s, a) is an encoder neural network, and p(z) is a prior distribution over contexts. In our experiments, the number of contexts z is set to the number of demonstrators K. As discussed in Section 1, when knowing the level of demonstrators' expertise, we may choose contexts that correspond to high-expertise demonstrators. In other words, we may hand-craft the prior distribution p(z) so that the probability of a context is proportional to the level of the corresponding demonstrator's expertise.
Nonetheless, for a fair comparison, we do not use oracle knowledge about the level of demonstrators' expertise, and set p(z) to be a uniform distribution. For the Humanoid task in our experiments, we use the Wasserstein-distance variant of InfoGAIL, since the Jensen-Shannon-divergence variant does not perform well on this task. Next, we review offline IL methods. These methods learn a policy based on supervised learning and do not require additional transition samples from MDPs.

BC. Behavior cloning (BC) is perhaps the simplest IL method. BC treats the IL problem as a standard supervised learning problem and ignores the dependency between state distributions and the policy. For continuous action spaces, BC solves a least-squares regression problem to learn the parameter θ of a deterministic policy π_θ(s_t): min_θ Σ_t ‖a_t − π_θ(s_t)‖²_2, summed over the demonstration data.

BC-D. BC with Diverse-quality demonstrations (BC-D) is a simple extension of BC for handling diverse-quality demonstrations. This method is based on the naive model in Section 3.1, and we consider it mainly for evaluation purposes. BC-D uses supervised learning to learn a policy parameter θ and an expertise parameter ω of a model p_{θ,ω}(s_{1:T}, u_{1:T}, k) = p(k) p(s_1) Π_{t=1}^T p(s_{t+1}|s_t, u_t) ∫ π_θ(a_t|s_t) p_ω(u_t|s_t, a_t, k) da_t. To learn the parameters, we minimize the KL divergence from the data distribution to the model. By using the variational approach to handle the integration over the action space, BC-D solves the optimization problem max_{θ,ω,ν} E[Σ_{t=1}^T E_{q_ν(a_t|s_t, u_t, k)}[log(π_θ(a_t|s_t) p_ω(u_t|s_t, a_t, k) / q_ν(a_t|s_t, u_t, k))]], where q_ν(a_t|s_t, u_t, k) is a variational distribution with parameter ν. We note that the model p_{θ,ω}(s_{1:T}, u_{1:T}, k) of BC-D can be regarded as a regression extension of the two-coin model proposed for classification with noisy labels.

Co-teaching. Co-teaching is the state-of-the-art method for classification with noisy labels. This method trains two neural networks such that mini-batch samples are exchanged under a small-loss criterion. We extend this method to learn a policy by least-squares regression. Specifically, let π_{θ_1}(s_t) and π_{θ_2}(s_t) be two neural networks representing policies, and let ∇_θ L(θ, B) = ∇_θ Σ_{(s,a)∈B} ‖a − π_θ(s)‖²_2 be the gradient of a least-squares loss estimated using a mini-batch B. The parameters θ_1 and θ_2 are updated by the iterates θ_1 ← θ_1 − η ∇_θ L(θ_1, B_{θ_2}) and θ_2 ← θ_2 − η ∇_θ L(θ_2, B_{θ_1}), with learning rate η. The mini-batch B_{θ_2} for updating θ_1 is obtained such that B_{θ_2} incurs a small loss when using predictions from π_{θ_2}, i.e., B_{θ_2} = argmin_{B'} L(θ_2, B'). Similarly, the mini-batch B_{θ_1} for updating θ_2 is obtained such that B_{θ_1} incurs a small loss when using predictions from π_{θ_1}. For evaluating the performance, we use the policy network π_{θ_1}.

Results against online IL methods. Figure 7 shows the learning curves of VILD and existing online IL methods against the number of transition samples. It can be seen that, for both types of noisy policy, VILD with and without IS outperforms existing methods overall, except on the Humanoid tasks where most methods achieve comparable performance.

Results against offline IL methods. Figure 8 shows the learning curves of offline IL methods, namely BC, BC-D, and Co-teaching. For comparison, the figure also shows the final performance of VILD with and without IS, according to Table 1. We can see that these offline methods do not perform well, especially on the high-dimensional Humanoid task. The poor performance of these methods is due to the issues of compounding error and low-quality demonstrations. Specifically, BC performs the worst, since it suffers from both issues.
Still, BC may learn well in the early stage of learning, but its performance sharply degrades, as seen on Ant and Walker2d. This phenomenon can be explained as an empirical effect of memorization in deep neural networks: deep neural networks learn to remember samples with simple patterns first (i.e., high-quality demonstrations from experts), but as learning progresses the networks overfit to samples with difficult patterns (i.e., low-quality demonstrations from amateurs). Co-teaching is the state-of-the-art method to avoid this effect, and we can see that it performs significantly better than BC. Meanwhile, BC-D, which learns the policy and the level of demonstrators' expertise, also performs better than BC and is comparable to Co-teaching. Nonetheless, the performance of Co-teaching and BC-D is still much worse than that of VILD with IS.

Accuracy of the estimated expertise parameter. Figure 9 shows the estimated parameters ω = {c_k}_{k=1}^K of N(u_t | a_t, diag(c_k)) and the ground-truth variances {σ²_k}_{k=1}^K of the Gaussian noisy policy N(u_t | a_t, σ²_k I). The results show that VILD learns an accurate ranking of the variances compared to the ground truth. The values of these parameters are also quite accurate compared to the ground truth, except for demonstrators with low levels of expertise. A possible reason for this phenomenon is that low-quality demonstrations are highly dissimilar, which makes learning the expertise more challenging. We can also see that the difference between the expertise parameters of VILD with IS and VILD without IS is small and negligible.

InfoGAIL with different values of context. Figure 10 shows the learning curves of InfoGAIL across different values of the context z. We can see that the performance of InfoGAIL depends on the context, i.e., there is a discrepancy between the best and worst performances of InfoGAIL. The discrepancy is clearer on the Walker2d task with the TSD noisy policy and on the robosuite reaching task (Figure 4). Table 4 reports the performance in the last iterations of the robosuite reaching task experiments. It can be observed that VILD with IS outperforms the comparison methods in terms of mean performance.

Figure 11: (a) Demonstration number 1 (k = 1). (b) Demonstration number 2 (k = 2). (c) Demonstration number 3 (k = 3).
We propose an imitation learning method to learn from diverse-quality demonstrations collected by demonstrators with different levels of expertise.
With a growing number of available services, each having slightly different parameters, preconditions and effects, automated planning on general semantic services becomes highly relevant. However, most existing planners only consider PDDL, or, if they claim to use OWL-S, they usually translate it to PDDL, losing much of the semantics on the way. In this paper, we propose a new domain-independent heuristic based on a semantic distance that can be used by generic planning algorithms such as A* for automated planning of semantic services described with OWL-S. For the heuristic to include more relevant information, we calculate the heuristic at runtime. Using this heuristic, we are able to produce better results (fewer expanded states) in less time than with established techniques. We motivate our work by the need for a heuristic for AI planning. Since the search space of domain-independent planners for large problems becomes computationally intractable BID9, we need heuristics to guide our search through the state space. For domain-specific planners that have a special purpose (e.g., finding a route from one place to another for a GPS traffic guidance system), a heuristic can easily be provided, e.g., the Manhattan distance or the Euclidean distance. But for an agent which has the capability of creating general plans, these heuristics are not sufficient. This means it is impossible for our general-purpose planner to create a problem-specific heuristic at design time. Even reusing old heuristics, as is done for meta-heuristics, or learning parameters of hyper-heuristics, has only been successfully applied to simple problems BID20. Meta-heuristics and hyper-heuristics have an additional drawback: they need a learning phase to gather information about the problem to be solved. The calculation of the heuristic during runtime is motivated by the additional information available at that point, like the grounding information, which could consist of concrete individuals for the abstract classes describing, e.g., the parameters of a service. The creation of heuristics during runtime can lead to the encounter of new concepts used in an interface definition like a service description, which leads us back to a fundamental question in AI research: how can AI make sense of new concepts? For heuristics this means interpreting the new concepts and adding information to classical heuristic approaches. A function H: state → R+ is called a heuristic (Russell and Norvig 2002, p. 92) and estimates the distance from a state to a given goal. We extend this definition to H: service × state × goal → R+, making the heuristic more dynamic, since now it is able to adapt to changing goals and services. With that, the heuristic determines the usefulness of the given service in the current state with regard to the current goal. This is done because if a state alone were the information source for the heuristic, information like the service description would be lost. The interested reader is referred to (Pearl 1985) for a formal description of heuristics and their properties. During our analysis of this topic, we have found that understanding the described functionality of a service is an AI-hard task BID26. This is because the interpretation of what a description creator understood the service to be might not be entirely reflected in the description. Furthermore, the service can have multiple interpretations in different contexts. Here, the context we define is the additional information relevant to our problem.
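In code, the extension amounts to widening the heuristic's signature. A minimal Python sketch (the type aliases and the baseline function are our own illustration, not part of the described planner):

```python
from typing import Callable

State = frozenset    # a state as a set of axioms/facts
Goal = frozenset     # a goal as a set of axioms/facts
Service = dict       # a service description with preconditions and effects

# Classical form:          Heuristic = Callable[[State], float]
# Extended form used here:
Heuristic = Callable[[Service, State, Goal], float]

def uniform_cost(service: Service, state: State, goal: Goal) -> float:
    # The admissible, "optimistic" baseline: every service is one step from the goal.
    return 1.0
```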
As an example strategy for problem-solving using a heuristic, we have selected planning. This means our context consists of the start and goal state, which include a domain description. Starting from this setup, we need to evaluate whether a capability is useful in the endeavour of finding a plan solving our problem. The approach presented in Figure 1 is a goal-oriented heuristic at runtime utilizing the semantics of the goal and capability description. The heuristic is intended for a one-shot planning problem, where it is expensive for the agent to try out services, since we are looking at potentially world-altering capabilities, which means a learning phase should be kept as short as possible. This is done at runtime so that we know the goal we want to fulfill and can create a heuristic reflecting the given problem (Figure 1: abstract approach to a greedy heuristic). We do so by looking at the goal and the capability description we encounter to calculate the usefulness of a capability. Here, the idea of the heuristic is to find useful capabilities to try in our search towards the goal state, and reprobate others. The heuristic additionally estimates how much of the precondition is fulfilled, to see which capabilities are more likely to be executed. These two evaluations of the service are then combined to create our heuristic. In this section we will first look at the state of the art of heuristics in Section 2. Then we create our goal-oriented heuristic in Section 3, select a data set to test this heuristic on in Section 4, and discuss the results in Section 5. We conclude this experiment in Section 6. The state of the art in general heuristics for the planning problem is limited. The main conferences on AI planning and heuristic search are the International Conference on Automated Planning and Scheduling (ICAPS) and the Conference on Artificial Intelligence Planning Systems (AIPS) (ICA 2016). Starting from 1990, for the last 28 years the community of AI planning has discussed the different approaches to problem solving there (see www.icaps-conference.org for the proceedings). During that research effort, multiple specializations of the general planning domain have been identified. Most of them use some kind of translation of the semantic domain to classical STRIPS planning in, e.g., PDDL, like OWLS-Xplain BID13 or Simplanner BID15, or agent-based approaches as proposed in BID0. A comprehensive overview can be found in the work of Markou and Refanidis BID18. Service composition approaches focus on Quality of Service, as in BID16, or use, e.g., model checking, as in BID3. The different natures of a STRIPS-like planning problem without semantics and of service planning address the problem of searching for a plan in different ways. Classical planners are highly optimized to solve problems like the 15-puzzle (Rat 1986) or the four-peg towers of Hanoi problem BID14. For semantic general-purpose planning, more general heuristics are needed. There are heuristics like the minimal step count to the goal, called a uniform action cost (Pearl 1985). These heuristics are equal to the step count if each action cost is equal to 1 BID12. With a uniform cost function of 1, the heuristic is admissible, since it predicts that the action to be executed is always only one step from the goal. This is sometimes also called an "optimistic" heuristic. Greedy heuristics are those which count the overlap with the goal, thus measuring the usefulness of a service by how much of the goal it achieves.
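A minimal sketch of such a greedy overlap heuristic (our own illustration; the set-based representation of effects and goal is an assumption):

```python
def greedy_overlap(service: dict, state: frozenset, goal: frozenset) -> float:
    # Estimated distance to the goal: goal facts not covered by the add-list.
    # The current state is not used by this purely goal-directed variant.
    add_list = service["effects"]
    return float(len(goal - add_list))
```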
This greedy heuristic is quite simple yet performs well given its simplicity. Haslum and Geffner BID6 describe a greedy heuristic derived from STRIPS planning problems, executing first the services with the highest overlap between their add-list and the goal. This heuristic is admissible, can be used for all planning problems, and is the basis of the most successful planners according to the International Planning Competition (Helmert, Röger, and Karpas 2011). However, it only works for the relaxed problem, where the service effect does not have a delete-list and only the add-list is added to the current state to create a new state. The approach of BID6 has been extended with an optimization using an abstraction of the effects called patterns BID7. Those patterns are then subtracted from each effect and from the start and goal states, creating abstract states, which are then mapped to the state space through these patterns. The patterns represent subproblems of the original planning problem, which are already solved in order to obtain the patterns. A pattern is a set of variable assignments which reoccurs in different states, e.g., the start and the goal state, creating homomorphism abstractions. The drawback of those Pattern Database Heuristics (PDB) is that they do not scale up to real-world problems BID11. The approach of BID11 in turn optimizes the results of BID7 by adding a causal graph structure and the domain transition graph to the PDB heuristics, resulting in "causal graph structural patterns", which approximate the original problem but are more abstract. This abstraction is done with the SAS+ formalization of planning problems BID1, which imposes additional restrictions on the domain descriptions. The simplification of SAS+ has the effect that abstractions like those done by Katz et al. become possible. Here, e.g., "post-uniqueness" means that an effect is only given by at most one action. Additionally, the "binariness" restriction demands that all state variables have exactly two possible values. With an open-world assumption and distributed service development we cannot fulfill those restrictions, thus these methods cannot be applied to our problem. Learning the domain structure by observing plans is still subject to research. Gregory and Lindsay BID5 propose a model of action cost, which is learned through the observation of plan traces. Even though the resulting cost function can be used as a heuristic, its creation, the observation of executed plans, puts this heuristic into the runtime. This interleaving of plan- and runtime is out of scope for this work, because we want to study the understanding of services and measure the degree of understanding by their use in a plan, not the other way around. Despite this being a valid approach, the idea is a trial-and-error mechanism of learning the usefulness of services. For certain services, this might be appropriate, but we restrict our domain to intellectual problems where deterministic actions are analyzed. The same argument applies to approaches learning other planning properties such as plan optimality (Nedunuri, Cook, and Smith 2011). Other research on heuristic creation under uncertainty, like BID17, concentrates on numerical effects, which we neglect here. Another approach to guiding the search through the state space is called landmarking BID10. Landmarks are facts which must hold true on the way towards reaching a goal.
The two sub-problems of landmarks are: how to find landmarks BID21, and how to use the information given by a landmark to create a heuristic BID27. These landmarks can then be used to decompose the search problem and to use a local search to iteratively search from landmark to landmark BID10. Using landmarks for creating a heuristic is done in the LAMA planner of Richter and Westphal, which performed well in the IPC 2008 BID10, counting fulfilled landmarks in contrast to unfulfilled ones. In BID23, this greedy heuristic search with landmarks is combined with preferred operators that take into account the usefulness of services by keeping them in a "preferred-operator queue". Those preferred operators are in consequence always tried first. Deciding which service is preferred is part of the heuristic. Again, the problem is formalized as a SAS+ problem, which lets Richter et al. decide which service is a landmark (because its effect is unique). This is not given in our planning problem, thus this kind of heuristic needs adaptation to be able to function with, e.g., the open-world assumption. As a result, the state of the art in generating heuristics is mainly based on the relaxation of an original problem by abstraction. Some approaches use the domain description to structure the search space; others analyze services to identify landmarks which help to break down the search problem. But no heuristic found so far has used the semantics of the planning problem. Thus, those heuristics are often not applicable to general planning in real-world problems. In this work, we propose a new heuristic for service planning using the semantic distance between states. Since we are not considering services with different costs, the planning problem considered here is a constraint-satisfaction problem. As a planning algorithm, we used a variation of the A* algorithm, due to its theoretical properties (see (Pearl 1985)). The variation from the basic A* is that the function g in the cost function to be minimized, f = g + h, where h is the used heuristic and g is normally the cost of the path so far, is selected as g = Σ_{i=0}^{n−1} h(s_i), where s_0 is the start state, s_n is the current state, and all states s_i are on the path to the current state. In order to determine the semantic distance between two states, we first have to perform a semantic decomposition of those states. We then use a marker passing algorithm to determine the semantic distance. First, we will have a look at the semantic decomposition algorithm. The decomposition takes a word and looks up definitions and relations of this word in data sources like Wikipedia, WordNet or other dictionaries. The words related to the original word are then decomposed recursively until a predefined decomposition depth is reached. With that, the connectionist interpretation of meaning is represented as the resulting graph. The functions AddRelation and AddConcept are convenience methods for adding relations and concepts to the semantic graph. The functions AddConcept(concept, decomposition) and AddRelation(relation, GetTargets(relation), decomposition) add the concepts or relations to the graph which represents our decomposition. AddConcept adds the given concept to the graph nodes, and AddRelation adds the relation between the concept and its targets to the relations of the graph. We now have a look at how such a decomposition can be created and how automatisms might help. We identified the steps for a decomposition as described in the recursive Algorithm 1.
The algorithm takes as input the concept that is subject to the decomposition and returns the decomposition. As a successful decomposition will always build a graph, the semantic primes are the termination criterion for the recursion. Algorithm 1 reads as follows. Line 1 initializes the semantic graph which we build up during this algorithm and which represents the result at the end. Lines 2 to 28 represent the recursive function which is called on all decomposed concepts. This function adds the decomposition to the semantic graph initialized in Line 1, and is called until the decomposition depth is reached or all concepts have been decomposed into semantic primes. We build a hierarchical structure made up of concepts, also referred to as lexical units. Those concepts include a lexical representation, the textual representation of a lexeme, and a decomposition. Line 3 checks whether the concept has already been decomposed or the decomposition depth has been reached. The decomposition depth is a parameter of the decomposition, which restricts the decomposition to a number of relations to which it extends. The second part stops the algorithm from decomposing the same concepts over and over again. Additionally, the decomposition stops here if a synonym of the concept has been decomposed previously. This is because if a synonym has been decomposed previously, its synonyms have been added to the decomposition as well; thus this synonym, which is supposed to be decomposed now, is already part of the decomposition and is not decomposed again. Line 4 takes the concept to decompose and normalizes it. Here the inflection is removed, revealing the stem of the concept. Furthermore, a concept includes all its inflections (all concepts which can be created by applying grammatical forms to a concept, like 'eating', 'ate', 'eaten'), all lexical paradigms for this concept (all concepts rooting from the same word stem, like 'dine', 'dinner') and all sub-categorization frames (like the valence, which is the number of parameters, as in 'ask', 'ask X', 'ask X for Y'). We remove this kind of inflection because we are interested in the concepts described by a word, not in its relation to other words. We can integrate syntactic information into the graph by adding syntax relations and nodes. For this reduction, we use the linguistic process of lemmatization. The function Normalization in Algorithm 1, Line 4, hides this normalization of a concept. Line 5 gets all the relations of the concept from the used dictionaries. This means we look through all our dictionaries, look up all the semantic relations we can find, and remember them for later processing. Line 6 likewise looks up the definitions of the concept in all available dictionaries. Lines 8 to 10 check whether the concept itself is a semantic prime. If this is the case, the prime is added to the decomposition, the decomposition is finished for this concept, and the result is returned. This hides technical optimizations, like checking for synonyms of primes as well to make the search a bit broader. At the same time, we simplified the stop word removal here. Stop words represent words which can be ignored, taken from natural language processing theory BID25. These are mostly words with little semantic meaning like, e.g., 'a', 'an' or 'the'. Those nodes are removed and are not further decomposed. Lines 11 to 18 handle the relations of the concept we are decomposing. Here all relations are added to the decomposition as relations between concepts. Then all concepts which are connected by those relations are recursively decomposed. Lines 19 to 25 decompose the definitions. Each definition is a list of concepts which get decomposed again. The definition is connected to the definiendum via a "definition" relation.
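The walkthrough above can be condensed into a short recursive sketch (our own illustration of Algorithm 1; the lookup, lemmatization, and semantic-prime set are assumed to be provided):

```python
import networkx as nx

def decompose(concept, graph, primes, lookup, lemmatize, depth=0, max_depth=2):
    concept = lemmatize(concept)                 # normalization (Line 4)
    if depth > max_depth or concept in graph.nodes:
        return graph                             # termination check (Line 3)
    graph.add_node(concept)
    if concept in primes:                        # semantic primes end the recursion (Lines 8-10)
        return graph
    relations, definitions = lookup(concept)     # dictionary lookups (Lines 5-6)
    for rel, targets in relations.items():       # decompose related concepts (Lines 11-18)
        for target in targets:
            graph.add_edge(concept, target, relation=rel)
            decompose(target, graph, primes, lookup, lemmatize, depth + 1, max_depth)
    for definition in definitions:               # decompose definitions (Lines 19-25)
        for word in definition:
            graph.add_edge(concept, word, relation="definition")
            decompose(word, graph, primes, lookup, lemmatize, depth + 1, max_depth)
    return graph
```

Here `lookup(concept)` stands in for the dictionary queries (WordNet, Wikipedia, etc.) and returns a mapping from relation names to target concepts plus a list of definitions, each a list of words.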
This marker passing algorithm is a generalization of the algorithm described by F. Crestani (Crestani 1997, Figure 5, p. 461). Crestani describes the marker passing in four steps: pre-adjustment, spreading, post-adjustment, and termination condition evaluation. This is quite general and can result in inaccurate interpretations of the algorithm. Consequently, we introduce a more precise description of the algorithm by breaking the activation down into multiple steps without losing generality. Crestani's algorithm is based on the following principle: starting from a start activation, a concept has a threshold (seen as an upper limit of activation in a node to decide if the node is activated); with each incoming activation the activation level of the node builds up. If the threshold is reached, the node is selected as activated and spreads in the next spreading step. This means that the node passes all its markers on to its neighbors. This step is repeated until a termination condition is reached BID2. Algorithm 2 describes our extension of the spreading activation algorithm of Crestani BID2. The algorithm defines two maps, pulse_out and pulse_in, which hold the markers passed during a pulse. The function "∪←" describes the insertion of the remaining tuple into the appropriate set of, e.g., all markers of the current pulse (in contrast to replacing them). In Line 3 of Algorithm 2 we add the results of the out-function, in the form of (Relation × Edge × Markers)*, to the map pulse_out, where for each edge the markers are sorted. We separate Algorithm 2 into four blocks, each consisting of one loop. Lines 2-4: all passing concepts activate their out-function, and the results are added to the current pulse stored in the variable pulse_out; this is the input for the edge functions of the appropriate relations in the next step. Lines 5-9: each marker passed by the current pulse is given to the appropriate relation it is passed to, and this relation activates its edge-function; the result of the edge-function is added to the pulse, which is used as input for the in-functions of the targets of this relation. Lines 10-12: concepts that are targets of the relations passing markers are given the markers passed to them and activate their in-function. Lines 13-15: the after-send-function is activated to fix the markers on the source concepts if needed. With this marker passing, we can then place start markers, e.g., onto the start state and the goal state, and analyze how they pass to services.
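One pulse of this four-block loop can be sketched as follows (our own illustration of Algorithm 2; the node/edge functions and the marker representation, including an `activation` attribute, are assumptions):

```python
def pulse(markers, threshold, out_fn, edge_fn, in_fn, after_send_fn):
    # Lines 2-4: every concept whose activation reaches the threshold spreads
    pulse_out = {n: out_fn(n, ms) for n, ms in markers.items()
                 if sum(m.activation for m in ms) >= threshold}
    pulse_in = {}
    # Lines 5-9: each relation applies its edge-function to the markers passed over it
    for src, passed in pulse_out.items():
        for relation, edge, ms in passed:        # (Relation x Edge x Markers)*
            pulse_in.setdefault(edge[1], []).extend(edge_fn(relation, edge, ms))
    # Lines 10-12: target concepts absorb incoming markers via their in-function
    for target, ms in pulse_in.items():
        markers[target] = in_fn(target, ms)
    # Lines 13-15: the after-send-function fixes markers on the source concepts
    for src in pulse_out:
        markers = after_send_fn(markers, src)
    return markers
```

The pulse is repeated until the termination condition holds; the marked graph that remains is what the similarity measures below interpret.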
As depicted in Figure 1, our goal-oriented heuristic is composed of two parts: the closeness to the start state and the closeness to the goal state. For a service s ∈ S it takes the form H(s, S_0, G) = w_1 · UF(s, G) + w_2 · E(s, S_0), where the set S denotes all services, S_0 denotes the start state, and G describes the goal state. UF is the usefulness of the given service w.r.t. the goal, E is its executability, and w_1, w_2 are their weights. The marker information and the detailed parameters, like the termination condition or the weight configurations of the marker passing algorithm, can be found in BID4. We selected these two measurements for our heuristic because leaving out either one of the aspects causes one of two effects. Goal overcommitment: if we only look at the usefulness, the services fulfilling subgoals will be tried first, even though the probability of them being at the end of the plan is higher. This means, unless the planning problem is trivial because all services in the plan are independent, that one or more services need to be executed to enable such a useful service. By only looking at the usefulness, the search will always try those services first. Low-hanging fruit: if we only look at the executability, services which are executable are always tried first. This is good at the beginning of the planning process, because reaching the goal is less probable at this point. But the more services are executed, the more service preconditions become enabled, and all of them are tried first. Reaching the goal then becomes like a breadth-first search, where all possible services are tried before we get closer to the goal. This argumentation leads us to introduce the two weights w_1, w_2, which can be adapted depending on how far the search has progressed towards the goal. In the beginning, the executability should consequently be weighted highly and become less important the closer to the goal the search progresses. This is inverse for the usefulness. Both parts use the same kind of mechanism to check whether a fact given by the goal or start state is fulfilled (in the precondition or effect of a service). To check this fulfillment, we extract the predicates and their arguments from the service precondition (effects) and the start (goal) axioms and compare them. The comparison is done in two ways: first, for the predicates, we separate the words included in the predicate, e.g., "IsBookedFor(Flight x, Customer c)" becomes the predicate "is booked for". Since this resembles a sentence, the sentence similarity measure d_sen, based on the semantic similarity presented in BID4, is used to compare predicates. Second, we compare the arguments with the same semantic similarity measure d_sem. The results of both similarities are then aggregated. The main difference of the sentence similarity measure d_sen from the semantic distance measure d_sem is that the markers carry the information about which sentence they started out from. This information is used in the interpretation of the markers in the way described in Equation 3: d_sen(s_1, s_2) = Ξ(s_1, s_2) + AvgActivation(result) / Σ activation(StartMarkers), with Ξ(s_1, s_2) = (|s_1| + |s_2| − |s_1 ∪ s_2|) / (|s_1| + |s_2|), where s_1 and s_2 are two lists of concepts (the sentences) and Ξ represents the activation of the set of concepts present in both sentences. StartMarkers is the set of initially placed markers; the result is the marked graph after the marker passing has finished passing markers. AvgActivation gets the average activation of all markers of all concepts that are activated by both sentences. In Equation 3 we calculate the fraction of concepts present in both sentences plus the average activation of the concepts activated by markers of both sentences, normalized by the total activation present after the initial marking.
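A minimal sketch of this computation (our own illustration; `activation` is assumed to map each concept activated by markers of both sentences to its remaining activation, and `start_activation` is the total activation initially placed):

```python
def sentence_similarity(s1, s2, activation, start_activation):
    s1, s2 = set(s1), set(s2)
    # Overlap term of Equation 3: (|s1| + |s2| - |s1 u s2|) / (|s1| + |s2|)
    xi = (len(s1) + len(s2) - len(s1 | s2)) / (len(s1) + len(s2))
    # Average activation of the concepts reached by markers of both sentences
    avg = sum(activation.values()) / max(len(activation), 1)
    score = xi + avg / start_activation
    return min(max(score, 0.0), 1.0)   # normalize back into [0, 1]
```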
The resulting similarity is then again normalized to the interval from 0 to 1. With Equation 3 we calculate the similarity of two sentences by calculating, in Ξ, the ratio of equivalent words in both sentences. This means that if the two sentences are equivalent, then Ξ becomes 1. To this ratio, we add the normalized average activation of all concepts activated by markers of both sentences. This captures that if concepts are semantically closer together, then more markers of both sentences, carrying more activation, exist. In extreme cases, this value can become larger than one, which makes a normalization of the result to the interval from zero to one necessary. These measures, d_sen and d_sem, are used to calculate the distance between two states, which in turn is used to calculate the usefulness UF and the executability E. The usefulness takes the form UF(s, G) = (1/|G|) · Σ_{g∈G} max_{e∈effects(s)} (w_prd · d_sen(name(g), name(e)) + w_arg · Σ_{x∈args(g)} max_{y∈args(e)} d_sem(x, y)). Here we sum up the maximal weighted name and argument match over the set of subgoals; thus our service gets a usefulness of 1 if it fulfills all subgoals. The argument matching follows the same structure: the arguments of the goal predicate are matched to the arguments of the effect predicate. The maximal match is then summed up over all predicates of the goal. This means we are collecting all predicates of the goal which are semantically close to the effect of the service. The semantic closeness is calculated in parts: with the predicate name and with its arguments. These two parts are weighted to define their influence on the overall heuristic result. This is done because we want to maximize the argument matches and maximize the number of effects the service can fulfill for the given goal. The d_sen result and the argument matches are then weighted with the weights w_prd, w_arg, determining how much influence the different similarities have on the overall result. For the comparison of predicates, two kinds of similarity measures are used. Predicate comparison is done with the sentence similarity measure d_sen based on the semantic similarity measure proposed in BID4. This is because a predicate mostly describes verbs and their form, direction, and whether they are passive or active; our example "is booked for" is a typical use case for a predicate in ontologies. Argument comparison is done with the semantic similarity measure d_sem proposed in BID4. Here the arguments are compared and the maximum is summed up, which makes the argument comparison independent of the argument order. The result is then normalized w.r.t. the number of predicates in the goal. Here we can see that UF(s, G) becomes 1.0 for a service fulfilling all predicates of the goal, and 0.0 when none of the effects fulfill anything from the goal. The executability E is then calculated with a measure similar to the usefulness: E(s, S_0) applies the same matching to the preconditions of the service, measuring to what extent they are fulfilled by the facts in the current state, meaning that services with unsatisfied preconditions will be avoided. The effect of this heuristic is that the search algorithm of our planner will try all services fulfilling parts of the goal first if the weight on the usefulness is high, and try executable services first if the weight on the executability is higher. Now we need an example problem to test our heuristic on. This will be discussed in Section 4 next. There is only one dataset using semantic service descriptions that we could find, called the Secure Agent-Based Pervasive Computing (Scallop) domain.
It has a collection of 21 services in the domain of health, air travel, and medical transport, and includes a set of 24 ontologies building up the domain model (see http://www.dfki.de/scallops/). Here, the scenario in focus is the medical transport of victims to a medical facility, mostly by airplane with some ground transport. For technical compatibility, we have translated the services and the domain to OWL-S 1.2. The problem to be solved is to transport a patient from one place to a hospital. This includes finding the nearest airport from which one can fly to an airport that is close to a hospital. In addition, transport from and to the airports has to be organized. To book a flight, a flight account has to be created, a fitting flight has to be found, and the flight needs to be booked. After the flight, ground transport to the nearest hospital needs to be organized. Having done this, the goal of our example problem is reached. We created a start and end state of this domain in which a victim has to be transported to a medical destination. The goal state consists of 64 axiomatic facts which need to be fulfilled to reach the goal state. Since the start and goal states have a large overlap, Figure 2 shows both states combined. Here the red nodes are from the start state, and the blue nodes are from the goal state; the gray nodes are found in both states. A node not appearing in the goal does not have to mean that that node is to be deleted, but just that it is not relevant for the goal. For readability, the top-most "owl:Thing" class is omitted, subclass relations are shown as dashed lines, and individuals with dotted lines. The overall domain is modeled in an ontology describing the individuals and their relations. This ontology is too big to be displayed as a whole here, but the interested reader can download it from the Scallop project website. The initial state declares multiple facts about the domain, e.g., the transports available in our domain, where we see that vehicle transports and flights are the two possible transports. We want to be at a certain location at a certain time. All the modeling around those facts is necessary because we have to make sure all individuals are available, so that we can evaluate a potential execution of a service and with that reason about the effect of this service. The goal state consists of the information we want to see fulfilled. We start out by declaring the individuals we need, shown in Figure 2 as the blue nodes. This models our 'patient zero', who needs the "RequiredTreatment" that can be provided at a certain hospital. As part of the goal, we state that we wish the flight to be booked for "Patient 0", who owns the credit card the flight can be booked with, and that the flight arrives at our target destination at the desired time. Since we need a valid account to book a flight, we want "Patient 0" to have a personal account to transport our patient and to book the relevant flights and transports from and to the airports. Next, we model the individuals that are needed to reach our destination, as well as the departure and arrival airports. When starting to plan, the start and goal states already overlap in 57 out of the 64 axioms of the goal state; thus the plan to be made has to fulfill seven more axioms to reach the goal state. The optimal plan for this problem includes 4 steps: two requests for flight information for departure and arrival times, creating a flight account, and booking the flights.
The planning problem, with its 21 services, which have continuous inputs like dates, has an infinite search space. Since the goal state is specified as a set of OWL axioms, there is a multitude of states which subsume our goal state; all of them are considered a success. The next section will elaborate on the results of our planner with the different heuristics. The evaluation is run using the data set described in Section 4. To measure the performance, we count the expanded nodes during the search for the goal state. The 'gold standard' (described in Section 2) for general-purpose heuristics is still a greedy heuristic (Pearl 1985), which checks the overlap of the effects of a service with the facts wanted in the goal state, always selecting the service with the highest overlap. Additionally, we compare our results with the uniform cost distribution, where each execution of a service is set to cost 1 abstract cost unit. With that admissible heuristic, A* finds an optimal solution. The random heuristic has been added to the comparison as a baseline. Table 3 shows the average results of ten runs. The four columns of Table 3 describe the mean µ and standard deviation σ of the steps and of the time in the experiment. A step here is an expanded state during the search for the goal state; thus fewer examined states mean a more directed search and less effort. Column µ_steps indicates how many states the search had to expand on average to find the goal state, and column µ_time describes how much time one search has taken on average, in seconds. The average has been computed over 10 runs. All heuristics are tested on the same start and goal state; thus there is no difference in the planning-problem state space the search had to traverse. The variation in performance is rooted in the random selection among services ranked with the same usefulness. The greedy heuristic is seen as the 'gold standard' for general-purpose planning. It is used in most best-first search problems. As the name suggests, the services with the most overlap with the goal (the "best" ones) are tried first. This can lead to the result that in each state of the state space, the same "best" services are tried over and over again. The evaluation of the goal overlap takes about as much time as the semantic heuristic, as can be seen by comparing how many steps on average are looked at and how much time is spent. Here we have an average step calculation time of 6.0 seconds for greedy, which is close to the 5.2 seconds the marker passing heuristic spends on each state. The standard deviation from the mean can be explained by the random selection among services with equal usefulness. The uniform cost function, with the same usefulness for all services, creates a breadth-first search in an A* algorithm. Here we can see that the standard deviation of the steps reduces to 2.6. This is because we look at all services in one state before progressing to the next; it does not matter in which order we look at the services. The random cost heuristic is the baseline to beat: if we are worse than this, our heuristic creates more confusion than guidance for the search. In addition, it is used to ground the overall speed of our heuristics. This can be done because the creation of a random number consumes almost as few resources as the uniform cost heuristic and does not give any information to the search. Here the standard deviation of the steps is less than one, because the random heuristic does not speed up the search.
As expected, the random heuristic has to look at the most states in the search space (see Table 3), at a mean of 53.9. Of course, examining more states correlates with taking more time. Thus, while the evaluation of the precondition and the instantiation of the effect take time, a uniform cost heuristic (which has no cost for calculating the heuristic) is still inefficient, because each explored service takes some time. The use of semantics in the marker passing heuristic reduces the standard deviation from the mean, which means that we gather more useful information than by just comparing the overlap with the goal. The remaining standard deviation from the mean is due to the random selection among services with the same heuristic value, which means that the heuristic is still not precise enough. This might change if grounded actions were analyzed. The problems used in academia are mostly formalized in PDDL with few semantics, e.g., input and output parameter type class hierarchies. Additionally, the problems are made 'hard' by scaling the problem up, e.g., by extending the 15-puzzle to an n-puzzle. In those toy domains, the services available are domain-specific and are mostly necessary to solve the problem. Thus, the planning task is not a task of selecting the right services but rather of bringing them into the right order. In contrast, in the general planning problem, the right services have to be selected to establish a domain. This domain includes the relevant services that help the agent to reach its goal. This concludes our experiments. We will now take a step back and look at our results with some mental distance. Since the heuristic always returns a value between zero and one, it appears to remain admissible, because until the goal is reached, at least one fact remains unsatisfied and thus at least one service still needs to be executed. In addition, the optimal selection of the weights w_1 and w_2 has to be analyzed further in future work. The heuristic seems to benefit from using semantic information, but it remains to be shown that describing the services with additional semantic descriptions is worth the effort. Further, we plan to use the executability as a cost and to adapt the goal by removing fulfilled subgoals in the calculation of the heuristic. Also, we are currently implementing a fast-forward planner by removing the reasoning for consistent states during the search.
We describe a semantic heuristic which builds upon OWL-S service descriptions and uses word and sentence distance measures to evaluate the usefulness of services for a given goal.
In colored graphs, node classes are often associated either with their neighbors' class or with information associated with each node that is not incorporated in the graph. We here propose that node classes are also associated with topological features of the nodes. We use this association to improve graph machine learning in general and, specifically, Graph Convolutional Networks (GCN). First, we show that even in the absence of any external information on nodes, a good accuracy can be obtained in the prediction of the node class using either topological features or the neighbors' class as an input to a GCN. This accuracy is slightly lower than the one that can be obtained using content-based GCN. Second, we show that explicitly adding the topology as an input to the GCN does not improve the accuracy when combined with external information on nodes. However, adding to the GCN an additional adjacency matrix with edges between distant nodes with similar topology does significantly improve its accuracy, leading to results better than all state-of-the-art methods on multiple datasets. One of the central assumptions in node classification tasks is that neighboring nodes have similar classes. This has been extensively used in node classification tasks. Such approaches are now often denoted as graph neural networks (i.e., machine learning where the input is a graph/network). Four main approaches have been proposed to take advantage of a graph in machine learning: • Regularize the output, requiring that neighboring nodes, either in the graph or in its projection, have similar classes. • Use the graph to propagate labels and learn the best propagation. • Use the graph to project the nodes to real-valued vectors and use those for supervised or unsupervised learning. • Use Graph Convolutional Networks (GCN) for convolutions on the input of a node and its neighbors. Regularization or graph partitioning methods include, among others, partitioning the graphs based on the eigenvalues of the Laplacian (assuming that nodes within the same partition have similar classes). The Laplacian of a graph is L = D − A, where D is a diagonal matrix with the sum of each row on the diagonal, and A is the adjacency matrix. This Laplacian is often weighted by multiplying it on the left and the right by D^{-1/2} to normalize for the degree. Other works have used variants of this idea, each using smoothness and graph distance differently. An alternative approach is to use a quadratic penalty with fixed labels for seed nodes. Multiple diffusion and information propagation models have also been proposed, either through explicit diffusion or through the projection of nodes into real-valued vectors. For example, in DeepWalk, a truncated random walk is performed on nodes; these walks are then used as "sentences" for skipgram to compute a projection of each word into R^N, maximizing the sentence probability. Other methods also use random walks combined with negative sampling, or use a translation of subgraphs to hash functions for a similar task in the context of molecule classification. A very similar approach was presented in Node2Vec, projecting nodes so as to minimize the distance of nodes neighboring each other in a truncated random walk. The DNGR model uses random walks to compute the mutual information between points (the PPMI — positive pointwise mutual information), and then an SVD decomposition to project into space. PPMI was previously used for word representations and is a sparse, high-dimensional representation.
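A minimal sketch of these two Laplacian constructions (our own illustration):

```python
import numpy as np

def laplacians(A):
    """Return the combinatorial and degree-normalized Laplacians of A."""
    d = A.sum(axis=1)                          # node degrees
    L = np.diag(d) - A                         # L = D - A
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_norm = d_inv_sqrt @ L @ d_inv_sqrt       # D^{-1/2} L D^{-1/2}
    return L, L_norm
```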
Another possible approach is the projection of the graph (often using the Laplacian eigenvectors) and the usage of the projection for classification (and not only for a smoothness-based regularization), where either the graph itself is used (in which case the eigenvectors themselves are used) or an input to the graph is used. In such a case, a convolution with these eigenvectors was used. A Multi-Dimensional-Scaling (MDS) projection of the points in the graph was also used for a similar goal. Alternative approaches were inspired again by word embedding methods such as word2vec. These methods use the graph to define a context in relation to which the node embedding is constructed. When the data includes only the graph, the embeddings are used as features and fed into existing predictors. These methods can be thought of as propagating features rather than labels. Another line of work defines local features to translate each node into a feature vector and uses those to predict classes. Recently, Kipf and collaborators, in a seminal work, proposed a simplification of spectral-based convolutions, and instead use a two-layer approach, which can be summarized as: Z = softmax(Â · ReLU(Â X W^(0)) · W^(1)), (Eq. 1) where Â is a normalized adjacency matrix, Â = D̃^{-1/2}(A + I)D̃^{-1/2}, with D̃ the diagonal degree matrix of A + I. They test their work on multiple graphs with labeled nodes, including CiteSeer, Cora, Pubmed, and Nell. Convolution approaches can also be used with the graph as a filter on the input. Most such convolutions are spectral (they use the Laplacian eigenvectors). However, recent methods are based on random filters. Those include, among others, predetermined convolutions with powers of the adjacency matrix, combined through learned weights to maximize the classification precision of either the full graph or of nodes, and a multi-level graph convolution with pooling, where at each stage nodes are merged into clusters using agglomerative clustering methods, combined with a pooling method to represent the different resolutions of images. This has been extended to different convolutional kernels (mainly spectral, but also diffusion-based kernels) and to the classification of images using ImageNet (see prior surveys for a detailed review of convolution methods). Vandergheynst and collaborators mainly use polynomial convolution in the spectral domain. Similar formalisms were used to study not only single snapshots but also, with recurrent networks, time series of graphs, mainly again in image analysis. Over the last 3 years, over 1,500 extensions and applications of GCN have been published in combination with many other learning methods, including, among many others, combinations of GCN with recurrent neural networks, with GANs, and with active learning. GCNs capture dependencies of nodes' features. However, current techniques consider only local neighborhoods. Thus, long-range dependencies can only be captured when these operations are applied repeatedly, propagating signals progressively through the data. To catch long-range dependencies, stacking multiple layers of GCN has been proposed. While this is possible in theory, it has never been successfully applied; in practice, GCN models work best with 2-3 layers. NGCN trains multiple instances of GCNs over different-distance regions. While this leads to good performance, it is highly inefficient and does not scale to long distances (as the number of models scales linearly with the desired length). However, long-range correlations can be obtained from a different direction.
Recently, a correlation has been shown between topological attributes of nodes (e.g., degree, centrality, clustering coefficient) and their class. Inspired by the improvement brought by non-local operations in a variety of tasks in the field of computer vision, we propose a novel non-local operation for GCN based on the topology of the graph. Our operation is generic and can be implemented with every GCN to capture long-range dependencies, allowing information propagation to distant nodes. There are several advantages to using non-local operations: (a) in contrast to the standard local convolution layer, non-local operations capture long-range dependencies directly by computing interactions between any two nodes, regardless of their positional distance; (b) as we show in experiments, non-local operations are efficient and achieve their best results even with only a few layers; (c) finally, our non-local convolution can be easily combined with other graph convolution techniques (e.g., GCN, GAT). We here propose the following contributions of node topology to graph-based machine learning. First, we show that in the absence of external information, node topology can be used to predict the class of nodes using a feed-forward network. The topology of a node is represented by a vector of attributes, including, among others, its degree, the frequency of different sub-graphs around it, and its centrality. We then show that this can be translated to GCN through an input representing the number of first and second neighbors belonging to each class in the training set. Finally, we show, in the context of GCN, that it is better to add to the GCN an additional adjacency matrix representing the similarity between node topologies than to actually add the topology of the nodes as an input. GCN and Graph Attention Networks (GAT) with this additional adjacency matrix produce accuracies better than all state-of-the-art methods on the Cora, Pubmed, and CiteSeer datasets. We used four well-known citation network datasets: PubMed, CiteSeer and CORA, as well as the extended version of CORA from Bojchevski & Günnemann, denoted as CORA-Full, and two co-authorship networks: Coauthor CS and Coauthor Physics. Descriptions of these datasets, as well as statistics, can be found in Appendix A.1. We used the standard GCN model of Kipf & Welling, or the GAT (Veličković et al., 2017). Each GCN layer is defined as in Eq. 1, where Â is defined above, X_n is the input from the previous layer, and W_n are the weights of the current layer. In GAT, each layer may contain multiple heads. A GAT head is a linear combination of nodes' features followed by a non-linear function: h'_i = σ(Σ_{j∈N(i)} α_{i,j} W h_j), (Eq. 2) where h_j is the set of features of node j, W is a weight matrix, σ is a non-linear function, and α_{i,j} are the normalized attention coefficients. Attention coefficients are calculated for each pair of connected nodes as: α_{i,j} = softmax_j(a(W h_i, W h_j)), (Eq. 3) where a is a single-layer feed-forward network and W is a weight matrix. The extensions we propose to the model come either from changing the input of the model or from altering Â. The following modifications were considered: • Topology based GCN (T-GCN). We extend the graph convolution operation to propagate information through distant neighbors with similar topological features. We construct a dual graph with the same nodes as the original graph, but with different edges representing the topological similarity of nodes. Nodes with similar topology are connected by an undirected edge. There are many ways to construct those topological edges. Here we chose to represent each node as an R^N vector of topological attributes and connect each node to its k most similar nodes (see Appendix A.3 for the full description of the attributes used to define the topology of a node).
Here we chose to represent each node as an ℝ^N vector of topological attributes and connect each node to its k most similar nodes (see Appendix A.3 for the full description of the attributes used to define the topology of a node). The T-GCN includes two GCN layers performed simultaneously on the input (external features): one GCN uses the regular adjacency matrix, and the second uses the dual graph. The two outputs are then concatenated to serve as input for the next layer (typically, a standard GCN on the original graph); see the sketch after this list. The network's structure is illustrated in Fig 1.

• Topology-based GAT (T-GAT). The same as T-GCN, with GAT layers instead of GCN layers (Eq. 2 instead of Eq. 1).

We also tested the following two alternative methods of using the topology. However, both produce lower accuracies than the standard GCN.

• Asymmetric GCN (A-GCN). We incorporate the direction of directed networks by taking the adjacency matrix (asymmetric in a directed graph) and concatenating its transpose to it, creating a 2N × N matrix [A; Aᵀ]. The output of each layer then has dimension 2N × O_n, which is passed to the next layer after a rearrangement that splits it along dimension 0 and concatenates along dimension 1, changing the dimensions from 2N × O_n to N × 2O_n. For more details about the parameters see Appendix A.2. Multiple inputs were tested in these configurations, as detailed in Appendix A.3.

• Combined GCN (C-GCN). This model includes two input types: a topology feature matrix and an external feature matrix (in the Cora and CiteSeer case, the bag-of-words features). First, we pass the data matrix through a GCN layer, which leads to a 2N × L_1 output. The processed data matrix is rearranged by splitting along dimension 0 and concatenating along dimension 1 (2N × L_1 → N × 2L_1), and the two inputs (topology and external features) are then concatenated, yielding an N × (2L_1 + T) matrix that is passed forward to the A-GCN layers. The following layers are as in the A-GCN above. For more details about the parameters see Appendix A.2. Multiple inputs were tested in these configurations, as detailed in Appendix A.3.

Following prior work (Veličković et al.), we used one set of hyper-parameters for Cora, and the hyper-parameters optimized for PubMed for all other networks. For the Cora dataset we used the following parameters: in the T-GCN, one hidden layer of size 32 for each graph (original and dual); for the T-GAT, 16 internal nodes for the regular graph and 8 internal nodes for the dual graph, with 8 heads (8 independent attention mechanisms; see Veličković et al. for more details) for both operations at the first layer and 1 head at the last layer (as in the original GAT). For the other datasets, we used the optimal parameters found for PubMed: one hidden layer of size 64 + 16 for T-GCN, and 16 + 16 features for T-GAT, with 16 heads on the original operation and 8 heads on the dual operation at the first layer, and 8 heads at the last layer (as in the original GAT). The first-layer activation function was ReLU for T-GCN and tanh for T-GAT (except for T-GAT on Cora, where we also used ReLU). Softmax was applied at the last layer of all models. Note that for T-GAT, the external features were normalized, and GAT heads were concatenated at the first layer and averaged at the last layer (as in the original GAT). A summary table of all parameters is given in Appendix A.2.
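As a concrete illustration of the dual-graph construction and the parallel T-GCN layer described above, the following PyTorch sketch builds topological edges by connecting each node to its k most similar nodes and runs the two graph convolutions side by side. The cosine-similarity measure, the value of k, and all names are our own illustrative choices; the attributes that define a node's topology are those listed in Appendix A.3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dual_adjacency(topo_feats, k=5):
    """Build the dual graph: connect each node to its k most similar
    nodes in topology-feature space (cosine similarity is an
    illustrative choice; any similarity measure could be used)."""
    x = F.normalize(topo_feats, dim=1)           # n x d
    sim = x @ x.t()                              # n x n similarities
    sim.fill_diagonal_(-1.0)                     # exclude self-edges
    idx = sim.topk(k, dim=1).indices             # k nearest per node
    n = topo_feats.size(0)
    a = torch.zeros(n, n)
    a.scatter_(1, idx, 1.0)
    a = torch.maximum(a, a.t())                  # make edges undirected
    return a

def normalize_adj(a):
    """Symmetric normalization with self-loops, as in the standard GCN."""
    a = a + torch.eye(a.size(0))
    d_inv_sqrt = torch.diag(a.sum(1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class TGCNLayer(nn.Module):
    """One T-GCN layer: a GCN on the original graph and a GCN on the
    dual graph run in parallel; their outputs are concatenated."""
    def __init__(self, in_dim, out_orig, out_dual):
        super().__init__()
        self.w_orig = nn.Linear(in_dim, out_orig, bias=False)
        self.w_dual = nn.Linear(in_dim, out_dual, bias=False)

    def forward(self, x, a_orig, a_dual):
        # a_orig and a_dual are assumed pre-normalized with normalize_adj
        h1 = torch.relu(a_orig @ self.w_orig(x))
        h2 = torch.relu(a_dual @ self.w_dual(x))
        return torch.cat([h1, h2], dim=1)        # n x (out_orig + out_dual)
```

In use, the concatenated output would feed a standard GCN layer on the original graph, matching the layer sizes (e.g., 64 + 16 for PubMed) described above.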
To test whether neighbor class and the node's own topology (as defined further below) are correlated with the node class, we performed two tests. We first computed the relative frequency of classes among neighbors, given the class of the node: p(neighbor has class i | current node has class j) (Fig 2, lower plots). In the absence of correlations, one would expect a flat value, while a perfect correlation would produce an identity matrix. In the Cora and CiteSeer networks, the diagonal holds 60% of the mass (compared with an expected 15%). To test for the relation between node topology and class, we computed the average value of multiple topological features (Appendix A.3) for nodes of a given class (in the current context, manuscripts belonging to a certain field). A Kruskal-Wallis non-parametric test was performed to test for the relation between the node class (manuscript field) and the distribution of each feature. Except for betweenness centrality, the only topological features correlated with class were the frequencies of small-scale motifs of 3 and 4 nodes: over sixty different small-scale motifs are associated with the node class (Fig 2, upper plots). To test whether topology and information propagation can be used to classify node classes, we fed the topological features above, together with the number of neighbors belonging to the training set with a given class, as input to a feed-forward network (see Appendix A.4). These two types of information by themselves can be used to classify nodes quite precisely (see Appendix 4).

[Figure 2 caption. Upper left plot: average of each topological feature for nodes belonging to a given class, stacked and normalized to 1; an equal distribution would produce equally divided columns. The upper group shows 4-node subgraph frequencies, and the lower group (separated by an empty row) shows 3-node motifs. Except for the centrality, no other node feature had a significant Kruskal-Wallis p-value after correction for multiple measurements. Lower plots: correlation of CiteSeer and Cora manuscript class with the neighboring manuscript class. The color in each row represents the fraction of neighbors of nodes of a given class that belong to each possible class; the dominant diagonal shows that neighboring nodes tend to share a class.]

Since the main topological factors correlated with the class are small-scale motifs, we propose an alternative way to test their contribution to the classification. To avoid the explicit computation of sub-graph frequencies, which can be computationally expensive, the topology can be computed indirectly. A simple way to describe such local features is through operations on products of the adjacency matrix. For example, the number of triangles i → j, i → k, and j → k is obtained from combinations of A and A·A (Fig 4). Thus, instead of explicitly computing such features, one can feed the FFN combinations of these products applied to a one-hot representation of the training-set classes. Formally, for node i we define the vector v_i, where v_i^j is the number of neighbors of node i that are in the training set and are of class j, and V is the matrix of all vectors v_i. To this vector we append a last constant value, as shall be explained. We then use different combinations of A·V, Aᵀ·V, A·Aᵀ·V, etc. as inputs to an FFN (see Methods and the sketch below). When these products are applied to the last (constant) component, they simply count sub-graphs.
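The neighbor-class matrix V and its adjacency products can be computed in a few lines. The sketch below follows the construction described above (the function names and NumPy details are ours): it builds V with the appended constant column and stacks products such as A·V, Aᵀ·V, and A·Aᵀ·V as FFN inputs.

```python
import numpy as np

def neighbor_class_matrix(adj, labels, train_mask, n_classes):
    """V[i, c] = number of neighbors of node i that are in the training
    set and have class c. A constant last column is appended so that
    products with the adjacency matrix also count class-agnostic
    sub-graphs, as described in the text."""
    onehot = np.zeros((len(labels), n_classes))
    onehot[np.arange(len(labels)), labels] = 1.0
    onehot[~train_mask] = 0.0                    # hide non-training labels
    v = adj @ onehot                             # neighbor class counts
    return np.hstack([v, np.ones((len(labels), 1))])

def product_features(adj, v):
    """Stack products such as A*V, A^T*V and A*(A^T*V); applied to the
    class columns these count class-specific sub-graphs around each
    node, and applied to the constant column they count sub-graphs."""
    feats = [adj @ v, adj.T @ v, adj @ (adj.T @ v)]
    return np.hstack(feats)
```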
However, when multiplied by the other components, the sub-graphs composed of nodes of a specific class are counted (Fig 4B). The accuracy obtained with such products outperforms both explicit topological measures alone and information propagation alone (Fig 4, upper plot). Given the correlation between a node's topology and its class, we tested whether adding the topology, by itself or as an additional input to the BOW, would increase the prediction accuracy of the node class for the Cora and CiteSeer networks. We tested both symmetric and asymmetric GCN (see description above), with either the topological input by itself or combined with the BOW. Within the topological input, we tested three alternatives: the number of first and second neighbors belonging to each class in the training set, the topological features of each node, or their combination.

[Figure 3 caption: average accuracy over 50 random splits as a function of training-set size (starting from 5%); validation and test sets were split evenly. Upper plots are for Cora and lower plots for CiteSeer. The right plots compare our asymmetric model with the standard GCN in the absence of external information, with different types of topological features as input (see Results); the neighbors feature is clearly the best input (performing almost as well as external information), and the standard GCN is better than the A-GCN. The left plots compare the standard GCN with T-GCN and C-GCN (with different types of topological features), where the input is the BOW; the C-GCN does not perform well with any of the three feature types, and the T-GCN is always equal to or better than the standard GCN.]

As expected, over all tested training-set fractions, the models with the BOW outperform the ones without it. Among the models without the BOW, ignoring the edge direction and using only the number of neighbors in each class is better than any other combination. Moreover, combining the BOW with topology as input only reduces the accuracy (Fig 3). Still, it is interesting to note that the accuracy without the BOW is not far from the accuracy with the BOW at high training-set fractions (Fig 3). For example, the accuracy for Cora with a training size of 55% is 87.9% with the BOW, and 85% with only the neighbors feature as input. Since adding topology as an input did not improve the accuracy, we tested whether using the topology to propagate information between distant nodes, based on the similarity of topological attributes, helps. We compared our topology-based convolution (T-GCN) to state-of-the-art models on multiple standard real-world networks (see Models and Data). In the Cora, CiteSeer, and PubMed networks, we used the previously published train-test split, and for CORA-Full and the co-authorship networks we took 20 labeled nodes per class as the training set, 30 nodes per class as the validation set, and the rest as the test set (following prior work). We repeated the experiment 100 times and report the average accuracy over all trials. In each trial, we split the data randomly (except for the standard splits, where the training set is fixed). Parameters for all models can be found in Appendix A.2. All baseline results were copied from the related papers. Furthermore, we report the results of GCN and GAT (Veličković et al., 2017) using our own implementation, written in PyTorch (which produces slightly lower results than published for the same architecture). To fairly evaluate GAT, we used 500 epochs for training.
These are the base implementations used for T-GCN and T-GAT. The results are summarized in Table 1. One can clearly see that the T-GCN and T-GAT outperform all other models on Cora, PubMed, and Physics (Table 1). Moreover, this comparison was performed using the original splits for CiteSeer, Cora, and PubMed.

Table 1: Results, reported as average accuracy over 100 trials. For Cora, CiteSeer, and PubMed we used the standard splits. For CORA-Full, Physics, and CS we used 20 × #classes random nodes for training and 30 × #classes for validation. For CORA-Full and Physics we report the T-GAT results of only 20 and 10 trials, respectively, since they were run on the CPU.

Method      CiteSeer  Cora  PubMed  Physics  CS  Cora-Full
DCNN        71.1      81.3  -       -        -   -
Planetoid   64.7      75.7  77.2    -        -   -
ChebNet     69        …

We also tested random splits to check the performance of the T-GCN. Indeed, with random splits, the T-GCN always has a higher accuracy than the GCN (Fig 3), even on the CiteSeer dataset, with a difference that can reach up to 3.3%. Convolution methods that aggregate information from multiple distances are among the leading image-classification methods. In images, most of these convolutions are symmetric and sometimes isotropic around each point. However, in contrast with images, which are typically overlaid on a 2D lattice, graphs have a complex topology. This topology is highly informative of the properties of nodes and edges, and can thus be used to classify their classes. This complex topology can be combined with convolutional networks to improve their accuracy. In undirected graphs, the topology can often be captured by a distance-maintaining projection into ℝ^N, using unsupervised methods such as classical MDS, or supervised methods that minimize the distance between nodes with similar classes in the training set. In directed graphs, a more complex topology emerges from the asymmetry between incoming and outgoing edges (i.e., the distance between node i and node j differs from the distance between node j and node i), creating a distribution of subgraphs around each node, often denoted sub-graph motifs. Such motifs have been reported to be associated with both single node/edge attributes and whole-graph attributes. We have shown here that in a manuscript-assignment task, the topology around each node is indeed associated with the manuscript class. In order to combine topological information with information propagation, we proposed a novel GCN where the number of first and second neighbors belonging to each class is used as an input, and the class of the node is compared to the softmax output for that node. This method can indeed produce a high classification accuracy, but less than the one obtained using a BOW input. Moreover, explicitly combining the topology as an input with the BOW reduces the accuracy. However, using the topology to add new edges between nodes with similar topological features significantly improves performance on most studied datasets. This suggests that the topology is better used to correlate the classes of distant nodes than as an actual input. The results presented here combine information propagation and topology-based classification. While each of these two elements was previously reported, their combination into a single coherent GCN-based classifier provides a novel content-independent method to classify nodes. With the current ever-increasing concerns about privacy, such content-independent methods for node classification become essential.
The citation networks contain scientific papers divided into classes by their research field; edges describe citations. BOW features are also available for each publication: a binary vector, or a TF-IDF-weighted word vector for PubMed. Coauthor CS and Coauthor Physics are co-authorship graphs based on the Microsoft Academic Graph from the KDD Cup 2016 challenge. Here, nodes are authors, connected by an edge if they co-authored a paper; node features represent the paper keywords of each author's papers, and class labels indicate the most active field of study of each author.

The parameters used for each of the models are as follows. For T-GCN and T-GAT the parameters were optimized for PubMed (following Veličković et al.), except for the Cora dataset, for which we used slightly different parameters (denoted T-GCN Cora and T-GAT Cora). The parameters are summarized in Table 3. In all models, the activation function of the last layer is softmax; the activation function of the first layer is given in Table 3. A hidden size of X+Y means size X for the original GCN operator and Y for the GCN on the dual graph; the two outputs are concatenated to a total size of X+Y. GAT heads X, Y, Z means X heads for the original GAT operator, Y heads for the GAT on the dual graph, and Z heads in the last layer. See Models and Data for more details.

Our goal is to use the graph structure to classify node colors. Hence, we compute features that are based only on the graph structure, ignoring any external content associated with each node. These features are used to convert nodes into the appropriate network attribute vector (NAV). The following attributes were used; note that other attributes may have been used, with probably similar results.

• Degree. The number of in- and out-edges (in the case of directed graphs).
• Betweenness centrality. Betweenness is a centrality measure of a vertex, defined by the number of shortest paths between all vertex pairs that pass through the vertex.
• Closeness centrality. Closeness is a centrality measure of a vertex, defined as the average length of the shortest paths between the vertex and all other vertices in the graph.
• Distance distribution. We compute the distribution of distances from each node to all other nodes using Dijkstra's algorithm, and then use the first and second moments of this distribution.
• Flow. We define the flow measure of a node as the ratio between the undirected and directed distances between the node and all other nodes.
• Attraction. The attraction-basin hierarchy compares the weighted fraction of the network that can be reached from each vertex with the weighted fraction of the network from which the vertex can be reached.
• Motifs. Network motifs are small connected sub-graphs. We use an extension of a standard motif-counting algorithm; for each node, we compute the frequency of each motif in which this node participates.
• K-cores. A k-core is a maximal subgraph that contains vertices of degree k or more; equivalently, it is the subgraph of G formed by repeatedly deleting all nodes of degree less than k.
• Louvain community detection. The Louvain algorithm detects communities by optimizing modularity, which takes values between −1 and 1.
• Neighbors feature. We also used a feature derived from the training-set labels: for each node, we summed the number of neighbors belonging to each class in the training set.
The sum was represented as a vector of counts (e.g., if a node has 10 neighbors, only three of which are in the training set, with two belonging to the first class and one belonging to the third class, the vector would be [2, 0, 1, …]). The sum was computed over first and second neighbors, producing a vector of twice the number of classes. In a directed graph we calculated two such features, one for in-neighbors and one for out-neighbors. The results in Figure 2 were produced with a feed-forward network with two internal layers of 300 and 100 nodes and an output layer with the number of possible classes (6 and 7 in CiteSeer and Cora, respectively). The non-linearities were ReLUs in the internal layers and a linear function in the output layer. An L2 regularization of 0.2 was used for all layers, with a 10% dropout rate. The loss function was a categorical cross-entropy, as implemented in Keras with a TensorFlow backend.

[Figure 4 caption. Upper plot: accuracy when each node is classified using the distribution of classes among its neighbors (dashed), or when products of the neighbor-class matrix with different combinations of the adjacency matrix are used (full line). Lower plot: example of computing sub-graph frequencies through adjacency-matrix products; given the graph plotted on the left, the corresponding adjacency matrix A, and a division of the nodes into green and blue nodes, one can count the number of feed-forward motifs (x → y and …).]
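As a rough illustration of turning a graph into per-node attribute vectors, the sketch below computes a reduced NAV with networkx. It covers only the cheaper attributes in the list above; the full NAV also includes distance-distribution moments, flow, attraction, motif frequencies, and Louvain community labels, which require dedicated implementations. The function name and feature subset are our own choices.

```python
import networkx as nx
import numpy as np

def topology_vector(g):
    """Reduced network attribute vector (NAV): degree, betweenness,
    closeness, clustering coefficient and k-core number per node."""
    nodes = list(g.nodes())
    deg = dict(g.degree())
    btw = nx.betweenness_centrality(g)
    cls = nx.closeness_centrality(g)
    clu = nx.clustering(g)
    core = nx.core_number(g)          # requires a graph without self-loops
    return np.array([[deg[n], btw[n], cls[n], clu[n], core[n]]
                     for n in nodes])

# Example: NAVs for a small random graph.
nav = topology_vector(nx.erdos_renyi_graph(100, 0.05, seed=0))
```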
Topology-Based Graph Convolutional Network (GCN)
Flies and mice are species separated by 600 million years of evolution, yet they have evolved olfactory systems that share many similarities in their anatomic and functional organization. What functions do these shared anatomical and functional features serve, and are they optimal for odor sensing? In this study, we address the optimality of evolutionary design in olfactory circuits by studying artificial neural networks trained to sense odors. We found that artificial neural networks quantitatively recapitulate structures inherent in the olfactory system, including the formation of glomeruli onto a compression layer and sparse and random connectivity onto an expansion layer. Finally, we offer theoretical justifications for each result. Our work offers a framework to explain the evolutionary convergence of olfactory circuits, and gives insight into the logic behind the anatomic and functional structure of the olfactory system.

Over the last two decades, both the anatomic and functional organization of the fly and mouse olfactory systems have been mapped in excruciating detail, affording knowledge of how odors are processed along the entire olfactory pathway. In both model organisms, the olfactory system is two layers deep, comprising a compression layer and an expansion layer. Olfactory perception is initiated by the recognition of odorants by a large repertoire of receptors in the sensory epithelium. In fruit flies, individual olfactory receptor neurons (ORNs) express only one of 50 different olfactory receptors (ORs), and all neurons (10 on average) that express the same receptor converge with precision onto a unique set of 2-4 projection neurons (PNs) through a specialized structure known as an olfactory glomerulus. This layout establishes a one-to-one mapping between ORs and PNs. Information is then conveyed to an expansion layer of 2,500 Kenyon cells (KCs) through sparse and random connectivity to support a high-dimensional representation of odor information, before it is classified by 20 read-out neurons, the mushroom body output neurons (MBONs). Experiments reveal that synaptic plasticity at the KC-MBON synapse is necessary and causal in odor learning. The only major differences between the circuits of mice and flies appear to be numerical: whereas the fly olfactory system consists of 50 ORs, 50 glomeruli, and 2,500 KCs, the mouse olfactory system consists of 1,500 ORs, 1,500 glomeruli, and 1 million piriform neurons. The fact that evolution has converged on the same architecture in flies, mice, and multiple other organisms suggests that such an architecture is optimal for the general task of odor sensing. Although we have a detailed anatomy of the olfactory system in both flies and mice, it is unclear why certain features are optimal for odor sensing. In particular: (1) why does every ORN express a single OR; (2) why is information preserved through a one-to-one mapping between ORs and PNs; and (3) why is connectivity onto the expansion layer sparse and random? To study optimal circuit design, we use a goal-driven approach: we train an artificial neural network to classify odors and then analyze the anatomical and functional structures that emerge after training. This approach has recently been used to study the functional profiles of the ventral stream in visual object processing. The simplicity of the fly olfactory circuit and the exhaustive knowledge that we have of its anatomy provide constraints that can be used to gain insight into evolutionary design.
We use a simple odor classification task that maps odors to classes (100 total, 4 shown in Figure 1a). To generate this dataset, we first generated 100 odor prototypes. Each prototype activates 50 olfactory receptors, and the activation of each receptor is sampled independently from a uniform distribution between 0 and 1. For every odor, the ground-truth class is its closest odor prototype as measured by Euclidean distance in ORN space. The training set consists of 1 million odors and the validation set of 8,192 odors, each sampled in the same way as the odor prototypes. This task mimics the evolutionary drive for organisms to distinguish between dissimilar odors and to generalize between similar odors. The network's connections are modified during training to classify odors according to this exact mapping (Figure 1b). We used standard training techniques based on stochastic gradient descent; this form of training can be thought of as evolving a circuit architecture in silico. We modeled the olfactory system as a layered feed-forward network whose layers correspond, in order, to 500 ORNs, 50 PNs, 2,500 KCs, and 100 class neurons. Connections between layers represent synaptic strengths, and the activities of neurons in our network represent firing rates. The 500 ORNs are subdivided into 10 ORN duplicates expressing the same OR, for each of the 50 unique ORs. MBONs, the read-out neurons of the olfactory system, are simplified into class neurons, and each outcome is represented by the activation of a single class neuron. For simplicity, this architecture omits several biological structures, including interneurons, a realistic readout, and an additional pathway. Connections between neurons in each layer have no initial structure. After training, the network performs odor classification with 83% accuracy. We analyzed the network structure and observed that the connections between ORNs and PNs come to resemble the convergence of ORNs expressing the same OR onto glomeruli in the olfactory system: all ORNs that express the same OR project onto a unique PN, and PNs sample from a single type of ORN (Figure 1c). We quantify the extent to which PNs pool from a single type of ORN using a simple but stringent metric called the Glomeruli Score (GloScore). A maximal GloScore of 1 means that PNs sample exclusively from ORNs expressing the same OR, whereas a score of 0 means that PNs sample from multiple ORs with the same connection weight. During training, the GloScore of the model's ORN-PN connectivity quickly approaches 1. Every KC is initially connected to all 50 PNs. While glomeruli are forming, we observe that these connections sparsen (Figure 1d). We measured the KC input degree (the number of strong PN connections for each KC) after training and observed an average input degree of 7. This is striking, as it matches the input degree derived from exhaustive anatomical tracing studies (Figure 1e,f). Theories suggest that random connectivity supports high-dimensional representations that enhance the ability of downstream readout neurons to learn associations, much as in theories of the cerebellar cortex. We also observe that KCs evolve to sample randomly from PNs after training: we calculated the average correlation between the input connections of every pair of KCs, and found that correlations quickly decrease during training to approach that of randomly shuffled connectivity.
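The synthetic task above is fully specified, so it can be reproduced with a few lines of NumPy. In the sketch below, the chunked nearest-prototype search is an implementation convenience to keep memory bounded, and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_odors(prototypes, n_odors, chunk=4096):
    """Draw odors with i.i.d. U(0, 1) receptor activations and label
    each odor by its nearest prototype (Euclidean distance in ORN
    space). The loop chunks the distance computation to bound memory."""
    n_rec = prototypes.shape[1]
    odors = rng.uniform(0.0, 1.0, size=(n_odors, n_rec))
    labels = np.empty(n_odors, dtype=np.int64)
    for s in range(0, n_odors, chunk):
        d = np.linalg.norm(odors[s:s + chunk, None, :] -
                           prototypes[None, :, :], axis=2)
        labels[s:s + chunk] = d.argmin(axis=1)
    return odors, labels

prototypes = rng.uniform(0.0, 1.0, size=(100, 50))      # 100 class prototypes
train_x, train_y = sample_odors(prototypes, 1_000_000)  # training set
val_x, val_y = sample_odors(prototypes, 8192)           # validation set
```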
A more stringent analysis revealed that KCs sample uniformly from all PNs and that KCs in our network are not connected to any preferential pairs of PNs, similar to what has been observed in the wiring diagram of fruit flies. To ensure the robustness of these results, we performed an extensive hyper-parameter sweep, exploring the impact of learning rate, number of KCs, dropout rate, batch normalization, and input noise. The quantitative results are robust to all but one hyper-parameter: decreasing the learning rate eventually leads to a non-separation of weak and strong weights. We therefore used the highest learning rate that allows classification accuracy to exceed 50% (chance is 1%). Moreover, these results were also robust to the addition of biologically realistic network motifs, such as normalization in the PN and KC layers. In a network with exclusively excitatory connections, glomeruli emerge (Figure 1c). In a network with both excitatory and inhibitory connections, PNs mix input from multiple ORs (Figure 2a). Irrespective of mixing, accuracy is maintained (Figure 2b). Moreover, the average Pearson correlation between the activities of different PNs, with and without mixing, was close to zero, suggesting that connections to PNs evolve to preserve odor information. However, by minimizing correlation the network loses an opportunity to increase the dimensionality of its odor representation. We hypothesize that a network must preserve information if information can be expanded downstream; conversely, a network with far fewer than 2,500 KCs cannot expand and should bias the PN layer to mix ORs and increase dimensionality. We thus trained networks with variable numbers of KCs while keeping the numbers of ORNs and PNs fixed at 500 and 50, respectively. Indeed, as we decreased the number of KCs, the GloScore decreased as PNs began to sample from multiple ORs (Figure 2c). We further note that there is only a marginal benefit in performance with more than 2,500 KCs (Figure 2d), which is the number of KCs within each hemisphere of the mushroom body. We further predict that PNs will mix if given the resources to do so. We varied the number of PNs while keeping the numbers of ORNs and KCs fixed at 500 and 2,500, respectively. Indeed, when the number of PNs exceeds the number of unique OR types, the excess PNs receive mixed OR input (Figure 2e), and classification accuracy saturates, implying that having more than 50 PNs does not aid task performance (Figure 2f). When there are fewer than 50 PNs, information flow is bottlenecked, and mixing occurs to ensure that all ORs are represented (Figure 2e). Together, these results argue that the glomerular representation is only optimal when expansion occurs downstream. We further added input noise sampled from a normal distribution onto the ORNs, and found that the formation of glomeruli is minimally dependent on input noise. When trained on the standard classification task, the input degree of the expansion layer (KCs) settles at K ≈ 7 (Figure 1f). To predict the connectivity of arbitrarily sized olfactory systems, we trained networks with various numbers of ORs, ranging from 50 to 400. For each network, we quantified the input degree of the expansion layer after training (Figure 3a, plus signs). The results are well fitted by a power law, K ∼ N^0.71, where K is the optimal expansion-layer input degree for a system of N unique ORs.
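The exact GloScore formula is not spelled out in the text, so the sketch below is one plausible implementation consistent with its stated endpoints: it scores each PN by how concentrated its input weight is on a single OR type, rescaled so that exclusive sampling from one type gives 1 and a uniform spread over all types gives 0.

```python
import numpy as np

def glo_score(w, orn_type):
    """w: (n_orn, n_pn) ORN->PN weight matrix; orn_type: OR identity of
    each ORN (integers 0..n_types-1). For each PN, take the fraction of
    total |weight| carried by its dominant OR type, then rescale so a
    uniform spread gives 0 and a single type gives 1; average over PNs."""
    n_types = int(orn_type.max()) + 1
    scores = []
    for pn in range(w.shape[1]):
        col = np.abs(w[:, pn])
        per_type = np.bincount(orn_type, weights=col, minlength=n_types)
        frac = per_type.max() / (per_type.sum() + 1e-12)
        scores.append((frac - 1.0 / n_types) / (1.0 - 1.0 / n_types))
    return float(np.mean(scores))
```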
This prediction is consistent with experimental estimates from anatomical studies in mouse (K ≈ 40-100, N ≈ 1,000) and fruit fly (K ≈ 7, N ≈ 50) (Figure 3a, x signs). An existing theoretical prediction (Figure 3a, gray) gives, more specifically, K ≈ 8 for N = 1,000, which is far lower than expected. We therefore hypothesized that an input degree of K ∼ N^0.71 did not emerge solely to minimize classification loss; instead, this power law maximizes robustness to perturbations in connection weights. This hypothesis is inspired by findings that stochastic gradient descent, due to its stochastic nature, not only maximizes classification performance but also finds flat minima, where the loss changes more gradually when connection weights are perturbed. This robustness can be interpreted both as robustness against variability in synaptic transmission and as robustness against variability between individuals of the same species. To quantify robustness, we measured the average angle between odor representations in the expansion layer before (y) and after (y + ∆) perturbation of the connection weights (Figure 3b). The perturbation angle is proportional to the amount of mis-classification that results from weight perturbation, so a smaller angle corresponds to a more robust network. For N = 50, with K explicitly varied from 1 to 30, we found that the perturbation angle is indeed minimized at around K = 7 (Figure 3c), in agreement with our hypothesis. For a network with N unique ORs, we can numerically search for the optimal K that maximizes robustness (minimizes the perturbation angle). We found that the optimal K derived from maximal robustness (Figure 3d, red line) matches closely the results from direct training of networks of various N (Figure 3d, plus signs).

We trained artificial neural networks using stochastic gradient descent to classify odors. We found that glomeruli emerge in the PN layer, and that sparse random connectivity emerges in the PN-to-KC connections. We then explored the sufficient conditions that enable these features to emerge: the formation of glomeruli does not depend on input noise but rather on the existence of an expansion layer downstream. In addition, an expansion layer with a synaptic degree of 7 endows the olfactory system with robustness, allowing large tolerances in synaptic efficacies without affecting task performance. Our work offers a framework to explain the evolutionary convergence of olfactory circuits.

[Figure 3d caption: optimal K predicted by maximal weight robustness (red) and direct training (plus signs); the line is a power-law fit of the red dots.]
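A minimal version of the robustness measurement might look as follows. The unit connection weights, the Gaussian perturbation restricted to existing synapses, and the ReLU expansion layer are illustrative assumptions, since the text specifies only the angle between representations before and after perturbing the weights.

```python
import numpy as np

def perturbation_angle(y, y_pert):
    """Average angle (degrees) between expansion-layer representations
    before and after weight perturbation; rows are odors. Smaller
    angles correspond to a more robust network."""
    cos = (y * y_pert).sum(axis=1) / (
        np.linalg.norm(y, axis=1) * np.linalg.norm(y_pert, axis=1) + 1e-12)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

def sweep_k(pn_act, n_kc=2500, ks=range(1, 31), noise=0.1, seed=0):
    """For each input degree K, draw a random sparse PN->KC matrix,
    perturb it, and record the resulting perturbation angle."""
    rng = np.random.default_rng(seed)
    n_pn = pn_act.shape[1]
    angles = {}
    for k in ks:
        w = np.zeros((n_pn, n_kc))
        for j in range(n_kc):                  # K random PN inputs per KC
            w[rng.choice(n_pn, size=k, replace=False), j] = 1.0
        y = np.maximum(pn_act @ w, 0.0)        # ReLU expansion layer
        w_pert = w + noise * rng.standard_normal(w.shape) * (w > 0)
        y_pert = np.maximum(pn_act @ w_pert, 0.0)
        angles[k] = perturbation_angle(y, y_pert)
    return angles
```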
Artificial neural networks evolved the same structures present in the olfactory systems of flies and mice after being trained to classify odors
The recent direction of unpaired image-to-image translation is on one hand very exciting, as it alleviates the big burden of obtaining label-intensive pixel-to-pixel supervision, but on the other hand not fully satisfactory, due to the presence of artifacts and degenerate transformations. In this paper, we take a manifold view of the problem by introducing a smoothness term over the sample graph to attain harmonic functions that enforce consistent mappings during the translation. We develop HarmonicGAN to learn bi-directional translations between the source and target domains. With the help of similarity-consistency, the inherent self-consistency property of samples can be maintained. Distance metrics defined on two types of features, histograms and CNN features, are exploited. Under an identical problem setting to CycleGAN, without additional manual inputs and at only a small training-time cost, HarmonicGAN demonstrates a significant qualitative and quantitative improvement over the state of the art, as well as improved interpretability. We show experimental results in a number of applications, including medical imaging, object transfiguration, and semantic labeling. We outperform the competing methods in all tasks, and for a medical imaging task in particular our method turns CycleGAN from a failure into a success, halving the mean-squared error and generating images that radiologists prefer over competing methods in 95% of cases.

Image-to-image translation BID15 aims to learn a mapping from a source domain to a target domain. As a significant and challenging task in computer vision, image-to-image translation benefits many vision and graphics tasks, such as realistic image synthesis BID15 BID41, medical image generation BID39 BID9, and domain adaptation BID13. Given a pair of training images with detailed pixel-to-pixel correspondences between source and target, image-to-image translation can be cast as a regression problem using, e.g., Fully Convolutional Neural Networks (FCNs) BID23, by minimizing, e.g., a per-pixel prediction loss. Recently, approaches using rich generative models based on Generative Adversarial Networks (GANs) BID11 BID27 BID0 have achieved astonishing success. The main benefit of introducing GANs BID11 to image-to-image translation BID15 is to attain additional image-level (often through patches) feedback about the overall quality of the translation, information that is not directly accessible through the per-pixel regression objective. The method of BID15 is able to generate high-quality images, but it requires paired training data, which is difficult to collect and often does not exist. To perform translation without paired data, circularity-based approaches BID41 BID17 BID37 have been proposed to learn translations from one set to another, using a circularity constraint to establish relationships between the source and target domains and to force the result generated from a sample in the source domain to map back to the original sample. The original image-to-image translation problem BID15 is supervised at the pixel level, whereas the unpaired image-to-image translation task BID41 is considered unsupervised, with pixel-level supervision absent but adversarial supervision at the image level (in the target domain) present.
By using a cycled regression for the pixel-level prediction (source→target→source) plus a term for the adversarial difference between the transferred images and the target images, CycleGAN is able, in many cases, to successfully train a translation model without paired source→target supervision. However, lacking a mechanism to enforce regularity in the translation creates problems like those illustrated in FIG0. To combat this issue, in this paper we look at the problem of unpaired image-to-image translation from a manifold learning perspective BID33 BID28. Intuitively, the problem can be alleviated by introducing a regularization term in the translation, encouraging similar contents (based on textures or semantics) in the same image to undergo similar translations/transformations. A common principle in manifold learning is to preserve local distances after unfolding: neighboring (similar) samples in the original space are forced to be neighbors in the new space. The same principle has been applied to graph-based semi-supervised learning BID44, where harmonic functions with graph Laplacians BID45 BID2 are used to obtain regularized labels for unlabeled data points. During the translation/transformation, some domain-specific attributes are changed, such as the colors, texture, and semantics of certain image regions. Although there is no supervised information for these changes, a certain consistency in the transformation is desirable: image contents that are similar in the source space should also be similar in the target space. Inspired by graph-based semi-supervised learning BID45 BID44, we introduce smoothness terms into unpaired image-to-image translation BID41, providing a stronger regularization for the translation/transformation between the source and target domains that aims to exploit the "manifold structure" of the two domains. For a pair of similar samples (two different locations in an image; one can think of them as two patches, although the receptive fields of the CNN are quite large), we add a smoothness term that minimizes a weighted distance between the corresponding locations in the target image. Note that two spatially distant samples might be neighbors in the feature space. We name our algorithm HarmonicGAN, as it behaves harmonically, along with the circularity and adversarial constraints, to learn a pair of dual translations between the source and target domains, as shown in FIG0. Distance metrics defined on two alternative features are adopted: low-level soft RGB histograms, and CNN (VGG) features with pre-trained semantics. We conduct experiments in a number of applications, showing that in each of them our method outperforms existing methods quantitatively, qualitatively, and in user studies. For a medical imaging task BID6 that recently called attention to a major CycleGAN failure case (learning to accidentally add/remove tumors in an MRI image translation task), our proposed method provides a large improvement over CycleGAN, halving the mean-squared error and generating images that radiologists prefer over competing methods in 95% of cases.

CONTRIBUTIONS 1. We introduce smoothness regularization over the graph for unpaired image-to-image translation to attain harmonic translations. 2. When building an end-to-end learning pipeline, we adopt two alternative types of feature measures to compute the weight matrix for the graph Laplacian: one based on a soft histogram BID35 and another based on semantic CNN (VGG) features BID31. 3.
We show that this method results in significantly improved consistency of the transformations. With experiments on multiple translation tasks, we demonstrate that HarmonicGAN outperforms the state of the art.

As discussed in the introduction, the general image-to-image translation task in the deep learning era was pioneered by BID15, but there are prior works, such as image analogies BID12, that aim at a similar goal, along with other exemplar-based methods BID10 BID8 BID1. After BID15, a series of other works also exploited pixel-level reconstruction constraints to build connections between the source and target domains BID34. The image-to-image translation framework BID15 is very powerful, but it requires a sufficient amount of training data with paired source-to-target images, which is often laborious to obtain for general tasks such as labeling BID23, synthesis BID5, and style transfer BID14. Unpaired image-to-image translation frameworks BID41 BID22 BID29 BID17, such as CycleGAN, remove the requirement of detailed pixel-level supervision. In CycleGAN this is achieved by enforcing a bi-directional prediction from source to target and target back to source, with an adversarial penalty on the translated images in the target domain. Similar unsupervised circularity-based approaches BID17 BID37 have also been developed. The CycleGAN family of models BID41 points to an exciting direction of unsupervised approaches, but these models also create artifacts in many applications. As shown in FIG1, one reason for this is that the circularity constraint in CycleGAN lacks a direct description of the target domain, so it may change the inherent properties of the original samples and generate unexpected results that are inconsistent across image locations. These failures have been prominently explored in recent works, showing that CycleGAN BID41 may accidentally add or remove tumors in cross-modal medical image synthesis BID6, and that in natural image transfiguration, e.g., from a horse to a zebra, regions in the background may also be translated into a zebra-like texture (see FIG0). Here we propose HarmonicGAN, which introduces a smoothness term into the CycleGAN framework to enforce a regularized translation: image content that is similar in the source space should also be similar in the target space. We follow the general design principle of manifold learning BID33 BID28 and the development of harmonic functions in the graph-based semi-supervised learning literature BID45 BID2 BID44 BID36. There has been previous work, DistanceGAN BID3, in which distance preservation was also implemented. However, DistanceGAN differs from HarmonicGAN in motivation, formulation, implementation, and performance: DistanceGAN primarily demonstrates an alternative loss term to the per-pixel difference in CycleGAN, whereas in HarmonicGAN we observe that the cycled per-pixel loss is effective and aim to make the translation harmonic by introducing additional regularization. The smoothness term acts as a graph Laplacian imposed on all pairs of samples (using random samples in the implementation). In the experimental results, we show that the artifacts of CycleGAN are still present in DistanceGAN, whereas HarmonicGAN provides a significant boost to the performance of CycleGAN. In addition, it is worth mentioning that the smoothness term proposed here is quite different from the binary term used in the Conditional Random Field (CRF) literature BID20 BID19, whether fully supervised BID4 BID40 or weakly supervised BID32 BID21. The two differ in (1) output space (multi-class labels vs. high-dimensional features), (2) mathematical formulation (a joint conditional probability for neighboring labels vs. a Laplacian function over the graph), (3) application domain (image labeling vs. image translation), (4) effectiveness (boundary smoothing vs. manifold-structure preservation), and (5) their role in the overall algorithm (a post-processing effect with relatively small improvement vs. large-area error correction).
The two differ in output space (multi-class label vs. highdimensional features), mathematical formulation (a joint conditional probably for the neighboring labels vs. a Laplacian function over the graph), application domain (image labeling vs. image translation), effectiveness (boundary smoothing vs. manifold structure preserving), and FORMULA7 the role in the overall algorithm (post-processing effect with relatively small improvement vs. large-area error correction). Following the basic formulation in CycleGAN BID41, for the source domain X and the target domain Y, we consider unpaired training samples {x k} N k=1 where x k ∈ X, and DISPLAYFORM0 where y k ∈ Y. The goal of image-to-image translation is to learn a pair of dual mappings, including forward mapping G: X → Y and backward mapping F: Y → X. Two discriminators D X and D Y are adopted in BID41 to distinguish between real images and generated images. In particular, the discriminator D X aims to distinguish real image {x} from the generated image DISPLAYFORM1 Therefore, the objective of adversarial constraint is applied in both source and target domains, expressed in BID41 as: DISPLAYFORM2 and DISPLAYFORM3 For notational simplicity, we denote the GAN loss as DISPLAYFORM4 Since the data in the two domains are unpaired, a circularity constraint is introduced in BID41 to establish relationships between X and Y. The circularity constraint enforces that G and F are a pair of inverse mappings, and that the translated sample can be mapped back to the original sample. The circularity constraint contains consistencies in two aspects: the forward cycle DISPLAYFORM5 Thus, the circularity constraint is formulated as BID41: DISPLAYFORM6 Here we rewrite the overall objective in BID41 to minimize as: DISPLAYFORM7 where the weights λ GAN and λ cyc control the importance of the corresponding objectives. The full objective of circularity-based approach contains adversarial constraints and a circularity constraint. The adversarial constraints ensure the generated samples are in the distribution of the source or target domain, but ignore the relationship between the input and output of the forward or backward translations. The circularity constraint establishes connections between the source and target domain by forcing the forward and backward translations to be the inverse of each other. However, CycleGAN has limitations: as shown in FIG1, the circular projection might perfectly match the input, and the translated image might look very well like a real one, but the translated image may not maintain the inherent property of the input and contain a large artifact that is not connected to the input. Here we propose a smoothness term to enforce a stronger correlation between the source and target domains that focuses on providing similarity-consistency between image patches during the translation. The smoothness term defines a graph Laplacian with the minimal value achieved as a harmonic function. We define the set consisting of individual image patches as the nodes of the graph G. x i is referred to as the feature vector of the i-th image patch in x ∈ X. For the image set X, we define the set that consists of individual samples (image patches) of source image set X as S = {x(i), i = 1..M } where M is the total number of the samples/patches. An affinity measure (similarity) computed on image patch x(i) and image patch x(j), w ij (X) (a scalar), defines the edge on the graph G of S. 
The smoothness term acts as a graph Laplacian imposed on all pairs of image patches. We therefore define a smoothness term over the graph for the forward translation as

L_Smooth(G, X) = Σ_{i,j} w_ij(x) · Dist[ G(x)(i), G(x)(j) ],

where w_ij(x), following BID45, defines the affinity between two patches x(i) and x(j) based on their distance (e.g., measured on histogram or CNN features), and Dist[G(x)(i), G(x)(j)] defines the distance between the two image patches at the same locations after translation. In implementation, we first normalize the features to the scale of [0, 1] and then use the L1 distance of the normalized features as the Dist function (for both histogram and CNN features). Similarly, we define a smoothness term for the backward part as

L_Smooth(F, Y) = Σ_{i,j} w_ij(y) · Dist[ F(y)(i), F(y)(j) ].

The combined smoothness loss thus becomes

L_Smooth = L_Smooth(G, X) + L_Smooth(F, Y).

The smoothness term provides a stronger similarity-consistency between patches, maintaining the inherent properties of the images. Combining the circularity-based objective with the smoothness loss, the overall objective for our proposed HarmonicGAN becomes

L(G, F, D_X, D_Y) = λ_GAN · L_GAN + λ_cyc · L_cyc + λ_Smooth · L_Smooth.

Similar to the graph-based semi-supervised learning definition BID45 BID44, the minimizer of the smoothness term is a harmonic function. The optimization process during training obtains

G*, F* = arg min_{G,F} max_{D_X, D_Y} L(G, F, D_X, D_Y).

The effectiveness of the smoothness term is evident: in Fig. 4, we show (using t-SNE BID24) that the local neighborhood structure is preserved by HarmonicGAN, whereas CycleGAN results in two similar patches being far apart after translation. In the smoothness constraint, the similarity of a pair of patches is measured on per-patch features (sample points); all the patches in an image form a graph. Here we adopt two types of features: a low-level soft histogram, and pre-trained CNN (VGG) features that carry semantic information. Soft histogram features are lightweight and easy to implement, but carry little semantic information; VGG features require an additional CNN but carry more semantics. We first design a weight matrix based on simple low-level RGB histograms. For the end-to-end learning system to work, it is crucial that the computation of the histogram gradients be differentiable. We adopt the soft histogram representation proposed in BID35, but with fixed means and bin sizes. This histogram representation is differentiable and its gradient can be back-propagated. The soft histogram function contains a family of linear basis functions ψ_b, b = 1,…,B, where B is the number of bins in the histogram. With x_i denoting the i-th patch in image domain X, for each pixel j in x_i, ψ_b(x_i(j)) represents pixel j voting for the b-th bin:

ψ_b(x_i(j)) = max( 0, 1 − |x_i(j) − μ_b| / w_b ),

where μ_b and w_b are the center and width of the b-th bin. The representation of x_i in RGB space is the combination of the linear basis functions over all pixels in x_i:

φ_h(X, i)_b = Σ_j ψ_b(x_i(j)),

where φ_h is the RGB histogram feature, b indexes the dimensions of the histogram representation, and j ranges over the pixels in patch x_i. The RGB histogram representation φ_h(X, i) of x_i is a B-dimensional vector. For some domains, we instead use semantic features to acquire higher-level representations of patches. The semantic representations are extracted from a pre-trained Convolutional Neural Network (CNN), which encodes semantically relevant features learned from a large-scale dataset. The CNN extracts semantic information about local patches through multiple pooling or stride operators; each point in its feature maps is a semantic descriptor of the corresponding image patch.
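Since the soft histogram above is central to the end-to-end pipeline, here is a small PyTorch sketch of it. The triangular basis function follows the formula above; the tensor layout and the 16-bin configuration over [0, 255] follow the implementation details given below, while the function name is ours.

```python
import torch

def soft_histogram(patches, n_bins=16, value_range=255.0):
    """Differentiable soft histogram with fixed bin centers and widths.
    Each pixel votes linearly into nearby bins via the triangular basis
    psi_b(z) = max(0, 1 - |z - mu_b| / w_b).
    patches: (num_patches, pixels_per_patch) tensor of pixel values.
    Returns a (num_patches, n_bins) feature matrix."""
    width = value_range / n_bins                     # w_b
    centers = (torch.arange(n_bins) + 0.5) * width   # mu_b
    z = patches.unsqueeze(-1)                        # (P, M, 1)
    votes = torch.clamp(1.0 - (z - centers).abs() / width, min=0.0)
    return votes.sum(dim=1)                          # (P, B)
```

Because the vote is a piecewise-linear function of the pixel values, gradients flow back through the histogram to the generator output, which is what makes the end-to-end training described here possible.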
Additionally, the semantic features produced by the CNN are differentiable, so the CNN can be integrated into HarmonicGAN and trained end-to-end. We instantiate the semantic feature φ_s with a pre-trained CNN model, e.g., VGGNet BID30; in our implementation, we select the layer relu4_3 of the VGG-16 network for computing the semantic features.

We evaluate the proposed method on three different applications: medical imaging, semantic labeling, and object transfiguration. We compare against several unpaired image-to-image translation methods: CycleGAN BID41, DiscoGAN BID17, DistanceGAN BID3, and UNIT BID22. We also provide two user studies as well as qualitative results. The appendix provides additional results and analysis.

Medical imaging. This task evaluates cross-modal medical image synthesis, Flair ↔ T1. The models are trained on the BRATS dataset BID26, which contains paired MRI data that allows quantitative evaluation. Similar to previous work BID6, we use a training set of 1,400 image slices (50% healthy and 50% with tumors) and a test set of 300, and use their unpaired training scenario. We adopt the Mean Absolute Error (MAE) and Mean Squared Error (MSE) between generated and real images to evaluate reconstruction error, and further use the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) to evaluate the reconstruction quality of generated images.

Semantic labeling. We also test our method on the labels ↔ photos task using the Cityscapes dataset BID7 under the unpaired setting, as in the original CycleGAN paper. For quantitative evaluation, in line with previous work, for labels → photos we adopt the "FCN score" BID15, which evaluates how interpretable the generated photos are according to a semantic segmentation algorithm. For photos → labels, we use the standard segmentation metrics, including per-pixel accuracy, per-class accuracy, and mean class Intersection-Over-Union (Class IOU).

Object transfiguration. Finally, we test our method on the horse ↔ zebra task using the standard CycleGAN dataset (2,401 training images, 260 test images). This task has no quantitative evaluation measure, so we instead provide a user study together with qualitative results.

We apply the proposed smoothness term within the framework of CycleGAN BID41. As in CycleGAN, we adopt the architecture of BID16 as the generator and the PatchGAN BID15 as the discriminator. The log-likelihood objective of the original GAN is replaced with a least-squares loss BID25 for more stable training. We resize the input images to 256 × 256. For the histogram feature, we split the RGB range [0, 255] equally into 16 bins, each with a range of 16. Images are divided into non-overlapping patches of 8 × 8, and the histogram feature is computed on each patch. For the semantic feature, we adopt a VGG network pre-trained on ImageNet and select the feature map of layer relu4_3. The loss weights are set to λ_GAN = λ_Smooth = 1 and λ_cyc = 10. Following CycleGAN, we adopt the Adam optimizer BID18 with a learning rate of 0.0002; the learning rate is fixed for the first 100 epochs and linearly decayed to zero over the next 100 epochs.

Medical imaging. Table 1 shows the reconstruction performance on medical image synthesis, Flair ↔ T1. The proposed method yields a large improvement over CycleGAN, with lower MAE and MSE reconstruction losses and higher PSNR and SSIM reconstruction scores, highlighting the significance of the proposed smoothness regularization.
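Putting the pieces together, a sketch of the smoothness loss on patch features might look as follows. The Gaussian affinity kernel and its bandwidth are assumptions (the text only requires an affinity derived from patch distances, following the graph-based semi-supervised learning formulation), while the min-max normalization and the L1 Dist follow the implementation description above. The features can come from either the soft histogram or VGG relu4_3.

```python
import torch

def smoothness_loss(feat_src, feat_gen, sigma=1.0):
    """feat_src: (P, D) per-patch features of the input image (histogram
    or VGG); feat_gen: (P, D) features of the translated image at the
    same patch locations. Affinities are computed on the source patches
    and weight the L1 distances between translated patches."""
    fs = (feat_src - feat_src.min()) / (feat_src.max() - feat_src.min() + 1e-8)
    fg = (feat_gen - feat_gen.min()) / (feat_gen.max() - feat_gen.min() + 1e-8)
    d_src = torch.cdist(fs, fs, p=1)            # pairwise patch distances
    w = torch.exp(-(d_src ** 2) / sigma ** 2)   # affinity on the source
    d_gen = torch.cdist(fg, fg, p=1)            # L1 Dist after translation
    return (w * d_gen).mean()
```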
HarmonicGAN based on histogram features and on VGG features shows similar performance: the reconstruction losses of histogram-based HarmonicGAN are slightly lower than those of the VGG-based variant for Flair → T1, and slightly higher for T1 → Flair, indicating that both low-level RGB values and high-level CNN features can represent the inherent properties of medical images well and help maintain the smoothness-consistency of samples. (Table 1: reconstruction evaluation of cross-modal medical image synthesis on the BRATS dataset.)

Semantic labeling. We report semantic labeling results in TAB1. The proposed method using VGG features yields a 3% improvement in pixel accuracy in the translation scores for photo ↔ label, shows stable improvements on the other metrics, and clearly outperforms all competing methods. The performance using histograms is slightly lower than CycleGAN; we hypothesize that this is because objects in photos have large intra-class variance and inter-class similarity in appearance (e.g., cars have different colors, while vegetation and terrain have similar colors), so RGB-histogram regularization is not appropriate for capturing the inherent properties of photos.

Medical imaging. We randomly selected 100 images from the BRATS test set. For each image, we showed one radiologist the real ground-truth image, followed by images generated by CycleGAN, DistanceGAN, and HarmonicGAN (in a different order for each image set, to avoid bias). The radiologist was told to evaluate similarity by how likely the images would lead to the same clinical diagnosis, and was asked to rate the similarity of the generation methods on a Likert scale from 1 to 5 (1 = not similar at all, 5 = exactly the same). Results are shown in TAB2. In 95% of cases, the radiologist preferred images generated by our method over the competing methods, and the average Likert score was 4.00, compared to 1.68 for CycleGAN, confirming that our generated images are significantly better. This is significant, as it confirms that we solve the issue presented in a recent paper BID6 showing that CycleGAN can learn to accidentally add/remove tumors in images.

Object transfiguration. We evaluate our algorithm on horse ↔ zebra with a human perceptual study. We randomly selected 50 images from the horse2zebra test set and showed the input images and three generated images from CycleGAN, DistanceGAN, and HarmonicGAN (with the generated images in random order). Ten participants were asked to score the generated images on a Likert scale from 1 to 5 (as above). As shown in TAB3, the participants gave the highest score to the proposed method in 72% of cases, significantly more often than to CycleGAN (28% of cases). Additionally, the average Likert score of our method was 3.60, outperforming the 3.16 of CycleGAN and the 1.08 of DistanceGAN, indicating that our method generates better results.

FIG4 shows a qualitative comparison of our method on the horse ↔ zebra task. We observe that we correct several problems of CycleGAN, including not changing the background and performing more complete transformations. More results and analysis are shown in FIG6 and FIG0.

We introduce a smoothness term over the sample graph to enforce smoothness-consistency between the source and target domains. We have shown that by introducing additional regularization to enforce consistent mappings during image-to-image translation, the inherent self-consistency property of samples can be maintained.
Through a set of quantitative and qualitative experiments and user studies, we have demonstrated that this results in a significant improvement over the current state-of-the-art methods in a number of applications, including medical imaging, object transfiguration, and semantic labeling. For a medical imaging task in particular, our method provides a very significant improvement over CycleGAN.

DistanceGAN and HarmonicGAN differ in motivation and formulation. The distance constraint aims to preserve the distance between samples across the mapping in a direct way, so it minimizes the expectation of differences between distances in the two domains; the distance constraint in DistanceGAN is not a graph-based Laplacian that explicitly enforces smoothness. In contrast, the smoothness constraint is designed from a graph Laplacian to build similarity-consistency between image patches: it uses the affinity between two patches as a weight when measuring the similarity-consistency between the two domains. The whole idea is based on manifold learning. The smoothness term defines a Laplacian ∆ = D − W, where W is our weight matrix and D is a diagonal matrix with D_ii = Σ_j w_ij; thus, the smoothness term defines a graph Laplacian whose minimal value is achieved by a harmonic function.

They also differ in implementation. The smoothness constraint in HarmonicGAN is computed on image patches, while the distance constraint in DistanceGAN is computed on whole image samples; the smoothness constraint is therefore fine-grained compared to the distance constraint. Moreover, the distances in DistanceGAN are computed directly from the samples in each domain, scaled by the pre-computed means and standard deviations of the two domains to reduce the effect of the domain gap. In contrast, the smoothness constraint in HarmonicGAN is measured on the features (histogram or CNN features) of each patch, which maps samples from the two domains into the same feature space and removes the gap between domains.

They also produce different results. FIG5 shows the qualitative results of CycleGAN, DistanceGAN, and the proposed HarmonicGAN on the BRATS dataset. As shown in FIG5, the problem of randomly adding/removing tumors during translation is still present in the results of DistanceGAN, while HarmonicGAN corrects the location of tumors. Table 1 shows the quantitative results on the whole test set, which support the same conclusion: the results of DistanceGAN on all four metrics are even worse than those of CycleGAN, while HarmonicGAN yields a large improvement over CycleGAN.

There are some fundamental differences between the CRF literature and our work. They differ in output space, mathematical formulation, application domain, effectiveness, and role in the overall algorithm; the similarity between CRF and HarmonicGAN lies in the adoption of a regularization term: a binary term in the CRF case and a Laplacian term in HarmonicGAN. The smoothness term in HarmonicGAN is not about obtaining "smoother" images/labels in the translated domain, as seen in the experiments; instead, HarmonicGAN is about preserving the overall integrity of the translation itself on the image manifold. This is the main reason for the large improvement of HarmonicGAN over CycleGAN. To further demonstrate the difference between HarmonicGAN and CRFs, we performed an experiment applying the pairwise regularization of CRFs to the CycleGAN framework: for each pixel of the generated image, we compute the unary term and the binary term with its 8 neighbors, and then minimize the CRF objective. The results are shown in TAB6.
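As an aside, the Laplacian identity underlying the smoothness term can be verified numerically: for a symmetric weight matrix W and a per-node signal f (a scalar per node for simplicity), the pairwise smoothness sum equals the quadratic form 2 fᵀ(D − W) f, which is why minimizing the smoothness term yields a harmonic function. A small self-contained check:

```python
import numpy as np

def laplacian_quadratic_check(w, f):
    """Verify sum_ij w_ij (f_i - f_j)^2 == 2 f^T (D - W) f."""
    pairwise = np.sum(w * (f[:, None] - f[None, :]) ** 2)
    lap = np.diag(w.sum(axis=1)) - w               # Delta = D - W
    quad = 2.0 * f @ lap @ f
    assert np.allclose(pairwise, quad)
    return pairwise

rng = np.random.default_rng(0)
w = rng.random((6, 6)); w = (w + w.T) / 2          # symmetric weights
f = rng.standard_normal(6)
laplacian_quadratic_check(w, f)
```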
The pairwise regularization of the CRF is unable to handle the problem of CycleGAN illustrated in FIG0. What is worse, using the pairwise regularization may over-smooth the boundaries of generated images, which results in extra artifacts. In contrast, HarmonicGAN aims at preserving similarity from the overall view of the image manifold, and can thus exploit the similarity-consistency of the generated images rather than over-smoothing boundaries. In a failure case on an image of a person riding a horse, CycleGAN also translates the skin of the human to a zebra-like texture. In contrast, HarmonicGAN does better in the background region and achieves an improvement in some regions of the human (Putin's face), but it still fails on the human body. We hypothesize this is because the semantic features used by HarmonicGAN have not been trained on humans without a shirt. Additional qualitative results cover further translation tasks: orange to apple, facade to label, label to facade, aerial to map, map to aerial, summer to winter, and winter to summer.
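To make the smoothness term concrete, below is a minimal sketch (not the authors' released implementation) of a patch-level graph-Laplacian smoothness loss: edge weights w_ij are computed from the affinity between source-domain patch features, and the loss is the Laplacian quadratic form Σ_ij w_ij ||f_i − f_j||² over the corresponding translated patch features, which is minimized by a harmonic mapping. The function name, the Gaussian affinity kernel, and the sigma parameter are illustrative assumptions.

    import torch

    def smoothness_loss(src_feats, gen_feats, sigma=1.0):
        # src_feats, gen_feats: (P, D) feature vectors (e.g., soft RGB histograms
        # or CNN features) for P patches of a source image and its translation.
        # Edge weights w_ij come from the affinity between source patches; the
        # loss is the graph-Laplacian quadratic form over translated features.
        d_src = torch.cdist(src_feats, src_feats)       # (P, P) pairwise distances
        w = torch.exp(-d_src ** 2 / (2 * sigma ** 2))   # affinity weights w_ij
        d_gen = torch.cdist(gen_feats, gen_feats) ** 2  # ||f_i - f_j||^2, translated
        return (w * d_gen).sum() / w.numel()            # equals 2 f^T (D - W) f, scaled

In practice such a term would be added to the adversarial and cycle-consistency losses with a weighting coefficient.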
Smooth regularization over a sample graph for unpaired image-to-image translation results in significantly improved consistency
1,115
scitldr
Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets. Various image understanding tasks, such as semantic segmentation BID3 and human pose/action recognition BID29 BID33, have been shown to benefit from 3D scene information. A common approach to reconstructing 3D geometry is by multiview stereo, which infers depth based on point correspondences among a set of unstructured images BID10. To solve for these correspondences, conventional techniques employ photometric consistency constraints on local image patches. Such photo-consistency constraints, though effective in many instances, can be unreliable in scenes containing textureless and reflective regions. Recently, convolutional neural networks (CNNs) have demonstrated some capacity to address this issue by leveraging semantic information inferred from the scene. The most promising of these methods employ a traditional stereo matching pipeline, which involves computation of matching cost volumes, cost aggregation, and disparity estimation BID5; BID19; BID14; BID0. Some are designed for binocular stereo BID31; BID19; BID0 and cannot readily be extended to multiple views. The CNN-based techniques for multiview processing BID5; BID14 both follow the plane-sweep approach, but require plane-sweep volumes as input to their networks. As a result, they are not end-to-end systems that can be trained from input images to disparity maps. In this paper, we present Deep Plane Sweep Network (DPSNet), an end-to-end CNN framework for robust multiview stereo. In contrast to previous methods that employ the plane-sweep approach BID14; BID5, DPSNet fully models the plane-sweep process, including the construction of plane-sweep cost volumes, within the network. This is made possible through the use of a differentiable warping module, inspired by spatial transformer networks BID17, to build the cost volumes. With the proposed network, plane-sweep stereo can be learned in an end-to-end fashion. Additionally, we introduce a cost aggregation module based on local cost-volume filtering BID26 for context-aware refinement of each cost slice. Through this cost-volume regularization, the effects of unreliable matches scattered within the cost volume are reduced considerably. With this end-to-end network for plane-sweep stereo and the proposed cost aggregation, we obtain state-of-the-art results over several standard datasets.
Ablation studies indicate that each of these technical contributions leads to appreciable improvements in reconstruction accuracy. CNN-based depth estimation has been studied for stereo matching, depth from single images, and multiview stereo. Recent work in these areas is briefly reviewed in the following. Stereo matching. Methods for stereo matching address the particular case of depth estimation where the input is a pair of rectified images captured by a stereo rig. Various network structures have been introduced for this problem. BID38 present a Siamese network structure to compute matching costs based on the similarity of two image patches. The estimated initial depth is then refined by traditional cost aggregation and refinement as post-processing. BID24 directly stack several convolution and deconvolution layers upon the matching costs and train the network to minimize the distance between the estimates and the ground truth. BID1 propose a CNN that estimates initial disparity and then refines it using both prior and posterior feature consistency in an end-to-end manner. BID19 leverage geometric knowledge in building a cost volume from deep feature representations. Their approach also enables learning of contextual information in a 3D volume and regresses disparity in an end-to-end manner. BID0 introduce a pyramid pooling module for incorporating global contextual information into image features and a stacked hourglass 3D CNN to extend the regional support of contextual information. Depth from single images. Similar to these stereo matching approaches, single-image methods extract CNN features to infer scene depths and perform refinements to increase depth accuracy. The first of these methods was introduced by BID4, which demonstrated that CNN features could be utilized for depth inference. Later, BID22 combined a superpixel-based conditional random field (CRF) with a CNN to improve the quality of depth estimates from single images. To facilitate training, recent studies BID23; BID36 present an end-to-end learning pipeline that utilizes the task of view synthesis as supervision for single-view depth and camera pose estimation. These systems consist of a depth network and a pose estimation network which simultaneously train on sequential images with a loss computed from images warped to nearby views using the estimated depth. View synthesis has similarly been used as supervision by warping between stereo image pairs BID7; BID8. In contrast to these single-image works, which employ warping as a component of view synthesis for self-supervised learning, our network computes warps with respect to multiple depth planes to produce plane-sweep cost volumes both for training and at test time. The cost volumes undergo further processing in the form of cost aggregation and regularization to improve the robustness of depth estimates. Multi-view stereo. In multi-view stereo, depth is inferred from multiple input images acquired from arbitrary viewpoints. To solve this problem, some methods recover camera motion between the unstructured images but are designed to handle only two views BID31. The DeMoN system BID31 consists of encoder-decoder networks for optical flow, depth/motion estimation, and depth refinement. By alternating between estimating optical flow and depth/motion, the network is forced to use both images in estimating depth, rather than resorting to single-image inference. Other methods perform monocular visual odometry in an unsupervised manner.
In the training step, the use of stereo images with extrinsic parameters allows 3D depth to be estimated at metric scale. Among networks that can handle an arbitrary number of views, camera parameters are assumed to be known or are estimated by conventional geometric methods. BID18 introduce an end-to-end learning framework based on a viewpoint-dependent voxel representation which implicitly encodes images and camera parameters. The voxel representation restricts the scene resolution that can be processed in practice due to limitations in GPU memory. BID16 formulate a geometric relationship between optical flow and depth to refine the estimated scene geometry, but their method is designed for image sequences with a very small baseline, i.e., an image burst from a handheld camera. BID14 compute a set of plane-sweep volumes using calibrated pose data as input for the network, which then predicts an initial depth feature using an encoder-decoder network. In the depth prediction step, they concatenate a reference image feature to the decoder input as an intra-feature aggregation, and cost volumes from each of the input images are aggregated by max-pooling to gather information for the multiview matching. The estimated depth map is refined using a conventional CRF. By contrast, our proposed DPSNet is developed to be trained end-to-end from input images to the depth map. Moreover, it leverages conventional multiview stereo concepts by incorporating context-aware cost aggregation. Finally, we would like to refer the reader to the concurrent work by BID35 that also adopts differentiable warping to construct a multi-scale cost volume and then refines an initial depth map guided by a reference image feature. Our work is independent of this concurrent effort. Moreover, we make distinct contributions: We focus on dense depth estimation for a reference image in an end-to-end learning manner, different from BID35, which reconstructs the full 3D geometry of objects. Our cost volume is constructed by concatenating input feature maps, which enables inference of accurate depth maps even with only two-view matching. Our work refines every cost slice by applying context features of a reference image, which is beneficial for alleviating coarsely scattered unreliable matches such as those in large textureless regions. Our Deep Plane Sweep Network (DPSNet) is inspired by traditional multiview stereo practices for dense depth estimation and consists of four parts: feature extraction, cost volume generation, cost aggregation and depth map regression. The overall framework is shown in FIG1. We first pass a reference image and target images through seven convolutional layers (3 × 3 filters except for the first layer, which has a 7 × 7 filter) to encode them, and extract hierarchical contextual information from these images using a spatial pyramid pooling (SPP) module BID12 with four fixed-size average pooling blocks (16 × 16, 8 × 8, 4 × 4, 2 × 2). The multi-scale features extracted by SPP have been shown to be effective in many visual perception tasks such as visual recognition BID12, scene parsing BID39 and stereo matching BID14. After upsampling the hierarchical contextual information to the same size as the original feature map, we concatenate all the feature maps and pass them through 2D convolutional layers. This process yields 32-channel feature representations for all the input images, which are next used in building cost volumes.
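As a rough sketch of the feature extractor just described (interpreting the 16 × 16, ..., 2 × 2 average-pooling blocks as pooled grid sizes; module and parameter names are assumptions, not the authors' code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SPPFeatures(nn.Module):
        # Convolutional encoding followed by spatial pyramid pooling and
        # fusion of the upsampled pyramid into 32-channel features.
        def __init__(self, in_ch=3, mid_ch=32):
            super().__init__()
            layers = [nn.Conv2d(in_ch, mid_ch, 7, padding=3), nn.ReLU(inplace=True)]
            for _ in range(6):  # remaining convolutional layers use 3x3 filters
                layers += [nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True)]
            self.encoder = nn.Sequential(*layers)
            self.fuse = nn.Conv2d(mid_ch * 5, 32, 3, padding=1)

        def forward(self, x):
            f = self.encoder(x)
            h, w = f.shape[2:]
            pyramid = [f]
            for size in (16, 8, 4, 2):  # fixed-size average pooling blocks
                p = F.adaptive_avg_pool2d(f, size)
                pyramid.append(F.interpolate(p, size=(h, w), mode='bilinear',
                                             align_corners=False))
            return self.fuse(torch.cat(pyramid, dim=1))  # 32-channel features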
We propose to generate cost volumes for the multiview images by adopting traditional plane sweep stereo (Collins), which back-projects images onto successive virtual planes in 3D space and measures photo-consistency among the warped images for each pixel. In a similar manner to traditional plane sweep stereo, we construct a cost volume from an input image pair. To reduce the effects of image noise, multiple images can be utilized by averaging the cost volumes of the other pairs. For this cost volume generation network, we first set the number of virtual planes perpendicular to the z-axis of the reference viewpoint and uniformly sample them in inverse-depth space as follows: d_l = (L × d_min) / l, l = 1, ..., L, (1) where L is the total number of depth labels and d_min is the minimum scene depth as specified by the user. Then, we warp all the paired features F_i (i = 1, ..., N), where i is an index of viewpoints and N is the total number of input views, into the coordinates of the reference feature (of size Width × Height × CHannel) using the pre-computed intrinsics K and extrinsic parameters consisting of a rotation matrix R_i and a translation vector t_i of the i-th camera: ũ_l ∼ K [R_i | t_i] [(K⁻¹ u) d_l, 1]ᵀ, F̃_il(u) = F_i(ũ_l), (2) where u and ũ_l are the homogeneous coordinates of a pixel in the reference view and the projected coordinates onto the paired view, respectively, and F̃_il(u) denotes the warped features of the paired image through the l-th virtual plane. Unlike the traditional plane sweeping method, which utilizes a distance metric, we use a concatenation of features in learning a representation and carry this through to the cost volume, as proposed in BID19. We obtain a 4D volume (W × H × 2CH × L) by concatenating the reference image features and the warped image features for all of the depth labels. In Eq. (2), we assume that all images are captured by the same camera, but the formulation can be directly extended to images with different intrinsics. For the warping process, we use a spatial transformer network BID17 for all hypothesis planes, which does not require any learnable parameters. In TAB4, we find that concatenating features improves performance over the absolute difference of the features. Given the 4D volume, our DPSNet learns to generate a cost volume of size W × H × L by using a series of 3D convolutions on the concatenated features. All of the convolutional layers consist of 3 × 3 × 3 filters and residual blocks. In the training step, we only use one paired image (while the other is the reference image) to obtain the cost volume. In the testing step, we can use any number of paired images (N ≥ 1) by averaging all of the cost volumes. The key idea of cost aggregation BID26 is to regularize the noisy cost volume through edge-preserving filtering BID11. Inspired by local cost-volume filtering, we introduce a context-aware cost aggregation method in our end-to-end learning process. The context network takes each slice of the cost volume and the reference image features extracted in the previous step, and then outputs the refined cost slice. We run the same process for all the cost slices. The final cost volume is then obtained by adding the initial and residual volumes, as shown in FIG2. Here, we use dilated convolutions in the context network for cost aggregation to better exploit contextual information BID37. The context network consists of seven convolutional layers with 3 × 3 filters, where each layer has a different receptive field (with dilations 1, 2, 4, 8, 16, 1, and 1). We jointly learn all the parameters, including those of the context network. All cost slices are processed with shared weights of the context network.
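A minimal sketch of the differentiable plane-sweep warping of Eqs. (1)-(2) is given below; it is a simplification with a per-plane Python loop and assumed helper names, not the authors' implementation:

    import torch
    import torch.nn.functional as F

    def plane_sweep_warp(feat, K, R, t, d_min=0.5, L=64):
        # feat: (B, C, H, W) features of a paired view; K: (3, 3) intrinsics;
        # R: (3, 3) rotation and t: (3, 1) translation of the paired camera
        # relative to the reference. Returns (B, L, C, H, W) features warped
        # through L planes sampled uniformly in inverse depth: d_l = L*d_min/l.
        B, C, H, W = feat.shape
        ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                                torch.arange(W, dtype=torch.float32), indexing='ij')
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)
        rays = torch.linalg.inv(K) @ pix                 # back-projected rays (3, HW)
        warped = []
        for l in range(1, L + 1):
            d = L * d_min / l
            cam = R @ (rays * d) + t                     # 3D points on plane l
            uv = K @ cam
            uv = uv[:2] / uv[2:].clamp(min=1e-6)         # perspective division
            gx = 2 * uv[0] / (W - 1) - 1                 # normalize for grid_sample
            gy = 2 * uv[1] / (H - 1) - 1
            grid = torch.stack([gx, gy], dim=-1).reshape(1, H, W, 2).expand(B, -1, -1, -1)
            warped.append(F.grid_sample(feat, grid, align_corners=True))
        return torch.stack(warped, dim=1)

The cost volume is then formed by concatenating these warped features with the (repeated) reference features along the channel dimension.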
Then, we upsample the cost volume, whose size is equal to the feature size, to the original size of the images via bilinear interpolation. We find that this leads to a moderate performance improvement, as shown in TAB4. We regress continuous depth values using the method proposed in BID19. The probability of each label l is calculated from the predicted cost c_l via the softmax operation σ(·). The predicted label l̂ is computed as the sum of each label l weighted by its probability, and with this predicted label the depth is calculated from the number of labels L and the minimum scene depth d_min as follows: l̂ = Σ_l l · σ(c_l), d̂ = (L × d_min) / l̂. (3) We set L and d_min to 64 and 0.5, respectively. Let θ be the set of all the learnable parameters in our network, which includes feature extraction, cost volume generation and cost aggregation (plane sweep and depth regression have no learnable parameters). Let d̃ and d̂ denote the predicted depths from the initial and refined cost volumes, respectively, and let d_gt be the corresponding supervision signal. The training loss is then formulated as L(θ) = λ |d̃ − d_gt|_H + |d̂ − d_gt|_H, (4) where |·|_H denotes the Huber norm, referred to as SmoothL1Loss in PyTorch. The weight value λ for the depth from the initial cost volume is set to 0.7. In the training procedure, we use image sequences, ground-truth depth maps for reference images, and the provided camera poses from public datasets, namely SUN3D, RGBD, and Scenes11. We train our model from scratch for 1200K iterations in total. All models were trained end-to-end with the ADAM optimizer (β1 = 0.9, β2 = 0.999). We use a batch size of 16 and set the learning rate to 2e−4 for all iterations. The training is performed with a customized version of PyTorch on four NVIDIA 1080Ti GPUs, which usually takes four days. A forward pass of the proposed network takes about 0.5 seconds for 2-view matching and an additional 0.25 seconds for every new frame matched (640 × 480 image resolution). In our evaluations, we use common quantitative measures of depth quality: absolute relative error (Abs Rel), absolute relative inverse error (Abs R-Inv), absolute difference error (Abs diff), square relative error (Sq Rel), root mean square error and its log scale (RMSE and RMSE log) and inlier ratios (δ < 1.25^i where i ∈ {1, 2, 3}). All are standard metrics used in a public benchmark suite. For our comparisons, we choose state-of-the-art methods for traditional geometry-based multiview stereo (COLMAP), depth from unstructured two-view stereo (DeMoN) BID31 and CNN-based multiview stereo (DeepMVS) BID14. We estimate the depth maps from two unstructured views using the test sets in MVS, SUN3D, RGBD and Scenes11, as done for DeMoN. The results are reported in Table 1. Our DPSNet provides the best performance on nearly all of the measures. Of particular note, DPSNet accurately recovers scene depth in homogeneous regions as well as along object boundaries, as exhibited in FIG3. DeMoN generally produces good depth estimates but often fails to reconstruct scene details such as the keyboard (third row) and fine structures (first, second and fourth rows). By contrast, DPSNet estimates accurate depth maps in those regions because the differentiable feature warping penalizes inaccurate reconstructions, playing a role similar to the left-right consistency check that has been used in stereo matching BID7. The first and third rows of FIG3 exhibit problems of COLMAP and DeepMVS in handling textureless regions. DPSNet instead produces accurate results, courtesy of the cost aggregation network.
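The depth regression of Eq. (3) and the loss of Eq. (4) can be sketched as follows; this is a minimal illustration taking the softmax over the cost as written above (implementations often negate the cost so that low cost means high probability):

    import torch
    import torch.nn.functional as F

    def regress_depth(cost, d_min=0.5):
        # cost: (B, L, H, W) cost volume. Soft label: l_hat = sum_l l * sigma(c)_l,
        # then depth d_hat = L * d_min / l_hat (labels uniform in inverse depth).
        B, L, H, W = cost.shape
        prob = torch.softmax(cost, dim=1)
        labels = torch.arange(1, L + 1, dtype=cost.dtype, device=cost.device)
        l_hat = (prob * labels.view(1, L, 1, 1)).sum(dim=1)
        return L * d_min / l_hat.clamp(min=1e-6)

    def dps_loss(d_init, d_refined, d_gt, lam=0.7):
        # Huber ("smooth L1") loss on depths from the initial and refined volumes;
        # lam weights the initial-volume term, as in Eq. (4).
        return lam * F.smooth_l1_loss(d_init, d_gt) + F.smooth_l1_loss(d_refined, d_gt)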
For a more balanced comparison, we adopt measures used in BID14 as additional evaluation criteria: (i) completeness, the percentage of pixels whose errors are below a certain threshold; (ii) geometry error, the L1 distance between the estimated disparity and the ground truth; and (iii) photometry error, the L1 distance between the reference image and the image warped using the estimated disparity map. The results for COLMAP, DeMoN and DeepMVS are directly reported from BID14 in TAB1. In this experiment, we use the ETH3D dataset, on which none of the methods are trained. Following BID35, we take 5 images with 1152 × 864 resolution and set 192 depth labels based on ground-truth depth to obtain optimal results for MVSNet. For the DPSNet results, we use 4 views with 810 × 540 resolution and set 64 labels whose range is determined by the minimum depth values of the ground truth. In TAB1, our DPSNet shows the best performance overall among all the comparison methods, except for filtered COLMAP. Although filtered COLMAP achieves the best performance, its completeness is only 71%, and its unfiltered version shows a significant performance drop in all error metrics. On the other hand, our DPSNet with 100% completeness shows promising results on all measures. We note that our DPSNet has a different purpose compared to COLMAP and MVSNet. COLMAP and MVSNet are designed for full 3D reconstruction with an effective outlier rejection process, while DPSNet aims to estimate a dense depth map for a reference view. An extensive ablation study was conducted to examine the effects of different components on DPSNet performance. We summarize the results in TAB4. Cost Volume Generation. In TAB4 (a) and (e), we compare the use of cost volumes generated using the traditional absolute difference BID2 and using the concatenation of features from the reference image and warped image. The absolute difference is widely used for depth label selection via a winner-take-all strategy. However, we observe that feature concatenation provides better performance in our network than the absolute difference. A possible reason is that the CNN may learn to extract 3D scene information from the tensor of stacked features. The tensor is fed into the CNN to produce an effective feature for depth estimation, which is then passed through our cost aggregation network for the initial depth refinement. Cost Aggregation. For our cost aggregation sub-network, we compare DPSNet with and without it in TAB4 (e) and (b), respectively. It is shown that including the proposed cost aggregation leads to significant performance improvements. Examples of depth map refinement with the cost aggregation are displayed in FIG5. Our cost aggregation is also compared to using a stacked hourglass to aggregate feature information along the depth dimension as well as the spatial dimensions, as done recently for stereo matching BID0. Although the stacked hourglass also aggregates contextual information, its performance falls short of our cost aggregation (TAB4). For further analysis of cost aggregation, we display slices of 3D cost volumes after the softmax operation (in Eq. (3)) that span depth labels and the rows of the images. The cost slices in Figure 6 (c), (d) show that our feature-guided cost aggregation regularizes noisy cost slices while preserving edges well. The cleaner cost profiles that ensue from the cost aggregation lead to clearer and edge-preserving depth regression results.
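For reference, the completeness, geometry-error, and photometry-error criteria used above can be computed roughly as follows (a numpy sketch with assumed names; masking and units follow the conventions of the benchmark in question):

    import numpy as np

    def eval_disparity(disp_est, disp_gt, ref_img, warped_img, thresh=1.0):
        # Geometry error: L1 distance between estimated and ground-truth disparity.
        # Photometry error: L1 distance between reference and warped image.
        # Completeness: fraction of pixels whose geometry error is below thresh.
        valid = np.isfinite(disp_est) & np.isfinite(disp_gt)
        geo = np.abs(disp_est[valid] - disp_gt[valid])
        photo = np.abs(ref_img.astype(np.float64) - warped_img.astype(np.float64))
        return {'geometry_l1': geo.mean(),
                'photometry_l1': photo.mean(),
                'completeness': (geo < thresh).mean()}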
As mentioned in a recent study BID13, a cost profile that gives confident estimates should have a single, distinct minimum (or maximum), while an ambiguous profile has multiple local minima or multiple adjacent labels with similar costs, making it hard to exactly localize the global minimum. Based on two quantitative confidence measures BID13 on cost volumes in TAB5, the proposed aggregation improves the reliability of the correct match corresponding to the minimum cost. Depth Label Sampling. In the plane sweep procedure, depth labels can be sampled in either the depth domain or the inverse-depth domain, which provides denser sampling in areas closer to the camera. TAB4 (d) and (e) show that uniform depth label sampling in the inverse-depth domain produces more accurate depth maps in general. We examine the performance of DPSNet with respect to the number of input images. As displayed in FIG7, a greater number of images yields better results, since cost volume noise is reduced through averaging over more images, and more viewpoints help to provide features from areas unseen in other views. FIG7 shows that adding input views aids in distinguishing object boundaries. Note that the performance improvement plateaus when seven or more images are used. Rectified Stereo Pair. CNN-based stereo matching methods have similarity to DPSNet, but differ from it in that correspondences are obtained by shifting learned features BID24; BID19; BID30. The purpose of this study is to show readers that not only descriptor shift but also plane sweeping can be applied to rectified stereo matching. We apply DPSNet on the KITTI dataset, which provides rectified stereo pairs with a specific baseline. As shown in FIG8, although DPSNet is not designed to work on rectified stereo images, it produces reasonable results. In particular, DPSNet fine-tuned on the KITTI dataset in TAB7 achieves performance similar to BID24 in terms of D1-all score, with 4.34% for all pixels and 4.05% for non-occluded pixels in the KITTI benchmark. We expect that the depth accuracy would improve if we were to adopt rectified stereo pair-specific strategies, such as the feature consistency check in BID1. We developed a multiview stereo network whose design is inspired by best practices of traditional non-learning-based techniques. The plane sweep algorithm is formulated as an end-to-end network via a differentiable construction of plane-sweep cost volumes and by solving for depth as a multi-label classification problem. Moreover, we propose a context-aware cost aggregation method that leads to improved depth regression without any post-processing. With this incorporation of traditional multiview stereo schemes into a deep learning framework, state-of-the-art reconstruction results are achieved on a variety of datasets. Directions exist for improving DPSNet. One is to integrate semantic instance segmentation into the cost aggregation, similar to the segment-based cost aggregation method of BID25. Another direction is to improve depth prediction by employing viewpoint selection in constructing cost volumes BID6, rather than by simply averaging the estimated cost volumes as currently done in DPSNet. Lastly, the proposed network requires pre-calibrated intrinsic and extrinsic parameters for reconstruction. Lifting this restriction by additionally estimating camera poses in an end-to-end learning framework is an important future challenge.
A convolutional neural network for multi-view stereo matching whose design is inspired by best practices of traditional geometry-based approaches
1,116
scitldr
The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact. Doing so presents a challenging black-box optimization problem characterized by the large-batch, low-round setting due to the need for labor-intensive wet-lab evaluations. In response, we propose using reinforcement learning (RL) based on proximal-policy optimization (PPO) for biological sequence design. RL provides a flexible framework for optimizing generative sequence models to achieve specific criteria, such as diversity among the high-quality sequences discovered. We propose a model-based variant of PPO, DyNA-PPO, to improve sample efficiency, where the policy for a new round is trained offline using a simulator fit on functional measurements from prior rounds. To accommodate the growing number of observations across rounds, the simulator model is automatically selected at each round from a pool of diverse models of varying capacity. On the tasks of designing DNA transcription factor binding sites, designing antimicrobial proteins, and optimizing the energy of Ising models based on protein structure, we find that DyNA-PPO performs significantly better than existing methods in settings in which modeling is feasible, while still not performing worse in situations in which a reliable model cannot be learned. Driven by real-world obstacles in health and disease requiring new drugs, treatments, and assays, the goal of biological sequence design is to identify new discrete sequences x which optimize some oracle, typically an experimentally-measured functional property f(x). This is a difficult black-box optimization problem over a combinatorially large search space in which function evaluation relies on slow and expensive wet-lab experiments. The setting induces unusual constraints in black-box optimization and reinforcement learning: large synchronous batches with few rounds total. The current gold standard for biomolecular design is directed evolution, which was recently recognized with a Nobel prize and is a form of randomized local search. Despite its impact, directed evolution is sample-inefficient and relies on greedy hillclimbing toward optimal sequences. Recent work has demonstrated that machine-learning-guided optimization (Section 3) can find better sequences faster. Reinforcement learning (RL) provides a flexible framework for black-box optimization that can harness modern deep generative sequence models. This paper proposes a simple method for improving the sample efficiency of policy gradient methods such as PPO for black-box optimization by using surrogate models that are trained online to approximate f(x). Our method updates the policy's parameters using sequences x generated by the current policy π_θ(x), but evaluated using a learned surrogate f_w(x) instead of the true, but unknown, oracle reward function f(x). We learn the parameters of the reward model, w, simultaneously with the parameters of the policy. This is similar to other model-based RL methods, but simpler, since in the context of sequence optimization the state-transition model is deterministic and known. Initially the learned reward model f_w(x) is unreliable, so we rely entirely on f(x) to assess sequences and update the policy. This allows a graceful fallback to PPO when the model is not effective. Over time, the reward model becomes more reliable and can be used as a cheap surrogate, similar to Bayesian optimization methods.
We show empirically that cross-validation is an effective heuristic for assessing the model quality, which is simpler than the inference required by Bayesian optimization. We rigorously evaluate our method on three in-silico sequence design tasks that draw on experimental data to construct functions f(x) characteristic of real-world design problems: optimizing the binding affinity of DNA sequences of length 8 (search space size 4^8); optimizing anti-microbial peptide sequences (search space size 20^50); and optimizing sequences where f(x) is defined by the energy of an Ising model for protein structure (search space size 20^50). These tasks do not rely on wet-lab experiments, and thus allow for large-scale benchmarking across a range of methods. We show that our DyNA-PPO method achieves higher cumulative reward for a given budget (measured in terms of the number of calls to f(x)) than existing methods, such as standard PPO, various forms of the cross-entropy method, Bayesian optimization, and evolutionary search. In summary, our contributions are as follows: • We provide a model-based RL algorithm, DyNA-PPO, and demonstrate its effectiveness in performing sample-efficient batched black-box function optimization. • We address model bias by quantifying the reliability of models and automatically selecting models of appropriate complexity via cross-validation. • We propose a visitation-based exploration bonus and show that it is more effective than entropy regularization in identifying multiple local optima. • We present a new optimization task for benchmarking methods for biological sequence design based on protein energy Ising models. Let f(x) be the black-box function to be optimized. We assume that experiments are performed in N rounds and that B sequences can be measured per round. Let D_n = {(x, f(x))} be the data acquired in round n, with |D_n| = B. For simplicity, we assume that the sequence length T is constant, but our approach, based on generating sequences autoregressively, easily generalizes to variable-length sequences. We formulate the design of a single sequence x as a Markov decision process M = (S, A, p, r) with state space S, action space A, transition function p, and reward function r. The state space S = ∪_{t=1..T} V^t is the set of all possible sequence prefixes, and A corresponds to the vocabulary V. A sequence is generated left to right. At time step t, the state s_t = a_0, ..., a_{t−1} corresponds to the t last tokens, and the action a_t ∈ A to the next token. The transition function p(s_{t+1} | s_t, a_t) is deterministic and corresponds to appending a_t to s_t. The reward r(s_t, a_t) is zero except at the last step T, where it corresponds to the functional measurement f(x) of the completed sequence. For generating variable-length sequences, we extend the vocabulary by a special end-of-sequence token and terminate sequence generation when this token is selected. We train a policy π_θ(a_t | s_t) to optimize the expected sum of rewards, which here equals J(θ) = E_{x∼π_θ}[f(x)]. (1) Algorithm 1 (DyNA-PPO):
1: Input: Number of experiment rounds N
2: Input: Number of model-based training rounds M
3: Input: Set of candidate models S = {f′}
4: Input: Minimum model score τ for model-based training
5: Input: Policy π_θ with initial parameters θ
6: for n = 1, 2, ..., N do
7:   Collect samples D_n = {x, f(x)} using policy π_θ
8:   Train policy π_θ on D_n
9:   Fit candidate models f′ ∈ S on ∪_{i=1..n} D_i and compute their scores by cross-validation
10:  Select the subset of models S′ ⊆ S with an R² score ≥ τ
11:  if S′ is not empty then
12:    for m = 1, 2, ..., M do
13:      Sample a batch of sequences x from π_θ and observe the reward f′(x)
14:      Update π_θ on {x, f′(x)}
15:    end for
16:  end if
17: end for
We use proximal policy optimization (PPO) with a KL trust-region constraint, which we have found to be more stable and sample-efficient than REINFORCE. We have also considered off-policy deep Q-learning and categorical distributional deep Q-learning, which are in principle more sample-efficient than on-policy learning using PPO since they can reuse samples multiple times. However, they performed worse than PPO in our experiments (Appendix C). We implement algorithms using the TF-Agents RL library. We employ autoregressive models with one fully-connected layer as policy and value networks since they are faster to train and outperformed recurrent networks in our experiments. At time step t, the network takes as input the W last characters a_{t−W}, ..., a_{t−1}, which are one-hot encoded, where the context window size W is a hyper-parameter. To provide the network with information about the current position of the context window, it also receives the time step t, which is embedded using a sinusoidal positional encoding and concatenated with the one-hot characters. The policy network outputs a distribution π_θ(a_t | s_t) over the next token a_t. The value network V(s_t), which approximates the expected future reward for being in state s_t, is used as a baseline to reduce the variance of stochastic estimates of equation 1.
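Putting Algorithm 1 together, a compact Python sketch of the outer loop looks as follows. The interfaces (policy.sample/policy.update, the measure oracle, and select_model) are hypothetical stand-ins for the TF-Agents policy, the wet-lab measurement, and the model selection described in the next section:

    def dyna_ppo(policy, measure, select_model, n_rounds=10, batch=100,
                 m_inner=8, tau=0.5):
        data = []                                   # all (sequence, f(x)) pairs
        for _ in range(n_rounds):
            seqs = policy.sample(batch)             # query the true oracle
            rewards = [measure(s) for s in seqs]
            data += list(zip(seqs, rewards))
            policy.update(seqs, rewards)            # model-free PPO update
            surrogate = select_model(data, tau)     # None if best R^2 < tau
            if surrogate is not None:
                for _ in range(m_inner):            # model-based inner rounds
                    sim_seqs = policy.sample(batch)
                    policy.update(sim_seqs, [surrogate(s) for s in sim_seqs])
        return data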
Model-based RL learns a model of the environment that is used as a simulator to provide additional pseudo-observations. While model-free RL has been successful in domains where interaction with the environment is cheap, such as those where the environment is defined by a software program, its high sample complexity may be unrealistic for biological sequence design. In model-based RL, the MDP M = (S, A, p, r) is approximated by a model M′ = (S, A, p, r′) with the same state space S and action space A as M (Sutton & Barto, Ch. 8). Since the transition function p is deterministic in our case, only the reward function r(s_t, a_t) needs to be approximated, by r′(s_t, a_t). Since r(s_T, a_T) is non-zero only at the last step T, where it corresponds to f(x) for the completed sequence x, the problem reduces to approximating f(x). This can be done by supervised regression, fitting a regressor f′(x) on the data ∪_{r′≤r} D_{r′} from all previous rounds. We then use the resulting model to collect additional observations (x, f′(x)) and update the policy in a simulation phase, instead of only using observations (x, f(x)) from the true environment, which are expensive to collect. We call our method DyNA-PPO since it is similar to the DYNA architecture and since it can be used for DNA sequence design. Model-based RL provides the promise of improved sample efficiency when the model is accurate, but it can reduce performance if insufficient data are available for training a trustworthy model. In this case, the policy is prone to exploit regions where the model is inaccurate. To reap the benefit of model-based RL when the model is accurate and avoid reduced performance when it is not, we (i) automatically select the model from a set of candidate models of varying complexity, (ii) only use the selected model if it is accurate, and (iii) stop model-based training as soon as the model uncertainty increases by a certain threshold. After each round of experiments, we fit each candidate model on all available data to estimate f(x) via supervised regression. We quantify model accuracy by the R² score, which we estimate by five-fold cross-validation. If the R² score of every candidate model is below a pre-specified threshold τ, we do not perform model-based training in that round. Otherwise, we build an ensemble model that includes all models with a score greater than or equal to τ, and use the average prediction as the reward for training the policy. We considered τ as a tunable hyper-parameter, where we found τ = 0.5 to be optimal for all problems (see figure 14). By ignoring the model if it is inaccurate, we aim to prevent the policy from exploiting deficiencies of the model. We perform up to M model-based optimization rounds (see Algorithm 1) and stop as soon as the model uncertainty has increased by a certain factor relative to the model uncertainty at the first round (m = 1). This is motivated by the observation that the model uncertainty is strongly correlated with the unknown model error, and it prevents training the policy with inaccurate model predictions (see figures 12, 13) as soon as the policy starts to explore regions on which the model was not trained. As models, we consider nearest-neighbor regression, Bayesian ridge regression, random forests, gradient boosting trees, Gaussian processes, and ensembles of deep neural networks. Within each model family, we additionally use cross-validation for tuning hyper-parameters, such as the number of trees, tree depth, kernels and kernel parameters, or the number of hidden layers and units (see Appendix A.7 for details). By testing and optimizing the hyper-parameters of different models automatically, the model capacity can dynamically increase as data become available. In Bayesian optimization, non-parametric models such as Gaussian processes are popular regressors, and they also automatically grow model capacity as more data arrive. However, with Bayesian optimization there is no opportunity to ignore the regressor entirely if it is unreliable. Furthermore, Bayesian optimization relies on the ability to do (approximate) Bayesian inference, which in practice is sensitive to the choice of approximation and to hyper-parameter choices. Here, we have found it simpler to do cross-validation-based model selection. Overall, our method combines the positive attributes of both generative and discriminative approaches to sequence design. Our experiments do not compare to prior work on model-based RL, since these methods primarily focus on estimating a dynamics model for state transitions.
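A minimal scikit-learn sketch of this selection step (using a small subset of the candidate models; hyper-parameter search omitted for brevity):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import BayesianRidge
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsRegressor

    def select_model(X, y, tau=0.5):
        # Keep candidates whose five-fold cross-validated R^2 is at least tau
        # and average their predictions; return None to fall back to PPO.
        candidates = [KNeighborsRegressor(), BayesianRidge(),
                      RandomForestRegressor()]
        kept = [m for m in candidates
                if cross_val_score(m, X, y, cv=5, scoring='r2').mean() >= tau]
        if not kept:
            return None                             # model-free training only
        for m in kept:
            m.fit(X, y)
        return lambda x: np.mean([m.predict(x) for m in kept], axis=0)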
Learning policies to generate diverse sequences is important for several reasons. In many applications, f(x) is an in-vitro (an experiment taking place outside a living organism) surrogate for an in-vivo (a reaction occurring inside a living organism) functional measurement that is even more expensive to evaluate than f(x). The in-vivo measurement may depend on properties that are correlated with f(x) and others that are not captured at all in-vitro, such as off-target effects or toxicity. Therefore, it is desirable for the optimization procedure to discover a diverse set of candidate optima, to improve the chance that a sequence satisfying the ultimate in-vivo criteria is found within this set. Here, diversity is a downstream metric, for which training the policy π_θ(x) to maximize equation 1 will not necessarily yield good performance. For example, a high-quality policy can learn to always generate the same sequence x with a high value of f(x), and will therefore result in zero diversity. An additional reason that diversity matters is that it yields a good exploration strategy, even for scenarios where optimizing equation 1 is sufficient. Finally, the use of strategies that reward high-diversity policies can reduce the policies' tendency to generate exact duplicates. To increase sequence diversity, we employ a simple exploration reward bonus based on the density of proposed sequences, similar to existing exploration techniques based on state visitation frequency. Specifically, we define the final reward r_T = f(x) − λ · dens_ε(x), where dens_ε(x) is the weighted number of sequences that have been proposed in previous rounds within a distance of less than ε from x, with weights decaying linearly with the distance. This reward penalizes proposing similar sequences multiple times, where the strength of the penalty is controlled by λ. As a result, the policy learns not to generate related sequences and hence explores the search space more effectively. We used the edit distance as the distance metric and tuned the distance radius ε, where setting ε > 0 improved exploration on high-dimensional problems (see figure 11). We also considered an alternative penalty based on the nearest-neighbor distance of the proposed sequence to past sequences, which we found to be less effective (see figure 9). A minimal sketch of the density penalty is given below.
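The sketch illustrates the density penalty under one possible reading of the linear decay (the weight 1 − d/ε is an assumption, as is the plain dynamic-programming edit distance):

    def edit_distance(a, b):
        # Standard dynamic-programming Levenshtein distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def density_penalty(x, history, eps=2, lam=0.1):
        # Weighted count of previously proposed sequences within edit distance
        # eps of x, with weights decaying linearly in the distance. The final
        # reward is then r_T = f(x) - density_penalty(x, history).
        dens = 0.0
        for h in history:
            d = edit_distance(x, h)
            if d < eps:
                dens += 1.0 - d / eps
        return lam * dens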
Recently, machine learning approaches have been shown to be effective in optimizing real-world DNA and protein sequences. Existing methods for biological sequence design fall into three broad categories: evolutionary search, optimization using discriminative models (e.g. Bayesian optimization), and optimization using generative models (e.g. the cross-entropy method). Evolutionary approaches perform direct local search in the space of sequences. They include the aforementioned directed evolution and derivatives with application-specific mutation and recombination steps. Evolutionary approaches are appealing since they are simple and can easily incorporate human intuition into the design process, but they generally suffer from low sample efficiency. Optimization methods based on discriminative models alternate between two steps: (i) using the data that have been collected so far to fit a regressor f_w(x) to approximate f(x), and (ii) using f_w(x) to define an acquisition function that is optimized to select the next batch of sequences. Recently, such an approach was used to optimize the binding affinity of IgG antibodies, where a neural network ensemble was used for f_w(x). In general, optimizing the acquisition function is a non-trivial combinatorial optimization problem. Some methods employ activation maximization, where gradient-based optimization is performed on a continuous relaxation of the discrete search space. However, this requires f_w(x) to be differentiable, and optimization of a continuous relaxation is vulnerable to leaving the data manifold (cf. deep dream). Bayesian optimization defines an acquisition function, such as expected improvement, based on the uncertainty of f_w(x), which enables balancing exploration and exploitation. Gaussian processes (GPs) are commonly used for Bayesian black-box optimization since they provide calibrated uncertainty estimates. Unfortunately, GPs are hard to scale to large, high-dimensional datasets and are sensitive to the choice of hyper-parameters. In response, recent work has performed continuous black-box optimization in the latent space of a deep generative model (Gómez-Bombarelli et al.). However, this approach requires a pre-trained model such as a variational autoencoder to obtain the latent embeddings. Our model-based reinforcement learning approach is similar to these approaches in that we train a reinforcement learning policy to optimize a model f_w(x). However, our policy is also trained directly on observations of f(x) and is able to resort to model-free training by automatically identifying if the model f_w(x) is too inaccurate to be used as a simulation of f(x). Prior work has investigated conditions in which an estimate of model generalization (e.g., validation accuracy) can justify model usage in such model-based policy optimization settings; other work proposes using a cascade of classifiers, one per round, to guide the sampling of progressively better candidates. Optimization methods based on generative models seek to learn a distribution p_θ(x), parameterized by θ, that maximizes the expected value of f(x): E_{x∼p_θ(x)}[f(x)]. We note that this is the same form as variational optimization objectives, which allow the use of parameter-space evolutionary strategies. Variants of the cross-entropy method optimize θ by alternating two steps: (i) sampling x ∼ p_θ(x) and evaluating f(x), and (ii) updating θ to maximize this expectation. Methods differ in how step (ii) is performed. For example, hillclimb-MLE performs maximum-likelihood training on the top k sequences from step (i). Similarly, Feedback GAN (FBGAN) uses samples whose target function value f(x) exceeds a fixed threshold for training a generative adversarial network. Design by Adaptive Sampling (DbAs) performs weighted MLE of variational autoencoders, where a sample's weight corresponds to the probability that f(x) is greater than a quantile cutoff under a noise model. In Brookes et al. (2019b), p_θ(x) is further restricted to stay close to a prior distribution over sequences. An alternative approach for optimizing the above expectation is RL. While RL has been used for generating natural text, small molecules, and RNA sequences that fold into a particular structure, we are not aware of applications of RL to optimizing DNA and protein sequences. DyNA-PPO is related to existing work on model-based RL for sample-efficient control, with the key difference that the state-transition function is known and the reward function is unknown in our work, whereas most existing model-based RL approaches seek to model the state-transition function and consider the reward function as known. Prior work on sequence generation incorporates non-differentiable rewards, like BLEU in machine translation, via weighted maximum likelihood (MLE); one line of work introduces reward-augmented MLE, while another fine-tunes an MLE-pretrained model using actor-critic methods. Reinforcement learning has also been applied to solving combinatorial optimization problems; in this setting, sample complexity is less important because evaluating f(x) only involves a fast software program.
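As an illustration of this generative-model template, here is a minimal cross-entropy-method sketch with a factorized categorical p_θ (one distribution per position), refit on the top-k samples as in hillclimb-MLE; the methods cited above replace this simple model with VAEs or GANs:

    import numpy as np

    def cem_categorical(f, length=8, vocab=4, iters=20, batch=100, top_k=10):
        # Sample from p_theta, evaluate f, refit p_theta on the elite samples.
        rng = np.random.default_rng(0)
        probs = np.full((length, vocab), 1.0 / vocab)
        for _ in range(iters):
            samples = np.stack([rng.choice(vocab, size=batch, p=probs[i])
                                for i in range(length)], axis=1)  # (batch, length)
            scores = np.array([f(s) for s in samples])
            elite = samples[np.argsort(scores)[-top_k:]]          # top-k sequences
            for i in range(length):                               # MLE refit
                counts = np.bincount(elite[:, i], minlength=vocab) + 1e-3
                probs[i] = counts / counts.sum()
        return probs

For example, cem_categorical(lambda s: -np.abs(s - 2).sum()) drives the per-position distributions toward the token 2.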
Recent work has proposed generative models of protein structures, or generative models of amino acids conditioned on protein structure. Such methods are outside the scope of this paper's experiments, since they could only be used in experimental settings where protein structures, which are expensive to measure, are available. Finally, DNA and protein design differs from small-molecule design (Griffiths & Hernández-Lobato; Gómez-Bombarelli et al.) in the following points: (i) the number of sequences measured in parallel in the lab is typically higher (hundreds or thousands vs. dozens) due to the maturity of DNA synthesis and sequencing technology; (ii) the search space is a set of sequences instead of molecular graphs, which require specialized network architectures for both discriminative and generative models; and (iii) molecules must be optimized subject to the constraint that there is a set of reactions to synthesize them, whereas practically all DNA or protein sequences are synthesizable. In the next three sections, we compare DyNA-PPO to existing methods on three in-silico optimization problems that we designed in collaboration with life scientists to faithfully simulate the behavior of real wet-lab experiments, which would be cost-prohibitive for a comprehensive methodological evaluation. Along the way, we present ablation experiments to help better understand the behavior of DyNA-PPO. We compare the performance of model-free policy optimization (PPO) and model-based optimization (DyNA-PPO) with the following methods, which we discussed in Section 3. Further details for each method can be found in Appendix A:
• RegEvolution: Local search based on regularized evolution, which has performed well on other black-box optimization tasks and can be seen as an instance of directed evolution.
• DbAs: Cross-entropy optimization using variational autoencoders.
• FBGAN: Cross-entropy optimization using generative adversarial networks.
• Bayesopt GP: Bayesian optimization using a Gaussian process regressor and activation maximization as the acquisition function solver.
• Bayesopt ENN: Bayesian optimization using an ensemble of neural network regressors and activation maximization as the acquisition function solver.
• Random: Guessing sequences uniformly at random.
We quantify optimization performance by the cumulative maximum of f(x) for sequences proposed up to a given round. We quantify sequence diversity (Section 2.4) in terms of the mean pairwise Hamming distance between the sequences proposed at each round. For problems with known optima, we also report the fraction of global optima found. We replicate experiments with 50 random seeds. Figure 2: Left: performance depending on the number of inner policy optimization rounds using the surrogate model; using 0 rounds corresponds to PPO training. Since the surrogate model is sufficiently accurate, it is useful to perform many rounds of updating the policy using it before querying f(x) again. Right: the R² of the surrogate model; since it is always above the threshold for model-based training (0.5; dashed line), it is always used for training. We first consider synthetic black-box optimization problems based on the 3D structure of naturally-occurring proteins. Ising models fit on sets of evolutionarily related protein sequences have been shown to be accurate predictors of proteins' 3D structure (Sułkowska et al., 2012). We consider the inverse problem: given a protein, we seek to find the amino acid sequence that minimizes the energy of the Ising model parameterized by its structure.
Optimizers are given a budget of 10 rounds with batch size 1000, and we consider sequences of length 50 (search space size 20^50). The functional form of the energy function is given in Appendix B.1. On the left of Figure 1 we consider the optimization trajectory for a representative protein, and on the right we compare the best f(x) found by each method across a range of proteins. We find that DyNA-PPO considerably outperforms the other methods. We expect that this is because this synthetic reward landscape can be well described by a model fit using few examples, which also explains the good performance of Bayesian optimization. On the left of Figure 2 we vary the number of inner-loop optimization rounds of the policy using interaction with the model-based simulated environment, where using 0 rounds corresponds to performing standard PPO. We find that with DyNA-PPO it is worthwhile to spend considerable resources updating the policy using the simulator: the best system performs the most steps. Doing so is possible because the surrogate model is of high quality. On the right we plot the score of the regressor fit by automated model selection. Its high accuracy helps DyNA-PPO learn to generate high-quality sequences using very few evaluations of f(x). Table 1: Comparison of methods across transcription factor binding sites. Mean rank of methods across all 41 hold-out tasks of the transcription factor binding dataset. Ranks were computed within each task using the average of metrics across optimization rounds, and then averaged across tasks. The higher the rank the better; 7 is the maximum rank. DyNA-PPO outperforms the other methods on both optimization of f(x) and its ability to identify multiple well-separated local optima. Transcription factors are protein sequences that bind to DNA sequences and regulate their activity. In prior work, the binding affinity of numerous transcription factors was measured against all possible length-8 DNA sequences (|V| = 4). The resulting dataset defines 158 different discrete optimization tasks, where the goal of each task is to find a DNA sequence of length eight that maximizes the affinity towards one of the transcription factors. Figure: Right: performance of entropy regularization as a function of the regularization strength. The top row shows that PPO finds about 80% of local optima with a relatively mild density penalty of λ = 0.1, whereas only about 45% of local optima are found when using entropy regularization. The bottom row shows that varying the density penalty enables control of the sequence diversity, quantified by the mean pairwise Hamming distance between sequences. The dataset is well suited for in-silico benchmarking since (i) it is exhaustive and thereby does not require estimating missing f(x), and (ii) the distinct local optima of all tasks are known and can be used to quantify exploration (see Appendix B.2 for details). The optimization methods are given a budget of 10 rounds with a batch size of B = 100 sequences, and the search space size is 4^8. We use one task (CRX REF R1) for optimizing the hyper-parameters of all methods, and test performance on 41 heterogeneous hold-out tasks. Results are reported as a function of the total number of sequences measured so far. We find that DyNA-PPO and PPO outperform all other methods in terms of both the cumulative maximum f(x) found as well as the fraction of local optima discovered. We also find that the diversity of proposed sequences, quantified by the fraction of global optima found, is high compared to other generative approaches.
This shows that our method continues to explore the search space by proposing novel sequences instead of converging to a single sequence or a handful of sequences, a desired property as discussed in Section 2.4. Across all tasks, DyNA-PPO and PPO rank highest compared with the other methods (Table 1). In Figures 4 and 5 we analyze the effects of two key design decisions of DyNA-PPO: model-based training and the exploration bonus. We find that automated model selection automatically increases the complexity of the model, but that the models are not always accurate enough to be used for model-based training. This explains the relatively small improvement of DyNA-PPO over PPO. We also find that the exploration bonus outlined in Section 2.4 is more effective than entropy regularization in finding multiple local optima and promoting sequence diversity. Next, we seek to design antimicrobial peptides (AMPs). AMPs are relatively short (8-75 amino acids) protein sequences (|V| = 20 amino acids) that are promising candidates against multi-resistant pathogens due to their wide range of antimicrobial activities. We use a previously published dataset, which contains 6,760 unique AMP sequences and their antimicrobial activity towards multiple pathogens, and we follow the original authors' procedure for preprocessing the dataset and generating non-AMP sequences as negative training samples. Unlike for the transcription factor binding site dataset, we do not have wet-lab measurements for every sequence in the search space. Therefore, we fit random forest classifiers to predict whether a sequence is antimicrobial towards a certain pathogen in the dataset (see Section B.3), and use the predicted probability as the functional measurement f(x) to optimize. Given the high accuracy of the classifiers (cross-validated AUC 0.94 and 0.99), we expect that the reward landscape of f(x) is of realistic difficulty. We perform 8 rounds with a batch size of 250 and restrict the sequence length to at most 50 characters (search space size 20^50). Figure 6 compares methods on C. albicans. We find that model-based optimization using DyNA-PPO enables finding high-reward sequences in early rounds, though model-free PPO slightly surpasses the performance of DyNA-PPO later on. Both DyNA-PPO and PPO considerably outperform the other methods in terms of the maximum f(x) found. The density-based exploration bonus prevents PPO and DyNA-PPO from generating non-unique sequences (figure 11). Stopping model-based training as soon as the model uncertainty increases by a certain factor prevents DyNA-PPO from converging to a sub-optimal solution when performing many model-based optimization rounds (figures 12, 13). We have shown that RL is an attractive alternative to existing methods for designing DNA and protein sequences. We have proposed DyNA-PPO, a model-based extension of PPO with automatic model selection that improves sample efficiency, and incorporated a reward function that promotes exploration by penalizing identical sequences. By approximating an expensive wet-lab experiment with a surrogate model, we can perform many rounds of optimization in simulation. While this work has focused on showing the benefit of DyNA-PPO for biological sequence design, we believe that the large-batch, low-round optimization setting described here may well be of general interest, and that model-based RL may be applicable in other domains such as agriculture, education, and economics.
A IMPLEMENTATION DETAILS A.1 REGULARIZED EVOLUTION Regularized evolution is a variant of directed evolution that regularizes the search by keeping a fixed number of individuals alive as candidates for selection (analogous to death by aging). At each round, it generates a batch of child sequences by sampling two parent sequences per child from the population via tournament selection, i.e. selecting the fittest out of K randomly sampled individuals. It then performs crossover of the two parent sequences by copying the characters of one parent from left to right and randomly transitioning to transcribing from the other parent sequence with some crossover probability at each step. Child sequences are mutated by independently substituting characters with other characters with some substitution probability. For variable-length sequences, we also allowed insertion and deletion mutations. As hyper-parameters, we tune the tournament size and the substitution, insertion, and deletion probabilities. A.2 MCMC AND SIMULATED ANNEALING MCMC and simulated annealing resemble evolution with no crossover, and with selection only occurring between an individual and its parent. Beginning with a random population, each individual evolves as a single chain, with the neighborhood structure defined by the mutation operator described in Section A.1. We denote x and x′ as a parent and child sequence, respectively. A transition x → x′ is always accepted if the reward increases (f(x′) > f(x)). Otherwise, the transition is accepted with some acceptance probability. For MCMC, the acceptance probability is f(x′)/f(x), while for simulated annealing it is exp((f(x′) − f(x))/T) for some temperature T. A high temperature increases the likelihood of accepting a move that decreases the reward. The next mutation on the chain begins from x if the transition is rejected, and from x′ otherwise. We treated the temperature T as a tunable hyper-parameter in addition to the evolution hyper-parameters described in Section A.1. A.3 FBGAN We follow the suggested FBGAN methodology. Instead of using a constant threshold for selecting positive sequences as described in the original publication, we used a quantile cutoff, which does not depend on the absolute scale of f(x) and performed better in our experiments. As hyper-parameters, we tuned the quantile cutoff, learning rate, batch size, discriminator and generator training epochs, the gradient penalty weight, the Gumbel-softmax temperature, and the number of latent variables of the generator. A.4 DBAS We follow the suggested DbAs methodology. As hyper-parameters, we optimized the quantile for selecting training samples, learning rate, batch size, training epochs, number of hidden units of the MLP generator and discriminator, and number of latent variables. The generative model is a variational autoencoder with a multi-layer perceptron decoder. We also considered DbAs with an LSTM as the generative model, which performed slightly better than a VAE on the TfBind8 problem but worse on the PdbIsing and AMP problems (see figure 8). A.5 BAYESIAN OPTIMIZATION As regressors, we considered a Gaussian process (GP) with an RBF kernel on one-hot features, and an ensemble of ten neural networks with one fully connected layer and 128 hidden units. We used the regressor output to compute the expected improvement or posterior mean acquisition function, which we maximized by gradient ascent for a fixed number of steps. We took the resulting B unique sequences with the highest acquisition function value as the sequences to measure in the next round. We tuned the length scale and variance of the RBF kernel, and the learning rate, batch size, and number of training epochs of the neural network ensemble. We further tuned the number of gradient ascent steps for activation maximization.
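A rough sketch of the expected-improvement acquisition mentioned above, given a regressor's posterior mean and standard deviation (the xi margin is an illustrative assumption, not a stated hyper-parameter):

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, best_f, xi=0.01):
        # mu, sigma: posterior mean and std at candidate points; best_f: best
        # observed value so far (maximization convention).
        sigma = np.maximum(sigma, 1e-9)
        z = (mu - best_f - xi) / sigma
        return (mu - best_f - xi) * norm.cdf(z) + sigma * norm.pdf(z)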
We tuned the length scale and variance of the RBF kernel, and the learning rate, batch size, and number of training epochs of the neural network ensemble. We further tuned the number of gradient ascent steps for activation maximization.

A.6 PPO AND DYNA-PPO

We used the PPO implementation of the TF-Agents RL library. After each round, we trained the agent on the collected batch of sequences for a relatively high number of steps (about 72), since this resulted in a performance increase compared with performing only a single training step. We used the adaptive KL trust-region penalty, which performed slightly better than importance ratio clipping in our experiments. We used a policy and value network with one fully connected layer and 128 hidden units. Both networks take the W last generated characters and positions as input, which we padded at the beginning of the sequence. We set the context window W to the minimum of the total sequence length and 50. As hyper-parameters, we tuned the learning rate, number of training steps, adaptive KL target, and entropy regularization. For DyNA-PPO, we also tuned the maximum number of model-based optimization rounds M (see Section 2.3).

Automatic model selection optimizes the hyper-parameters of a set of candidate models by randomized search, and evaluates each hyper-parameter configuration by five-fold cross-validation using the R^2 score. To account for randomness in the R^2 score between models due to different cross-validation splits, we used the same split for evaluating each of the models per round. We considered the following candidate models (implemented in scikit-learn) and corresponding hyper-parameters:

• KNeighborsRegressor: n_neighbors
• BayesianRidge: alpha_1, alpha_2, lambda_1, lambda_2
• RandomForestRegressor: max_depth, max_features, n_estimators
• ExtraTreesRegressor: max_depth, max_features, n_estimators
• GradientBoostingRegressor: learning_rate, max_depth, n_estimators
• GaussianProcessRegressor: with RBF, RationalQuadratic, and Matern kernels

We also considered an ensemble of 10 neural networks with two convolutional layers and one fully connected layer, and optimized the learning rate and number of training epochs.

B.1 PROTEIN CONTACT ISING MODEL

Given a protein in the Protein Data Bank, we compute the energy E(x) for sequence x as E(x) = Σ_i φ_i(x_i) + Σ_{ij} C_ij φ(x_i, x_j), where x_i refers to the character in the i-th position of sequence x. C_ij is an indicator for whether the Cα atoms of the residues at positions i and j are separated by less than 6 Angstroms when the protein folds. φ(x_i, x_j) is a widely used 'pair potential' based on co-occurrence probabilities derived from the structures of real-world proteins. The same 20 x 20 table of pair potentials is used at all positions in the sequence, and thus the difference in energy functions across proteins is dictated only by their differing contact map structure. We set the local term φ_i(x_i) to zero. In future work, it would be interesting to consider non-zero local terms. Our experiments consider a set of qualitatively different proteins listed at the bottom-right of Figure 1. We identify the local optima using the same procedure as in Section B.2, except without accounting for reverse complements.

B.2 TRANSCRIPTION FACTOR BINDING

We used a previously described dataset and min-max normalized binding affinities to lie between zero and one.
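Before detailing the binding-affinity dataset further, the Ising energy described above can be made concrete with a minimal sketch, assuming a precomputed contact indicator matrix C and a 20 x 20 pair-potential table (both would come from the PDB structure and published potentials; the variable names and amino-acid ordering here are illustrative).

import numpy as np

def ising_energy(x, C, pair_potential, aa_order="ACDEFGHIKLMNPQRSTVWY"):
    # x: protein sequence (string of amino acids)
    # C: (L, L) 0/1 matrix, C[i, j] = 1 if residues i and j are in contact
    #    (C-alpha atoms closer than 6 Angstroms in the folded structure)
    # pair_potential: (20, 20) table of pair potentials phi
    idx = [aa_order.index(ch) for ch in x]
    E = 0.0
    for i in range(len(idx)):
        for j in range(len(idx)):
            if C[i, j]:
                E += pair_potential[idx[i], idx[j]]
    # Local terms phi_i(x_i) are set to zero, as in the text.
    return E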
To reduce computational costs, we only considered the first replicate (REF R1) of each wild-type transcription factor in the dataset, which resulted in 41 optimization targets that we used for comparing optimizers as described in Section 4.2. We extracted local optima for each binding target as follows. First, we separated sequences into forward and reverse sequences by ordering sequences lexicographically and including each sequence in the set of forward sequences unless the set already contained its reverse complement. We then chose the 100 forward sequences with the highest binding affinity and clustered them using the Hamming distance metric, where we determined the number of clusters by finding the number of PCA components required to explain 95% of the variance. We then used the sequence with the highest reward per cluster and its reverse complement as local optima.

B.3 ANTIMICROBIAL PEPTIDES

We downloaded the publicly provided dataset and followed the original paper for preprocessing sequences and generating non-AMP sequences as negative training samples. We additionally excluded sequences containing cysteine and sequences shorter than 15 or longer than 50 amino acids. We fit one classifier to predict whether a sequence is antimicrobial towards either E. coli, S. aureus, P. aeruginosa, or B. subtilis, which we used for hyper-parameter tuning, and a second classifier for C. albicans, which we used for hold-out evaluation. We used C. albicans as the hold-out target since its antimicrobial activity was least correlated with the activity of the other pathogens in the dataset with more than 1000 AMP sequences. We used random forest classifiers since they were more accurate (cross-validated AUC 0.99 and 0.94) than alternative models such as k-nearest neighbors, Gaussian processes, or neural networks. Since sequences are variable-length, we padded them to the maximum sequence length of 50 and extended the vocabulary by an additional end-of-sequence token. Tokens after the first end-of-sequence token were ignored when evaluating f(x).

DyNA-PPO is built on PPO, which we have found to outperform other policy-based and value-based RL methods in practice on our problems. In Figure 7 we contrast the performance of PPO, REINFORCE, deep Q-learning (DQN), and categorical distributional deep Q-learning (CatDQN) on all problems considered in Section 4. We find that PPO has better exploration properties than REINFORCE, which tends to converge too soon to a local optimum. The poor performance of DQN and CatDQN can be explained by the sparse reward (the reward is only non-zero at the terminal state), such that the Bellman error and training loss for updating the Q network are zero in most states. We also found the performance of DQN and CatDQN to be sensitive to the choice of the epsilon-greedy rate and Boltzmann temperature for trading off exploration and exploitation and increasing diversity.

Figure 8: Comparison of additional baselines (panels: TF Bind, Protein Ising, AMP).

We consider the performance of optimizers based on MCMC (Section A.2). Such methods are known to be effective optimizers when evaluating the black-box function is inexpensive, and thus many iterations of sampling can be performed. The focus of our experiments is on resource-constrained black-box optimization. We find that their low sample efficiency makes them undesirable for biological sequence design.
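Returning to the local-optima extraction in Section B.2 above, a minimal sketch of the clustering step (omitting the forward/reverse separation) is shown below, using scikit-learn (version 1.2 or later for the metric argument). The details are illustrative rather than the exact code used.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

def extract_local_optima(seqs_onehot, rewards, top_k=100):
    # Keep the top_k sequences by reward.
    order = np.argsort(rewards)[::-1][:top_k]
    top, top_rewards = seqs_onehot[order], rewards[order]
    # Number of clusters = number of PCA components explaining 95% variance.
    n_clusters = PCA(n_components=0.95).fit(top).n_components_
    # Pairwise mismatch counts (proportional to Hamming distance).
    dist = (top[:, None, :] != top[None, :, :]).sum(-1)
    labels = AgglomerativeClustering(
        n_clusters=n_clusters, metric="precomputed",
        linkage="average").fit_predict(dist)
    # Highest-reward member of each cluster is taken as a local optimum.
    return [order[labels == c][np.argmax(top_rewards[labels == c])]
            for c in range(n_clusters)]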
We also consider DbAS with an LSTM generative model instead of a VAE with multi-layer perceptron decoder, to disentangle the choice of generative model in DbAS from the overall optimization strategy. DbAS VAE outperforms DbAS RNN on all problems except for TF Bind.

Figure 9: Comparison of alternative approaches for promoting diversity. Left column: the proposed density-based exploration bonus as described in Section 2.4, which adds a penalty to the reward of a sequence x that is proportional to the distance-weighted number of past sequences that are less than a specified distance away from x (here edit distance one). Middle column: an alternative approach where the exploration bonus of a sequence is proportional to the distance to the nearest neighboring past sequence. Right column: standard entropy regularization. Shown are the cumulative maximum reward and alternative metrics for quantifying diversity depending on the penalty strength (λ in Section 2.4) of each exploration approach. Without an exploration bonus (penalty = 0.0; red line), PPO does not find the optimal solution (the cumulative maximum is below 1.0) and the Hamming distance and uniqueness of sequences within a batch converge to zero. PPO finds the optimal solutions and continues to generate diverse sequences when increasing the strength of any of the three exploration approaches. The density-based exploration bonus is most effective in recovering all optima (second row, left plot) and enables a more fine-grained control of diversity compared to the distance-based approach. Results are shown for target CRX REF R1 of the transcription factor binding problem.

Figure 11: Analysis of the density-based exploration bonus on the AMP problem. The top row shows the sensitivity to the distance radius and the bottom row to the regularization strength λ (Section 2.4). Diversity correlates positively with the distance radius and regularization strength. A radius of 2 and λ = 0.1 provide the best trade-off between optimization performance (cumulative maximum reward) and diversity (mean pairwise Hamming distance and uniqueness). Penalizing only exact duplicates (radius 0) is less effective in maintaining a high Hamming distance than taking neighboring sequences into account (radius > 0).

Figure 13: Optimization performance on the AMP problem depending on the uncertainty threshold for stopping model-based optimization and the maximum number of model-based optimization rounds M. Without a threshold (Inf; red line), DyNA-PPO converges to a sub-optimal solution, in particular when the maximum number of model-based optimization rounds M is high. A threshold of 0.5 prevents a performance decrease due to inaccuracy of the model (see Figure 12).

Figure 14: Sensitivity of DyNA-PPO depending on the choice of the minimum cross-validation score τ for model-based optimization. Shown are the results for the transcription factor binding, protein contact Ising, and AMP problems. DyNA-PPO reduces to PPO if τ is above the maximum cross-validation score of the models considered during model selection, e.g. if τ = 1.0. If τ is too low, inaccurate models are also selected, which reduces the overall accuracy of the ensemble model and the optimization performance. A cross-validation score between 0.4 and 0.5 is best for all problems.
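For reference, a minimal sketch of the density-based exploration bonus analyzed in Figures 9 and 11, using edit distance as the distance measure. The particular distance weighting below is an illustrative choice rather than the exact one used.

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def exploration_penalty(x, past_sequences, radius=2, lam=0.1):
    # Distance-weighted count of past sequences within `radius` of x;
    # subtracted from the reward to push the policy toward novelty.
    count = 0.0
    for y in past_sequences:
        d = edit_distance(x, y)
        if d <= radius:
            count += radius - d + 1   # closer sequences weigh more
    return -lam * count

# Reward used for training: f(x) + exploration_penalty(x, past_sequences)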
We augment model-free policy learning with a sequence-level surrogate reward function and a count-based visitation bonus, and demonstrate effectiveness in the large-batch, low-round regime seen in designing DNA and protein sequences.
Achieving machine intelligence requires a smooth integration of perception and reasoning, yet models developed to date tend to specialize in one or the other; sophisticated manipulation of symbols acquired from rich perceptual spaces has so far proved elusive. Consider a visual arithmetic task, where the goal is to carry out simple arithmetical algorithms on digits presented under natural conditions (e.g. hand-written, placed randomly). We propose a two-tiered architecture for tackling this kind of problem. The lower tier consists of a heterogeneous collection of information processing modules, which can include pre-trained deep neural networks for locating and extracting characters from the image, as well as modules performing symbolic transformations on the representations extracted by perception. The higher tier consists of a controller, trained using reinforcement learning, which coordinates the modules in order to solve the high-level task. For instance, the controller may learn in what contexts to execute the perceptual networks and what symbolic transformations to apply to their outputs. The resulting model is able to solve a variety of tasks in the visual arithmetic domain, and has several advantages over standard, architecturally homogeneous feedforward networks, including improved sample efficiency.

Recent successes in machine learning have shown that difficult perceptual tasks can be tackled efficiently using deep neural networks BID18. However, many challenging tasks may be most naturally solved by combining perception with symbol manipulation. The act of grading a question on a mathematics exam, for instance, requires both sophisticated perception (identifying discrete symbols rendered in many different writing styles) and complex symbol manipulation (confirming that the rendered symbols correspond to a correct answer to the given question). In this work, we address the question of creating machine learning systems that can be trained to solve such perceptuo-symbolic problems from a small number of examples. In particular, we consider, as a first step toward full-blown exam question grading, the visual arithmetic task, where the goal is to carry out basic arithmetic algorithms on hand-written digits embedded in an image, with the wrinkle that an additional symbol in the image specifies which of a handful of algorithms (e.g. max, min, +, *) should be performed on the provided digits.

One straightforward approach to solving the visual arithmetic task with machine learning would be to formulate it as a simple classification problem, with the image as input and an integer giving the correct answer to the arithmetic problem posed by the image as the label. A convolutional neural network (CNN; BID17) could then be trained via stochastic gradient descent to map from input images to correct answers. However, it is clear that there is a great deal of structure in the problem which is not being harnessed by this simple approach, and which would likely improve the sample efficiency of any learning algorithm that was able to exploit it.
While the universal approximation theorem BID9 suggests that an architecturally homogeneous network such as a CNN should be able to solve any task when it is made large enough and given sufficient data, imposing model structure becomes important when one is aiming to capture human-like abilities of strong generalization and learning from small datasets BID16. In particular, in this instance we would like to provide the learner with access to modules implementing information processing functions that are relevant for the task at hand, for example modules that classify individual symbols in the image, or modules that perform symbolic computations on stored representations. However, it is not immediately clear how to include such modules in standard deep networks; the classifiers need to somehow be applied to the correct portion of the image, while the symbolic transformations need to be applied to the correct representations at the appropriate time and, moreover, will typically be non-differentiable, precluding the possibility of training via backpropagation.

In this work we propose an approach that solves this type of task in two steps. First, the machine learning practitioner identifies a collection of modules, each performing an elementary information processing function that is predicted to be useful in the domain of interest, and assembles them into a designed information processing machine called an interface BID30 that is coupled to the external environment. Second, reinforcement learning (RL) is used to train a controller to make use of the interface; the use of RL alleviates any need for the interface to be differentiable. For example, in this paper we make use of an interface for the visual arithmetic domain that contains: a discrete attention mechanism; three pre-trained perceptual neural networks that classify digits, classify arithmetic symbols, and detect salient locations, respectively; and several modules performing basic arithmetic operations on stored internal representations. Through the use of RL, a controller learns to sequentially combine these components to solve visual arithmetic tasks.

We propose a novel recipe for constructing agents capable of solving complex tasks by sequentially combining provided information processing modules. The role of the system designer is limited to choosing a pool of modules and gathering training data in the form of input-output examples for the target task. A controller is then trained by RL to use the provided modules to solve tasks. We evaluate our approach on a family of visual arithmetic tasks wherein the agent is required to perform arithmetical reduction operations on handwritten digits in an image. Our experiments show that the proposed model can learn to solve tasks in this domain using significantly fewer training examples than unstructured feedforward networks.

The remainder of the article is organized as follows. In Section 2 we describe our general approach and lay down the required technical machinery. In Section 3 we describe the visual arithmetic task domain in detail, and show how our approach may be applied there. In Section 4 we present empirical results demonstrating the advantages of our approach as it applies to visual arithmetic, before reviewing related work in Section 5 and concluding with a discussion in Section 6.

Our approach makes use of standard reinforcement learning formalisms BID27. The external world is modelled as a Partially Observable Markov Decision Process (POMDP), E.
At each time step, E is in a state s_t, based upon which it emits an observation o_t that is sent to the learning agent. The agent responds with an action a_t, which causes the environment to emit a reward r_t. Finally, the state is stochastically updated according to E's dynamics, s_{t+1} ∼ P(·|s_t, a_t). This process repeats for T time steps. The agent is assumed to choose a_t according to a parameterized policy that maps from observation-action histories to distributions over actions, i.e. a_t ∼ π_θ(·|h_t), where h_t = (o_0, a_0, ..., o_t) and θ is a parameter vector.

We make extensive use of the idea of an interface as proposed in BID30. An interface is a designed, domain-specific machine that mediates a learning agent's interaction with the external world, providing a representation (observation and action spaces) which is intended to be more conducive to learning than the raw representation provided by the external world. In this work we formalize an interface as a POMDP I distinct from E, with its own state, observation and action spaces. The interface is assumed to be coupled to the external world in a particular way: each time step, E sends an observation to I, which potentially alters its state, after which I emits its own observation to the agent. When the agent responds with an action, it is first processed by I, which once again has the opportunity to change its state, after which I sends an action to E. The agent may thus be regarded as interacting with a POMDP C comprised of the combination of E and I. C's observation and action spaces are the same as those of I, its state is the concatenation of the states of I and E, and its dynamics are determined by the nature of the coupling between I and E. BID30 learn to control interfaces in order to solve purely algorithmic tasks, such as copying lists of abstractly (rather than perceptually) represented digits. One of the main insights of the current work is that the idea of interfaces can be extended to tasks with rich perceptual domains by incorporating pre-trained deep networks to handle the perceptual components.

We train controllers using the actor-critic algorithm (see e.g. BID27; BID2; BID20; BID26). We model the controller as a policy π_θ that is differentiable with respect to its parameters θ. Assume from the outset that the goal is to maximize the expected sum of discounted rewards when following π_θ:

J(θ) = E_{τ∼P_{π_θ}} [ Σ_{t=0}^{T−1} γ^t r_t ],

where P_{π_θ}(τ) is the probability of trajectory τ under π_θ, and γ ∈ (0, 1] is a discount factor. We look to maximize this objective using gradient ascent; however, it is not immediately clear how to compute ∇_θ J(θ), since the probability of a trajectory P_{π_θ}(τ) is a function of the environment dynamics, which are generally unknown. Fortunately, it can be shown that an unbiased estimate of ∇_θ J(θ) can be obtained by differentiating a surrogate objective function that can be estimated from sample trajectories. Letting R_t = Σ_{i=t}^{T−1} γ^{i−t} r_i, the surrogate objective is:

F(θ) = E_{τ∼P_{π_θ}} [ Σ_{t=0}^{T−1} R_t log π_θ(a_t|h_t) ].

The standard REINFORCE algorithm BID29 consists in first sampling a batch of trajectories using π_θ, then forming an empirical estimate f(θ) of F(θ). ∇_θ f(θ) is then computed, and the parameter vector θ updated using standard gradient ascent or one of its more sophisticated counterparts (e.g. ADAM; BID13). The above gradient estimate is unbiased (i.e. E[∇_θ f(θ)] = ∇_θ J(θ)) but can suffer from high variance.
This variance can be somewhat reduced by the introduction of a baseline function b_t(h) into the surrogate objective:

F(θ) = E_{τ∼P_{π_θ}} [ Σ_{t=0}^{T−1} (R_t − b_t(h_t)) log π_θ(a_t|h_t) ].

It can be shown that including b_t(h) does not bias the gradient estimate and may lower its variance if chosen appropriately. The value function for the current policy, V^{π_θ}(h), is a natural choice of baseline; however, this will rarely be known. A typical compromise is to train a function V_ω(h), parameterized by a vector ω, to approximate V^{π_θ}(h) at the same time as we are training π_θ. Specifically, this is achieved by minimizing a sample-based estimate of E[ Σ_t (V_ω(h_t) − R_t)^2 ].

We employ two additional standard techniques BID20. First, we have V_ω(h) share the majority of its parameters with π_θ, i.e. ω = θ. This allows the controller to learn useful representations even in the absence of reward, which can speed up learning when reward is sparse. Second, we include in the objective a term which encourages π_θ to have high entropy, thereby favouring exploration. Overall, given a batch of N sample trajectories from policy π_θ, we update θ in the direction of the gradient of the following surrogate objective:

(1 / NT) Σ_{k=1}^{N} Σ_{t=0}^{T−1} [ (R_t^k − V_θ(h_t^k)) log π_θ(a_t^k|h_t^k) − c_1 (V_θ(h_t^k) − R_t^k)^2 + c_2 H(π_θ(·|h_t^k)) ],

where c_1 and c_2 are weighting coefficients, H denotes entropy, and R_t^k − V_θ(h_t^k) is treated as a constant when differentiating, as the first term does not provide a useful training signal for V_θ.

We now describe the Visual Arithmetic task domain in detail, as well as the steps required to apply our approach there. We begin by describing the external environment E, before describing the interface I, and conclude the section with a specification of the manner in which E and I are coupled in order to produce the POMDP C with which the controller ultimately interacts.

Tasks in the Visual Arithmetic domain can be cast as image classification problems. For each task, each input image consists of an (n, n) grid, where each grid cell is either blank or contains a digit or letter from the Extended MNIST dataset BID1. Unless indicated otherwise, we use n = 2. The correct label corresponding to an input image is the integer that results from applying a specific (task-varying) reduction operation to the digits present in the image. We consider 5 tasks within this domain, grouped into two kinds. Changing the task may be regarded as changing the external environment E.

Single Operation Tasks. In the first kind of task, each input image contains a randomly selected number of digits (2 or 3 unless otherwise stated) placed randomly on the grid, and the agent is required to output an answer that is a function of both the specific task being performed and the digits displayed in the image. We consider 4 tasks of this kind: Sum, Product, Maximum and Minimum. Example input images are shown in FIG0, Top Row.

Combined Task. We next consider a task that combines the four Single Operation tasks. Each input example now contains a capital EMNIST letter in addition to 2-to-3 digits. This letter indicates which reduction operation should be performed on the digits: A indicates add/sum, M indicates multiplication/product, X indicates maximum, N indicates minimum. Example input images are shown in FIG0, Bottom Row. Succeeding on this task requires both carrying out all the required arithmetic algorithms and identifying, for any given input instance, which of the possible algorithms should be executed.
def update_interface(e, action):
    # e: current observation from the external environment
    # action: name of the action chosen by the agent
    # fovea_x, fovea_y, store, op, digit, salience_map are interface state
    if action == "right":
        fovea_x += 1
    elif action == "left":
        fovea_x -= 1
    elif action == "down":
        fovea_y += 1
    elif action == "up":
        fovea_y -= 1
    elif action == "+":
        store += digit
    elif action == "*":
        store *= digit
    elif action == "max":
        store = max(store, digit)
    elif action == "min":
        store = min(store, digit)
    elif action == "+1":
        store += 1
    elif action == "classify_op":
        op = op_classifier(get_glimpse(e, fovea_x, fovea_y))
    elif action == "classify_digit":
        digit = digit_classifier(get_glimpse(e, fovea_x, fovea_y))
    elif action == "update_salience":
        salience_map = salience_detector(e, fovea_x, fovea_y)
    else:
        raise Exception("Invalid action")
    obs = (fovea_x, fovea_y, store, op, digit, salience_map)
    return obs

We now describe the interface I that is used to solve tasks in this domain. The first step is to identify information processing functions that we expect to be useful. We can immediately see that for Visual Arithmetic, it will be useful to have modules implementing the following functions:

1. Detect and attend to salient locations in the image.
2. Classify a digit or letter in the attended region.
3. Manipulate symbols to produce an answer.

We select modules to perform each of these functions and then assemble them into an interface which will be controlled by an agent trained via reinforcement learning. A single interface, depicted in FIG1, is used to solve the various Visual Arithmetic tasks described in the previous section. This interface includes 3 pre-trained deep neural networks. Two of these are instances of LeNet, each consisting of two convolutional/max-pool layers followed by a fully-connected layer with 128 hidden units and ReLU non-linearities. One of these LeNets, the op classifier, is pre-trained to classify capital letters from the EMNIST dataset. The other LeNet, the digit classifier, is pre-trained to classify EMNIST digits. The third network is the salience detector, a multilayer perceptron with 3 hidden layers of 100 units each and ReLU non-linearities. The salience network is pre-trained to output a salience map when given as input scenes consisting of randomly scattered EMNIST characters (both letters and digits).

In the Visual Arithmetic setting, E may be regarded as a degenerate POMDP which emits the same observation, the image containing the EMNIST letters/digits, every time step. I sends the contents of its store field (see FIG1) to E every time step as its action. During training, E responds to this action with a reward that depends on both the time step and whether the action sent to E corresponds to the correct answer to the arithmetic problem represented by the input image. Specifically, for all but the final time step, a reward of 0 is provided if the answer is correct, and −1/T otherwise. On the final time step, a reward of 0 is provided if the answer is correct, and −1 otherwise. Each episode runs for T = 30 time steps. At test time, no rewards are provided, and the contents of the interface's store field on the final time step is taken as the agent's guess for the answer to the arithmetic problem posed by the input image.

For the controller, we employ a Long Short-Term Memory (LSTM) BID8 with 128 hidden units. This network accepts observations provided by the interface (see FIG1) as input, and yields as output both π_θ(·|h) (specifically a softmax distribution), from which an action is sampled, and V_θ(h) (which is only used during training).
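For reference, the reward scheme described above can be written as a short function. This is a minimal sketch with illustrative names, assuming the true answer is available during training.

def step_reward(store, answer, t, T=30):
    # 0 if the interface's store holds the correct answer; otherwise
    # -1/T on intermediate steps and -1 on the final step.
    if store == answer:
        return 0.0
    return -1.0 if t == T - 1 else -1.0 / T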
The weights of the LSTM are updated according to the actor-critic algorithm discussed in Section 2.3.

In this section, we consider experiments applying our approach to the Visual Arithmetic domain. These experiments involve the high-level tasks described in Section 3.1. For all tasks, our reinforcement learning approach makes use of the interface described in Section 3.2 and the details provided in Section 3.3.

Our experiments look primarily at how performance is influenced by the number of external environment training samples provided. For all sample sizes, training the controller with reinforcement learning requires many thousands of experiences, but all of those experiences operate on the small provided set of input-output training samples from the external environment. In other words, we assume the learner has access to a simulator for the interface, but not one for the external environment. We believe this to be a reasonable assumption, given that the interface is designed by the machine learning practitioner and consists of a collection of information processing modules which will, in most cases, take the form of computer programs that can be executed as needed.

We compare our approach against convolutional networks trained using cross-entropy loss, the de facto standard when applying deep learning to image classification tasks. These feedforward networks can be seen as interacting directly with the external environment (omitting the interface) and running for a single time step, i.e. T = 1. The particular feedforward architecture we experiment with is the LeNet, with either 32, 128 or 512 units in the fully-connected layer. Larger, more complex networks of course exist, but these will likely require much larger amounts of data to train, and here we are primarily concerned with performance at small sample sizes. For training the convolutional networks, we treat all tasks as classification problems with 101 classes. The first 100 classes correspond to the integers 0-99. All integers greater than or equal to 100 (e.g. when multiplying 3 digits) are subsumed under the 101st class.

Experiments showing the sample efficiency of the candidate models on the Single Operation tasks are shown in FIG2. Similar results for the Combined task are shown in FIG3. In both cases, our reinforcement learning approach is able to leverage the inductive bias provided by the interface to achieve good performance at significantly smaller sample sizes than the relatively architecturally homogeneous feedforward convolutional networks.

Production Systems. Our approach is related to production systems in cognitive science. Production systems date back to Newell and Simon's pioneering work on the study of high-level human problem solving, manifested most clearly in the General Problem Solver (GPS; BID23). While production systems have fallen out of favour in mainstream AI, they still enjoy a strong following in the cognitive science community, forming the core of nearly every prominent cognitive architecture, including ACT-R, SOAR (BID22; BID15), and EPIC. However, the majority of these systems use hand-coded, symbolic controllers; we are not aware of any work that has applied recent advances in reinforcement learning to learn controllers for these systems for difficult tasks.

A related body of work concerns recurrent neural networks applied to supervised learning problems. BID19, for example, use reinforcement learning to train a recurrent network to control the location of an attentional window in order to classify images containing MNIST digits placed on cluttered backgrounds.
Our approach may be regarded as providing a recipe for building similar kinds of models, while placing greater emphasis on tasks with difficult algorithmic components and on the use of structured interfaces.

Neural Abstract Machines. One obvious point of comparison to the current work is recent research on deep neural networks designed to learn to carry out algorithms on sequences of discrete symbols. Some of these frameworks, including the Differentiable Forth Interpreter (BID25) and TerpreT, achieve this by explicitly generating code, while others, including the Neural Turing Machine (NTM), Neural Random-Access Machine (NRAM; BID14), Neural Programmer (NP; BID21), and Neural Programmer-Interpreter (NPI; BID24), avoid generating code and generally consist of a controller network that learns to perform actions using a differentiable external computational medium (i.e. a differentiable interface) in order to carry out an algorithm. Our approach is most similar to the latter category, the main difference being that we have elected not to require the external computational medium to be differentiable, which provides it with greater flexibility in terms of the components that can be included in the interface. In fact, our work is most similar to BID30, which also uses reinforcement learning to learn algorithms, and from which we borrowed the idea of an interface, the main difference being that we have included deep networks in our interfaces in order to tackle tasks with non-trivial perceptual components.

Visual Arithmetic. Past work has looked at learning arithmetic operations from visual input. BID10 train a multi-layer perceptron to map from images of two 7-digit numbers to an image of a number that is some task-specific function of the numbers in the input images. Specifically, they look at addition, subtraction, multiplication and Roman-numeral addition. However, they do not probe the sample efficiency of their method, and the digits are represented using a fixed computer font rather than being hand-written, making the perceptual portion of the task significantly easier. Gaunt et al. address a task domain that is similar to Visual Arithmetic, and make use of a differentiable code-generation method built on top of TerpreT. Their work has the advantage that their perceptual modules are learned rather than being pre-trained, but is perhaps less general since it requires all components to be differentiable. Moreover, we do not view our reliance on pre-trained modules as particularly problematic given the wide array of tasks deep networks have been used for. Indeed, we view our approach as a promising way to make further use of any trained neural network, especially as facilities for sharing neural weights mature and enter the mainstream.

Additional work has focused more directly on the use of neural modules and on adaptively choosing groups of modules to apply depending on the input. End-to-End Module Networks (BID11) use reinforcement learning to train a recurrent neural network to lay out a feedforward neural network composed of elements of a stored library of neural modules (which are themselves learnable). Our work differs in that rather than having the layout of the network depend solely on the input, the module applied at each stage (i.e. the topology of the network) may depend on past module applications within the same episode, since a decision about which module to apply is made at every time step.
Systems built using our framework can, for example, use their modules to gather information about the environment in order to decide which modules to apply at a later time, a feat that is not possible with Module Networks. In Deep Sequential Neural Networks (DSNN; BID3), each edge of a fixed directed acyclic graph (DAG) is a trainable neural module. Running the network on an input consists in moving through the DAG starting from the root while maintaining a feature vector which is repeatedly transformed by the neural modules associated with the traversed edges. At each node of the DAG, an outgoing edge is stochastically selected for traversal by a learned controller which takes the current features as input. This differs from our work, where each module may be applied many times rather than just once, as is the case for the entirely feedforward DSNNs (where no module appears more than once in any path through the DAG connecting the input to the output). Finally, PathNet is a recent advance in the use of modular neural networks applied to transfer learning in RL (BID4). An important difference from our work is that modules are recruited for the entire duration of a task, rather than on the more fine-grained step-by-step basis used in our approach.

There are a number of possible future directions related to the current work, including potential benefits of our approach that were not explored here. These include the ability to take advantage of conditional computation; in principle, only the subset of the interface needed to carry out the chosen action needs to be executed every time step. If the interface contains many large networks or other computationally intensive modules, large speedups can likely be realized along these lines. A related idea is that of adaptive computation time; in the current work, all episodes ran for a fixed number of time steps, but it should be possible to have the controller decide when it has the correct answer and stop computation at that point, saving valuable computational resources. Furthermore, it may be beneficial to train the perceptual modules and controller simultaneously, allowing the modules to adapt to better perform the uses that the controller finds for them. Finally, the ability of reinforcement learning to make use of discrete and non-differentiable modules opens up a wide array of possible interface components; for instance, a discrete knowledge base may serve as a long-term memory. Any generally intelligent system will need many individual competencies at its disposal, both perceptual and algorithmic; in this work we have proposed one path by which a system may learn to coordinate such competencies.

We have proposed a novel approach for solving tasks that require both sophisticated perception and symbolic computation. This approach consists in first designing an interface that contains information processing modules such as pre-trained deep neural networks for processing perceptual data and modules for manipulating stored symbolic representations. Reinforcement learning is then used to train a controller to use the interface to solve tasks. Using the Visual Arithmetic task domain as an example, we demonstrated empirically that the interface acts as a source of inductive bias that allows tasks to be solved using a much smaller number of training examples than required by traditional approaches.
We use reinforcement learning to train an agent to solve a set of visual arithmetic tasks using provided pre-trained perceptual modules and transformations of internal representations created by those modules.
Animals develop novel skills not only through interaction with the environment but also from the influence of others. In this work we model social influence in the scheme of reinforcement learning, enabling agents to learn both from the environment and from their peers. Specifically, we first define a metric to measure the distance between policies, then quantitatively derive a definition of uniqueness. Unlike previous precarious joint optimization approaches, the social uniqueness motivation in our work is imposed as a constraint to encourage the agent to learn a policy different from the existing agents while still solving the primal task. The resulting algorithm, namely Interior Policy Differentiation (IPD), brings about performance improvement as well as a collection of policies that solve a given task with distinct behaviors.

The paradigm of Reinforcement Learning (RL), inspired by cognition and animal studies, can be described as learning by interacting with the environment to maximize a cumulative reward. From the perspective of ecology, biodiversity as well as the development of various skills are crucial to the continuation and evolution of species (Darwin, 1859). Thus behavioral diversity has become a rising topic in RL. Previous works have tried to encourage the emergence of behavioral diversity in RL with two approaches. The first approach is to design interactive environments which contain sufficient richness and diversity; for example, prior work shows that rich environments enable agents to learn different locomotion skills even using standard RL algorithms. Yet designing a complex environment requires manual effort, and the diversity is limited by the obstacle classes. The second approach to increasing behavioral diversity is to motivate agents to explore beyond just maximizing the reward for the given task, for example by maximizing a heuristically defined novelty metric between policies through task-novelty joint optimization, though the final performance of agents is then not guaranteed.

In this work, we address the topic of policy differentiation in RL, i.e., improving the diversity of RL agents while keeping their ability to solve the primal task. We draw inspiration from social influence in animal societies and formulate the concept of social influence in the reinforcement learning paradigm. Our learning scheme is illustrated in Fig. 1. The target agent not only learns to interact with the environment to maximize the reward but also differentiates the actions it takes in order to be different from other existing agents. Since social influence often acts on people passively, as a sort of peer pressure, we implement social influence in terms of a social uniqueness motivation and consider it as a constrained optimization problem.

Figure 1: Illustration of learning with social influence. Instead of focusing only on the primal task, an additional constraint is introduced to the target agent, motivating it to not only perform well in the primal task but also take actions differently from other existing agents.

In the following, we first define a rigorous policy distance metric in the policy space to compare the similarity of agents. Then we develop an optimization constraint using the proposed metric, which brings immediate rather than episodic feedback in the learning process. A novel method, namely Interior Policy Differentiation (IPD), is further
proposed as a better solution for the constrained policy optimization problem. We benchmark our method on several locomotion tasks and show that it can learn various diverse and well-behaved policies for the given tasks based on the standard Proximal Policy Optimization (PPO) algorithm.

Intrinsic motivation methods. The Variational Information Maximizing Exploration (VIME) method was designed to tackle sparse-reward problems. In VIME, an intrinsic reward term based on the maximization of information gain is added to contemporary RL algorithms to encourage exploration. The curiosity-driven methods (Burda et al., 2018a, among others) define intrinsic rewards according to the prediction errors of neural networks, i.e., when taking previously unseen states as inputs, networks trained with previous states will tend to predict with low accuracy, so that such prediction errors can be viewed as rewards. Burda et al. (2018b) proposed Random Network Distillation (RND) to quantify the intrinsic reward by the prediction differences between a fixed randomly initialized network and another randomly initialized network trained with previous state information. Competitive Experience Replay (CER) uses two actors and a centralized critic, and defines an intrinsic reward by the state coincidence of the two actors; the values of the intrinsic rewards are fixed to be ±1 for the two actors separately. All of these approaches leverage a weighted sum of the external rewards, i.e., the primal rewards provided by environments, and intrinsic rewards provided by different heuristics. A challenging problem is the trade-off between external rewards and intrinsic rewards. The Task-Novelty Bisector (TNB) learning method aims to solve this problem by jointly optimizing the extrinsic and intrinsic rewards. Specifically, TNB updates the policy in the direction of the angular bisector of the two gradients, i.e., the gradients of the extrinsic and intrinsic objective functions. However, the foundation of such joint optimization is not solid. Besides, creating an extra intrinsic reward function and evaluating the novelty of states or policies always requires additional neural networks such as auto-encoders, so extra computational expense is needed.

Diverse behaviors from rich environments and algorithms. The Distributed Proximal Policy Optimization (DPPO) method enables agents with simulated bodies to learn complex locomotion skills in a diverse set of challenging environments. Although the learning reward utilized is straightforward, the skills the policies learn are quite impressive and effective in traveling terrains and obstacles. This line of work shows that rich environments can encourage the emergence of different locomotion behaviors, but extra manual effort is required in designing such environments. Further research shows that different RL algorithms may converge to different policies for the same task: algorithms based on policy gradient tend to converge to the same local optimum in the game of Pitfall, while off-policy and value-based algorithms are prone to learn sophisticated strategies. On the contrary, in this paper we are more interested in how to learn different policies through a single learning algorithm, and in learning the capability of avoiding local optima. Other work maintains model uncertainty, given the data collected from the environment, via an ensemble of deep neural networks.
To encourage the emergence of behavioral diversity in RL, we first define a metric to measure the difference between policies, which is the foundation for the algorithm we later propose. We denote the learned policies as {π_θi; θ_i ∈ Θ, i = 1, 2, ...}, wherein θ_i represents the parameters of the i-th policy and Θ denotes the whole parameter space. In the following, we omit π and denote a policy π_θi as θ_i for simplicity unless stated otherwise. Mathematically, a metric should satisfy three important properties, namely identity, symmetry, and the triangle inequality.

Definition 1. A metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function d: M × M → R such that for any x, y, z ∈ M, the following holds: (1) d(x, y) = 0 if and only if x = y; (2) d(x, y) = d(y, x); (3) d(x, z) ≤ d(x, y) + d(y, z).

We use the Total Variation Divergence D_TV to measure the distance between policies. Concretely, for discrete probability distributions p and q, this distance is defined as D_TV(p, q) = Σ_x |p(x) − q(x)|. (It can be extended to continuous state and action spaces by replacing the sums with integrals, and the conventional factor 1/2 is omitted in our work for conciseness.) Given a state distribution ρ(s), we define D^ρ_TV(θ_i, θ_j) = E_{s∼ρ}[D_TV(θ_i(·|s), θ_j(·|s))]; one can show that D^ρ_TV is a metric on Θ, thus (Θ, D^ρ_TV) is a metric space.

Consequently, to motivate RL with social uniqueness, we hope our method can maximize the uniqueness of a new policy, i.e., max_θ U(θ|Θ_ref), where Θ_ref includes all the existing policies.

In practice, the calculation of D^ρ_TV(θ_i, θ_j) is based on Monte Carlo estimation, i.e., we need to sample s from ρ(s). Although in a finite state space we can obtain a precise estimate after establishing ergodicity, a problem arises in continuous state spaces: it is difficult to efficiently obtain enough samples. Formally, we denote the domain of ρ(s) as S and the domain of ρ_θ(s) as S_θ ⊂ S, where ρ_θ(s) := ρ(s|s ∼ θ) and, in finite-time-horizon problems, ρ(s|s ∼ θ) = P(s_0 = s|θ) + P(s_1 = s|θ) + ... + P(s_T = s|θ). As we only care about the reachable regions, the domain S can be divided according to which policies can reach each state.

In order to improve the sample efficiency, we propose to approximate D^ρ_TV(θ_i, θ_j) using states sampled from a certain fixed behavior policy that is independent of θ_i and θ_j. Such an approximation requires a necessary condition: the domains of possible states must be similar between different policies (Condition 1). When this condition holds, we can use ρ(s|s ∼ θ) for the behavior policy as our choice of ρ(s), and the properties in Definition 1 still hold. In practice, Condition 1 always holds, as we can ensure it by adding sufficiently large noise to the behavior policy while the permitted state space is always limited. For more general cases, to satisfy the properties in Definition 1, we must sample s from S_θi ∪ S_θj accordingly, treating a policy's action as random on states it has never been trained on or visited.

Plugging this sampled estimate into the uniqueness objective, the objective function of policy differentiation decomposes into terms of which the first two are related to the policy θ while the last is related only to the domain S_θ. If we enable sufficient exploration in training as well as in the initialization of θ, the last term will disappear (i.e. S_θj ⊂ S_θ). Hence we can also use D^{ρ_θ}_TV in practice.

Proposition 1 (Unbiased Single-Trajectory Estimation). The estimation of ρ_θ(s) using a single trajectory τ is unbiased.

The proof of Proposition 1 is in Appendix B. Given the definition of uniqueness and a practically unbiased sampling method, the next step is to develop an efficient learning algorithm. In the traditional RL paradigm, maximizing the expectation of the cumulative reward g = Σ_t γ^t r_t is commonly used as the objective, i.e.
max_{θ∈Θ} E_{τ∼θ}[g], where τ ∼ θ denotes a trajectory τ sampled from the policy θ using Monte Carlo methods. To improve the behavioral diversity of different agents, the learning objective must take both the reward from the primal task and the policy uniqueness into consideration. Previous approaches often directly optimize a weighted sum of the reward from the primal task and the intrinsic reward g_int = Σ_t γ^t r_int,t, where r_int,t denotes the intrinsic reward (the uniqueness reward in our case):

max_{θ∈Θ} E_{τ∼θ} [α g + (1 − α) g_int],

where 0 < α < 1 is a weight parameter. Such an objective is sensitive to the selection of α as well as the formulation of r_int; for example, in our case, different formulations of the intrinsic reward r_int will result in significantly different outcomes. Besides, a trade-off arises in the selection of α: while a large α may undermine the contribution of the intrinsic reward, a small α could ignore the importance of the reward, leading to the failure of the agent in solving the primal task.

To tackle these issues, we draw inspiration from the observation that social uniqueness motivates people in passive ways. In other words, it acts more like a constraint than an additional target. Therefore, we change the multi-objective optimization problem above into a constrained optimization problem:

max_{θ∈Θ} E_{τ∼θ}[g]  subject to  r̄_int,t ≥ r_0,

where r_0 is a threshold indicating the minimal permitted uniqueness, and r̄_int,t denotes a moving average of r_int,t. Further discussion on the selection of r_0 is deliberated in Appendix D.

From the perspective of optimization, the weighted-sum objective above can be viewed as a penalty method which replaces the constrained optimization problem with the penalty term r_int and the penalty coefficient (1 − α)/α > 0, where the difficulty lies in the selection of α. Prior work tackles this challenge with the Task-Novelty Bisector (TNB), a form of Feasible Direction Method (FDM). As a heuristic approximation, that approach requires reward shaping and an intensive emphasis on r_int,t. Instead, in this work we propose to solve the constrained optimization problem by resembling the Interior Point Methods (IPMs). In vanilla IPMs, the constrained optimization problem is solved by reforming it into an unconstrained form with an additional barrier term in the objective, e.g. max_{θ∈Θ} E_{τ∼θ}[g] + α Σ_t log(r̄_int,t − r_0); the limit as α → 0 then leads to the solution of the original constrained problem. Readers may refer to Appendix G for more discussion on the correspondence between these novel-policy-seeking methods and constrained optimization methods.

However, directly applying IPMs is computationally challenging and numerically unstable, especially when α is small. Luckily, in our proposed RL paradigm, where the behavior of an agent is influenced by its peers, a more natural way can be used. Precisely, since the learning process is based on sampled transitions, we can simply bound the collected transitions within the feasible region by permitting the M previously trained policies θ_i ∈ Θ_ref, i = 1, 2, ..., M, to send termination signals during the training process of new agents. In other words, we implicitly bound the feasible region by terminating any new agent that steps outside it. Consequently, during the training process, all valid samples we collect are inside the feasible region, which means these samples are less likely to appear in previously trained policies. At the end of the training, we then naturally obtain a new policy that has sufficient uniqueness. In this way, we no longer need to deliberately consider the trade-off between intrinsic and extrinsic rewards.
The learning process of our method is thus more robust and no longer suffers from objective inconsistency. As our formulation of the constrained optimization problem is inspired by IPMs, we name our approach the Interior Policy Differentiation (IPD) method.

The MuJoCo environments. We demonstrate our proposed method on OpenAI Gym environments whose physics engine is based on MuJoCo. Concretely, we test on three locomotion environments: Hopper-v3 (11 observations and 3 actions), Walker2d-v3 (17 observations and 6 actions), and HalfCheetah-v3 (17 observations and 6 actions). In our experiments, all the environment parameters are set to their default values.

Uniqueness beyond intrinsic stochasticity. Prior experiments show that policies that perform differently can be produced by simply selecting different random seeds before training. Before applying our method to improve behavioral diversity, we first benchmark how much uniqueness can be generated from the stochasticity in the training process of vanilla RL algorithms as well as from random weight initialization. In this work, we mainly demonstrate our proposed method based on PPO; the extension to other popular algorithms is straightforward. We also compare our proposed method with the TNB and weighted sum reward (WSR) approaches as different ways to combine the goal of the task and the uniqueness motivation. More implementation details are given in Appendix D.

According to Theorem 2, the uniqueness reward r_int under our uniqueness metric can be approximated without bias; i.e., we utilize the metric directly in learning new policies instead of applying any kind of reshaping. We implement WSR, TNB, and our method in the same experimental settings, and for each method 10 different policies are trained, each required to be unique with regard to all previously trained policies sequentially. Concretely, the 1st policy is trained by ordinary PPO without any social influence. The 2nd policy should be different from the 1st policy, the 3rd should be different from the previous two policies, and so on.

Fig. 2 shows the qualitative results of our method. We visualize the motion of agents by drawing multiple frames representing the pose of agents at different time steps in the same row. The horizontal interval between consecutive frames is proportional to the velocity of the agent. The settings of the frequency of highlighted frames and the correlation between interval and velocity are fixed for each environment. The visualization starts from the beginning of each episode, so readers can get a clear sense of the process of acceleration as well as the pattern of motion of the agents.

Fig. 3 shows our experimental results in terms of uniqueness (the x-axis) and performance (the y-axis). Policies in the upper right are the more unique ones with higher performance. In Hopper and HalfCheetah, our proposed method distinctly outperforms the other methods. In Walker2d, both WSR and our method work well in improving the uniqueness of policies, but none of the three methods clearly surpasses the performance of PPO. A detailed comparison of the task-related rewards is carried out in Table 1. A box plot depicting the performance of each trained policy and the reward curves are provided in Fig. 5 and Fig. 6 in Appendix C, and Fig. 7 in Appendix C provides more detailed results from the perspective of uniqueness. In addition to the averaged reward, we also use the success rate as another metric to compare the performance of different approaches.
In this work, we consider a policy successful when its performance is at least as good as the averaged performance of policies trained without social influence. To be specific, we use the averaged final performance of PPO as the baseline. If a new policy, which aims at performing differently while solving the same task, surpasses the baseline during its training process, it is regarded as a successful policy. Through the success rate, we know whether a policy learns unique behavior at the expense of performance. Table 1 shows the success rate of all the methods, including the PPO baseline. The results show that our method can always surpass the average baseline during training; thus the performance of our method is always ensured.

In our experiments, we observed noticeable performance improvements in the Hopper and HalfCheetah environments. For the Hopper environment, in many cases the agents trained with PPO tend to learn a policy that jumps as far as possible, then falls to the ground and terminates the episode (please refer to Fig. 11 in Appendix E). Our proposed method can prevent new policies from always falling into the same local minimum. After the first policy gets trapped in a local minimum, the following policies will try other approaches to avoid the same behavior and explore other feasible action patterns, and thereby the performance may improve. This property shows that our method can be a helpful enhancement of the traditional RL scheme, which can be epitomized as: policies may make mistakes, but they should explore more instead of hanging around the same local minimum.

A similar effect accounts for the reward growth in the HalfCheetah environment. Moreover, we can illuminate the performance improvement in HalfCheetah from another perspective. The HalfCheetah environment is quite different from the other two, for there is no explicit termination signal in its default settings (i.e., no explicit action like falling to the ground would trigger termination). At the beginning of the learning process, an agent will act randomly, resulting in massive repeated, trivial samples as well as large control costs. In our learning scheme, since the agent also interacts with its peers, it can receive termination signals from the peers that prevent it from wasting too much effort acting randomly. During the learning process in our method, an agent will first learn to terminate itself as soon as possible to avoid heavy control costs by imitating previous policies, and then learn to behave differently to pursue higher reward. From this point of view, the learning process can be regarded as a kind of implicit curriculum.

As the number of policies learned with social influence grows, the difficulty of finding a unique policy may also increase, since later policies must keep away from all previous solutions. The results of our ablation study on how the performance changes under different scales of social influence (i.e., the number of peers) are shown in Fig. 4, where the thresholds are selected according to our ablation study in Appendix D. The performance decrease is more obvious in Hopper than in the other two environments, since the action space of Hopper is only 3-dimensional; thus the number of possible diverse policies that can be discovered is limited.

In this work, we develop an efficient approach to motivate RL to learn diverse strategies, inspired by social influence. After defining the distance between policies, we introduce the definition of policy uniqueness.
Casting the problem as a constrained optimization problem, our proposed method, Interior Policy Differentiation (IPD), draws on the key insight of Interior Point Methods. Our experimental results demonstrate that IPD can learn a variety of well-behaved policies, and our approach can help agents avoid local minima and can be interpreted as a kind of implicit curriculum learning in certain cases. The first two properties are obviously guaranteed by $D^{\rho}_{TV}$; as for the triangle inequality,

Figure 7: Maximal and minimal between-policy uniqueness in the Hopper, Walker2d and HalfCheetah environments. The results are averaged over all possible combinations of 10 policies.

As TNB and WSR optimize the uniqueness reward directly, their uniqueness can sometimes exceed that of our proposed method. However, such direct optimization leads to a decrease in task-related performance as a cost. To tackle this trade-off, careful hyper-parameter tuning and reward shaping are always a must. A detailed comparison of the task-related rewards is given in Table 1.

D IMPLEMENTATION DETAILS

Calculation of D_TV. We use the deterministic part of policies in the calculation of $D_{TV}$; i.e., we remove the Gaussian noise on the action space in PPO and use the resulting deterministic actions.

Network Structure. We use an MLP with 2 hidden layers as our actor model in PPO. The first hidden layer is fixed to have 32 units. Our ablation study on the choice of the number of units in the second layer is detailed in Table 2, Table 3 and Fig. 8. Moreover, we choose to use 10, 64 and 256 hidden units for the three tasks, respectively, in all of the main experiments, after taking the success rate (Table 2), performance (Table 3) and computational expense (i.e., the preference for fewer units when the other two factors are similar) into consideration.

Training Timesteps. We fix the training timesteps in our experiments: 1M in Hopper-v3, 1.6M in Walker2d-v3 and 3M in HalfCheetah.

Threshold Selection. In our proposed method, we can control the magnitude of policy uniqueness flexibly by adjusting the constraint threshold $r_0$. Choosing different thresholds leads to different policy behaviors. Concretely, a larger threshold may drive the agent to perform more differently, while a smaller threshold imposes a lighter constraint on the behavior of the agent. Intuitively, a larger threshold will lead to relatively poor performance, because the learning algorithm is less likely to find a feasible solution to Eq.. Besides, we do not use constraints in the form of Eq. as we need not force every single action of a new agent to be different from others. Instead, we care more about the long-term differences. Therefore, we use the cumulative uniqueness as constraints. We test our method with different choices of threshold values. The performance of agents under different thresholds is shown in Fig. 9, and a more detailed analysis of their success rates is presented in Table 2.

F IMPLEMENTATION OF EQ.

We do not use constraints in the form of Eq. as we need not force every single action of a new agent to be different from others. Instead, we care more about the long-term differences. Therefore, we use the cumulative uniqueness as constraints. Moreover, the constraints can be applied after the first $t_S$ timesteps (e.g., $t_S = 20$) to account for similar starting sequences. We note here that the WSR, TNB and IPD methods correspond to three approaches to the constrained optimization problem. For simplicity, we consider Eq.
with a more concise notation $g_{\mathrm{int},t} - g_{0,t} \ge 0$, where $g_{\mathrm{int},t} = \sum_{t'=0}^{t} r_{\mathrm{int},t'}$; i.e., we maximize the task objective subject to these cumulative constraints. As the optimization of the policy is based on batches of trajectory samples and is implemented with stochastic gradient descent, Eq. can be further simplified, with $g_t(\theta)$ denoting the average over a trajectory. The Penalty Method handles the constraints of Eq. by putting the constraint $g(\theta)$ into a penalty term and then solving the unconstrained problem in an iterative manner; the limit as $\alpha \to 0$ leads to the solution of the primal constrained problem. As an approximation, WSR chooses a fixed weight term $\alpha$ and uses the gradient $\nabla_\theta f + \frac{1-\alpha}{\alpha} \nabla_\theta g$ instead of $\nabla_\theta f + \frac{1-\alpha}{\alpha} \nabla_\theta \min\{g(\theta), 0\}$; thus the final solution relies strongly on the selection of $\alpha$. The Taylor series of $g(\theta)$ at a point $\bar\theta$ is $g(\bar\theta + \lambda p) = g(\bar\theta) + \nabla_\theta g(\bar\theta)^{T} \lambda p + O(\|\lambda p\|)$. The Feasible Direction Method (FDM) handles the constraints of Eq. by first finding a direction $p$ that satisfies $\nabla_\theta g(\theta)^{T} p > 0$, so that for small $\lambda$ we have $g(\theta + \lambda p) = g(\theta) + \lambda \nabla_\theta g(\theta)^{T} p > 0$ if $g(\theta) > 0$. The TNB method, using the bisector of the gradients $\nabla_\theta f$ and $\nabla_\theta g$, selects $p$ to be this bisector direction. Clearly, this choice satisfies the feasibility condition above, but it is stricter, as the $\nabla_\theta g$ term always exists during the optimization of TNB. In TNB, the learning stride is fixed, leading to problems when $\nabla_\theta f \to 0$; this shows that the final optimization relies heavily on the selection of $g$, i.e., the shape of $g$ is crucial for the success of TNB. Interior Point Methods, by contrast, use a barrier term such as $-\alpha \log g(\theta)$, where $\alpha$, the barrier factor, is a small positive number. As $\alpha$ is small, the barrier term introduces only a minuscule influence on the objective. On the other hand, when $\theta$ gets closer to the barrier, the objective increases quickly. It is clear that the solution of the objective with the barrier term gets closer to that of the primal objective as $\alpha$ gets smaller. Thus in practice, such methods choose a sequence $\{\alpha_k\}$ such that $0 < \alpha_{k+1} < \alpha_k$ and $\alpha_k \to 0$ as $k \to \infty$; the limit of Eq. then recovers the primal problem. Directly applying this method is computationally challenging and numerically unstable, especially when $\alpha$ is small. A more natural way can be used: since the learning process is based on sampled transitions, we can simply bound the collected transitions inside the feasible region by permitting the M previously trained policies $\theta_i \in \Theta_{\mathrm{ref}}, i = 1, 2, \ldots, M$, to send termination signals during the training process of new agents. In other words, we implicitly bound the feasible region by terminating any new agent that steps outside it. Consequently, during the training process, all valid samples we collect are inside the feasible region, which means these samples are less likely to appear under previously trained policies. At the end of training, we then naturally obtain a new policy with sufficient uniqueness. In this way, we no longer need to deliberately consider the trade-off between intrinsic and extrinsic rewards. The learning process of our method is thus more robust and no longer suffers from objective inconsistency. Algorithm 1 shows the pseudo-code of IPD based on PPO, where the blue lines show the additions to the primal PPO algorithm.
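For concreteness, the following minimal sketch illustrates the core modification IPD makes to data collection: episodes that leave the feasible region defined by the cumulative uniqueness constraint are terminated early. It assumes a classic gym-style environment API and a simple action-distance uniqueness measure; the helper names (policy.act, ref.act) are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    def ipd_rollout(env, policy, ref_policies, r0, t_s=20, max_steps=1000):
        # Collect one episode, terminating early when the cumulative
        # uniqueness w.r.t. previously trained policies drops below the
        # threshold r0 (the implicit barrier of the interior-point view).
        obs, trajectory, g_int = env.reset(), [], 0.0
        for t in range(max_steps):
            action = policy.act(obs)
            if ref_policies:
                # distance of the new action to each reference policy's
                # deterministic action in the same state
                dists = [np.linalg.norm(action - ref.act(obs))
                         for ref in ref_policies]
                g_int += float(np.mean(dists))
            next_obs, reward, done, _ = env.step(action)
            trajectory.append((obs, action, reward))
            # peers "send a termination signal": after a grace period of
            # t_s steps, stop if the constraint g_int >= r0 * t is violated
            if ref_policies and t >= t_s and g_int < r0 * (t + 1):
                done = True
            if done:
                break
            obs = next_obs
        return trajectory

All transitions collected this way lie inside the feasible region, so PPO can then be run on them unchanged.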
A new RL algorithm called Interior Policy Differentiation is proposed to learn a collection of diverse policies for a given primal task.
1,119
scitldr
Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative methods for model assessment. Because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores. Generative adversarial networks (GANs) aim to approximate a data distribution P, using a parameterized model distribution Q. They achieve this by jointly optimizing generative and discriminative networks BID9. GANs are end-to-end differentiable: samples from the generative network are propagated forward to a discriminative network, and error signals are then propagated backwards from the discriminative network to the generative network. The discriminative network is often viewed as a learned, adaptive loss function for the generative network. GANs have achieved state-of-the-art results for a number of applications BID8, producing more realistic, sharper samples than other popular generative models, such as variational autoencoders BID22. Because of their success, many GAN frameworks have been proposed. However, it has been difficult to compare these algorithms and understand their strengths and weaknesses because we are currently lacking quantitative methods for assessing the learned generators. In this work, we propose new metrics for measuring how realistic samples generated from GANs are. These criteria are based on a formulation of the divergence between the distributions P and Q BID33 BID38:

$J(Q) = \max_{f \in \mathcal{F}} \; \mathbb{E}_{x \sim P}[\mu(f(x))] - \mathbb{E}_{x \sim Q}[\upsilon(f(x))] \qquad (1)$

Here, different choices of µ, υ, and F can correspond to different f-divergences BID33 or different integral probability metrics (IPMs) BID38. Importantly, J(Q) can be estimated using samples from P and Q, and does not require us to be able to estimate P(x) or Q(x) for samples x. Instead, evaluating J(Q) involves finding the function f ∈ F that is maximally different with respect to P and Q. This measure of divergence between the distributions P and Q is related to the GAN criterion if we restrict the function class F to neural network functions parameterized by the vector φ and the class of approximating distributions to neural network generators $G_\theta$ parameterized by the vector θ, allowing formulation as a min-max problem:

$\min_{\theta} \max_{\phi} \; \mathbb{E}_{x \sim P}[\mu(D_\phi(x))] - \mathbb{E}_{x \sim Q_\theta}[\upsilon(D_\phi(x))] \qquad (2)$

TAB0: Choices of µ, υ and the function class F for each metric, where H is a Reproducing Kernel Hilbert Space (RKHS) and $\|\cdot\|_L$ is the Lipschitz constant; for the LS-DCGAN, we used b = 1 and a = 0 BID28.

    Metric                  | µ(f)        | υ(f)          | Function class F
    GAN (GC)                | log f       | − log(1 − f)  | f: X → R+, ∃M ∈ R: |f(x)| ≤ M
    Least-Squares GAN (LS)  | −(f − b)²   | (f − a)²      | f: X → R
    Wasserstein GAN (IW)    | f           | f             | ‖f‖_L ≤ 1
    MMD                     | f           | f             | ‖f‖_H ≤ 1

In this formulation, $Q_\theta$ corresponds to the generator network's distribution and $D_\phi$ corresponds to the discriminator network (see BID33 for details). We propose using J(θ) to evaluate the performance of the generator network $G_\theta$ for various choices of µ and υ, corresponding to different f-divergences or IPMs between distributions P and $Q_\theta$ that have been successfully used for GAN training.
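For intuition, the following minimal sketch estimates J from the outputs of an already-trained critic via Monte Carlo over held-out samples; the function names are ours and are not part of the original implementation.

    import numpy as np

    def j_hat(f_real, f_fake, mu, nu):
        # Monte Carlo estimate of J = E_P[mu(f(x))] - E_Q[nu(f(x))],
        # given critic outputs f_real (on data) and f_fake (on samples).
        return float(np.mean(mu(f_real)) - np.mean(nu(f_fake)))

    # GC choice from TAB0: mu = log f, nu = -log(1 - f), with f in (0, 1)
    gc_score = lambda f_real, f_fake: j_hat(
        f_real, f_fake, np.log, lambda f: -np.log(1.0 - f))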
Our proposed metrics differ from most existing metrics in that they are adaptive, and involve finding the maximum over discriminative networks. We compare four metrics: those corresponding to the original GAN (GC) BID8, the Least-Squares GAN (LS) BID28, the Wasserstein GAN (IW), and the Maximum Mean Discrepancy (MMD) criteria. Choices for µ, υ, and F for these metrics are shown in TAB0. Our method can easily be extended to other f-divergences or IPMs. To compare these and previous metrics for evaluating GANs, we performed many experiments, training and comparing multiple types of GANs with multiple architectures on multiple data sets. We qualitatively and quantitatively compared these metrics to human perception, and found that our proposed metrics better reflected human perception. We also show that rankings produced using our proposed metrics are consistent across metrics, and thus are robust to the exact choices of the functions µ and υ in Equation 2. We used the proposed metrics to quantitatively analyze three different families of GANs: Deep Convolutional Generative Adversarial Networks (DCGAN) BID34, Least-Squares GANs (LS-DCGAN), and Wasserstein GANs (W-DCGAN), each of which corresponds to a different proposed metric. Interestingly, we found that the different proposed metrics still agreed on the best GAN framework for each dataset. Thus, even though, e.g., for MNIST the W-DCGAN was trained with the IW criterion, LS-DCGAN still outperformed it on the IW criterion. Our analysis also included a sensitivity analysis with respect to various factors, such as the architecture size, noise dimension, update ratio between discriminator and generator, and number of data points. Our empirical results show that: i) the larger the GAN architecture, the better the results; ii) having a generator network larger than the discriminator network does not yield good results; iii) the best ratio between discriminator and generator updates depends on the data set; and iv) the performance of W-DCGAN and LS-DCGAN increases much faster than that of DCGAN as the number of training examples grows. These metrics thus allow us to tune the hyper-parameters and architectures of GANs based on our proposed method. GANs can be evaluated using manual annotations, but this is time consuming and difficult to reproduce. Several automatically computable metrics have been proposed for evaluating the performance of probabilistic generative models, and GANs in particular. We review some of these here, and compare our proposed metrics to them in our experiments. Many previous probabilistic generative models were evaluated based on the pointwise likelihood of the test data, the criterion also used during training. While GANs can be used to generate samples from the approximate distribution, their likelihood on test samples cannot be evaluated without simplifying assumptions. As discussed in BID41, likelihood often does not provide good rankings of how realistic samples look, the main goal of GANs. We evaluated the efficacy of the log-likelihood of the test data, as estimated using Annealed Importance Sampling (AIS) BID43. AIS has been used to estimate the likelihood of a test sample x by considering many intermediate distributions that are defined by taking a weighted geometric mean between the prior (input) distribution, p(z), and an approximation of the joint distribution $p_\sigma(x, z) = p_\sigma(x \mid z)\, p(z)$. Here, $p_\sigma(x \mid z)$ is a Gaussian kernel with fixed standard deviation σ around the mean $G_\theta(z)$.
The final estimate depends critically on the accuracy of this approximation. In Section 4, we demonstrate that the AIS estimate of p(x) is highly dependent on the choice of this hyperparameter. The Generative Adversarial Metric BID18 measures the relative performance of two GANs by measuring the likelihood ratio of the two models. Consider two GANs with their respective trained partners, $M_1 = (D_1, G_1)$ and $M_2 = (D_2, G_2)$, where $G_1$ and $G_2$ are the generators and $D_1$ and $D_2$ are the discriminators. The hypothesis $H_1$ is that $M_1$ is better than $M_2$ if $G_1$ fools $D_2$ more than $G_2$ fools $D_1$, and vice versa for the hypothesis $H_0$. The likelihood ratio is defined as

$r = \frac{p(x \mid y = 1, \bar{M}_1)}{p(x \mid y = 1, \bar{M}_2)}$

where $\bar{M}_1$ and $\bar{M}_2$ are the swapped pairs $(D_1, G_2)$ and $(D_2, G_1)$, $p(x \mid y = 1, M)$ is the likelihood of x generated from the data distribution p(x) by model M, and $p(y = 1 \mid x; D)$ indicates that discriminator D thinks x is a real sample. To evaluate this, we measure the ratio of how frequently $G_1$, the generator from model 1, fools $D_2$, the discriminator from model 2, and vice versa:

$r \approx \frac{\mathbb{E}_{x_1 \sim G_1}\big[\, p(y = 1 \mid x_1; D_2) \,\big]}{\mathbb{E}_{x_2 \sim G_2}\big[\, p(y = 1 \mid x_2; D_1) \,\big]}$

where $x_1 \sim G_1$ and $x_2 \sim G_2$. There are two main caveats to the Generative Adversarial Metric. First, the measurement only provides comparisons between pairs of models. Second, the metric has the constraint that the two discriminators must have approximately similar performance on a calibration dataset, which can be difficult to satisfy in practice. The Inception Score BID36 (IS) measures the performance of a model using a third-party neural network trained on a supervised classification task, e.g., Imagenet. The IS computes the exponentiated expected divergence between the distribution of class predictions for samples from the GAN and the marginal distribution of the class labels used to train the third-party network:

$\mathrm{IS} = \exp\Big( \mathbb{E}_{x \sim Q}\big[ \mathrm{KL}\big( p(y \mid x) \,\|\, p(y) \big) \big] \Big)$

Here, the class prediction given a sample x is computed using the third-party neural network. In BID36, Google's Inception Network BID40 trained on Imagenet was the third-party neural network. IS is the most widely used metric to measure GAN performance. However, summarizing samples as the class prediction of a network trained for a different task discards much of the important information in the sample. In addition, it requires another neural network that is trained separately via supervised learning. We demonstrate an example of a failure case of IS in the Experiments section. The Fréchet Inception Distance (FID) BID14 extends IS. Instead of using the final classification outputs of the third-party network as representations of samples, it uses a representation computed from a late layer of the third-party network. It compares the mean $m_Q$ and covariance $C_Q$ of the Inception-based representation of samples generated by the GAN to the mean $m_P$ and covariance $C_P$ of the same representation for training samples:

$\mathrm{FID} = \|m_P - m_Q\|_2^2 + \mathrm{Tr}\big( C_P + C_Q - 2 (C_P C_Q)^{1/2} \big)$

This method relies on the Inception-based representation of the samples capturing all important information, and on the first two moments of the distributions being descriptive of the distribution. Classifier Two-Sample Tests (C2ST) BID27 propose training a classifier, similar to a discriminator, that can distinguish real samples from P from generated samples from Q, and using the error rate of this classifier as a measure of GAN performance. In their work, they used single-layer and k-nearest neighbor (KNN) classifiers trained on a representation of the samples computed from a late layer of a third-party network (in this case, ResNet BID13).
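As an illustration of the FID formula just given, a minimal self-contained sketch over precomputed third-party-network features (feature extraction itself is not shown; the function name is ours):

    import numpy as np
    from scipy import linalg

    def fid(feats_real, feats_fake):
        # FID = ||m_P - m_Q||^2 + Tr(C_P + C_Q - 2 (C_P C_Q)^{1/2})
        m_p, m_q = feats_real.mean(axis=0), feats_fake.mean(axis=0)
        c_p = np.cov(feats_real, rowvar=False)
        c_q = np.cov(feats_fake, rowvar=False)
        covmean = linalg.sqrtm(c_p @ c_q).real  # drop tiny imaginary parts
        return float(((m_p - m_q) ** 2).sum()
                     + np.trace(c_p + c_q - 2.0 * covmean))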
C2ST is an IPM BID38, like the MMD and Wasserstein metrics we propose, with µ(f) = f and υ(f) = f, but with a different function class F, corresponding to the family of classifiers chosen (in this case, single-layer networks or KNN; see our detailed explanation in Appendix 5). The accuracy of a classifier trained to distinguish samples from distributions P and Q is just one way to measure the distance between these distributions, and, in this work, we propose a general family. Given a generator $G_\theta$ with parameters θ which generates samples from the distribution $Q_\theta$, we propose to measure the quality of $G_\theta$ by estimating the divergence between the true data distribution P and $Q_\theta$ for different choices of divergence measure. We train both $G_\theta$ and $D_\phi$ on a training data set, and measure performance on a separate test set. See Algorithm 1 for details. We consider metrics from two widely studied families of divergence and distance measures: f-divergences BID32 and Integral Probability Metrics (IPMs) BID31. In our experiments, we consider the following four metrics that are commonly used to train GANs. Below, ϕ represents the parameters of the discriminator network and θ represents the parameters of the generator network. Training a standard GAN corresponds to minimizing the following BID9:

$\min_{\theta} \max_{\phi} \; \mathbb{E}_{x \sim P}[\log D_\phi(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D_\phi(G_\theta(z)))]$

where p(z) is the prior distribution of the generative network and $G_\theta(z)$ is a differentiable function from z to the data space, represented by a neural network with parameters θ. $D_\phi$ is trained with a sigmoid activation function, so its output is guaranteed to be positive. A Least-Squares GAN corresponds to training with a Pearson χ² divergence BID28:

$\min_{\theta} \max_{\phi} \; -\mathbb{E}_{x \sim P}\big[(D_\phi(x) - b)^2\big] - \mathbb{E}_{z \sim p(z)}\big[(D_\phi(G_\theta(z)) - a)^2\big]$

Following BID28, we set a = 0 and b = 1 when training $D_\phi$. The maximum mean discrepancy metric considers the largest difference in the expectations over a unit ball of an RKHS H,

$\mathrm{MMD}(P, Q) = \sup_{\|f\|_{\mathcal{H}} \le 1} \; \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)]$

where H is the RKHS with kernel k(·, ·) BID11. In this case, we do not need to train a discriminator $D_\phi$ to evaluate our metric. The Improved Wasserstein Distance (IW) uses the dual representation of the Wasserstein distance BID42 for training GANs. The Wasserstein distance is an IPM which considers the 1-Lipschitz function class:

$W(P, Q) = \sup_{\|f\|_{L} \le 1} \; \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)]$

Note that IW and MMD BID39 were recently proposed to evaluate GANs, but have not been compared before.

Algorithm 1: Metric evaluation.
1: Input: generator $G_\theta$, training data $X_{tr}$, test data $X_{te}$.
2: Initialize critic network parameters ϕ.
3: repeat:
4: Sample data points from X, $\{x_m\} \sim X_{tr}$.
5: Sample points from the generative model, $\{s_m\} \sim G_\theta$.
6: $\phi \leftarrow \phi + \eta \nabla_\phi J(\{x_m\}, \{s_m\}; \phi)$.
7: until convergence.
8: Sample points from the generative model, $\{s_m\} \sim G_\theta$.
9: return $J(\phi, X_{te}, \{s_m\})$.

The goals in our experiments are two-fold. First, we wanted to evaluate the metrics we proposed for evaluating GANs. Second, we wanted to use these metrics to evaluate GAN frameworks and architectures. In particular, we evaluated how the size of the discriminator and generator networks affects performance, and the sensitivity of each algorithm to training data set size. GAN frameworks. We conducted our experiments on three types of GANs: Deep Convolutional Generative Adversarial Networks (DCGAN), Least-Squares GANs (LS-DCGAN), and Wasserstein GANs (W-DCGAN). Note that, to avoid confusing the test metric names with the GAN frameworks we evaluated, we use different abbreviations: GC is the original GAN criterion, which is used to train DCGANs; the LS criterion is used to train the LS-DCGAN; and the IW criterion is used to train the W-DCGAN. Evaluation criteria. We evaluated these three families of GANs with six metrics.
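Since MMD requires no trained critic, it can be computed directly from samples. Below is a sketch of the standard unbiased estimator with an exponential-mean-square (RBF) kernel; the function name and bandwidth convention are our assumptions.

    import numpy as np

    def rbf_mmd2(x, y, sigma=1.0):
        # Unbiased estimate of MMD^2 between samples x ~ P and y ~ Q.
        def k(a, b):
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))
        kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
        m, n = len(x), len(y)
        return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
                + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
                - 2.0 * kxy.mean())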
We compared our four proposed metrics to the two most commonly used metrics for evaluating GANs, the IS and FID. Because the optimization of a discriminator is required both during training and at test time, we call the discriminator learned for the evaluation of our metrics the critic, in order not to confuse the two discriminators. We also compared these metrics to human perception, and had three volunteers evaluate and compare sets of images, either from the training data set or generated by different GAN frameworks during training. Data sets. In our experiments, we considered the MNIST, CIFAR10, LSUN Bedroom, and FashionMNIST datasets. MNIST consists of 60,000 training and 10,000 test images with a size of 28 × 28 pixels, containing handwritten digits from the classes 0 to 9. From the 60,000 training examples, we set aside 10,000 as validation examples to tune various hyper-parameters. Similarly, FashionMNIST consists of exactly the same number of training and test examples; each example is a 28 × 28 grayscale image, associated with a label from 10 classes. The CIFAR10 dataset 1 consists of images with a size of 32 × 32 × 3 pixels, with ten different classes of objects. We used 45,000, 5,000, and 10,000 examples as training, validation, and test data, respectively. The LSUN Bedroom dataset consists of images with a size of 64 × 64 pixels, depicting various bedrooms. From the 3,033,342 images, we used 90,000 images as training data and 90,000 images as validation data. The learning rate was selected from discrete ranges and chosen based on a held-out validation set. Hyperparameters. TAB0 in the Appendix shows the learning rates and the convolutional kernel sizes that were used for each experiment. The architecture of each network is presented in the Appendix in Figure 10. Additionally, we used exponential-mean-square kernels with several different sigma values for MMD. A pre-trained logistic regression model and a pre-trained residual network were used for IS and FID on the MNIST and CIFAR10 datasets, respectively. For every experiment, we retrained 10 times with different random seeds, and report the mean and standard deviation. The log-likelihood measurement is the most commonly used metric for generative models. However, measuring the log-likelihood of GANs using AIS 2 yields strange results, as shown in Figure 1. We measured the log-likelihood of the DCGAN on MNIST with three different variances, σ² = 0.01, 0.025, and 0.05. The figure illustrates that the log-likelihood curve over the training epochs varies substantially depending on the variance, which indicates that the fixed Gaussian observation model might not be the ideal assumption for GANs. Moreover, we observe a high log-likelihood at the beginning of training, followed by a drop in likelihood, which then returns to the high value. The IS and MMD metrics do not require training a critic. It was easy to find samples for which the IS and MMD scores did not match their visual quality. For example, Figure 2 shows samples generated by a DCGAN when it failed to train properly. Even though the failed DCGAN samples are much darker than the samples on the right, the IS for the left samples is higher/better than for the right samples. As the Imagenet-trained network is likely trained to be somewhat invariant to overall intensity, this issue is to be expected. A failure case for MMD is shown in FIG2. The samples on the right are dark, like the previous examples, but still texturally recognizable, whereas the samples on the left are totally meaningless.
However, MMD gives lower/better distances to the left samples. The average intensity of the pixels of the left samples is closer to that of the training data, suggesting that MMD is overly sensitive to image intensity. Thus, IS is under-sensitive to image intensity, while MMD is oversensitive to it. In Section 4.2.1, we conduct more systematic experiments by measuring the correlation between these metrics and human perceptual scores. To both compare the metrics and evaluate different GAN frameworks, we evaluated the six metrics on the different GAN frameworks. TAB1 and 4 present the results on MNIST, CIFAR10, and LSUN, respectively. As each type of GAN was trained using one of our proposed metrics, we investigated whether each metric favors samples from the model trained using the same metric. Interestingly, we do not see this behavior, and our proposed metrics agree on which GAN framework produces samples closest to the test data set. Every metric, except for MMD, showed that LS-DCGAN performed best for MNIST and CIFAR10, while W-DCGAN performed best for LSUN. As discussed below, we found DCGAN to be unstable to train, and thus excluded GC as a metric for all experiments except this first data set. For Fashion-MNIST, FID's ranking disagreed with IW and LS. We observed similar results for a range of different critic CNN architectures (numbers of feature maps in each convolutional layer; see Supp. FIG0). We evaluated a larger variety of GAN frameworks using pre-trained GANs downloaded from (pyt). In particular, we evaluated EBGAN, BEGAN BID4, W-DCGAN GP BID12, and DRAGAN BID23. TAB4 presents the evaluation results. Critic architectures were selected to match those of these pre-trained GANs. For both MNIST and FashionMNIST, the three metrics are consistent, and they rank DRAGAN the highest, followed by LS-DCGAN and DCGAN. The standard deviations for the IW distance are higher than for the LS divergence. We computed the Wilcoxon rank-sum test in order to test whether the medians of the distributions of distances are the same for DCGAN, LS-DCGAN, and W-DCGAN. We found that the different GAN frameworks have significantly different performance according to the LS-GAN criterion, but not according to the IW criterion (p < .05, Wilcoxon rank-sum test). Thus LS is more sensitive than IW. We also evaluated the consistency of the metrics with respect to the size of the validation set. We trained our three GAN frameworks for 100 epochs on 90,000 training examples from the LSUN Bedroom dataset. We then trained LS and IW critics using both 300 and 90,000 validation examples, and looked at how often the critic trained with 300 examples agreed with the one trained with 90,000 examples. The LS critics agreed 88% of the time, while the IW critics agreed only 55% of the time (slightly better than chance). Thus, LS is more robust to validation data set size. Another advantage is that measuring the LS distance is faster than measuring the IW distance, as estimating IW involves regularizing with a gradient penalty BID12; computing the gradient penalty term and tuning its regularization coefficient requires extra computational time. As mentioned above, we found training a critic using the GC criterion (corresponding to a DCGAN) to be unstable. It has previously been speculated that this is because the supports of the data and model distributions possibly become disjoint, and the Hessian of the GAN objective is non-Hermitian BID29. LS-DCGAN and W-DCGAN were proposed to address this by providing non-saturating gradients.
We also found DCGAN to be difficult to train, and thus only report results using the corresponding criterion GC for MNIST. Note that this is different from training a discriminator as part of standard GAN training, because we train the critic from a random initialization, not from the previous version of the discriminator. Our experience was that the LS-DCGAN was the simplest and most stable model to train. We visualized a 2D subspace of the loss surface of the GANs in Supp. Fig. 29. Here, we took the parameters of three trained models (corresponding to the red vertices in the figure) and applied barycentric interpolation with respect to the three parameter vectors (see details in BID20). DCGAN surfaces have much sharper slopes when compared to LS-DCGAN and W-DCGAN, and LS-DCGAN has the gentlest surfaces. In what follows, we show that this geometric view is consistent with our finding that LS-DCGAN is the easiest and most stable to train. We compared the LS, IW, MMD, and IS metrics to human perception on the CIFAR10 dataset. To accomplish this, we asked five volunteers to choose which of two sets of 100 samples, each generated using a different generator, looked more realistic. Before the survey, the volunteers were trained to choose between real samples from CIFAR10 and samples generated by a GAN. Supp. FIG1 displays the user interface shown to the participants, and Supp. FIG2 shows the fraction of labels on which the volunteers agreed. TAB5 presents the fraction of pairs for which each metric agrees with the humans (higher is better). IW has a slight edge over LS, and both outperform IS and MMD. In Figure 3, we show examples on which all humans agree while a metric disagrees with human perception. All such examples are shown in Supp. Fig. 21.

Figure 3: Pairs of generated image sets for which human perception and the metrics disagree. Here, we selected one such example for each metric for which the difference in that metric's scores was high. For each pair, humans perceived the set of images on the left to be more realistic than those on the right, while the metric predicted the opposite. Below each pair of images, we indicate the metric's score for the left and right image sets.

Several works have demonstrated an improvement in performance by enlarging deep network architectures BID24 BID37 BID13 BID16. Here, we investigate performance changes with respect to the width and depth of the networks. First, we trained three GANs with varying feature map sizes, as shown in TAB8 (a-d). Note that we double the number of feature maps in TAB8 for both the discriminators and generators.

[Figure: (a) samples from (e) in TAB8, MMD = 0.03, IS = 5.11; (b) samples from (f) in TAB8, MMD = 0.49, IS = 6.15.]

In FIG1, the performance in terms of the LS score increases logarithmically as the number of feature maps is doubled. A similar behaviour is observed for the other metrics as well (see S.M. FIG4). We then analyzed the importance of the sizes of the discriminative and generative networks. We considered two extreme feature map settings, where we chose a small number of feature maps for the generator and a large number for the discriminator, and vice versa (see labels (e) and (f) in TAB8); the results are shown in TAB7. For LS-DCGAN, it can be seen that a large number of feature maps for the discriminator gives a better score than a large number of feature maps for the generator. This can also be qualitatively verified by looking at the samples from architectures (a), (e), (f), and (d) in FIG4.
For W-DCGAN, we observe agreement between the LS and IW metrics and conflict with MMD and IS. When we look at the samples from the W-DCGAN in FIG2, it is clear that the model with a larger number of feature maps in the discriminator should get a better score; this is another example of the false intuition propagated by MMD and IS. One interesting observation is that when we compare the scores and samples from architectures (a) and (e) in TAB8, architecture (a) is much better than (e) (see FIG4). This demonstrates that having a large generator and a small discriminator is worse than having a small architecture for both networks. Overall, we found that having a larger generator than discriminator does not give good results, and that it is more desirable to have a larger discriminator than generator. Similar results were also observed for MNIST, as shown in S.M. Figure 20. This somewhat supports the theoretical results from BID2, where the generator capacity needs to be modulated in order for an approximately pure equilibrium to exist for GANs. Lastly, we experimented with how performance changes with respect to the dimension of the noise vectors. The generation of a sample starts by transforming a noise vector into a meaningful image, and it is unclear how the size of the noise affects the generator's ability to produce a meaningful image; Che et al. studied this for DCGAN.

[Figure: samples with "small"/"large" numbers of filters for the discriminator and generator, respectively (refs. (e), (f) and (d) in TAB8).]

Our experiments show that this depends on the model. Given the fixed-size architecture (d) from TAB8, we observed the performance of LS-DCGAN and W-DCGAN while varying the size of the noise vector z. TAB9 shows that LS-DCGAN gives the best score with a noise dimension of 50, and W-DCGAN gives the best score with a noise dimension of 150, for both IW and LS. The outcome for LS-DCGAN is consistent with the results in BID5. It is possible that this occurs because both models fall into the category of f-divergences, whereas the W-DCGAN behaves differently because its metric falls under a different category, the Integral Probability Metrics. In practice, we alternate between updating the discriminator and generator, yet this is not guaranteed to give the same result as the solution to the min-max problem in Equation 2. Hence, the update ratio can influence the performance of GANs. We experimented with three different update ratios, 5:1, 1:1, and 1:5, with respect to the discriminator and generator updates. We applied these ratios to both the MNIST and CIFAR10 datasets for all models. FIG5 presents the LS scores on both MNIST and CIFAR10, and the result is consistent with the IW metric as well (see S.M. FIG2). However, we did not find any one update ratio superior to the others across the two datasets: for CIFAR10, the 1:1 update ratio worked best for all models, while for MNIST, different ratios worked better for different models. Hence, we conclude that the update ratio for each model needs to be dynamically tuned. The corresponding samples from the models trained with different update ratios are shown in the S.M.
In practice, DCGANs are known to be unstable, and the generator tends to suffer as the discriminator gets better, due to the disjoint support between the data and generator distributions BID9. Here, we explore the sensitivity of the three different kinds of GANs with respect to the number of training examples. We trained GANs with 10,000, 20,000, 30,000, 40,000, and 45,000 examples on CIFAR10. FIG6 shows that the LS score curve of DCGAN grows quite slowly compared to W-DCGAN and LS-DCGAN. The three GANs have a relatively similar loss when trained with 10,000 training examples. However, DCGAN only gained 0.0124 ± 0.00127 when going from 10,000 to 40,000 training examples, whereas the performance of W-DCGAN and LS-DCGAN improved by 0.03016 ± 0.00469 and 0.0444 ± 0.0033, respectively. Thus, we empirically observe that W-DCGAN and LS-DCGAN improve faster than DCGAN as the number of training examples grows. In this paper, we proposed to use four well-known distance functions as evaluation metrics, and empirically investigated the DCGAN, W-DCGAN, and LS-DCGAN families under these metrics. Previously, these models were compared based on visual assessment of sample quality and difficulty of training. In our experiments, we showed that there are performance differences between the models on average, but that some of these differences are not statistically significant. Moreover, we thoroughly analyzed the performance of GANs under different hyper-parameter settings. There are still several types of GANs that need to be evaluated, such as GRAN BID18, IW-DCGAN BID12, BEGAN BID4, MMDGAN, and CramerGAN. We hope to evaluate all of these models under this framework and thoroughly analyze them in the future. Moreover, there has been an investigation into ensemble approaches to GANs, such as Generative Adversarial Parallelization BID19. Ensemble approaches have been empirically shown to work well in many domains of research, so it would be interesting to find out whether ensembles can also help in min-max problems. Alternatively, we could also evaluate other log-likelihood-based models such as NVIL BID30, VAE BID22, DVAE BID17, DRAW BID10, RBMs BID15 BID35, etc. Model evaluation is an important and complex topic: model selection, model design, and even research directions can change depending on the evaluation metric. Thus, we need to continuously explore different metrics and rigorously evaluate new models. In this paper, we considered four distance metrics that belong to two classes of metrics: φ-divergences and IPMs. BID38 have shown that the optimal risk function is associated with a binary classifier between the P and Q distributions (conditioned on the class) when the discriminant function is restricted to certain F (Theorem 17 from BID38). Let the optimal risk function be

$R_L(\mathcal{F}) = \inf_{f \in \mathcal{F}} \; \mathbb{E}_{(x, y)}\big[ L(y, f(x)) \big]$

where F is the set of discriminant functions (classifiers), y ∈ {−1, 1}, and L is the loss function. By the following derivation, we can see that the optimal risk function becomes (the negative of) an IPM:

$R_L(\mathcal{F}) = -\sup_{f \in \mathcal{F}} \; \Big( \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)] \Big)$

The second equality in the derivation is obtained by separating the loss for class 1 and class 0. The third equality follows from the way we chose L(1, f(x)) and L(0, f(x)). The last equality follows from the fact that F is symmetric around zero (f ∈ F ⇒ −f ∈ F). Hence, this shows that by appropriately choosing L, the MMD and Wasserstein distances can be understood as the optimal L-risk associated with a binary classifier over a specific set of functions F.
For example, the Wasserstein and MMD distances are equivalent to the optimal risk function with 1-Lipschitz classifiers and RKHS classifiers of unit norm, respectively. We trained two critics, one on training data and one on validation data, and evaluated on test data with both critics. We trained six GANs (GAN, LS-DCGAN, W-DCGAN GP, DRAGAN, BEGAN, EBGAN) on MNIST and FashionMNIST, using 50,000 training examples. At test time, we used 10,000 training and 10,000 validation examples for training the critics, and evaluated on 10,000 test examples. Here, we present the test scores from the critics trained on training and on validation data. The results are shown in Table ??. Note that we also have the IW and FID evaluations of these models in the paper. For FashionMNIST, we find that the test scores with critics trained on training and validation data are very close; hence, we do not see any indication of overfitting. On the other hand, for the MNIST dataset there are gaps between the test scores and the scores from critics trained on the validation set, which give better performance than the ones trained on the training set.

FIG1: The participants are trained by selecting between random samples generated by GANs and samples from the data distribution. They get a positive reward if they select the data samples and a negative reward if they select the samples from the model. After enough training, they choose the better group of samples among two randomly selected sets of samples.

[Figure: (a) samples from (e) in TAB8, MMD = 0.03, IS = 5.11; (b) samples from (f) in TAB8, MMD = 0.49, IS = 6.15; panels with "small"/"large" numbers of filters for the discriminator and generator, respectively (refs. (e), (f) and (d) in TAB8).]

Figure 31: The training curves of the critics, showing that they converge. The IW distance curves in (a) increase because we used a linear output unit for the critic network (a design choice). This can simply be bounded by adding a sigmoid at the output of the critic network.
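For reference, the critic training whose convergence these curves show follows Algorithm 1; a minimal PyTorch sketch is given below. The helpers sample and J, and all names, are ours; J stands for any of the criteria above.

    import torch

    def sample(data, m):
        return data[torch.randint(len(data), (m,))]

    def evaluate_metric(G, critic, J, X_tr, X_te, steps=2000, lr=1e-4, m=256):
        # Train the critic to maximize J on training data, then report J
        # on held-out test data, as in Algorithm 1.
        opt = torch.optim.Adam(critic.parameters(), lr=lr)
        for _ in range(steps):
            x, s = sample(X_tr, m), G.sample(m)   # real and generated batches
            loss = -J(critic, x, s)               # ascend on J
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            return J(critic, sample(X_te, m), G.sample(m)).item()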
An empirical evaluation of generative adversarial networks
1,120
scitldr
Knowledge bases (KB) are often represented as a collection of facts in the form (HEAD, PREDICATE, TAIL), where HEAD and TAIL are entities while PREDICATE is a binary relationship that links the two. It is a well-known fact that knowledge bases are far from complete, and hence the plethora of research on KB completion methods, specifically on link prediction. However, though frequently ignored, these repositories also contain numerical facts. Numerical facts link entities to numerical values via numerical predicates; e.g., (PARIS, LATITUDE, 48.8). Likewise, numerical facts also suffer from the incompleteness problem. To address this issue, we introduce the numerical attribute prediction problem. This problem involves a new type of query where the relationship is a numerical predicate. Consequently, and contrary to link prediction, the answer to this query is a numerical value. We argue that the numerical values associated with entities explain, to some extent, the relational structure of the knowledge base. Therefore, we leverage knowledge base embedding methods to learn representations that are useful predictors for the numerical attributes. An extensive set of experiments on benchmark versions of FREEBASE and YAGO shows that our approaches largely outperform sensible baselines. We make the datasets available under a permissive BSD-3 license. Knowledge Bases (KBs) are playing an increasingly important role in a number of AI applications. KBs can be seen as a collection of facts or triples of the form (head, predicate, tail), denoted as (h, p, t), where head and tail correspond to entities and predicate corresponds to a relationship that holds between these two entities. This structured information is easily accessible by AI systems to enhance their performance. A variety of AI applications, such as recommender systems, natural language chatbots or question answering models, have benefited from the rich structural information archived in these repositories. This is because much of human knowledge can be expressed with one or more conjunctions of knowledge facts. However, KBs' capabilities are limited due to their incompleteness 1. Consequently, there has been a flurry of research on knowledge base completion methods in recent years. Relationship extraction BID27 (i.e., classification of semantic relationship mentions), knowledge graph matching BID32 BID12 (i.e., alignment and integration of entities and predicates across KBs), or search-based question-answering BID36 (i.e., queries issued to a web search engine) are a few different ways to address the incompleteness problem. However, the literature on so-called link prediction methods BID22 has received more attention in the last few years in comparison to the aforementioned approaches. Contrary to other solutions, link prediction methods aim to find missing links between entities exclusively based on the existing information contained in the KB. This is achieved by ranking entities that are answer candidates for the query. The queries these methods typically address are of the form (USA, /location/contains, ?) or (Madrid, /location/capitalOf, ?), where the missing element, represented by a question mark, is an entity contained in the KB. Many link prediction methods only harness feature types learned from the rich relational information contained in the KB to infer new links, and only very recently have numerical attributes been integrated along with other feature types to improve link prediction performance.
Similarly, numerical information is also represented as facts, such as (Berlin, /location/latitude, 52.31) or (Albert Einstein, /person/birth year, 1879). However, as shown in BID5, the application of numerical attributes is limited because of the same incompleteness problem: many entities are missing numerical attribute values they are expected to possess. For example, entities that represent locations should have numerical information regarding latitude, longitude or area, among others, whereas for entities representing people, numerical predicates such as birth year, weight or height would be more appropriate. In this work we focus on the problem of completing queries where the relationship is a numerical predicate. Consequently, the answer to this new type of query is a numerical value. This is contrary to the link prediction problem, wherein the answer to a query is always an element of a closed vocabulary. Examples of queries addressed in this paper are (Apple Inc., revenue, ?) or (California, average salary, ?). While one can interpret link prediction as a classification/ranking problem, this is rather a regression problem. The main contributions of this paper are:

• We introduce the problem of predicting the value of entities' numerical attributes in KBs. For the sake of simplicity we term this the 'numerical attribute prediction problem'. To our knowledge, this is the first time this problem is addressed in the literature.
• We create benchmark datasets for this problem. We use well-known subsets of Freebase and Yago as the blueprints for creating these benchmarks. We also create versions of these datasets with different percentages of sparsity by artificially removing facts that involve numerical predicates. All these benchmark datasets will be made publicly available.
• We propose two meaningful baselines for this problem. These baselines are inspired by previous work in the node classification and imputation literature.
• We propose supervised and semi-supervised approaches to this problem. The semi-supervised approaches significantly outperform the baselines on all datasets and conditions.

The paper is organized as follows: we discuss related work in Section 2. Afterwards, we formalize the problem of predicting values for entities' numerical attributes in KBs in Section 3. We describe our approaches to this problem, as well as the two baselines. Section 4 reports the experimental setting, followed by an extensive set of experiments on different datasets with different degrees of sparsity in Section 5. Finally, we summarize the results of our study in Section 6. There is an extensive body of work on link prediction BID2 BID37 BID22 BID34. Logical approaches operate on a set of logical rules that are usually handcrafted and/or mined. These logical formulas are evaluated between entity pairs to generate feature representations which are then used by a downstream machine learning model. On the other hand, KB embedding methods BID22 learn feature representations (embeddings) for all elements in a KG by optimizing a designed scoring function. Given a fact, these scoring functions output a score that relates to the likelihood of that fact being true. A popular and successful instance of a KB embedding method is TransE BID2, where predicates are modeled as translations in the entity embedding space. Much less work has been done on entity-type classification BID19.
This problem is inherently related to link prediction, since it amounts to completing queries of the form (head, typeOf, ?), where the question mark corresponds to a certain entity type (e.g., location, artist, ...). Therefore, link prediction and entity-type classification share certain similarities with the numerical attribute prediction problem. Most importantly, they all make use of relational information for KB completion, one way or another. However, there is a crucial difference between link prediction and numerical attribute prediction. In the former, a query can be completed with one or several elements contained in a relatively small vocabulary, whereas in the latter the answer may (potentially) take an infinite number of real values. There is another line of research related to our work, namely value imputation BID28. In statistics, imputation is the process of replacing missing data with substituted values. In the simplest case, one can replace the missing values of a variable by the mean of all existing values of that variable. This technique is called mean imputation. It preserves the mean of the variable, but alters the underlying variable distribution to be more peaked at the mean BID0. However, it is the most commonly practiced approach for value imputation BID29, and it has been shown to be competitive for a number of downstream tasks BID1. Another popular approach is called regression imputation, where the missing values of a variable are estimated by a regression model from the observed values of other variables. There is some work on using text for predicting numerical attributes of entities, such as BID3. BID9 uses Word2Vec embeddings of named entities as inputs to a number of regression models. Similar to us, they aim to predict numerical attributes of knowledge base entities. In contrast to us, they leverage text information to do so. This difference is important, because we do not assume the existence of information other than the graph structure. Our problem is general enough to address knowledge bases where entity names are unknown or anonymized (e.g., medical knowledge bases). To our knowledge there is no existing work in the value imputation literature that attempts to fill in missing values in KBs while taking advantage of the structural information provided by the KB. A knowledge base, KB, is denoted as G = (E, P), where E is a set of entities and P is a set of relation types or predicates. This standard definition can be found in many papers in the link prediction literature. A KB is a collection of facts (or standard facts) (h, p, t) where p ∈ P and h, t ∈ E. We now define a knowledge base enriched with numerical attributes as $G_{NA} = (G, A, N)$. Entities in G are associated with numerical values N via numerical predicates A. This information can be expressed as a collection of numerical facts (h, a, t), where a ∈ A, h ∈ E and t ∈ N. In this paper we use the terms 'numerical predicate' and 'numerical attribute' interchangeably. The numerical attribute prediction problem seeks the most probable completion of a fact (h, a, ?), where h ∈ E, a ∈ A and ? ∈ N. We refer to the set of entities for which the value of the numerical attribute a is known as $E^a \subseteq E$. Let e be an entity with numerical attribute a; we denote the known numerical value of attribute a for e as $n^a_e$. The goal is to learn a function $f: E \to \mathbb{R}$, where $\mathbb{R}$ denotes the set of reals.
One can omit the relational information given by the graph G and apply a value imputation method to fill in missing values. However, it is intuitive to assume the existence of an underlying generative model that (partially) determines the relational structure of the KB based on the values of the entities' numerical attributes. For instance, two entities are likely linked via the relationship /location/contains if they have similar latitude and longitude, and two highly connected entities that correspond to people are likely to have similar birth years. If this assumption is true, then a model that exploits the graph structure information is likely to outperform simple value imputation methods. Nevertheless, while this may be true for a number of numerical attributes, for others the graph structure may introduce noise or, in the best case, be irrelevant. Inspired by previous work in the value imputation and node classification literature, we propose the following baselines. A simple and natural baseline is to use the sample mean of the attribute-specific training data as a predictor for missing values. This is known as mean imputation BID29. At test time, given an entity e for which we aim to predict the value of numerical predicate a, denoted $\hat{n}^a_e$, this baseline simply assigns the sample mean over all known entities possessing the same numerical attribute ($E^a$). This is formally described below:

$\hat{n}^a_e = f\big(\{\, n^a_{e'} : e' \in E^a \,\}\big) \qquad (1)$

where f is the sample mean. We term this model Global because it harnesses global information from the entire attribute-specific training set. In this work we use the root mean square error (RMSE) and the mean absolute error (MAE) as evaluation metrics. While the sample mean is the best estimator for the former, the sample median is optimal for the latter BID20. Consequently, in the experimental section we use median imputation when reporting MAE and mean imputation when reporting RMSE. Median imputation is obtained by simply replacing the sample mean by the median in Eq. (1). Our second baseline takes into account that entities are interconnected through a relational graph structure. It is thus natural to define a baseline that exploits the neighborhood, or local graph structure. The weighted-vote relational neighbor BID14 is a relational classifier often used as a benchmark in the node classification literature. It estimates the class label of a node as a weighted average of its neighbors' class labels. Despite its simplicity, it has been shown to be competitive BID24 and is advocated as a sensible relational classification baseline BID15. Inspired by this work, we propose an adaptation to our setting and problem. For a numerical attribute a, this baseline estimates a value for the entity e as the average of its neighbors' values for that numerical attribute. Here, the neighborhood of a node e, denoted $N_e$, is defined as the set of nodes that are connected to e through any relation type. The baseline is formalized as follows:

$\hat{n}^a_e = f\big(\{\, n^a_{e'} : e' \in E^a \cap N_e \,\}\big)$

where, as before, f is either the sample mean or the sample median, depending on the evaluation metric reported. We term this model Local because it uses the local neighborhood information for prediction. In the case where $E^a \cap N_e = \emptyset$, we fall back to the so-called Global baseline to make a prediction. We leverage KB embedding methods to learn feature representations (embeddings) of entities that (ideally) are predictive of the different numerical attributes.
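Before turning to the embedding-based models, here is a minimal sketch of the two baselines just described; the function names and the neighbors mapping are ours.

    import numpy as np

    def global_baseline(train_values, metric="rmse"):
        # Attribute-wise mean imputation for RMSE, median imputation for MAE.
        return float(np.mean(train_values) if metric == "rmse"
                     else np.median(train_values))

    def local_baseline(e, neighbors, known_values, train_values, metric="rmse"):
        # Aggregate the attribute values of e's labeled neighbors
        # (E^a intersected with N_e); fall back to the Global baseline
        # when no neighbor is labeled.
        vals = [known_values[n] for n in neighbors[e] if n in known_values]
        if not vals:
            return global_baseline(train_values, metric)
        return float(np.mean(vals) if metric == "rmse" else np.median(vals))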
As we argued before, this is only true if the entities' numerical attributes determine, to a certain extent, the existence of a certain relation type between two entities. We first learn knowledge base embeddings, and in a second step we use these embeddings, along with the numerical facts, to train downstream machine learning models for predicting numerical attributes. This pipeline is reminiscent of recent work BID25 in the node classification literature. While there is an extensive literature on KG embedding methods BID21 BID2 BID22, recent work BID10 shows that well-tuned "simple" scoring functions BID37 are very hard to beat. Likewise, BID6 shows that TransE BID2 performs similarly to or even better than many of its variants, such as TransH BID35 or TransR BID13. Due to its simplicity and good performance on related problems, we choose TransE to illustrate the generic principles behind our models. Note, however, that the methodology described is agnostic to the chosen KG embedding method. The probability of a fact d is given by

$p(d \mid \theta) = \frac{\exp(-g(d \mid \theta))}{\sum_{c} \exp(-g(c \mid \theta))}$

where c indexes all possible triples and θ denotes all learnable parameters of TransE, whose scoring function g is $g(d \mid \theta) = \|\mathbf{h} + \mathbf{p} - \mathbf{t}\|_2$. We use bold letters $\mathbf{h}, \mathbf{p}, \mathbf{t} \in \mathbb{R}^d$ to denote the corresponding d-dimensional feature representations of h, p, t, respectively. Note that this formulation is impractical, because the cost of computing the normalization over all possible triples is unfeasible. Instead, for each triple d = (h, p, t) ∈ G we generate a set of N triples (h, p, t′) by sampling N entities t′ uniformly at random from the set of all entities. This process, which is termed negative sampling, is repeated for the head of the triple. For a given set of facts D that are part of the KB G, the logarithmic loss is defined as

$\mathcal{L}_G = -\sum_{d \in D} \log p(d \mid \theta)$

where, in practice, the normalization term is computed over the positive triple and its sampled corruptions. All parameters θ are learned by minimizing $\mathcal{L}_G$ with stochastic gradient descent. Once the representation learning phase is finished, we evaluate two different approaches that utilize these embeddings for addressing the numerical attribute prediction problem. In the simplest case, for each numerical attribute we use the learned feature representations as input to a regression model that predicts the corresponding numerical attribute. For numerical attribute a the loss function is given by

$\mathcal{L}_a = \sum_{e \in E^a} \big( f_{\vartheta_a}(e) - n^a_e \big)^2 + \lambda_a \|\vartheta_a\|_2^2$

where $f_{\vartheta_a}$ refers to the regression function for numerical attribute a, $\vartheta_a$ refers to the learnable parameters of $f_{\vartheta_a}$, and $\lambda_a$ is the regularization hyper-parameter. In this work we use a linear regression model: $f_{\vartheta_a}(e) = \mathbf{e}^{T} w_a + b_a$, where $w_a \in \mathbb{R}^d$ is the weight vector and $b_a$ is the corresponding bias term. At test time, given a query involving a certain numerical attribute a and a certain entity e, the prediction is computed by applying the corresponding linear regression model: $\hat{n}^a_e = f_{\vartheta_a}(e)$. We refer to this approach as Lr. Previously we defined $E^a$ as the set of entities with known numerical attribute a. Similarly, we define $Q^a$ as the set of entities with missing values for numerical attribute a. We consider numerical attribute values as labels, and, consequently, we can think of $E^a$ and $Q^a$ as the sets of labeled and unlabeled nodes, respectively. Therefore, semi-supervised learning is a natural choice, because it also uses unlabeled data to infer the values of numerical attributes. Label propagation (LP) BID13 BID4 has been proven to be an effective semi-supervised learning algorithm for classification problems.
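Before detailing label propagation, a short sketch of the TransE components just introduced (PyTorch; the batching conventions and function names are ours):

    import torch
    import torch.nn.functional as F

    def transe_score(h, p, t):
        # g(d | theta) = ||h + p - t||_2 ; lower means more plausible
        return torch.norm(h + p - t, p=2, dim=-1)

    def nll_with_negatives(pos_score, neg_scores):
        # Log loss over each positive triple and its N sampled corruptions:
        # a softmax over {positive} + negatives, approximating p(d | theta).
        logits = -torch.cat([pos_score.unsqueeze(-1), neg_scores], dim=-1)
        targets = torch.zeros(logits.size(0), dtype=torch.long)  # index 0 = positive
        return F.cross_entropy(logits, targets)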
Previously we defined $E^a$ as the set of entities with known numerical attribute a. Similarly, we define $Q^a$ as the set of entities with missing values for numerical attribute a. We consider numerical attribute values as labels and, consequently, we can think of $E^a$ and $Q^a$ as the sets of labeled and unlabeled nodes, respectively. Therefore, semi-supervised learning is a natural choice because it also uses unlabeled data to infer values of numerical attributes. Label propagation (LP) BID13, Fujiwara and BID4 has been proven to be an effective semi-supervised learning algorithm for classification problems. The key assumption of label propagation, and in general of most semi-supervised learning algorithms, is similar to ours: data points close to each other are likely to have the same label (numerical attribute values, in our case). We aim to propagate numerical attribute information across the graph using LP. For numerical attribute a, we use the learned representations $\{\mathbf{e}\}_{e \in E^a \cup Q^a}$ to induce a k-nearest neighbor (kNN) graph using Euclidean distance. This graph is characterized by an adjacency matrix $A \in \mathbb{R}^{N \times N}$, where $N = |E^a| + |Q^a|$. The edge weights of the adjacency matrix represent similarities between the connected entities, computed according to a similarity metric ρ; in this work we use a radial basis function kernel, $\rho(x, y) = \exp(-\lVert x - y \rVert^2 / (2\sigma^2))$. We then compute the transition matrix T by row-wise normalizing the matrix A. Without loss of generality, we arrange labeled and unlabeled data so that T can be decomposed as $T = \begin{pmatrix} T_{LL} & T_{LU} \\ T_{UL} & T_{UU} \end{pmatrix}$. The transition matrix T (illustrated in FIG2) can be used iteratively to propagate numerical information across the graph until a stopping criterion is reached. Alternatively, the problem can be solved in closed form: $\hat{n}^a_{Q^a} = (I - T_{UU})^{-1} T_{UL}\, n^a_{E^a}$, where $n^a_{E^a} \in \mathbb{R}^{|E^a|}$ is a vector containing all values of numerical attribute a for labeled nodes, and $\hat{n}^a_{Q^a} \in \mathbb{R}^{|Q^a|}$ is a vector containing all predicted values of numerical attribute a for unlabeled nodes. We denote the matrix $(I - T_{UU})^{-1} T_{UL}$ by $M^a$ and return to it in Section 5.1. We term this approach Numerical Attribute Propagation (Nap). Related work BID18 uses label propagation to perform link prediction in web ontologies by casting it as a binary classification problem, where the similarity graph is built based on homophilic relationships.
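The closed-form propagation above can be implemented in a few lines. The sketch below assumes the labeled embeddings come first in the stacked matrix, and uses scikit-learn only to build the kNN graph; variable names are illustrative.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def nap_closed_form(emb_labeled, emb_unlabeled, y_labeled, k=5, sigma=1.0):
    """Closed-form label propagation of numerical attribute values over a
    kNN graph built from entity embeddings (a sketch of the Nap predictor)."""
    X = np.vstack([emb_labeled, emb_unlabeled])
    n_l = len(emb_labeled)
    # symmetric kNN connectivity with Euclidean distances as edge lengths
    A = kneighbors_graph(X, k, mode="distance").toarray()
    A = np.maximum(A, A.T)
    # RBF similarity weights on the existing edges
    W = np.where(A > 0, np.exp(-A**2 / (2 * sigma**2)), 0.0)
    T = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # row-stochastic transitions
    T_uu = T[n_l:, n_l:]
    T_ul = T[n_l:, :n_l]
    # M^a = (I - T_UU)^{-1} T_UL, assuming (I - T_UU) is invertible
    M = np.linalg.solve(np.eye(len(emb_unlabeled)) - T_uu, T_ul)
    return M @ np.asarray(y_labeled)                # predictions for unlabeled
```

Note that the predictions are convex-like combinations of observed values (rows of $M^a$ are non-negative), which is what later bounds Nap's errors relative to unbounded regression outputs.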
In the two aforementioned solutions, we fully rely on the feature representations learned by, in this case, TransE being meaningful with respect to the numerical attributes we aim to predict. This relates to our initial assumption that the relational structure of a KB can be explained, to some extent, by the numerical attributes of the entities. However, there might be cases where the values taken by entities for a certain numerical attribute do not fully relate to the relational structure of the KB. Motivated by this consideration, we set out to answer the question: can these models benefit from learning feature representations that incorporate, besides the graph structure, numerical attribute information? To answer this question we incorporate the regression objectives into the learning objective of TransE: $\mathcal{L}(\theta, \vartheta) = \mathcal{L}_G(\theta) + \alpha \sum_{a \in A} \mathcal{L}_a(\vartheta_a)$, where α weights the importance of the linear regression objectives. All parameters are learned using stochastic gradient descent. We term these embeddings TransE++, which, contrary to TransE, are also learned from numerical facts. While Nap and Lr use TransE feature representations, their counterparts Nap++ and Lr++ leverage TransE++ embeddings. Different numerical attributes exhibit different scales of attribute values. For example, 'person.height' ranges from 1 to 2 meters while 'location.area' scales from several hundred to even millions of kilometers. Bigger scales lead to larger prediction errors during training, which in turn affect the back-propagation gradients used for learning the TransE++ embeddings. To alleviate this problem, we normalize each numerical attribute to zero mean and unit variance. We also experimented with min-max scaling; however, it gave worse performance compared to standard scaling. Scaling numerical attribute values remains an interesting challenge. For Lr++ we do not directly use the regression models learned while optimizing TransE++'s learning objective. Instead, we use the learned TransE++ embeddings to train a new regression model for each numerical attribute a ∈ A. This is because of the computational difficulty of tuning the hyperparameter $\lambda_a$ for each numerical attribute while learning TransE++, which we found important for obtaining good performance. Note that the hyperparameter space grows exponentially with the number of attributes |A|. For this reason, we set $\lambda_a = \lambda = 0$ in the regression objectives when learning TransE++ embeddings (first step). For the final regression models (second step) we do tune each $\lambda_a$ independently, which, though suboptimal, facilitates their tuning. The proposed methods are evaluated by their ability to answer completion queries of the form (h, a, ?), where h ∈ E, a ∈ A, and the answer ? is a numerical value. We evaluate the baselines and our models on two benchmark datasets: FB15K-237 BID33 and YAGO15K. While for the former numerical attributes were introduced in BID5, for the latter we obtained this information from dumps found online on YAGO's website 3. The FB15K-237 dataset contains a total of 29,395 numerical facts divided into 116 different numerical predicates. We evaluate our models on the top 10 numerical attributes ranked by the number of data samples. This reduces the dataset to 22,929 samples. We split these numerical facts into training, validation and test sets in the proportion 80/10/10%, respectively. All other facts from FB15K-237 whose predicate belongs to P are used as training data, which amounts to 310,116 facts. Thus we only evaluate our approaches on their ability to answer queries whose answer is a numerical value. The YAGO15K dataset contains 23,520 numerical facts divided into 7 different attributes. We split these numerical facts into training, validation and test sets in the same proportions, and use all other 122,886 facts from this dataset for learning knowledge base embeddings. A summary of the datasets can be found in Table 1. All splits of both datasets used in this work will be made publicly available to facilitate future comparisons. We compare performance across methods using two evaluation metrics, $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |\hat{y}_i - y_i|$ and $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (\hat{y}_i - y_i)^2}$. These are standard metrics for regression problems and were also used in a related study of predicting numerics for language models BID31. For TransE and TransE++ we fix the embedding dimension d to 100. After some preliminary experiments, the weight α of TransE++ was fixed to 1. We used Adam BID11 to learn the parameters in a mini-batch setting with a learning rate of 0.001. We fixed the number of epochs to 100 and the mini-batch size to 256. The parameter N of the negative sampling was set to 50. Within a batch, the number of data points for each of TransE++'s regression objectives is proportional to the frequency of each numerical predicate in the training set. In all cases, the parameters were initialized following BID8. We used the Scikit-learn BID23 implementation of ridge regression for the approaches Lr and Lr++. The regularization term $\lambda_a$ is tuned over the values [0, 0.1, 1, 10, 100]. For Nap and Nap++, the number of neighbors (k) of the kNN graph is validated over a small grid of values, and the σ of the RBF kernel is validated over [0.25, 0.5, 1, 10]. All of the above is validated for each numerical predicate and evaluation metric.
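A minimal sketch of the second-step Lr/Lr++ fitting on frozen embeddings, using the scikit-learn ridge regression and the λ grid stated above; the function name and the train/validation-array interface are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

def fit_lr_for_attribute(train_emb, train_y, val_emb, val_y,
                         lambdas=(0.0, 0.1, 1.0, 10.0, 100.0)):
    """Per-attribute ridge regression on frozen KB embeddings, selecting
    the regularization strength on the validation split."""
    best_err, best_model = np.inf, None
    for lam in lambdas:
        model = Ridge(alpha=lam).fit(train_emb, train_y)
        err = mean_absolute_error(val_y, model.predict(val_emb))
        if err < best_err:
            best_err, best_model = err, model
    return best_model
```

Running this once per numerical attribute (and per evaluation metric) mirrors the independent per-attribute tuning described above.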
The objective of this section is twofold: first, we investigate the performance of our approaches (Lr and Nap, and their variants) with respect to the baselines; second, we experimentally check how robust these methods are to different degrees of sparsity in the training data. Tables 2 and 3 detail the performance of the baselines and our approaches on FB15K-237. For each numerical attribute we indicate the best performing method in bold font, which happens to be either Nap++ or Nap most of the time. Interestingly, from Table 2 we observe that for the numerical attributes 'location.area' and 'population.number' Global largely outperforms Local. This seems to indicate that the relational structure of this dataset does not relate to these two numerical predicates. Overall, predictions for all other numerical attributes tend to benefit from the local information given by the entities' neighborhoods. Comparing Tables 2 and 3, we note that Local is very competitive with regard to the numerical attributes 'latitude' and 'longitude'. (Table 3: Performance of Lr- and Nap-based models on FB15K-237.) This can be explained by the presence of predicates such as 'location.adjoins' or 'location.contains' in the relational structure of the graph. Similarly, entities' neighborhoods are useful for predicting 'date of birth' or 'date of death' because (some of the) surrounding entities correspond to people who have similar birth or death dates. Interestingly, all our approaches beat both baselines on the numerical attribute 'person.height mt', for which a priori one would not expect performance gains from learning from the graph structure. Overall, Lr++ and Nap++ outperform their counterparts Lr and Nap, respectively, for most numerical predicates. As we argued in Section 3.2.3, it is not feasible to validate the regularization term $\lambda_a$ for every numerical attribute while learning TransE++. We speculate that setting $\lambda_a = 0$ while training TransE++ may explain why Lr++ and Nap++ do not always beat their counterparts. (Table 4: Performance of Local and Nap++ on FB15K-237 for different degrees of sparsity, $P_r$, on the numerical facts. Results are reported in terms of Mean Absolute Error (MAE).) Another observation from Table 3 is that, in general, Nap-based models perform much better than Lr-based models. One can find a number of explanations for this. The obvious explanation is that the numerical attribute propagation approaches learn from labeled and unlabeled data, whereas the regression models only learn from labeled data. A second explanation is that whereas Nap's predictions are computed as a weighted average of observed numerical values, Lr's predictions are not bounded. This prevents Nap-based approaches from making large mistakes. By contrast, we observed non-plausible values (e.g. > 2020) predicted by the Lr-based models for the numerical attribute 'date of birth'. We also experimented with non-linear regression models, but did not observe any performance improvement. Knowledge graphs are known to suffer from data sparsity due to missing facts. The same incompleteness also holds for numerical facts. Therefore it is crucial to study model performance under a sparse data regime. We generate data sparsity by artificially removing numerical facts from the training set while keeping the validation and test sets unchanged, as sketched below. We keep the underlying knowledge graph G unchanged because we aim to isolate the effect of numerical fact sparsity.
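The sparsity protocol just described amounts to subsampling the numerical training facts while leaving everything else fixed. A minimal sketch, assuming `train_facts` is a list of (entity, attribute, value) triples (a hypothetical representation):

```python
import random

def subsample_numerical_facts(train_facts, p_r, seed=0):
    """Retain a fraction p_r of the numerical training facts while leaving
    the relational graph and the validation/test splits untouched."""
    rng = random.Random(seed)
    return [fact for fact in train_facts if rng.random() < p_r]
```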
In other words, only numerical facts are removed from the training set; the relational facts remain. We retained a percentage $P_r$ of the training numerical facts and ran Local and Nap++ with the same experimental set-up, experimenting with several values of $P_r$. We detail the results of these experiments in Table 4. Note that the performance of Local degrades more rapidly than that of Nap++ as the sparsity increases. Even in high regimes of sparsity, Nap++'s performance is remarkably robust. Table 5 lists results for Global and Local on YAGO15K. As for FB15K-237, Local outperforms Global for most of the numerical attributes. This reinforces our assumption that the numerical attributes explain, to some extent, the relational structure between entities. Table 6 depicts the performance of Local, Nap and Nap++ under different degrees of sparsity on YAGO15K. (Table 6: Performance of Local, Nap and Nap++ on YAGO15K for different degrees of sparsity, $P_r$, on the numerical facts. Results are reported in terms of Mean Absolute Error.) In light of these numbers, we can conclude that the Nap-based models are more robust than Local to data sparsity. Nap++ achieves the best performance for most of the numerical attributes and degrees of sparsity. It performs remarkably well for the numerical attribute 'happenedOnDate' in comparison to Nap: across all values of $P_r$, on average, Nap++ improves on Nap's performance by 20 points (in mean absolute value) for 'happenedOnDate'. We recognize that reporting model performance in absolute values complicates comparison, since numerical attributes lie in different ranges of values. To get a better picture of the performance gains, we report the percentage error reduction between Nap++ and the best performing baseline. For numerical attribute a, the percentage error reduction in MAE is computed as $100 \times (\mathrm{MAE}_{\text{baseline}} - \mathrm{MAE}_{\text{Nap++}}) / \mathrm{MAE}_{\text{baseline}}$. The percentage error reduction in terms of RMSE is computed in a similar manner. This is shown in Table 7 for $P_r = 100$. We do not include 'location.area' and 'population.number', as previous experiments indicate that they do not relate to the graph structure of FB15K-237. Overall, Nap++ significantly outperforms the baselines for almost all numerical attributes in both the FB15K-237 and YAGO15K datasets. These results demonstrate that the embeddings learned from the graph structure are useful predictors of entity numerical attributes. (Table 8: Qualitative comparison between Nap and Nap++. The first three rows correspond to queries where the numerical attribute is 'date of birth', whereas for the last two queries it is 'date of death'. The actual value of labeled entities for the corresponding numerical attribute is shown in parentheses.) This last experimental section aims to provide some insight into the benefit of adding numerical information during the representation learning stage. Table 3 shows a noteworthy behavior of these methods with respect to the numerical attributes 'date of birth' and 'date of death': while the performance of both approaches is comparable in terms of MAE, their RMSE values differ largely. The mean absolute error is known to be more robust to outliers than the root mean squared error. We set out to inspect these outliers to shed light on the usefulness of incorporating numerical information in the embeddings. Nap-based models leverage these embeddings to build a similarity graph on which numerical information is propagated via the closed-form solution above. The resulting predictions are thus the result of multiplying the matrix $M^a$ by the observed numerical values.
This matrix 5 determines which observed entities' numerical values to pay attention to. These attention values differ between Nap and Nap++, as the similarity graph is constructed from different embeddings. We qualitatively compare Nap and Nap++ based on a number of predictions computed on the test set. For each method we compare the two labeled entities it pays the most attention to. For the sake of simplicity we refer to these two entities as nearest neighbors. This is shown in Table 8. An interesting observation is that for Nap the two nearest neighbors are always entities topically similar to the entity in the query. On the other hand, the nearest entities retrieved by Nap++ are more meaningful with respect to the queried numerical attribute. This is seen in the first query: (Alexander the Great 6, date of birth, ?). While Nap pays the most attention to topically similar entities, Nap++ puts high attention on Julius Caesar, 7 which is more meaningful with regard to the date of birth. Nap++ uses Euclidean distance between vectors to build the k-nearest neighbor graph. Table 8 suggests that subsets of the entities' latent factors could be encoding different relational and numerical information. For instance, a few dimensions of the entity embeddings may encode location information, while others encode population information, and so on. To exploit this, we learned Mahalanobis metrics for capturing different entity similarities. We did this while learning knowledge base embeddings by using an additional nearest-neighbor loss. It slightly improved the performance for a few attributes, but overall it did not make a significant difference. We suggest that future work address this research direction in greater depth. We introduce a novel problem, namely numerical attribute prediction in knowledge bases. Contrary to link prediction, the answer to this new query type is a numerical value, and not an element from a (small) closed vocabulary. Our premise for this problem is that the relational structure of a KB can be partially explained by the numerical attribute values associated with entities. This allows for leveraging KB embedding methods to learn representations that are useful predictors of numerical attributes. An extensive set of experiments validates our premise. Furthermore, we also show that KB representations enriched with numerical attribute information are helpful for addressing this task. Finally, we believe that this new problem will spur interest and deeper investigation from the research community.
5. Note that it is non-negative and row-normalized. 6. For all practical purposes he is deemed a philosopher in FB15K-237. 7. Julius Caesar belongs to the profession Politician in FB15K-237.
Prediction of numerical attribute values associated with entities in knowledge bases.
1,121
scitldr
We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek. Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer. Using this word retrieval task, we also analyze BERT and find that it exhibits systematic deficiencies, e.g. worse alignment for open-class parts-of-speech and for word pairs written in different scripts, that are corrected by the alignment procedure. These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models. Figure 1: t-SNE visualization of the embedding space of multilingual BERT for English-German word pairs (left: pre-alignment, right: post-alignment). Each point is a different instance of the word in the Europarl corpus. This figure suggests that BERT begins already somewhat aligned out-of-the-box but becomes much more aligned after our proposed procedure. Embedding alignment was originally studied for word vectors with the goal of enabling cross-lingual transfer, where the embeddings for two languages are in alignment if word translations, e.g. cat and Katze, have similar representations. Recently, large pre-trained models have largely subsumed word vectors based on their accuracy on downstream tasks, partly due to the fact that their word representations are context-dependent, allowing them to more richly capture the meaning of a word. Therefore, with the same goal of cross-lingual transfer but for these more complex models, we might consider contextual embedding alignment, where we observe whether word pairs within parallel sentences, e.g. cat in "The cat sits" and Katze in "Die Katze sitzt," have similar representations. One model relevant to these questions is multilingual BERT, a version of BERT pre-trained on 104 languages that achieves remarkable transfer on downstream tasks. For example, after the model is fine-tuned on the English MultiNLI training set, it achieves 74.3% accuracy on the test set in Spanish, which is only 7.1% lower than the English accuracy. Furthermore, while the model transfers better to languages similar to English, it still achieves reasonable accuracies even on languages with different scripts. However, given the way that multilingual BERT was pre-trained, it is unclear why we should expect such high zero-shot performance. Compared to monolingual BERT, which exhibits no zero-shot transfer, multilingual BERT differs only in that during pre-training (i.e. masked word prediction), each batch contains sentences from all of the languages, and it uses a single shared vocabulary, formed by WordPiece on the concatenated monolingual corpora. Therefore, we might wonder: How can we better understand BERT's multilingualism? Can we further improve BERT's cross-lingual transfer? In this paper, we show that contextual embedding alignment is a useful concept for addressing these questions. First, we propose a contextual version of word retrieval to evaluate the degree of alignment, where a model is presented with two parallel corpora, and given a word within a sentence in one corpus, it must find the correct word and sentence in the other.
Using this metric of alignment, we show that multilingual BERT achieves zero-shot transfer because its embeddings are partially aligned, as depicted in Figure 1, with the degree of alignment predicting the degree of downstream transfer. Next, using between 10K and 250K sentences per language from the Europarl corpus as parallel data, we propose a fine-tuning-based alignment procedure and show that it significantly improves BERT as a multilingual model. Specifically, on zero-shot XNLI, where the model is trained on English MultiNLI and tested on other languages, the aligned model improves accuracies by 2.78% on average over the base model, and it remarkably matches translate-train models for Bulgarian and Greek, which approximate the fully-supervised setting. To put our results in the context of past work, we also use word retrieval to compare our fine-tuning procedure to two alternatives: fastText augmented with sentence information and aligned using rotations (Rücklé et al., 2018), and BERT aligned using rotations. We find that when there are multiple occurrences per word, fine-tuned BERT outperforms fastText, which outperforms rotation-aligned BERT. This supports the intuition that contextual alignment is more difficult than its non-contextual counterpart, given that a rotation, at least when applied naively, is no longer sufficient to produce strong alignments. In addition, when there is only one occurrence per word, fine-tuned BERT matches the performance of fastText. Given that context disambiguation is no longer necessary, this suggests that our fine-tuning procedure is able to align BERT at the type level to a degree that matches non-contextual approaches. Finally, we use the contextual word retrieval task to conduct a finer-grained analysis of multilingual BERT, with the goal of better understanding its strengths and shortcomings. Specifically, we find that base BERT has trouble aligning open-class compared to closed-class parts-of-speech, as well as word pairs that have large differences in usage frequency, suggesting insight into the pre-training procedure that we explore in Section 5. Together, these experiments support contextual alignment as an important task that provides useful insight into large multilingual pre-trained models. Word vector alignment. There has been a long line of works that learn aligned word vectors from varying levels of supervision. One popular family of methods starts with word vectors learned independently for each language (using a method like skip-gram with negative sampling), and it learns a mapping from source language vectors to target language vectors with a bilingual dictionary as supervision. When the mapping is constrained to be an orthogonal linear transformation, the optimal mapping that minimizes distances between word pairs can be solved in closed form. Alignment is evaluated using bilingual lexicon induction, so these papers also propose ways to mitigate the hubness problem in nearest neighbors, e.g. by using alternate similarity functions like CSLS. A recent set of works has also shown that the mapping can be learned with minimal to no supervision by starting with some minimal seed dictionary and alternating between learning the linear map and inducing the dictionary. Incorporating context into alignment. One key challenge in making alignment context-aware is that the embeddings are now different across multiple occurrences of the same word.
Past papers have handled this issue by removing context and aligning the "average sense" of a word. In one such study, a rotation is learned to align contextual ELMo embeddings with the goal of improving zero-shot multilingual dependency parsing, with context handled by taking the average embedding of a word over all of its contexts. In another paper, a rotation is learned on sentence vectors, produced by taking the average word vector over the sentence, and the resulting alignment is shown to also work well for word-level tasks. In a contemporaneous work, not only the word but also the context is aligned, by learning a linear transformation from word-aligned parallel data to align multilingual BERT, with the goal of improving zero-shot dependency parsing numbers. In this paper, we similarly align not only the word but also the context, and we depart from these past works by using more expressive alignment methods than rotation. Incorporating parallel texts into pre-training. Instead of performing alignment post-hoc, another line of work proposes contextual pre-training procedures that are more cross-lingually aware. One approach pre-trains sentence embeddings using parallel texts by maximizing similarity between sentence pairs while minimizing similarity with negative examples. Another proposes a cross-lingual pre-training objective that incorporates parallel data in addition to monolingual corpora, leading to improved downstream cross-lingual transfer. In contrast, our method uses less parallel data and aligns existing pre-trained models rather than requiring pre-training from scratch. Analyzing multilingual BERT. Prior work presents a series of probing experiments to better understand multilingual BERT, finding that transfer is possible even between dissimilar languages, but that it works better between languages that are typologically similar, and concluding that BERT is remarkably multilingual but falls short for certain language pairs. We first briefly describe multilingual BERT. Like monolingual BERT, multilingual BERT is pre-trained on sentences from Wikipedia to perform two tasks: masked word prediction, where it must predict words that are masked within a sentence, and next-sentence prediction, where it must predict whether the second sentence follows the first one. The model is trained on 104 languages, with each batch containing training sentences from each language, and it uses a shared vocabulary formed by WordPiece on the 104 Wikipedias concatenated. In the following sections, we describe how to define, evaluate, and improve contextual alignment. Given two languages, a model is in contextual alignment if it has similar representations for word pairs within parallel sentences. More precisely, suppose we have N parallel sentences $C = \{(s^1, t^1), \dots, (s^N, t^N)\}$, where each $(s, t)$ is a source-target sentence pair. Also, let each sentence pair (s, t) have word pairs, denoted $a(s, t) = \{(i_1, j_1), \dots, (i_m, j_m)\}$, containing position tuples (i, j) such that the words $s_i$ and $t_j$ are translations of each other. We will use f to represent a pre-trained model such that f(i, s) is the contextual embedding for the i-th word in s. As an example, we might have the sentence pair s = "The cat sits" and t = "Die Katze sitzt," with word pairs a(s, t) = {(1, 1), (2, 2), (3, 3)}. Then, using the parallel corpus C, we can measure the contextual alignment of the model f using its accuracy in contextual word retrieval. In this task, the model is presented with two parallel corpora, and given a word within a sentence in one corpus, it must find the correct word and sentence in the other.
Specifically, we can define a nearest-neighbor retrieval function $\mathrm{NN}(i, s) = \arg\max_{(j, t)} \mathrm{sim}\big(f(i, s), f(j, t)\big)$, where i and j denote positions within a sentence and sim is a similarity function. The accuracy is then given by the percentage of exact matches over the entire corpus, $\frac{\sum_{(s,t) \in C} \sum_{(i,j) \in a(s,t)} \mathbb{I}[\mathrm{NN}(i, s) = (j, t)]}{\sum_{(s,t) \in C} |a(s,t)|}$, where $\mathbb{I}$ is the indicator function. We can perform the same procedure in the other direction, where we retrieve target words given source words, so we report the average of the two directions. As our similarity function, we use CSLS, a modified version of cosine similarity that mitigates the hubness problem, with neighborhood size 10. One additional point is that this procedure can be made more or less contextual based on the corpus: a corpus with more occurrences of each word type requires better representations of context. Therefore, we also test non-contextual word retrieval by removing all but the first occurrence of each word type. Given parallel data, these word pairs can be procured in an unsupervised fashion using standard techniques developed by the machine translation community. While these methods can be noisy, by running the algorithm in both the source-target and target-source directions and only keeping word pairs in their intersection, we can trade off coverage for accuracy, producing a reasonably high-precision dataset. To improve the alignment of the model f with respect to the corpus C, we can encapsulate alignment in the loss function $L(f; C) = \sum_{(s,t) \in C} \sum_{(i,j) \in a(s,t)} \mathrm{sim}\big(f(i, s), f(j, t)\big)$, where we sum the similarities between word pairs. Because the CSLS metric is not easily optimized, we instead use the squared error loss, $\mathrm{sim}(x, y) = -\lVert x - y \rVert^2$. However, note that this loss function does not account for the informativity of f; for example, it is zero if f is constant. Therefore, at a high level, we would like to minimize L(f; C) while maintaining some aspect of f that makes it useful, e.g. its high accuracy when fine-tuned on downstream tasks. Letting $f_0$ denote the initial pre-trained model before alignment, we achieve this goal by defining a regularization term $R(f; C) = \sum_{(s,t) \in C} \sum_{j} \lVert f(j, t) - f_0(j, t) \rVert^2$, which imposes a penalty if the target-language embeddings stray from their initialization. Then, we sample minibatches B ⊂ C and take gradient steps on the function L(f; B) + λR(f; B) directly with respect to the weights of f, which moves the source embeddings toward the target embeddings while preventing the latter from drifting too far. In our experiments, we set λ = 1. In the multilingual case, suppose we have k parallel corpora $C_1, \dots, C_k$, where each corpus has a different source language with English as the target. Then, we sample equal-sized batches $B_i \subset C_i$ from each corpus and take gradient steps on $\sum_i L(f; B_i) + \lambda R(f; B_i)$, which moves all of the non-English embeddings toward English. Note that this alignment method departs from prior work, in which each non-English language is rotated to match the English embedding space through an individually learned matrix. Specifically, the most widely used post-hoc alignment method learns a rotation W applied to the source vectors to minimize the distance between parallel word pairs, $\min_{W \in O(d)} \sum_{(s,t) \in C} \sum_{(i,j) \in a(s,t)} \lVert W f(i, s) - f(j, t) \rVert^2$ (1). This problem is known as the Procrustes problem and can be solved in closed form. This approach has the nice property that the vectors are only rotated, preserving distances and therefore the semantic information captured by the embeddings. However, rotation requires the strong assumption that the embedding spaces are roughly isometric (Søgaard et al., 2018), an assumption that may not hold for contextual pre-trained models, because they represent more aspects of a word than just its type, i.e. context and syntax, which are less likely to be isomorphic between languages.
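The following sketch implements CSLS-based retrieval between two matrices of contextual embeddings (one row per word occurrence). Computing the neighborhood terms from the same cross-lingual similarity matrix is a standard simplification; the function name and array interface are assumptions.

```python
import numpy as np

def csls_retrieve(src, tgt, k=10):
    """Rank target occurrences for each source occurrence with CSLS:
    cosine similarity penalized by the mean similarity of each vector's
    k nearest cross-lingual neighbors (mitigating hubness)."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    cos = src @ tgt.T                                   # (n_src, n_tgt)
    r_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)   # source neighborhoods
    r_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)   # target neighborhoods
    csls = 2 * cos - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)        # best target index for each source word
```

An exact-match accuracy then follows by comparing the returned indices against the gold word pairs, averaged over both retrieval directions as described above.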
As our dataset, we use the Europarl corpora for English paired with Bulgarian, German, Greek, Spanish, and French, the languages represented in both Europarl and XNLI. After tokenization, we produce word pairs using fastAlign and keep the one-to-one pairs in the intersection of the two alignment directions. We use the most recent 1024 sentences as the test set, the previous 1024 sentences as the development set, and the following 250K sentences as the training set. Furthermore, we modify the test-set accuracy calculation to only include word pairs not seen in the training set. We also remove any exact matches, e.g. punctuation and numbers, because BERT is already aligned for these pairs due to its shared vocabulary. Given that parallel data may be limited for low-resource language pairs, we also report numbers for 10K and 50K parallel sentences. Given the long line of work on word vector alignment, we also compare BERT to a sentence-augmented fastText baseline. Following prior work, we first normalize, then mean-center, then normalize the word vectors again, and we then learn a rotation with the same parallel data as in the contextual case, as described in Equation 1. We also strengthen this baseline by including sentence information: specifically, during word retrieval, we concatenate each word vector with a vector representing its sentence. Following Rücklé et al., we compute the sentence vector by concatenating the average, maximum, and minimum vectors over all of the words in the sentence, a method that was shown to be state-of-the-art for a suite of cross-lingual tasks. We also experimented with other methods, such as first retrieving the sentence and then the word, but found the concatenation method resulted in the highest accuracy. As a result, the fastText vectors are 1200-dimensional, while the BERT vectors are 768-dimensional. The next step is to determine whether better alignment improves cross-lingual transfer. As our downstream task, we use the XNLI dataset, where the English MultiNLI development and test sets are human-translated into multiple languages. Given a pair of sentences, the task is to predict whether the first sentence implies the second, with three labels: entailment, neutral, or contradiction. Starting from either the base or the aligned multilingual BERT, we train on English and evaluate on Bulgarian, German, Greek, Spanish, and French, the XNLI languages represented in Europarl. As our architecture, we apply a linear layer followed by a softmax to the [CLS] embedding of the sentence pair, producing scores for each of the three labels. The model is trained using cross-entropy loss and selected based on its development-set accuracy averaged across all of the languages. As a fully-supervised ceiling, we also compare to models trained and tested on the same language, where for the non-English training data we use the machine translations of the English MultiNLI training data provided by Conneau et al. (2018b). While the quality of the training data is affected by the quality of the MT system, this comparison nevertheless serves as a good approximation of the fully-supervised setting. We further compare to two rotation-based methods, sentence alignment and word alignment.
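For the fastText baseline, the preprocessing and the closed-form solution of Equation 1 are short enough to sketch directly; `S` and `T` below are hypothetical arrays of paired source/target vectors (one row per dictionary pair).

```python
import numpy as np

def normalize_center_normalize(X):
    """Preprocessing for the fastText baseline: unit-normalize,
    mean-center, then unit-normalize again."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    X = X - X.mean(axis=0)
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def procrustes_rotation(S, T):
    """Closed-form solution of the Procrustes problem in Equation 1:
    the orthogonal W minimizing sum_i ||W s_i - t_i||^2."""
    U, _, Vt = np.linalg.svd(T.T @ S)
    return U @ Vt      # apply as W @ s for each source vector s
```

Because W is orthogonal, applying it preserves all pairwise distances among source vectors, which is exactly the property (and the limitation) discussed above.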
In Table 1, we also include the current state-of-the-art zero-shot results achieved by XLM. Rotation-based methods provide small gains on some languages but not others. On the other hand, after fine-tuning-based alignment, Bulgarian and Greek match the translate-train ceiling, while German, Spanish, and French close roughly one-third of the gap. (Table 2: Zero-shot accuracy on the XNLI test set, where we align BERT with varying amounts of parallel data. The method scales with the amount of data but achieves a large fraction of the gains with 50K sentences per language pair.) First, we test whether alignment improves multilingual BERT by applying the models to zero-shot XNLI, as displayed in Table 1. We see that our alignment procedure greatly improves accuracies, with all languages seeing a gain of at least 1%. In particular, the Bulgarian and Greek zero-shot numbers are boosted by almost 5% each and match the translate-train numbers, suggesting that the alignment procedure is especially effective for languages that are initially difficult for BERT. We also run alignment for more distant language pairs (Chinese, Arabic, Urdu) and find similar results, which we report in the appendix. Comparing to rotation-based methods, we find that a rotation produces small gains for some languages, namely Bulgarian, German, and Spanish, but is suboptimal overall, providing evidence that the increased expressivity of our proposed procedure is beneficial for contextual alignment. We explore this comparison further in Section 5.1. Given that our goal is zero-shot transfer, we cannot expect to always have large amounts of parallel data. Therefore, we also characterize the performance of our alignment method with varying amounts of data, as displayed in Table 2. We find that it improves transfer with as little as 10K sentences per language, making it a promising approach for low-resource languages. (Table 3: Word retrieval accuracy for the aligned sentence-augmented fastText baseline and BERT pre- and post-alignment. Across languages, base BERT has variable accuracy while fine-tuning-aligned BERT is consistently effective. Fine-tuned BERT also matches fastText in a version of the task where context is not necessary, suggesting that our method matches the type-level alignment of fastText while also aligning context.) In the following sections, we present word retrieval results to both compare our method to past work and better understand the strengths and weaknesses of multilingual BERT. Table 3 displays the word retrieval accuracies for the aligned sentence-augmented fastText baseline and BERT pre- and post-alignment. First, we find that in contextual retrieval, fine-tuned BERT outperforms fastText, which in turn outperforms rotation-aligned BERT. This supports the intuition that aligning large pre-trained models is more difficult than aligning word vectors, given that a rotation, at least when applied naively, produces sub-par alignments. In addition, fine-tuned BERT matches the performance of fastText in non-contextual retrieval, suggesting that our alignment procedure overcomes these challenges and achieves type-level alignment matching non-contextual approaches. In the appendix, we also provide examples of aligned BERT disambiguating between different meanings of a word, giving qualitative evidence of the benefit of context alignment. We also find that before alignment, BERT's performance varies greatly between languages, while after alignment it is consistently effective. In particular, Bulgarian and Greek initially have very low accuracies.
This phenomenon is also reflected in the XNLI numbers (Table 1), where Bulgarian and Greek receive the largest boosts from alignment. Examining the connection between alignment and zero-shot transfer more closely, we find that the word retrieval accuracies are highly correlated with downstream zero-shot performance (Figure 2), supporting our evaluation measure as predictive of cross-lingual transfer. The language discrepancies are also consistent with a hypothesis proposed in prior work to explain BERT's multilingualism: due to the shared vocabulary, shared words between languages, e.g. numbers and names, are forced to have the same representation; then, due to the masked word prediction task, other words that co-occur with these shared words also receive similar representations. If this hypothesis is true, then languages with higher lexical overlap with English are likely to experience higher transfer. As an extreme form of this phenomenon, Bulgarian and Greek have completely different scripts and should experience worse transfer than the common-script languages, an intuition that is confirmed by the word retrieval and XNLI accuracies. The fact that all languages are equally aligned with English post-alignment suggests that the pre-training procedure is suboptimal for these languages. (Table 4: Accuracy by part-of-speech tag for non-contextual word retrieval. To achieve better word-type coverage, we do not remove word pairs seen in the training set. The tags are grouped into lexically overlapping, closed-class, and open-class groups. The "Particle," "Symbol," "Interjection," and "Other" tags are omitted.) (Figure 3: Contextual word retrieval accuracy for base and aligned BERT plotted against the difference in frequency rank between source and target. The accuracy of base BERT plummets for larger differences, suggesting that its alignment depends on word pairs having similar usage statistics.) Next, to gain insight into the multilingual pre-training procedure, we analyze the accuracy broken down by part-of-speech using the Universal Part-of-Speech Tagset, annotated using polyglot for Bulgarian and spaCy for the other languages, as displayed in Table 4. Unsurprisingly, multilingual BERT has high alignment out-of-the-box for groups with high lexical overlap, e.g. numerals, punctuation, and proper nouns, due to its shared vocabulary. We further divide the remaining tags into closed-class and open-class, where closed-class parts-of-speech correspond to fixed sets of words serving grammatical functions (e.g. determiner, preposition, conjunction, pronoun, and auxiliary), while open-class parts-of-speech correspond to lexical words (e.g. noun, adverb, adjective, verb). Interestingly, we see that base BERT has consistently lower accuracy for open-class than for closed-class categories (0.54 vs 0.70), but that this discrepancy disappears after alignment (0.89 vs 0.88). From this closed-class vs open-class difference, we hypothesize that BERT's alignment of a particular word pair is influenced by the similarity of their usage statistics. Specifically, given that BERT is trained through masked word prediction, its embeddings are in large part determined by the co-occurrences between words. Therefore, two words that are used in similar contexts should be better aligned. This hypothesis explains the closed-class vs open-class difference: closed-class words are typically grammatical, so they are used in similar ways across typologically similar languages.
Furthermore, these words cannot be substituted for one another due to their grammatical function. Therefore, their usage statistics form a strong signature that can be used for alignment. On the other hand, open-class words can be substituted for one another: for example, in most sentences, the noun tokens could be replaced by a wide range of semantically dissimilar nouns with the sentence remaining syntactically well-formed. By this effect, many nouns have similar co-occurrences, making them difficult to align through masked word prediction alone. To further test this hypothesis, we plot the word retrieval accuracy versus the difference between the frequency ranks of the target and source words, where this difference measures discrepancies in usage, as depicted in Figure 3. We see that accuracy drops off significantly as the source-target difference increases, supporting our hypothesis. Furthermore, this shortcoming is remedied by alignment, revealing another systematic deficiency of multilingual pre-training. Given that the degree of alignment is causally predictive of downstream cross-lingual transfer, contextual alignment proves to be a useful concept for understanding and improving multilingual pre-trained models. Given small amounts of parallel data, our alignment procedure improves multilingual BERT and corrects many of its systematic deficiencies. Contextual word retrieval also provides useful new insights into the pre-training procedure, opening up new avenues for analysis. (Table 5: Zero-shot accuracy on the XNLI test set with more languages, where we use 20K parallel sentences for each language paired with English. This confirms that the alignment method works for distant languages and a variety of parallel corpora, including Europarl, MultiUN, and Tanzil, which contains sentences from the Quran.)
A APPENDIX
For both alignment and XNLI optimization, we use a learning rate of 5 × 10⁻⁵ with Adam hyperparameters β = (0.9, 0.98), ε = 10⁻⁹ and linear learning-rate warmup for the first 10% of the training data. For alignment, the model is trained for one epoch, with each batch containing 2 sentence pairs per language. For XNLI, each model is trained for 3 epochs with 32 examples per batch, and 10% dropout is applied to the BERT embeddings. In Table 5, we report numbers for additional languages, where we align a single BERT model for all eight languages and then fine-tune on XNLI. We use 20K sentences per language, drawing on the MultiUN corpus for Arabic and Chinese, the Tanzil corpus for Urdu, and the Europarl corpus for the other five languages. This confirms that the alignment method works for a variety of languages and corpora. Furthermore, the Tanzil corpus consists of sentences from the Quran, suggesting that the method works even when the parallel corpus and the downstream task contain sentences from entirely different domains. In this section, we qualitatively show that aligned BERT is able to disambiguate between different occurrences of a word. First, we show two meanings of the word "like" occurring in the English-German Europarl test set. Note also that in the second and third examples, the two senses of "like" occur in the same sentence. • This empire did not look for colonies far from home or overseas, like most Western European States, but close by. Dieses Reich suchte seine Kolonien nicht weit von zu Hause und in Übersee wie die meisten westeuropäischen Staaten, sondern in der unmittelbaren Umgebung.
• Like other speakers, I would like to support the call for the arms embargo to remain. Wie andere Sprecher, so möchte auch ich den Aufruf zur Aufrechterhaltung des Waffenembargos unterstützen. • Like other speakers, I would like to support the call for the arms embargo to remain. Wie andere Sprecher, so möchte auch ich den Aufruf zur Aufrechterhaltung des Waffenembargos unterstützen. • I would also like, although they are absent, to mention the Commission and the Council. Ich möchte mir sogar erlauben, die Kommission und den Rat zu nennen, auch wenn sie nicht anwesend sind. Multiple meanings of "order": • Moreover, the national political elite had to make a detour in Ambon in order to reach the civil governor's residence by warship. In Ambon mußte die politische Spitze des Landes auch noch einen Umweg machen, um mit einem Kriegsschiff die Residenz des Provinzgouverneurs zu erreichen. • Although the European Union has an interest in being surrounded by large, stable regions, the tools it has available in order to achieve this are still very limited. Der Europäischen Union ist zwar an großen stabilen Regionen in ihrer Umgebung gelegen, aber sie verfügt nach wie vor nur über recht begrenzte Instrumente, um das zu erreichen. • We could reasonably expect the new Indonesian government to take action in three fundamental areas: restoring public order, prosecuting and punishing those who have blood on their hands and entering into a political dialogue with the opposition. Von der neuen indonesischen Regierung darf man mit Fug und Recht drei elementare Maßnahmen erwarten: die Wiederherstellung der öffentlichen Ordnung, die Verfolgung und Bestrafung derjenigen, an deren Händen Blut klebt, und die Aufnahme des politischen Dialogs mit den Gegnern. • Firstly, I might mention the fact that the army needs to be reformed, secondly that a stable system of law and order needs to be introduced. Ich nenne hier an erster Stelle die notwendige Reform der Armee, ferner die Einführung eines stabilen Systems rechtsstaatlicher Ordnung. Multiple meanings of "support": • Financial support is needed to enable poor countries to take part in these court activities. Arme Länder müssen finanziell unterstützt werden, damit auch sie sich an der Arbeit des Gerichtshofs beteiligen können. • We must help them and ensure that a proper action plan is implemented to support their work. Es gilt, einen wirklichen Aktionsplan auf den Weg zu bringen, um die Arbeit dieser Organisationen zu unterstützen. • So I hope that you will all support this resolution condemning the abominable conditions of prisoners and civilians in Djibouti. Ich hoffe daher, daß Sie alle diese Entschließung befürworten, die die entsetzlichen Bedingungen von Inhaftierten und Zivilpersonen in Dschibuti verurteilt. • It would be difficult to support a subsidy scheme that channelled most of the aid to the large farms in the best agricultural regions. Es wäre auch problematisch, ein Beihilfesystem zu befürworten, das die meisten Beihilfen in die großen Betriebe in den besten landwirtschaftlichen Gebieten lenkt. Multiple meanings of "close": • This empire did not look for colonies far from home or overseas, like most Western European States, but close by. Dieses Reich suchte seine Kolonien nicht weit von zu Hause und in Übersee wie die meisten westeuropäischen Staaten, sondern in der unmittelbaren Umgebung.
• In addition, if we are to shut down or refuse investment from every company which may have an association with the arms industry, then we would have to close virtually every American and Japanese software company on the island of Ireland, with catastrophic consequences. Wenn wir zudem jedes Unternehmen, das auf irgendeine Weise mit der Rüstungsindustrie verbunden ist, schließen oder Investitionen dieser Unternehmen unterbinden, dann müßten wir so ziemlich alle amerikanischen und japanischen Softwareunternehmen auf der irischen Insel schließen, was katastrophale Auswirkungen hätte. • On the other hand, the deployment of resources left over in the Structural Funds from the programme planning period 1994 to 1999 is hardly worth considering, as the available funds
We propose procedures for evaluating and strengthening contextual embedding alignment and show that they both improve multilingual BERT's zero-shot XNLI transfer and provide useful insights into the model.
1,122
scitldr
Neural networks have reached outstanding performance for solving various ill-posed inverse problems in imaging. However, drawbacks of end-to-end learning approaches in comparison to classical variational methods are the requirement of expensive retraining for even slightly different problem statements and the lack of provable error bounds during inference. Recent works tackled the first problem by using networks trained for Gaussian image denoising as generic plug-and-play regularizers in energy minimization algorithms. Even though this obtains state-of-the-art results on many tasks, heavy restrictions on the network architecture have to be made if provable convergence of the underlying fixed-point iteration is a requirement. More recent work has proposed to train networks to output descent directions with respect to a given energy function, with a provable guarantee of convergence to a minimizer of that energy. However, each problem and energy requires the training of a separate network. In this paper we consider the combination of both approaches by projecting the outputs of a plug-and-play denoising network onto the cone of descent directions to a given energy. This way, a single pre-trained network can be used for a wide variety of reconstruction tasks. Our results show improvements compared to classical energy minimization methods while still having provable convergence guarantees. In many image processing tasks an observed image f is modeled as the result of the transformation of a clean image û under a known (linear) operator A and unknown noise ξ, i.e. f = Aû + ξ. In most cases, the problem of reconstructing û from f and A is ill-posed and can thus not be solved by a simple inversion of A, giving rise to the field of regularization theory with iterative or variational methods; see the literature for an overview. In recent years neural networks have been very successful in learning a direct mapping G(f) ≈ û for a variety of problems such as deblurring, denoising, super-resolution, demosaicing and MRI- or CT-reconstruction. Even though this works well in practice, there are rarely any guarantees on the behaviour of neural networks on unseen data, making them difficult to use in safety-critical applications. Moreover, for each problem and type of noise a separate network has to be trained. In contrast, classical variational methods try to find the solution by minimizing a suitable energy function of the form $\hat{u} \in \arg\min_u\, H_f(u) + R(u)$, where $H_f$ is a data fidelity term, commonly chosen as $H_f(u) = \frac{1}{2}\lVert Au - f \rVert^2$, and R is a regularization function that models prior knowledge about the solution, e.g. the popular total variation (TV) regularization $R(u) = \lVert \nabla u \rVert_1$. While minimizers of such energies come with many desirable theoretical guarantees, regularizers like the TV often cannot perfectly capture the complex structure of the space of natural images. To combine the advantages of powerful feed-forward networks and model-based approaches, authors have considered various hybrid models, like learning regularizers, designing network architectures that resemble the structure of minimization algorithms or differential equations, interleaving networks with classical optimization steps, or using the parametrization of networks as a regularization. A particularly flexible approach arises from plug-and-play priors, where proximal operators with respect to the regularizer are replaced by arbitrary denoising operators, with recent works focusing on the use of denoising networks.
While such approaches allow tackling different inverse problems with the same neural network, the derivation of theoretical guarantees, even in terms of the convergence of the resulting algorithmic scheme, remains difficult, unless the denoiser satisfies particular properties. The starting point of the above-mentioned algorithmic schemes that utilize denoising networks to regularize model-based inverse problems are methods for the minimization of the energy above. While most works focus on primal-dual / ADMM approaches, whose convergence analysis is quite delicate even in a setting in which one still minimizes (nonconvex) energies, we turn to two simpler methods, gradient descent and proximal gradient descent, i.e. $u^{k+1} = u^k - \tau\big(\nabla H_f(u^k) + \nabla R(u^k)\big)$ and $u^{k+1} = \mathrm{prox}_{\tau R}\big(u^k - \tau \nabla H_f(u^k)\big)$. Considering either the gradient descent step or the proximal step on the regularization as a generic denoising operation gives rise to the following two algorithmic schemes: $u^{k+1} = \frac{1}{2}\big(\rho(u^k) + G(u^k)\big)$ (conv) and $u^{k+1} = G(\rho(u^k))$ (prox), where G denotes any kind of denoiser, e.g. a convolutional neural network, and we define $\rho(u^k) := u^k - \tau \nabla H_f(u^k)$ for the sake of brevity of notation. We refer to the original derivations for more detail. Algorithmic schemes like (conv) and (prox) combine the model-based flexibility of energy minimization methods (i.e. explicit modelling of $H_f$) with the expressive power of deep neural networks G. Unfortunately, despite their success in various practical applications, such schemes remain dangerous to use: Figure 1 shows the result of running the iteration (prox) with $H_f = 0$ on a noisy input image $f = u^0$ for 100 and 800 iterations, using a DnCNN preimplemented in Matlab as the denoiser G. As we can see, the image gets completely distorted. Even more strikingly, the range of the image increased from values in [0, 1] to an interval of [−185, 218] within the first 1000 iterations. Clearly, the algorithmic scheme diverges. A natural condition for the provable convergence of such a scheme (at least along subsequences) would be a 1-Lipschitz continuous operator G. There has been previous work on computing upper bounds for the best Lipschitz constant of a network and using it to enforce a user-defined Lipschitz constant L during training, but we found that enforcing non-expansiveness drastically decreased the denoising performance. Computing the best Lipschitz constant, in the hope of improving those bounds, was recently proved to be NP-hard and is thus infeasible. Therefore, we adapt a recently proposed idea to safeguard neural networks by forcing them to predict a descent direction to a given model-based energy, such that it can be used within a line-search algorithm to guarantee convergence. More precisely, at any given estimate u and model-based energy E, the authors of that work use the Euclidean projection onto a half-space C(γ, ∇E(u)) of descent directions as the last layer of their network. Even though the resulting algorithm converges to the minimizer of E, experiments showed significantly higher peaks of the PSNR value in early iterations compared to classical gradient descent on E. Intuitively, the descent direction proposed by the network pushes the iteration closer towards the distribution of the training data than a usual gradient descent step. While that approach has to train a separate network for each inverse problem and each type of noise, we investigate the combination of the flexible algorithmic schemes (conv) and (prox) with the idea of projecting onto the half-space of descent directions to safeguard the underlying algorithm. In the following, G will always refer to a generic denoising network, like DnCNN.
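A minimal sketch of the two plug-and-play iterations, written with the α generalization of (conv) introduced in the next section; the toy data term and placeholder denoiser below are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

def rho(u, grad_H, tau):
    """Gradient step on the data term: rho(u) = u - tau * grad(H_f)(u)."""
    return u - tau * grad_H(u)

def conv_step(u, G, grad_H, tau, alpha=0.5):
    """Gradient-type scheme: convex combination of the data step and the
    denoiser output; alpha = 0.5 recovers the plain averaging in (conv)."""
    return alpha * rho(u, grad_H, tau) + (1.0 - alpha) * G(u)

def prox_step(u, G, grad_H, tau):
    """Proximal-type scheme (prox): denoise the data step, with G
    standing in for prox_{tau R}."""
    return G(rho(u, grad_H, tau))

# toy check with H_f(u) = 0.5 * ||u - f||^2 and a contracting "denoiser"
f = np.array([1.0, 2.0, 3.0])
grad_H = lambda u: u - f
G = lambda u: 0.9 * u            # placeholder denoiser, not a real network
u = np.zeros(3)
for _ in range(50):
    u = prox_step(u, G, grad_H, tau=0.5)
```

With a non-expansive placeholder denoiser the toy iteration converges; the divergence in Figure 1 illustrates what happens when G is an unconstrained trained network.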
We assume that $E(u) = H_f(u) + R(u)$ is a continuously differentiable, strictly convex and coercive energy function. As a first step, we simply rewrite the algorithmic schemes (conv) and (prox) in such a way that they resemble a gradient descent iteration, i.e., such that we can interpret $d^k_{\text{conv}} = \alpha\, \rho(u^k) + (1 - \alpha)\, G(u^k) - u^k$ and $d^k_{\text{prox}} = G(\rho(u^k)) - u^k$ as "update directions" of the respective algorithmic schemes. Note that we replaced the averaging of the gradient descent and denoising steps in (conv) by an arbitrary convex combination using a parameter α, to determine the respective influence of the data term and the denoising more flexibly. Because the plain iterations (conv) and (prox) can easily be divergent, we safeguard them by projecting these directions onto the half-space of descent directions $C(\gamma, \nabla E(u^k))$. After computing the projected directions $d^k$, we update our iterates via $u^{k+1} = u^k + t_k d^k$ with a step size $t_k$ chosen by a backtracking line-search mechanism. Under weak additional conditions, the latter guarantees the convergence of the proposed scheme to the minimizer of E. Such a minimizer could of course be determined by any classical algorithm, but we hope for (conv) and (prox) to yield a better path towards the true minimizer, and consider a discrepancy principle for stopping the iteration before convergence. More precisely, we terminate as soon as $H_f(u^k) < \beta H_f(\hat{u})$, with $H_f(\hat{u})$ being an estimate of the (data-term-dependent measure of the) noise level of the considered problem, and β a scaling factor (typically close to 1). We tested our implementation on the image reconstruction tasks of Gaussian deblurring with standard deviation 1 and 4× single-image super-resolution. In both cases we added Gaussian noise with standard deviation 0.02 to the corrupted image. We chose the PyTorch implementation of DnCNN pre-trained on a noise level of 0.1 as our denoising network. Our surrogate energy uses a TV regularization with the Huber norm instead of the ℓ1-norm. As our data term we choose $H_f(u) = \frac{1}{2}\lVert Au - f \rVert^2$, with A the respective linear operator. The best hyperparameters for all methods were found with a grid search. In all experiments, for scheme (conv), α = 0 was the best choice for any τ, indicating that the gradient descent step on the data term does not yield much additional information, presumably because the projection onto $C(\gamma, \nabla E(u))$ depends on the gradient of the data term anyway. When using (prox), we empirically found τ = 30 to be the best choice. For the projection onto the half-space of descent directions, we used γ = 5 for both methods for deblurring, and γ = 50 in (conv) and γ = 1 in (prox) for super-resolution. For fairness, the classical gradient descent baseline was also implemented using backtracking line search. Figure 2 shows the reconstruction quality of the current iterate compared to the ground truth over a span of 500 iterations. The PSNR quickly peaks before slowly converging to the fixed point of the surrogate energy, which is consistent with prior results. Notably, the convex combination method peaks earlier but not as high as the prox method. Tables 1 and 2 show results with early stopping via the discrepancy principle. On all test images our prox scheme beats gradient descent. Table 1: PSNR values for deblurring for varying images and stopping criteria. The algorithm was stopped when $H_f(u^k) < \beta H_f(\hat{u})$. best refers to the highest PSNR over 500 iterations and a "*" means that the stopping criterion was not triggered, such that the last iteration was used instead. Table 2: PSNR values for super-resolution for varying images and stopping criteria. The algorithm was stopped when $H_f(u^k) < \beta H_f(\hat{u})$. best refers to the highest PSNR over 500 iterations.
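Before concluding, a compact sketch of the safeguarding machinery from this section: the half-space constraint form below ({d : ⟨d, ∇E(u)⟩ ≤ −γ‖∇E(u)‖²}) is an assumption, since the paper's exact definition of C(γ, ∇E(u)) is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def project_onto_descent_halfspace(d, grad_E, gamma):
    """Euclidean projection of a proposed direction d onto a half-space of
    descent directions; the constraint form is a plausible stand-in."""
    g = grad_E.ravel()
    gg = g @ g
    slack = d.ravel() @ g + gamma * gg
    if slack <= 0 or gg == 0:
        return d                 # already a sufficient descent direction
    return d - (slack / gg) * grad_E

def backtracking_step(u, d, E, t=1.0, shrink=0.5, max_tries=20):
    """Shrink the step size until the energy actually decreases; keep the
    current iterate if no decrease is found within max_tries."""
    E_u = E(u)
    for _ in range(max_tries):
        if E(u + t * d) < E_u:
            return u + t * d
        t *= shrink
    return u
```

Wrapping `conv_step` or `prox_step` directions with this projection and line search, and stopping once H_f(u^k) < βH_f(û), yields the full safeguarded scheme described above.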
"best" refers to the highest PSNR over 500 iterations.

We combine deep learning and energy minimization methods for solving inverse problems in image reconstruction into a provably convergent algorithmic scheme. Still, our approach is able to generalize to different problems with a single denoising network and without the need to retrain if the problem changes. We were able to reach better results than the energy minimization baseline in our experiments, and are happy to elaborate on the above aspects at the NeurIPS workshop.
We use neural networks trained for image denoising as plug-and-play priors in energy minimization algorithms for image reconstruction problems with provable convergence.
1,123
scitldr
Knowledge graph embedding research has overlooked the problem of probability calibration. We show popular embedding models are indeed uncalibrated. That means probability estimates associated to predicted triples are unreliable. We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs. We propose to use Platt scaling and isotonic regression alongside our method. Experiments on three datasets with ground truth negatives show our contribution leads to well-calibrated models when compared to the gold standard of using negatives. We get significantly better results than the uncalibrated models from all calibration methods. We show isotonic regression offers the best performance overall, not without trade-offs. We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.

Knowledge graph embedding models are neural architectures that learn vector representations (i.e. embeddings) of nodes and edges of a knowledge graph. Such knowledge graph embeddings have applications in knowledge graph completion, knowledge discovery, entity resolution, and link-based clustering, just to cite a few. Despite burgeoning research, the problem of calibrating such models has been overlooked, and existing knowledge graph embedding models do not offer any guarantee on the probability estimates they assign to predicted facts. Probability calibration is important whenever the predictions need to make probabilistic sense, i.e., if the model predicts a fact is true with 80% confidence, it should be correct 80% of the time. Prior art suggests using a sigmoid layer to turn logits returned by models into probabilities (also called the expit transform), but we show that this provides poor calibration. Figure 1 shows reliability diagrams for off-the-shelf TransE and ComplEx. The identity function represents perfect calibration. Both models are miscalibrated: all TransE combinations in Figure 1a under-forecast the probabilities (i.e. probabilities are too small), whereas ComplEx under-forecasts or over-forecasts according to which loss is used (Figure 1b).

Calibration is crucial in high-stakes scenarios such as drug-target discovery from biological networks, where end-users need trustworthy and interpretable decisions. Moreover, since probabilities are not calibrated, when classifying triples (i.e. facts) as true or false, users must define relation-specific thresholds, which can be awkward for graphs with a great number of relation types. To the best of our knowledge, this is the first work to focus on calibration for knowledge graph embeddings. Our contribution is two-fold: First, we use Platt scaling and isotonic regression to calibrate knowledge graph embedding models on datasets that include ground truth negatives. One peculiar feature of knowledge graphs is that they usually rely on the open world assumption (facts not present are not necessarily false, they are simply unknown). This makes calibration troublesome because of the lack of ground truth negatives. For this reason, our second and main contribution is a calibration heuristics that combines Platt scaling or isotonic regression with synthetically generated negatives. Experimental results show that we obtain better-calibrated models and that it is possible to calibrate knowledge graph embedding models even when ground truth negatives are not present.
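As a concrete illustration of the reliability diagrams discussed above, the following sketch computes the points of such a diagram from predicted probabilities and 0/1 labels. The choice of 10 equal-width bins is an assumption of the sketch, not a detail given in the text.

```python
import numpy as np

def reliability_curve(p_hat, y, n_bins=10):
    """Mean predicted probability vs. empirical frequency per bin.
    Perfect calibration lies on the identity line of the diagram."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    xs, ys = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p_hat >= lo) & (p_hat < hi)
        if mask.any():
            xs.append(p_hat[mask].mean())  # mean confidence in the bin
            ys.append(y[mask].mean())      # fraction of triples actually true
    return np.array(xs), np.array(ys)
```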
We also experiment with triple classification, and we show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.

A comprehensive survey of knowledge graph embedding models is out of the scope of this paper; recent surveys present an overview of the recent literature. TransE is the forerunner of distance-based methods, and spawned a number of models commonly referred to as TransX. The intuition behind the symmetric bilinear-diagonal model DistMult paved the way for its asymmetric evolutions in the complex space, RotatE and ComplEx (a generalization of which uses hypercomplex representations). HolE relies instead on circular correlation. The recent TorusE operates on a Lie group and not in the Euclidean space. While the above models can be interpreted as multilayer perceptrons, others such as ConvE or ConvKB include convolutional layers. More recent works adopt capsule network architectures. Adversarial learning is used by KBGAN, whereas attention mechanisms are used by others. Some models such as RESCAL, TuckER (Balažević et al., 2019), and SimplE rely on tensor decomposition techniques. More recently, ANALOGY adopts a differentiable version of analogical reasoning. In this paper we limit our analysis to four popular models: TransE, DistMult, ComplEx and HolE. They do not address the problem of assessing the reliability of predictions, let alone calibrating probabilities.

Besides well-established techniques such as Platt scaling and isotonic regression, recent work on calibrating neural architectures shows that modern neural architectures are poorly calibrated and that calibration can be improved with novel methods. For example, temperature scaling has been successfully used for calibrating modern neural networks in classification problems. Along the same lines, a procedure based on Platt scaling has been proposed to calibrate deep neural networks in regression problems. The Knowledge Vault pipeline extracts triples from unstructured knowledge and is equipped with Platt scaling calibration, but this is not applied to knowledge graph embedding models. KG2E proposes to use normally-distributed embeddings to account for the uncertainty, but the model does not provide the probability of a triple being true, so KG2E would also benefit from the output calibration we propose here. To the best of our knowledge, the only prior work that applies probability calibration to knowledge graph embedding models is that of Krompaß et al., who propose to use ensembles in order to improve the results of knowledge graph embedding tasks. For that, they propose to calibrate the models with Platt scaling, so they operate on the same scale. No further details on the calibration procedure are provided; besides, there is no explanation of how to handle the lack of negatives.

Knowledge Graph. Formally, a knowledge graph G = {(s, p, o)} ⊆ E × R × E is a set of triples t = (s, p, o), each including a subject s ∈ E, a predicate p ∈ R, and an object o ∈ E. E and R are the sets of all entities and relation types of G.

Triple Classification. Binary classification task where G (which includes only positive triples) is used as training set, and T = {(s, p, o)} ⊆ E × R × E is a disjoint test set of labeled triples to classify. Note T includes positives and negatives. Since the learned models are not calibrated, multiple decision thresholds τ_i must be picked, where 0 < i < |R|, i.e. one for each relation type. This is done using a validation set.
Classification metrics apply (e.g. accuracy).

Link Prediction. Given a training set G that includes only positive triples, the goal is assigning a score f(t) ∈ R proportional to the likelihood that each unlabeled triple t included in a held-out set S is true. Note S does not have ground truth positives or negatives. This task is cast as a learning-to-rank problem, and uses metrics such as mean rank (MR), mean reciprocal rank (MRR) or Hits@N.

Knowledge Graph Embeddings. Knowledge graph embedding models are neural architectures that encode concepts from a knowledge graph G (i.e. entities E and relation types R) into low-dimensional, continuous vectors in R^k (i.e., the embeddings). Embeddings are learned by training a neural architecture over G. Although such architectures vary, the training phase always consists in minimizing a loss function L that includes a scoring function f_m(t), i.e. a model-specific function that assigns a score to a triple t = (s, p, o) (more precisely, the inputs of f_m are the embeddings of the subject e_s, the predicate r_p, and the object e_o). The goal of the optimization procedure is learning optimal embeddings, such that the scoring function f_m assigns high scores to positive triples t+ and low scores to triples unlikely to be true t−. Existing models propose scoring functions that combine the embeddings e_s, r_p, e_o ∈ R^k using different intuitions. Table 1b lists the scoring functions of the most common models. For example, the scoring function of TransE computes a similarity between the embedding of the subject e_s translated by the embedding of the predicate r_p and the embedding of the object e_o, using the L1 or L2 norm ||·||. Such a scoring function is then used on positive and negative triples t+ ∈ G, t− ∈ N in the loss function. This is usually a pairwise margin-based loss, negative log-likelihood, or multi-class log-likelihood. Since the training set usually includes only positive statements, we generate the synthetic negatives t− ∈ N required for training. We do so by corrupting one side of the triple at a time (i.e. either the subject or the object), following the standard protocol.

Calibration. Given a knowledge graph embedding model identified by its scoring function f_m, with f_m(t) = p̂, where p̂ is the estimated confidence level that a triple t = (s, p, o) is true, we define f_m to be calibrated if p̂ represents a true probability. For example, if f_m(·) predicts 100 triples all with confidence p̂ = 0.7, we expect exactly 70 to be actually true. Calibrating a model requires reliable metrics to detect miscalibration, and effective techniques to fix such distortion. Appendix A.1 includes definitions and background on the calibration metrics adopted in the paper.

We propose two scenario-dependent calibration techniques: the first addresses the case with ground truth negatives t− ∈ N; the second deals with the absence of ground truth negatives.

Calibration with Ground Truth Negatives. We propose to use off-the-shelf Platt scaling and isotonic regression, techniques proven to be effective in the literature. It is worth reiterating that, to calibrate a model, negative triples N are required from a held-out dataset (which could be the validation set). Such negatives are usually available in triple classification datasets (FB13, WN11, YAGO39K).

Calibration with Synthetic Negatives. Our main contribution is for the case where no ground truth negatives are provided at all, which is in fact the usual scenario for link prediction tasks.
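When ground truth negatives are available, the two calibration techniques amount to a one-dimensional supervised fit on held-out scores, evaluated with the metrics defined in Appendix A.1. A minimal scikit-learn sketch follows; the near-unregularized `C` value, the function names, and the clipping inside the log loss are choices of this sketch, not of the paper.

```python
import numpy as np
from scipy.special import expit
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

def platt_calibrator(scores, labels):
    """Fit q = sigmoid(a * score + b) on held-out scores with 0/1 labels."""
    lr = LogisticRegression(C=1e6)  # large C: a near-unregularized fit
    lr.fit(scores.reshape(-1, 1), labels)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]

def isotonic_calibrator(scores, labels):
    """Fit a non-decreasing piecewise-constant map from scores to probabilities."""
    iso = IsotonicRegression(out_of_bounds="clip").fit(scores, labels)
    return iso.predict

def brier_score(p_hat, y):
    return np.mean((p_hat - y) ** 2)

def log_loss(p_hat, y, eps=1e-15):
    p = np.clip(p_hat, eps, 1 - eps)  # clipping added for numerical stability
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Uncalibrated baseline: the expit transform of the raw scores.
uncalibrated = expit
```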
We propose to adopt Platt scaling or isotonic regression and to synthetically generate corrupted triples as negatives, while using sample weights to guarantee that the frequencies adhere to the base rate of the population (which is problem-dependent and must be user-specified). It is worth noting that it is not possible to calibrate a model without an implicit or explicit base rate: if it is not implicit in the dataset (the ratio of positives to totals), it must be explicitly provided. We generate synthetic negatives N following the standard protocol: for every positive triple t = (s, p, o), we corrupt one side of the triple at a time (i.e. either the subject s or the object o) by replacing it with other entities in E. The number of corruptions generated per positive is defined by the user-defined corruption rate η ∈ N. Since the number of negatives N = |N| can be much greater than the number of positive triples P = |G|, when dealing with calibration with synthetically generated corruptions, we weigh the positive and negative triples to make the calibrated model match the population base rate α = P/(P + N) ∈ (0, 1); otherwise the base rate would depend on the arbitrary choice of η. Given a positive base rate α, we propose the weighting scheme ω+ = η and ω− = (1 − α)/α, where ω+ ∈ R is the weight associated to the positive triples and ω− ∈ R to the negatives. The ω+ weight removes the imbalance determined by having a higher number of corruptions than positive triples in each batch. The ω− weight guarantees that the given positive base rate α is respected.

The above can be verified as follows. For the unweighted problem, the positive base rate is simply the ratio of positive examples to the total number of examples: P/(P + N) = 1/(1 + η), since N = ηP. If we add uniform weights to each class, we have: α = ω+P / (ω+P + ω−N). By defining ω+ = η, i.e. adopting the ratio of negatives to positives (the corruption rate), we then have: α = ηP / (ηP + ω−ηP) = 1/(1 + ω−). Thus, the negative weight is: ω− = (1 − α)/α.

We compute the calibration quality of our heuristics, showing that we achieve calibrated predictions even when ground truth negative triples are not available. We then show the impact of calibrated predictions on the task of triple classification.

Datasets. We run experiments on triple classification datasets that include ground truth negatives (Table 1). We train on the training set, calibrate on the validation set, and evaluate on the test set.
• WN11. A subset of WordNet, it includes a large number of hyponym and hypernym relations, thus including hierarchical structures.
• FB13. A subset of Freebase, it includes facts on famous people (place of birth and/or death, profession, nationality, etc).
• YAGO39K. This recently released dataset has been carved out of YAGO3, and includes a mixture of facts about famous people, events, places, and sports teams.
We also use two standard link prediction benchmark datasets, WN18RR (a subset of WordNet) and FB15K-237 (a subset of Freebase). Their test sets do not include ground truth negatives.

Implementation Details. The knowledge graph embedding models are implemented with the AmpliGraph library version 1.1, using TensorFlow 1.13 and Python 3.6 on the backend. All experiments were run under Ubuntu 16.04 on an Intel Xeon Gold 6142, 64 GB, equipped with a Tesla V100 16GB.

Hyperparameter Tuning. For each dataset in Table 1a, we train a TransE, a DistMult, and a ComplEx knowledge graph embedding model.
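The weighting scheme just derived (ω+ = η, ω− = (1 − α)/α) translates directly into sample weights. A minimal sketch using Platt scaling via scikit-learn's `sample_weight` follows; the corruption generation itself is assumed to happen upstream, and the function name is a choice of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_with_synthetic_negatives(pos_scores, neg_scores, alpha):
    """Platt scaling with synthetic corruptions as negatives.
    eta corruptions per positive; weights enforce the user-given base rate alpha."""
    eta = len(neg_scores) / len(pos_scores)
    w_pos = eta                      # omega_+ = eta
    w_neg = (1.0 - alpha) / alpha    # omega_- = (1 - alpha) / alpha
    X = np.concatenate([pos_scores, neg_scores]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(pos_scores)), np.zeros(len(neg_scores))])
    w = np.concatenate([np.full(len(pos_scores), w_pos),
                        np.full(len(neg_scores), w_neg)])
    lr = LogisticRegression(C=1e6).fit(X, y, sample_weight=w)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]
```

Sanity check of the design choice: with these weights the effective positive rate is ω+P / (ω+P + ω−N) = ηP / (ηP + ((1 − α)/α)·ηP) = α, as intended.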
We rely on typical hyperparameter values: we train the embeddings with dimensionality k = 100, Adam optimizer, initial learning rate α_0 = 1e-4, negatives-per-positive ratio η = 20, and epochs = 1000. We train all models with four different loss functions: self-adversarial, pairwise, NLL, and multiclass-NLL. Different losses are used in different experiments.

Calibration Success. Table 2 reports Brier scores and log losses for all our calibration methods, grouped by the type of negative triples they deal with (ground truth or synthetic). All calibration methods show better-calibrated results than the uncalibrated case, by a considerable margin and for all datasets. In particular, to put the results of the synthetic strategy in perspective, suppose we predict the positive base rate as a baseline: for each of the cases in Table 2 (the three datasets share the same positive base rate α = 0.5), we would get Brier score B = 0.25 and log loss L_log = 0.69, which are always worse than our methods. There is considerable variance of results between models given a dataset, which also happens when varying losses given a particular combination of model and dataset (Table 3). TransE provides the best results for WN11 and FB13, while DistMult works best for YAGO39K. We later propose that this variance comes from the quality of the embeddings themselves, that is, better embeddings allow for better calibration. In Figure 2, we also evaluate just the frequencies themselves, ignoring sharpness (i.e. whether probabilities are close to 0 or 1), using reliability diagrams for a single model-loss combination, for all datasets (ComplEx+NLL). Calibration plots show a remarkable difference between the uncalibrated baseline (s-shaped blue line on the left-hand side) and all calibrated models (curves closer to the identity function are better). A visual comparison of the uncalibrated curves in Figure 1 with those in Figure 2 also gives a sense of the effectiveness of calibration.

Ground Truth vs Synthetic. As expected, the ground truth method generally performs better than the synthetic calibration, since it has more data in both quantity (twice as much) and quality (two classes instead of one). Even so, the synthetic method is much closer to the ground truth than to the uncalibrated scores, as highlighted by the calibration plots in Figure 2. For WN11, it is actually as good as the calibration with the ground truth. This shows that our proposed method works as intended and could be used in situations where we do not have access to the ground truth, as is the case for most knowledge graph datasets.

Isotonic vs Platt. Isotonic regression performs better than Platt scaling in general, but in practice isotonic regression has the disadvantage of not being a convex or differentiable algorithm. This is particularly problematic for the synthetic calibration, as it requires the generation of the synthetic corruptions, which can only be made to scale via a mini-batch based optimization procedure. Platt scaling, given that it is a convex and differentiable loss, can be made part of a computational graph and optimized with mini-batches; thus it can rely on the modern computational infrastructure designed to train deep neural networks.

Table 2: Calibration test results (self-adversarial loss). Lower score = better. Best in bold for each combination of dataset and metric.

Influence of Loss Function. We experiment with different losses, to assess how calibration affects each of them (Table 3).
We choose to work with TransE, which is reported as a strong baseline; isotonic regression is again the best of the calibration methods, across all datasets. Experiments also show the choice of the loss has a big impact, greater than the choice of calibration method or embedding model. We assess whether such variability is determined by the quality of the embeddings. To verify whether better embeddings lead to sharper calibration, we report the mean reciprocal rank (MRR), which, for each true test triple, computes the (inverse) rank of the triple against synthetic corruptions, then averages the inverse rank (Table 3). In fact, we notice no correlation between calibration and MRR. In other words, embeddings that lead to the best predictive power are not necessarily the best calibrated.

Positive Base Rate. We apply our synthetic calibration method to two link prediction benchmark datasets, FB15K-237 and WN18RR. As they only provide positive examples, we apply our method with varying base rates α_i, linearly spaced from 0.05 to 0.95. We evaluate results relying on the closed-world assumption, i.e. triples not present in training, validation or test sets are considered negative. For each α_i we calibrate the model using the synthetic method with both isotonic regression and Platt scaling. We sample negatives from the negative set under the implied negative rate, and calculate a baseline which simply predicts all probabilities equal to α_i. Figure 3 shows that isotonic regression and Platt scaling perform similarly and always score considerably below the baseline. As expected from the previous results, the uncalibrated scores perform poorly, only reaching acceptable levels around some particular base rates.

Triple Classification and Decision Threshold. To overcome the need to learn |R| decision thresholds τ_i from the validation set, we propose to rely on calibrated probabilities, and use the natural threshold of τ = 0.5. Table 4 shows how calibration affects the triple classification task, compared with the literature standard of per-relation thresholds (last column). For simplicity, note we use the same self-adversarial loss in Table 2 and Table 4. We learn thresholds τ_i on validation sets, resulting in 11, 7, and 33 thresholds for WN11, FB13 and YAGO39K respectively. Using a single τ = 0.5 and calibration provides competitive results compared to multiple learned thresholds (note uncalibrated results with τ = 0.5 are poor, as expected). It is worth mentioning that we are at par with the state-of-the-art for WN11. Isotonic regression is again the best method, but there is more variance in the model choice. Our proposed calibration method with synthetic negatives performs well overall, even though calibration is performed only using half of the validation set (negative examples are replaced by synthetic negatives).

Figure 3: Synthetic calibration on FB15K-237 and WN18RR, with varying positive base rates. The baseline stands for using the positive base rate as the probability prediction. Results are evaluated under the closed-world assumption, using the same positive base rate used to calibrate the models.

Table 4: Effect of calibration on triple classification accuracy. Best in bold. For all calibration methods there is one single threshold, τ = 0.5. For the per-relation τ, we learned multiple thresholds from validation sets (Appendix A.5). We did not carry out additional model selection, and used the Table 2 hyperparameters instead. Isotonic regression reaches the state-of-the-art for WN11. Results marked * and † are from prior published work.
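The practical benefit of calibration for triple classification is that a single threshold τ = 0.5 on calibrated probabilities replaces |R| learned thresholds. A sketch contrasting the two follows; the threshold search over sorted candidate scores is one plausible implementation, not necessarily the one used in the paper.

```python
import numpy as np

def per_relation_thresholds(scores, labels, rels):
    """Baseline: learn one decision threshold per relation type on raw scores."""
    taus = {}
    for r in np.unique(rels):
        m = rels == r
        cands = np.sort(scores[m])
        accs = [np.mean((scores[m] >= t) == labels[m]) for t in cands]
        taus[r] = cands[int(np.argmax(accs))]
    return taus

def calibrated_accuracy(calibrator, scores, labels):
    """With calibrated probabilities, a single tau = 0.5 suffices."""
    return np.mean((calibrator(scores) >= 0.5) == labels)
```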
We propose a method to calibrate knowledge graph embedding models. We target datasets with and without ground truth negatives. We experiment on triple classification datasets and apply Platt scaling and isotonic regression with and without synthetic negatives controlled by our heuristics. All calibration methods perform significantly better than uncalibrated scores. We show that isotonic regression brings better calibration performance, but it is computationally more expensive. Additional experiments on triple classification show that calibration allows using a single decision threshold, reaching state-of-the-art results without the need to learn per-relation thresholds. Future work will evaluate additional calibration algorithms, such as beta calibration or Bayesian binning. We will also experiment with ensembles of knowledge graph embedding models, inspired by Krompaß et al. The rationale is that different models operate on different scales, but calibrating brings them all to the same probability scale, so their outputs can be easily combined.

Reliability Diagram. Also known as a calibration plot, this diagram is a visual depiction of the calibration of a model (see Figure 1 for an example). It shows the expected sample accuracy as a function of the estimated confidence. A hypothetical perfectly calibrated model is represented by the diagonal line (i.e. the identity function). Divergence from such a diagonal indicates calibration issues.

Brier Score. A popular metric used to measure how well a binary classifier is calibrated. It is defined as the mean squared error between n probability estimates p̂ and the corresponding actual outcomes y ∈ {0, 1}. The smaller the Brier score, the better calibrated the model. Note that the Brier score B ∈ [0, 1].

Log Loss. Another effective and popular metric to measure the reliability of the probabilities returned by a classifier. The logarithmic loss measures the relative uncertainty between the probability estimates produced by the model and the corresponding true labels.

Platt Scaling. Originally proposed for support vector machines, Platt scaling is a popular parametric calibration technique for binary classifiers. The method consists in fitting a logistic regression model to the scores returned by a binary classifier, such that q̂ = σ(a·p̂ + b), where p̂ ∈ R is the uncalibrated score of the classifier, a, b ∈ R are trained scalar weights, and q̂ is the calibrated probability returned as output. Such a model can be trained by optimizing the NLL loss with non-binary targets derived by the Bayes rule under an uninformative prior, resulting in a maximum a posteriori estimate.

Isotonic Regression. This popular non-parametric calibration technique consists in fitting a non-decreasing piecewise-constant function to the output of an uncalibrated classifier. As for Platt scaling, the goal is learning a function q̂ = g(p̂), such that q̂ is a calibrated probability. Isotonic regression learns g by minimizing the square loss Σ_{i=1}^{n} (q̂_i − y_i)² under the constraint that g must be piecewise constant.

We present in Figure 4 the total count of instances for each bin used in the calibration plots included in Figure 2. As expected, calibration considerably helps spreading out instances across bins, whereas in uncalibrated scenarios instances are squeezed into the first or last bins.

A.3 IMPACT OF MODEL HYPERPARAMETERS: η AND EMBEDDING DIMENSIONALITY

In Figure 5 we report the impact of the negative/positive ratio η and the embedding dimensionality k.
Results show that the embedding size k has a higher impact than the negative/positive ratio η. We observe that calibrated and uncalibrated low-dimensional embeddings have worse Brier scores. Results also show that any k > 50 does not improve calibration anymore. The negative/positive ratio η follows a similar pattern: choosing η > 10 does not have any effect on the calibration score.

In Table 5, we present the traditional knowledge graph embedding rank metrics: MRR (mean reciprocal rank), MR (mean rank) and Hits@10 (precision at the top-10). We report the results for all datasets and models used in the main text, which appear in Table 2 and Table 4.

Table 5: Standard filtered metrics for knowledge graph embedding models. The models are implemented in the same codebase and share the same evaluation protocol. Note that we do not include results from reciprocal evaluation protocols.

We report in Table 6 the per-relation decision thresholds τ used in Table 4, under the 'Reproduced' column. Note that the thresholds reported here are not probabilities, as they have been applied to the raw scores returned by the model-dependent scoring function f_m(t).

Table 6: Relation-specific decision thresholds learned on uncalibrated raw scores (see also Table 4 for triple classification results).
We propose a novel method to calibrate knowledge graph embedding models without the need of negative examples.
1,124
scitldr
As an emerging topic in face recognition, designing margin-based loss functions can increase the feature margin between different classes for enhanced discriminability. More recently, the idea of mining-based strategies has been adopted to emphasize misclassified samples, achieving promising results. However, during the entire training process, prior methods either do not explicitly emphasize samples based on their importance, so that hard samples are not fully exploited, or explicitly emphasize the effects of semi-hard/hard samples even at the early training stage, which may lead to convergence issues. In this work, we propose a novel Adaptive Curriculum Learning loss (CurricularFace) that embeds the idea of curriculum learning into the loss function to achieve a novel training strategy for deep face recognition, which mainly addresses easy samples in the early training stage and hard ones in the later stage. Specifically, our CurricularFace adaptively adjusts the relative importance of easy and hard samples during different training stages. In each stage, different samples are assigned different importance according to their corresponding difficultness. Extensive experimental results on popular benchmarks demonstrate the superiority of our CurricularFace over the state-of-the-art competitors. Code will be available upon publication.

The success of Convolutional Neural Networks (CNNs) on face recognition can be mainly credited to: enormous training data, network architectures, and loss functions. Recently, designing appropriate loss functions that enhance discriminative power has become pivotal for training deep face CNNs. Current state-of-the-art face recognition methods mainly adopt softmax-based classification loss. Since the features learned with the original softmax are not discriminative enough for the open-set face recognition problem, several margin-based variants have been proposed to enhance the features' discriminative power. For example, explicit-margin losses, i.e., CosFace, SphereFace and ArcFace, and implicit-margin losses, i.e., AdaCos, supplement the original softmax function to enforce greater intra-class compactness and inter-class discrepancy, which are shown to result in more discriminative features. However, these margin-based loss functions do not explicitly emphasize each sample according to its importance. As demonstrated in prior work, hard sample mining is also a critical step to further improve the final accuracy. Recently, Triplet loss and SV-Arc-Softmax integrate the motivations of both margin and mining into one framework for deep face recognition. Triplet loss adopts a semi-hard mining strategy to obtain semi-hard triplets and enlarge the margin between triplet samples. SV-Arc-Softmax clearly defines hard samples as misclassified samples and emphasizes them by increasing the weights of their negative cosine similarities with a preset constant. In a nutshell, mining-based loss functions explicitly emphasize the effects of semi-hard or hard samples.

However, there are drawbacks in the training strategies of both margin- and mining-based loss functions. For margin-based methods, the mining strategy is ignored and thus the difficultness of each sample is not fully exploited, which may lead to convergence issues when using a large margin on small backbones, e.g., MobileFaceNet. As shown in Fig. 1, the modulation coefficient for the negative cosine similarities I(·) is fixed as a constant 1 in ArcFace for all samples during the entire training process.
For mining-based methods, over-emphasizing hard samples in the early training stage may hinder the model from converging.

Figure 1: Different training strategies for modulating the negative cosine similarities of hard samples (i.e., the misclassified samples) in ArcFace, SV-Arc-Softmax and our CurricularFace. Left: The modulation coefficients I(t, cos θ_j) for negative cosine similarities of hard samples in different methods, where t is an adaptively estimated parameter and θ_j denotes the angle between the hard sample and the non-ground-truth j-th class center. Right: The corresponding hard samples' negative cosine similarities N(t, cos θ_j) = I(t, cos θ_j) cos θ_j + c after modulation, where c indicates a constant. On one hand, during the early training stage (e.g., t is close to 0), a hard sample's negative cosine similarities are usually reduced, leading to a smaller hard-sample loss than the original one; therefore, easier samples are relatively emphasized. During the later training stage (e.g., t is close to 1), the hard sample's negative cosine similarities are enhanced, leading to a larger hard-sample loss. On the other hand, within the same training stage, we modulate the hard samples' negative cosine similarities with cos θ_j. Specifically, the smaller the angle θ_j is, the larger the modulation coefficient should be.

As SV-Arc-Softmax claimed, the manually defined constant t plays a key role in the model convergence property, and a slightly larger value (e.g., >1.4) may make the model difficult to converge. Thus t needs to be carefully tuned.

In this work, we propose a novel adaptive curriculum learning loss, termed CurricularFace, to achieve a novel training strategy for deep face recognition. Motivated by the nature of human learning, in which easy cases are learned first and then come the hard ones, our CurricularFace incorporates the idea of Curriculum Learning (CL) into face recognition in an adaptive manner, which differs from traditional CL in two aspects. First, the curriculum construction is adaptive. In traditional CL, the samples are ordered by their corresponding difficultness, which is often defined by a prior and then fixed to establish the curriculum. In CurricularFace, the samples are randomly selected in each mini-batch, while the curriculum is established adaptively via mining the hard samples online, which reflects the diversity in samples with different importance. Second, the importance of hard samples is adaptive. On one hand, the relative importance between easy and hard samples is dynamic and can be adjusted in different training stages. On the other hand, the importance of each hard sample in the current mini-batch depends on its own difficultness. Specifically, the misclassified samples in a mini-batch are chosen as hard samples and weighted by adjusting the modulation coefficients I(t, cos θ_j) of the cosine similarities between the sample and the non-ground-truth class center vectors, i.e., the negative cosine similarity N(t, cos θ_j). To achieve the goal of adaptive curriculum learning throughout training, we design a novel coefficient function I(·) that is determined by two factors: 1) the adaptively estimated parameter t, which utilizes a moving average of the positive cosine similarities between samples and the corresponding ground-truth class center to relieve the burden of manual tuning; and 2) the angle θ_j, which defines the difficultness of hard samples to achieve adaptive assignment.
To sum up, the contributions of this work are:
• We propose an adaptive curriculum learning loss for face recognition, which automatically emphasizes easy samples first and hard samples later. To the best of our knowledge, it is the first work to introduce the idea of adaptive curriculum learning for face recognition.
• We design a novel modulation coefficient function I(·) to achieve adaptive curriculum learning during training, which connects positive and negative cosine similarities simultaneously without the need to manually tune any additional hyper-parameter.
• We conduct extensive experiments on popular facial benchmarks, which demonstrate the superiority of our CurricularFace over the state-of-the-art competitors.

Margin-based loss function. Loss design is pivotal for large-scale face recognition. Current state-of-the-art deep face recognition methods mostly adopt softmax-based classification loss. Since the features learned with the original softmax loss are not guaranteed to be discriminative enough for the open-set face recognition problem, margin-based losses have been proposed. Though margin-based loss functions are verified to obtain good performance, they do not take the difficultness of each sample into consideration, while our CurricularFace emphasizes easy samples first and hard samples later, which is more reasonable and effective.

Mining-based loss function. Though some mining-based loss functions such as Focal loss and Online Hard Example Mining (OHEM) are prevalent in the field of object detection, they are rarely used in face recognition. OHEM focuses on the large-loss samples in one mini-batch, in which the percentage of hard samples is empirically determined and easy samples are completely discarded. Focal loss is a soft mining variant that rectifies the loss function to an elaborately designed form, where two hyper-parameters should be tuned with a lot of effort to decide the weights of the samples, and hard samples are emphasized by reducing the weight of easy samples. The recent SV-Arc-Softmax fuses the motivations of both margin and mining into one framework for deep face recognition. It defines hard samples as misclassified samples and enlarges the weight of hard samples with a preset constant. Our method differs from SV-Arc-Softmax in three aspects: 1) We do not always emphasize the hard samples, especially in the early training stages. 2) We assign different weights to hard samples according to their corresponding difficultness. 3) There is no need in our method to manually tune the additional hyper-parameter t, which is estimated adaptively.

Curriculum Learning. Learning from easier samples first and harder samples later is a common strategy in Curriculum Learning (CL). The key problem in CL is to define the difficultness of each sample. For example, one can take the negative distance to the decision boundary as the indicator of easiness in classification. However, the ad-hoc curriculum design in CL turns out to be difficult to implement across different problems. To alleviate this issue, a new formulation called Self-Paced Learning (SPL) has been designed, where examples with lower losses are considered to be easier and are emphasized during training. The key differences between our CurricularFace and SPL are: 1) Our method focuses on easier samples in the early training stage and emphasizes hard samples in the later training stage.
2) Our method proposes a novel modulation function N(·) for the negative cosine similarities, which achieves not only adaptive assignment of the modulation coefficients I(·) for different samples in the same training stage, but also an adaptive curriculum learning strategy across different training stages.

The original softmax loss is formulated as follows:

L = −log( e^{W_{y_i}^T x_i + b_{y_i}} / Σ_{j=1}^{n} e^{W_j^T x_i + b_j} ),

where x_i ∈ R^d denotes the deep feature of the i-th sample, which belongs to the y_i class, W_j denotes the j-th column of the weight W ∈ R^{d×n}, and b_j is the bias term. The class number and the embedding feature size are n and d, respectively. In practice, the bias is usually set to b_j = 0 and the individual weights are set to ||W_j|| = 1 by l2 normalization. The deep feature is also normalized and re-scaled to s. Thus, the original softmax can be modified as follows:

L = −log( e^{s cos θ_{y_i}} / (e^{s cos θ_{y_i}} + Σ_{j=1, j≠y_i}^{n} e^{s cos θ_j}) ).

Since the features learned with the original softmax loss may not be discriminative enough for the open-set face recognition problem, several variants have been proposed, which can be formulated in a general form:

L = −G(p(x_i)) log p(x_i), with p(x_i) = e^{sT(cos θ_{y_i})} / (e^{sT(cos θ_{y_i})} + Σ_{j=1, j≠y_i}^{n} e^{sN(t, cos θ_j)}),

where p(x_i) is the predicted ground truth probability and G(p(x_i)) is an indicator function. T(cos θ_{y_i}) and N(t, cos θ_j) = I(t, cos θ_j) cos θ_j + c are the functions that modulate the positive and negative cosine similarities, respectively, where c is a constant and I(t, cos θ_j) denotes the modulation coefficient of the negative cosine similarities.

In margin-based loss functions, e.g., ArcFace, G(p(x_i)) = 1, T(cos θ_{y_i}) = cos(θ_{y_i} + m), and N(t, cos θ_j) = cos θ_j. The loss only modifies the positive cosine similarity of each sample to enhance feature discrimination. As shown in Fig. 1, the modulation coefficient of each sample's negative cosine similarity I(·) is fixed at 1. The recent SV-Arc-Softmax emphasizes hard samples by increasing I(t, cos θ_j) for hard samples. That is, G(p(x_i)) = 1, and N(t, cos θ_j) is formulated as follows:

N(t, cos θ_j) = cos θ_j (easy); N(t, cos θ_j) = t cos θ_j + t − 1 (hard).

If a sample is defined to be easy, its negative cosine similarity is kept the same as the original one, cos θ_j; if it is a hard sample, its negative cosine similarity becomes t cos θ_j + t − 1. That is, as shown in Fig. 1, I(·) is a constant determined by a preset hyper-parameter t. Meanwhile, since t is always larger than 1, t cos θ_j + t − 1 > cos θ_j always holds true, which means the model always focuses on hard samples, even in the early training stage. However, the parameter t is sensitive: a large pre-defined value (e.g., > 1.4) may lead to convergence issues.

Next, we present the details of our proposed adaptive curriculum learning loss, which is the first attempt to introduce adaptive curriculum learning into deep face recognition. The formulation of our loss function is also contained in the general form above, where G(p(x_i)) = 1, and the positive and negative cosine similarity functions are defined as follows:

T(cos θ_{y_i}) = cos(θ_{y_i} + m),
N(t, cos θ_j) = cos θ_j, if T(cos θ_{y_i}) − cos θ_j ≥ 0;
N(t, cos θ_j) = cos θ_j (t + cos θ_j), if T(cos θ_{y_i}) − cos θ_j < 0.

It should be noted that the positive cosine similarity can adopt any margin-based loss function, and here we adopt ArcFace as the example. As shown in Fig. 1, the modulation coefficient of a hard sample's negative cosine similarity, I(t, θ_j), depends on both the value of t and θ_j. In the early training stage, learning from easy samples is beneficial to model convergence. Thus, t should be close to zero and I(·) is smaller than 1; therefore, the weights of hard samples are reduced and the easy samples are emphasized relatively.
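The positive and negative modulation functions just defined can be expressed compactly for a mini-batch. A PyTorch-style sketch follows; the tensor shapes and the clamping of cosines before `acos` are assumptions of the sketch.

```python
import torch

def modulate(cos_pos, cos_neg, t, m=0.5):
    """CurricularFace modulation for one mini-batch.
    cos_pos: (B,) cosine to the ground-truth class, cos(theta_{y_i}).
    cos_neg: (B, n-1) cosines to the non-ground-truth classes, cos(theta_j)."""
    theta = torch.acos(cos_pos.clamp(-1 + 1e-7, 1 - 1e-7))
    T = torch.cos(theta + m)              # T(cos theta_{y_i}) = cos(theta + m)
    hard = cos_neg > T.unsqueeze(1)       # mis-classified => hard sample
    N = torch.where(hard, cos_neg * (t + cos_neg), cos_neg)
    return T, N
```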
As training goes on, the model gradually focuses on the hard samples, i.e., the value of t increases and I(·) becomes larger than 1. Then, the weights of hard samples are enlarged, and they are thus emphasized. Moreover, within the same training stage, I(·) is monotonically decreasing with θ_j, so that a harder sample can be assigned a larger coefficient according to its difficultness. The value of the parameter t is automatically estimated in our CurricularFace; otherwise, it would require a lot of effort for manual tuning.

Adaptive estimation of t. It is critical to determine appropriate values of t in different training stages. Ideally, the value of t should indicate the model training process. We empirically find the average of positive cosine similarities to be a good indicator. However, mini-batch statistic-based methods usually face an issue: when many extreme data points are sampled in one mini-batch, the statistics can be vastly noisy and the estimation will be unstable. Exponential Moving Average (EMA) is a common solution to address this issue. Specifically, let r^{(k)} be the average of the positive cosine similarities of the k-th batch, formulated as r^{(k)} = Σ_i cos θ_{y_i}, and let

t^{(k)} = α t^{(k−1)} + (1 − α) r^{(k)},

where t^{(0)} = 0 and α is the momentum parameter, set to 0.99. As shown in Fig. 2, the parameter t increases as the model trains, and thus the range of the gradient modulation coefficients of hard samples, M(·) = 2 cos θ_j + t, also increases. Therefore, hard samples are emphasized gradually. With the EMA, we avoid the hyper-parameter tuning and make the modulation coefficients of the hard samples' negative cosine similarities I(·) adaptive to the current training stage.

Algorithm 1: CurricularFace training.
Input: The deep feature x_i of the i-th sample with its corresponding label y_i, last fully-connected layer parameters W, cosine similarity cos θ_j between two vectors, embedding network parameters Θ, learning rate λ, number of iterations k, parameter t, and margin m.
k ← 0, t ← 0, m ← 0.5;
while not converged do
  k ← k + 1;
  if cos(θ_{y_i} + m) > cos θ_j then N(t, cos θ_j) = cos θ_j;
  else N(t, cos θ_j) = (t^{(k)} + cos θ_j) cos θ_j;
  end
  T(cos θ_{y_i}) = cos(θ_{y_i} + m);
  Compute the loss L by Eq. 8;
  Compute the back-propagation error of x_i and W_j by Eq. 9;
  Update the parameters W and Θ by W^{(k+1)} = W^{(k)} − λ ∂L/∂W^{(k)} and Θ^{(k+1)} = Θ^{(k)} − λ ∂L/∂Θ^{(k)};
  Update the parameter t by Eq. 7;
end
Output: W, Θ.

To sum up, the loss function of our CurricularFace is formulated as follows:

L = −log( e^{sT(cos θ_{y_i})} / (e^{sT(cos θ_{y_i})} + Σ_{j=1, j≠y_i}^{n} e^{sN(t^{(k)}, cos θ_j)}) ),

where N(t^{(k)}, cos θ_j) is defined in Eq. 6. The entire training process is summarized in Algorithm 1.

Figure 2: Illustrations of the adaptive parameter t (red line) and the gradient modulation coefficients M(·) = 2 cos θ_j + t of hard samples (green area). Since the number of mined hard samples decreases as the model trains, the green area M(·) is relatively smooth in the early stage and shows some burrs in the later stage.

Fig. 3 illustrates how the loss changes from ArcFace to our CurricularFace during training. Here are some observations: 1) As expected, hard samples are suppressed in the early training stage but emphasized later. 2) The ratio is monotonically increasing with cos θ_j, since the larger cos θ_j is, the harder the sample is. 3) The positive cosine similarity of a perceptually good image is often large. However, during the early training stage, the negative cosine similarities of a perceptually good image may also be large, so that it could be classified as a hard one.

Optimization. Next, we show that our CurricularFace can be easily optimized by conventional stochastic gradient descent.
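Before walking through the forward and backward cases, here is a hedged PyTorch sketch condensing Algorithm 1 into a single layer, including the EMA update of t (Eq. 7) with momentum 0.99 and the hyper-parameters s = 64, m = 0.5 from the text. The feature/weight normalization follows common ArcFace-style implementations and is an assumption of this sketch, not the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CurricularFaceLoss(nn.Module):
    def __init__(self, feat_dim, n_classes, s=64.0, m=0.5, momentum=0.99):
        super().__init__()
        self.W = nn.Parameter(torch.empty(feat_dim, n_classes))
        nn.init.xavier_uniform_(self.W)
        self.s, self.m, self.momentum = s, m, momentum
        self.register_buffer("t", torch.zeros(1))  # adaptive parameter, t_0 = 0

    def forward(self, x, y):
        cos = F.normalize(x, dim=1) @ F.normalize(self.W, dim=0)   # (B, n)
        cos_pos = cos.gather(1, y.view(-1, 1)).squeeze(1)          # cos(theta_{y_i})
        T = torch.cos(torch.acos(cos_pos.clamp(-1 + 1e-7, 1 - 1e-7)) + self.m)
        with torch.no_grad():  # EMA update of t (Eq. 7)
            r = cos_pos.mean()
            self.t.mul_(self.momentum).add_((1 - self.momentum) * r)
        # hard = mis-classified classes; the ground-truth column is overwritten below
        hard = cos > T.view(-1, 1)
        N = torch.where(hard, cos * (self.t + cos), cos)
        logits = N.scatter(1, y.view(-1, 1), T.view(-1, 1))        # put T at y_i
        return F.cross_entropy(self.s * logits, y)
```

A usage note: the layer replaces the final fully-connected classifier, so a training step is simply `loss = criterion(embedding_net(images), labels); loss.backward()`.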
Assuming x_i denotes the deep feature of the i-th sample, which belongs to the y_i class, the input of the proposed function is the logit f_j, where j denotes the j-th class. In the forward process, when j = y_i, it is the same as ArcFace, i.e., f_j = sT(cos θ_{y_i}), with T(cos θ_{y_i}) = cos(θ_{y_i} + m). When j ≠ y_i, there are two cases: if x_i is an easy sample, it is the same as the original softmax, i.e., f_j = s cos θ_j; otherwise, it is modulated as f_j = sN(t, cos θ_j), where N(t, cos θ_j) = (t + cos θ_j) cos θ_j. In the backward propagation process, the gradients with respect to x_i and W_j can also be divided into the same three cases. Based on these formulations, we can see that the gradient magnitude of a hard sample is determined by two parts: the negative cosine similarity N(·) and the value of t.

Table 1: Decision boundaries of popular loss functions for the class y_i.
Softmax: cos θ_{y_i} = cos θ_j
SphereFace: cos(mθ_{y_i}) = cos θ_j
CosFace: cos θ_{y_i} − m = cos θ_j
ArcFace: cos(θ_{y_i} + m) = cos θ_j
SV-Arc-Softmax: cos(θ_{y_i} + m) = cos θ_j (easy); cos(θ_{y_i} + m) = t cos θ_j + t − 1 (hard)
CurricularFace (Ours): cos(θ_{y_i} + m) = cos θ_j (easy); cos(θ_{y_i} + m) = (t + cos θ_j) cos θ_j (hard)

Comparison with ArcFace and SV-Arc-Softmax. We first discuss the differences between our CurricularFace and the two competitors, ArcFace and SV-Arc-Softmax, from the perspective of the decision boundary in Tab. 1. ArcFace introduces a margin function T(cos θ_{y_i}) = cos(θ_{y_i} + m) from the perspective of the positive cosine similarity. As shown in Fig. 4, its decision condition changes from cos θ_{y_i} = cos θ_j (i.e., the blue line) to cos(θ_{y_i} + m) = cos θ_j (i.e., the red line) for each sample. SV-Arc-Softmax introduces an additional margin from the perspective of the negative cosine similarity for hard samples, and the decision boundary becomes cos(θ_{y_i} + m) = t cos θ_j + t − 1 (i.e., the green line). Conversely, we adaptively adjust the weights of hard samples in different training stages. The decision condition becomes cos(θ_{y_i} + m) = (t + cos θ_j) cos θ_j (i.e., the purple line). During training, the decision boundary for hard samples changes from one purple line (early stage) to another (later stage), which emphasizes easy samples first and hard samples later.

Comparison with Focal loss. Focal loss is a soft mining-based loss, formulated as L = −α(1 − p(x_i))^β log(p(x_i)), where α and β are modulating factors that need to be tuned manually. The definition of hard samples in Focal loss is ambiguous, since it always focuses on relatively hard samples by reducing the weight of easier samples during the entire training process. In contrast, the definition of hard samples in our CurricularFace is clearer, i.e., misclassified samples. Meanwhile, the weights of hard samples are adaptively determined in different training stages.

Datasets. We separately employ CASIA-WebFace and refined MS1MV2 as our training data for fair comparisons with other methods. We extensively test our method on several popular benchmarks, including LFW, CFP-FP, CPLFW, AgeDB, CALFW, IJB-B, IJB-C, and MegaFace.

Training Setting. We follow common practice to generate the normalised faces (112 × 112) with five landmarks. For the embedding network, we adopt ResNet50 and ResNet100 as in prior work. Our framework is implemented in PyTorch. We train the models on 4 NVIDIA Tesla P40 (24GB) GPUs with batch size 512. The models are trained with the SGD algorithm, with momentum 0.9 and weight decay 5e−4. On CASIA-WebFace, the learning rate starts from 0.1 and is divided by 10 at epochs 28, 38, and 46. The training process finishes at 50 epochs.
On MS1MV2, we divide the learning rate at epochs 10, 18, and 22, and finish at 24 epochs. We follow the common setting of scale s = 64 and margin m = 0.5. Last but not least, since we only modify the loss function but use the same backbone as previous methods (e.g., ArcFace), NO additional time complexity is introduced at inference.

Effects of Fixed vs. Adaptive Parameter t. We first investigate the effect of the adaptive estimation of t. We choose four fixed values between 0 and 1 for comparison. Specifically, 0 means the modulation coefficient I(·) of each hard sample's negative cosine similarity is always reduced based on its difficultness. In contrast, 1 means the hard samples are always emphasized. 0.3 and 0.7 lie between the two cases. Tab. 2 shows that it is more effective to learn from easier samples first and hard samples later based on our adaptively estimated parameter t.

Effects of Different Statistics for Estimating t. We now investigate the effects of several other statistics, i.e., the mode of the positive cosine similarities in a mini-batch, or the mean of the predicted ground truth probability, for estimating t in our loss. As Tab. 3 shows, on one hand, the mean of the positive cosine similarities is better than the mode. On the other hand, the positive cosine similarity is more accurate than the predicted ground truth probability in indicating the training stage.

As claimed in prior work, ArcFace exhibits a divergence issue when using small backbones like MobileFaceNet. As a result, the softmax loss must be incorporated for pre-training. To illustrate the robustness of our loss function against this convergence issue with a small backbone, we use MobileFaceNet as the network architecture and train it on CASIA-WebFace. As shown in Fig. 5, when the margin m is set to 0.5, the model trained with our loss achieves 99.25% accuracy on LFW, while the model trained with ArcFace does not converge and the loss is NaN at about the 2,400-th step. When the margin m is set to 0.45, both losses converge, but our loss achieves better performance (99.20% vs. 99.10%). Comparing the yellow and red curves, since the losses of hard samples are reduced in the early training stages, our loss converges much faster in the beginning, leading to a lower loss than ArcFace. Later on, the value of our loss is slightly larger than ArcFace's, because we emphasize the hard samples in the later stages. These results prove that learning from easy samples first and hard samples later is beneficial to model convergence.

Results on LFW, CFP-FP, CPLFW, AgeDB and CALFW. Next, we train our CurricularFace on the MS1MV2 dataset with ResNet100, and compare with the SOTA competitors on various benchmarks, including LFW for unconstrained face verification, CFP-FP and CPLFW for large pose variations, and AgeDB and CALFW for age variations. As reported in Tab. 4, our CurricularFace achieves comparable results (i.e., 99.80%) with the competitors on LFW, where the performance is nearly saturated. For both CFP-FP and CPLFW, our method shows superiority over the baselines, including both general methods and cross-pose methods. As a recent face recognition method, SV-Arc-Softmax achieves better performance than ArcFace, but still worse than our CurricularFace. Finally, for AgeDB and CALFW, as Tab. 4 shows, our CurricularFace again achieves the best performance among all of the state-of-the-art methods.

Results on IJB-B and IJB-C. The IJB-B dataset contains 1,845 subjects with 21.8K still images and 55K frames from 7,011 videos.
In the 1:1 verification, there are 10,270 positive matches and 8M negative matches. The IJB-C dataset is a further extension of IJB-B, containing about 3,500 identities with a total of 31,334 images and 117,542 unconstrained video frames. In the 1:1 verification, there are 19,557 positive matches and 15,638,932 negative matches. On the IJB-B and IJB-C datasets, we employ MS1MV2 and ResNet100 for a fair comparison with recent methods. We follow the testing protocol in ArcFace and take the average of the image features as the corresponding template representation without bells and whistles. Tab. 5 exhibits the performance of different methods, e.g., Multicolumn, DCN, AdaCos, P2SGrad, PFE and SV-Arc-Softmax, on IJB-B and IJB-C 1:1 verification; our method again achieves the best performance.

Results on MegaFace. Finally, we evaluate the performance on the MegaFace Challenge. The gallery set of MegaFace includes 1M images of 690K subjects, and the probe set includes 100K photos of 530 unique individuals from FaceScrub. We report the testing results under two protocols (large or small training set). Here, we use CASIA-WebFace and MS1MV2 under the small protocol and large protocol, respectively. In Tab. 6, our method achieves the best single-model identification and verification performance under both protocols, surpassing the recent strong competitors, e.g., CosFace, ArcFace, AdaCos, P2SGrad and PFE. We also report the results following the ArcFace testing protocol, which refines both the probe set and the gallery set. As shown in the figure in Tab. 6, our method still clearly outperforms the competitors and achieves the best performance on both verification and identification.

In this paper, we propose a novel Adaptive Curriculum Learning loss that embeds the idea of adaptive curriculum learning into deep face recognition. Our key idea is to address easy samples in the early training stage and hard ones in the later stage. Our method is easy to implement and robust to convergence issues. Extensive experiments on popular facial benchmarks demonstrate the effectiveness of our method compared to the state-of-the-art competitors. Following the main idea of this work, future research can be expanded in various aspects, including designing a better function N(·) for the negative cosine similarity that shares similar adaptive characteristics during training, and investigating the effects of noisy samples that could be mistakenly optimized as hard samples.
A novel Adaptive Curriculum Learning loss for deep face recognition
1,125
scitldr
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks.

Machine learning (ML) models are often vulnerable to adversarial examples, maliciously perturbed inputs designed to mislead a model at test time BID4 BID36 BID15 BID29. Furthermore, BID36 showed that these inputs transfer across models: the same adversarial example is often misclassified by different models, thus enabling simple black-box attacks on deployed models BID23.

Adversarial training BID36 increases robustness by augmenting training data with adversarial examples. Prior work showed that adversarially trained models can be made robust to white-box attacks (i.e., with knowledge of the model parameters) if the perturbations computed during training closely maximize the model's loss. However, prior attempts at scaling this approach to ImageNet-scale tasks BID12 have proven unsuccessful BID20.

It is thus natural to ask whether it is possible, at scale, to achieve robustness against the class of black-box adversaries. Towards this goal, BID20 adversarially trained an Inception v3 model BID38 on ImageNet using a "single-step" attack based on a linearization of the model's loss BID15. Their trained model is robust to single-step perturbations but remains vulnerable to more costly "multi-step" attacks. Yet, BID20 found that these attacks fail to reliably transfer between models, and thus concluded that the robustness of their model should extend to black-box adversaries. Surprisingly, we show that this is not the case.

We demonstrate, formally and empirically, that adversarial training with single-step methods admits a degenerate global minimum, wherein the model's loss cannot be reliably approximated by a linear function. Specifically, we find that the model's decision surface exhibits sharp curvature near the data points, thus degrading attacks based on a single gradient computation. In addition to the model of BID20, we reveal similar overfitting in an adversarially trained Inception ResNet v2 model BID37, and a variety of models trained on MNIST BID22.

We harness this result in two ways. First, we show that adversarially trained models using single-step methods remain vulnerable to simple attacks. For black-box adversaries, we find that perturbations crafted on an undefended model often transfer to an adversarially trained one.
We also introduce a simple yet powerful single-step attack that applies a small random perturbation, to escape the non-smooth vicinity of the data point, before linearizing the model's loss. While seemingly weaker than the Fast Gradient Sign Method of BID15, our attack significantly outperforms it for the same perturbation norm, for models trained with or without adversarial training. Second, we propose Ensemble Adversarial Training, a training methodology that incorporates perturbed inputs transferred from other pre-trained models. Our approach decouples adversarial example generation from the parameters of the trained model, and increases the diversity of perturbations seen during training. We train Inception v3 and Inception ResNet v2 models on ImageNet that exhibit increased robustness to adversarial examples transferred from other holdout models, using various single-step and multi-step attacks BID15 BID7 BID19. We also show that our methods globally reduce the dimensionality of the space of adversarial examples BID40. Our Inception ResNet v2 model won the first round of the NIPS 2017 competition on Defenses Against Adversarial Attacks BID21, where it was evaluated on other competitors' attacks in a black-box setting. Many defenses have been proposed BID16 BID24 BID31 BID28 BID10, and many remain vulnerable to adaptive attackers BID7 b; BID3. Adversarial training BID36 BID15 BID20 appears to hold the greatest promise for learning robust models. Prior work shows that adversarial training on MNIST yields models that are robust to white-box attacks, if the adversarial examples used in training closely maximize the model's loss. Moreover, recent works by BID34, BID33 and BID18 even succeed in providing certifiable robustness for small perturbations on MNIST. As we argue in Appendix C, the MNIST dataset is peculiar in that there exists a simple "closed-form" denoising procedure (namely feature binarization) which leads to similarly robust models without adversarial training. This may explain why robustness to white-box attacks is hard to scale to tasks such as ImageNet BID20. We believe that the existence of a simple robust baseline for MNIST can be useful for understanding some limitations of adversarial training techniques. BID36 found that adversarial examples transfer between models, thus enabling black-box attacks on deployed models. It has also been shown that black-box attacks can succeed with no access to training data, by exploiting the target model's predictions to extract a surrogate model BID39. Some prior works have hinted that adversarially trained models may remain vulnerable to black-box attacks: BID15 found that an adversarial maxout network on MNIST has slightly higher error on transferred examples than on white-box examples. Later work further showed that a model trained on small perturbations can be evaded by transferring perturbations of larger magnitude. Our finding that adversarial training degrades the accuracy of linear approximations of the model's loss is an instance of a gradient-masking phenomenon BID30, which affects other defensive techniques BID31 BID7 BID28 BID5 BID2. We consider a classification task with data x ∈ [0, 1]^d and labels y_true ∈ Z_k sampled from a distribution D. We identify a model with a hypothesis h from a space H. On input x, the model outputs class scores h(x) ∈ R^k. The loss function used to train the model, e.g., cross-entropy, is L(h(x), y).
For some target model h ∈ H and inputs (x, y_true), the adversary's goal is to find an adversarial example x_adv such that x_adv and x are "close" yet the model misclassifies x_adv. We consider the well-studied class of ℓ∞ bounded adversaries BID15 that, given some budget ε, output examples x_adv where ‖x_adv − x‖_∞ ≤ ε. As we comment in Appendix C.1, ℓ∞ robustness is of course not an end-goal for secure ML. We use this standard model to showcase limitations of prior adversarial training methods, and evaluate our proposed improvements. We distinguish between white-box adversaries that have access to the target model's parameters (i.e., h), and black-box adversaries with only partial information about the model's inner workings. Formal definitions for these adversaries are in Appendix A. Although security against white-box attacks is the stronger notion (and the one we ideally want ML models to achieve), black-box security is a reasonable and more tractable goal for deployed ML models. Following prior work, we consider an adversarial variant of standard Empirical Risk Minimization (ERM), where our aim is to minimize the risk over adversarial examples: min_{h∈H} E_{(x, y_true)∼D} [ max_{‖x_adv − x‖_∞ ≤ ε} L(h(x_adv), y_true) ]. (1) One can argue that adversarial training has a natural interpretation in this context, where a given attack (see below) is used to approximate solutions to the inner maximization problem, and the outer minimization problem corresponds to training over these examples. Note that the original formulation of adversarial training BID36 BID15, which we use in our experiments, trains on both the "clean" examples x and adversarial examples x_adv. We consider three algorithms to generate adversarial examples with bounded ℓ∞ norm. The first two are single-step (i.e., they require a single gradient computation); the third is iterative: it computes multiple gradient updates. We enforce x_adv ∈ [0, 1]^d by clipping all components of x_adv. Fast Gradient Sign Method (FGSM). This method BID15 linearizes the inner maximization problem in (1): x_adv = x + ε · sign(∇_x L(h(x), y_true)). Single-Step Least-Likely Class Method (Step-LL). This variant of FGSM introduced by BID19 b) targets the least-likely class, y_LL = argmin{h(x)}: x_adv = x − ε · sign(∇_x L(h(x), y_LL)). Although this attack only indirectly tackles the inner maximization in (1), BID20 find it to be the most effective for adversarial training on ImageNet. Iterative Attack (I-FGSM or Iter-LL). This method iteratively applies the FGSM or Step-LL k times with step-size α ≥ ε/k and projects each step onto the ℓ∞ ball of radius ε around x. It uses projected gradient descent to solve the maximization in (1). For fixed ε, iterative attacks induce higher error rates than single-step attacks, but transfer at lower rates BID19 b). When performing adversarial training with a single-step attack (e.g., the FGSM or Step-LL methods above), we approximate Equation (1) by replacing the solution to the inner maximization problem with the single-step perturbation x_adv^FGSM. That is, we solve min_{h∈H} E_{(x, y_true)∼D} [ L(h(x_adv^FGSM), y_true) ]. For model families H with high expressive power, this alternative optimization problem admits at least two substantially different global minima h*: • For an input x from D, there is no x_adv close to x (in ℓ∞ norm) that induces a high loss. That is, max_{‖x_adv − x‖_∞ ≤ ε} L(h*(x_adv), y_true) is small in expectation over D. In other words, h* is robust to all ℓ∞ bounded perturbations. • The minimizer h* is a model for which the approximation method underlying the attack (i.e., linearization in our case) poorly fits the model's loss function. That is, L(h*(x_adv^FGSM), y_true) is much smaller than max_{‖x_adv − x‖_∞ ≤ ε} L(h*(x_adv), y_true). Thus the attack when applied to h* produces samples x_adv that are far from optimal.
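To make the three attack definitions concrete, here is a minimal PyTorch sketch of FGSM, Step-LL, and I-FGSM as stated above. The model interface, the use of cross-entropy, and the [0, 1] pixel range are illustrative assumptions; only the update rules come from the text.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # x_adv = x + eps * sign(grad_x L(h(x), y_true))
    x = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def step_ll(model, x, eps):
    # Step toward lower loss on the least-likely class y_LL = argmin h(x).
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    y_ll = logits.argmin(dim=1)
    grad = torch.autograd.grad(F.cross_entropy(logits, y_ll), x)[0]
    return (x - eps * grad.sign()).clamp(0.0, 1.0).detach()

def i_fgsm(model, x, y, eps, k=10):
    # k FGSM steps of size eps/k, each projected onto the l_inf ball around x.
    alpha = eps / k
    x_adv = x.clone().detach()
    for _ in range(k):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0.0, 1.0)
    return x_adv
```

The final `clamp(0.0, 1.0)` calls implement the x_adv ∈ [0, 1]^d constraint mentioned above.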
Note that this second "degenerate" minimum can be more subtle than a simple case of overfitting to samples produced from single-step attacks. Indeed, we show in Section 4.1 that single-step attacks applied to adversarially trained models create "adversarial" examples that are easy to classify even for undefended models. Thus, adversarial training does not simply learn to resist the particular attack used during training, but actually to make that attack perform worse overall. This phenomenon relates to the notion of Reward Hacking BID1 wherein an agent maximizes its formal objective function via unintended behavior that fails to capture the designer's true intent. The degenerate minimum described in Section 3.3 is attainable because the learned model's parameters influence the quality of both the minimization and maximization in (1). One solution is to use a stronger adversarial example generation process, at a high performance cost. Alternatively, BID3 suggest training an adversarial generator model as in the GAN framework BID14. The power of this generator is likely to require careful tuning, to avoid similar degenerate minima (where the generator or classifier overpowers the other). We propose a conceptually simpler approach to decouple the generation of adversarial examples from the model being trained, while simultaneously drawing an explicit connection with robustness to black-box adversaries. Our method, which we call Ensemble Adversarial Training, augments a model's training data with adversarial examples crafted on other static pre-trained models. Intuitively, as adversarial examples transfer between models, perturbations crafted on an external model are good approximations for the maximization problem in (1). Moreover, the learned model cannot influence the "strength" of these adversarial examples. As a result, minimizing the training loss implies increased robustness to black-box attacks from some set of models. Domain Adaptation with multiple sources. We can draw a connection between Ensemble Adversarial Training and multiple-source Domain Adaptation BID26 BID43. In Domain Adaptation, a model trained on data sampled from one or more source distributions S_1, ..., S_k is evaluated on samples x from a different target distribution T. Let A_i be an adversarial distribution obtained by sampling (x, y_true) from D, computing an adversarial example x_adv for some model such that ‖x_adv − x‖_∞ ≤ ε, and outputting (x_adv, y_true). In Ensemble Adversarial Training, the source distributions are D (the clean data) and A_1, ..., A_k (the attacks over the currently trained model and the static pre-trained models). The target distribution takes the form of an unseen black-box adversary A*. Standard generalization bounds for Domain Adaptation BID26 BID43 then carry over to this setting. (Figure 1: Gradient masking in single-step adversarial training. We plot the loss of model v3_adv on points x* = x + ε1 · g + ε2 · g⊥, where g is the signed gradient and g⊥ is an orthogonal adversarial direction. Plot (b) is a zoom of (a) near x. The gradient poorly approximates the global loss.) We give a formal statement of this result and of the assumptions on A* in Appendix B. Of course, ideally we would like guarantees against arbitrary future adversaries. For very low-dimensional tasks (e.g., MNIST), stronger guarantees are within reach for specific classes of adversaries (e.g., ℓ∞ bounded perturbations BID34 BID33 BID18), yet they also fail to extend to other adversaries not considered at training time (see Appendix C.1 for a discussion).
For ImageNet-scale tasks, stronger formal guarantees appear out of reach, and we thus resort to an experimental assessment of the robustness of Ensemble Adversarially Trained models to various non-interactive black-box adversaries in Section 4.2. We show the existence of a degenerate minimum, as described in Section 3.3, for the adversarially trained Inception v3 model of BID20. Their model (denoted v3_adv) was trained on a Step-LL attack with ε ≤ 16/256. We also adversarially train an Inception ResNet v2 model BID37 using the same setup. We denote this model by IRv2_adv. We refer the reader to BID20 for details on the adversarial training procedure. We first measure the approximation-ratio of the Step-LL attack for the inner maximization in (1). As we do not know the true maximum, we lower-bound it using an iterative attack. For 1,000 random test points, we find that for a standard Inception v3 model, Step-LL gets within 19% of the optimum loss on average. This attack is thus a good candidate for adversarial training. Yet, for the v3_adv model, the approximation ratio drops to 7%, confirming that the learned model is less amenable to linearization. We obtain similar results for Inception ResNet v2 models. The ratio is 17% for a standard model, and 8% for IRv2_adv. Similarly, we look at the cosine similarity between the perturbations given by a single-step and multi-step attack. The more linear the model, the more similar we expect both perturbations to be. The average similarity drops from 0.13 for Inception v3 to 0.02 for v3_adv. This effect is not due to the decision surface of v3_adv being "too flat" near the data points: the average gradient norm is larger for v3_adv (0.17) than for the standard v3 model (0.10). We visualize this "gradient-masking" effect BID30 by plotting the loss of v3_adv on examples x* = x + ε1 · g + ε2 · g⊥, where g is the signed gradient of model v3_adv and g⊥ is a signed vector orthogonal to g. Looking forward to Section 4.1, we actually chose g⊥ to be the signed gradient of another Inception model, from which adversarial examples transfer to v3_adv. Figure 1 shows that the loss is highly curved in the vicinity of the data point x, and that the gradient poorly reflects the global loss landscape. Similar plots for additional data points are in Figure 4. We show similar results for adversarially trained MNIST models in Appendix C.2. On this task, input dropout BID35 mitigates adversarial training's overfitting problem, in some cases. Presumably, the random input mask diversifies the perturbations seen during training (dropout at intermediate layers does not mitigate the overfitting effect). BID27 find that input dropout significantly degrades accuracy on ImageNet, so we did not include it in our experiments. 4.1 ATTACKS AGAINST ADVERSARIALLY TRAINED NETWORKS. BID20 found their adversarially trained model to be robust to various single-step attacks. They conclude that this robustness should translate to attacks transferred from other models. As we have shown, the robustness to single-step attacks is actually misleading, as the model has learned to degrade the information contained in the model's gradient. As a consequence, we find that the v3_adv model is substantially more vulnerable to single-step attacks than BID20 predicted, both in a white-box and black-box setting. The same holds for the IRv2_adv model. In addition to the v3_adv and IRv2_adv models, we consider standard Inception v3, Inception v4 and Inception ResNet v2 models.
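The two diagnostics used above (the approximation ratio and the cosine similarity between single-step and multi-step perturbations) can be reproduced in a few lines. The sketch below reuses the hypothetical `fgsm` and `i_fgsm` helpers from the earlier snippet and reflects our reading of the procedure, not the authors' code.

```python
import torch
import torch.nn.functional as F

def gradient_masking_diagnostics(model, x, y, eps, k=10):
    """Returns (mean cosine similarity, approximation ratio).

    Low values of both quantities indicate that the local gradient poorly
    reflects the loss landscape, i.e., gradient masking.
    """
    r_single = fgsm(model, x, y, eps) - x     # single-step perturbation
    r_iter = i_fgsm(model, x, y, eps, k) - x  # iterative lower bound on the max
    cos = F.cosine_similarity(r_single.flatten(1), r_iter.flatten(1), dim=1).mean()
    loss_single = F.cross_entropy(model(x + r_single), y)
    loss_iter = F.cross_entropy(model(x + r_iter), y)
    return cos.item(), (loss_single / loss_iter).item()
```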
These models are available in the TensorFlow-Slim library BID0. We describe similar results for a variety of models trained on MNIST in Appendix C.2. Black-box attacks. TAB1 shows error rates for single-step attacks transferred between models. We compute perturbations on one model (the source) and transfer them to all others (the targets). When the source and target are the same, the attack is white-box. Adversarial training greatly increases robustness to white-box single-step attacks, but incurs a higher error rate in a black-box setting. Thus, the robustness gain observed when evaluating defended models in isolation is misleading. Given the ubiquity of this pitfall among proposed defenses against adversarial examples BID7 BID5 BID30, we advise researchers to always consider both white-box and black-box adversaries when evaluating defensive strategies. Notably, a similar discrepancy between white-box and black-box attacks was recently observed in BID6. Attacks crafted on adversarial models are found to be weaker even against undefended models (i.e., when using v3_adv or IRv2_adv as source, the attack transfers with lower probability). This confirms our intuition from Section 3.3: adversarial training does not just overfit to perturbations that affect standard models, but actively degrades the linear approximation underlying the single-step attack. A new randomized single-step attack. The loss function visualization in Figure 1 shows that sharp curvature artifacts localized near the data points can mask the true direction of steepest ascent. We thus suggest to prepend single-step attacks by a small random step, in order to "escape" the non-smooth vicinity of the data point before linearizing the model's loss. Our new attack, called R+FGSM (alternatively, R+Step-LL), is defined as follows, for parameters ε and α (where α < ε): x_adv = x′ + (ε − α) · sign(∇_{x′} L(h(x′), y_true)), where x′ = x + α · sign(N(0^d, I^d)). Note that the attack requires a single gradient computation. The R+FGSM is a computationally efficient alternative to iterative methods that have high success rates in a white-box setting. Our attack can be seen as a single-step variant of the general PGD method. TAB2 compares error rates for the Step-LL and R+Step-LL methods (with ε = 16/256 and α = ε/2). The extra random step yields a stronger attack for all models, even those without adversarial training. This suggests that a model's loss function is generally less smooth near the data points. We further compared the R+Step-LL attack to a two-step Iter-LL attack, which computes two gradient steps. Surprisingly, we find that for the adversarially trained Inception v3 model, the R+Step-LL attack is stronger than the two-step Iter-LL attack. That is, the local gradients learned by the adversarially trained model are worse than random directions for finding adversarial examples (see TAB8)! We find that the addition of this random step hinders transferability (see TAB11). We also tried adversarial training using R+FGSM on MNIST, using a similar approach. We adversarially train a CNN (model A in TAB6) for 100 epochs, and attain > 90.0% accuracy on R+FGSM samples. However, training on R+FGSM provides only little robustness to iterative attacks. For the PGD attack with 20 steps, the model attains 18.0% accuracy. We now evaluate our Ensemble Adversarial Training strategy described in Section 3.4.
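Before moving on, here is a sketch of the R+FGSM update defined above: a random sign step of size α, followed by a gradient step of size ε − α. The clipping to [0, 1] is our assumption for image data.

```python
import torch
import torch.nn.functional as F

def r_fgsm(model, x, y, eps, alpha=None):
    # x' = x + alpha * sign(N(0, I)); x_adv = x' + (eps - alpha) * sign(grad_{x'} L)
    alpha = eps / 2 if alpha is None else alpha
    x_prime = (x + alpha * torch.randn_like(x).sign()).clamp(0.0, 1.0)
    x_prime = x_prime.detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_prime), y), x_prime)[0]
    return (x_prime + (eps - alpha) * grad.sign()).clamp(0.0, 1.0).detach()
```

As stated above, only a single gradient computation is required, which is what makes R+FGSM a cheap alternative to iterative attacks.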
We recall our intuition: by augmenting training data with adversarial examples crafted from static pre-trained models, we decouple the generation of adversarial examples from the model being trained, so as to avoid the degenerate minimum described in Section 3.3. Moreover, our hope is that robustness to attacks transferred from some fixed set of models will generalize to other black-box adversaries. We train Inception v3 and Inception ResNet v2 models BID37 on ImageNet, using the pre-trained models shown in TAB4. In each training batch, we rotate the source of adversarial examples between the currently trained model and one of the pre-trained models. We select the source model at random in each batch, to diversify examples across epochs. The pre-trained models' gradients can be precomputed for the full training set. The per-batch cost of Ensemble Adversarial Training is thus lower than that of standard adversarial training: using our method with n − 1 pre-trained models, only every n-th batch requires a forward-backward pass to compute adversarial gradients. We use synchronous distributed training on 50 machines, with minibatches of size 16 (we did not pre-compute gradients, and thus lower the batch size to fit all models in memory). Half of the examples in a minibatch are replaced by Step-LL examples. As in BID20, we use RMSProp with a learning rate of 0.045, decayed by a factor of 0.94 every two epochs. To evaluate how robustness to black-box attacks generalizes across models, we transfer various attacks crafted on three different holdout models (see TAB4), as well as on an ensemble of these models (as in BID23). We use the Step-LL, R+Step-LL, FGSM, I-FGSM and PGD attacks, the latter using the hinge-loss function from BID7. Our results are in Table 4. For each model, we report the worst-case error rate over all black-box attacks transferred from each of the holdout models (20 attacks in total). Results for MNIST are in TAB10. Convergence speed. Convergence of Ensemble Adversarial Training is slower than for standard adversarial training, a result of training on "hard" adversarial examples and lowering the batch size. BID20 report that after 187 epochs (150k iterations with minibatches of size 32), the v3_adv model achieves 78% accuracy. Ensemble Adversarial Training for models v3_adv-ens3 and v3_adv-ens4 converges after 280 epochs (450k iterations with minibatches of size 16). The Inception ResNet v2 model is trained for 175 epochs, where a baseline model converges at around 160 epochs. Table 4: Error rates (in %) for Ensemble Adversarial Training on ImageNet. Error rates on clean data are computed over the full test set. For 10,000 random test set inputs, and ε = 16/256, we report error rates on white-box Step-LL and the worst-case error over a series of black-box attacks (Step-LL, R+Step-LL, FGSM, I-FGSM, PGD) transferred from the holdout models in TAB4. For both architectures, we mark methods tied for best in bold (based on 95% confidence). White-box attacks. For both architectures, the models trained with Ensemble Adversarial Training are slightly less accurate on clean data, compared to standard adversarial training. Our models are also more vulnerable to white-box single-step attacks, as they were only partially trained on such perturbations. Note that for v3_adv-ens4, the proportion of white-box Step-LL samples seen during training is 1/4 (instead of 1/3 for model v3_adv-ens3).
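One way the per-batch source rotation described above could look in code is sketched below; the model list, optimizer, and reuse of the earlier hypothetical `step_ll` helper are assumptions, and the distributed setup and precomputed gradients are simplified away.

```python
import random
import torch
import torch.nn.functional as F

def ensemble_adv_training_step(model, static_models, optimizer, x, y, eps):
    """One minibatch of Ensemble Adversarial Training (simplified).

    The adversarial source rotates at random between the model being trained
    and the frozen pre-trained models; half the batch is replaced by Step-LL
    examples, the rest stays clean.
    """
    source = random.choice([model] + list(static_models))
    half = x.size(0) // 2
    x_adv = step_ll(source, x[:half], eps)         # crafted on `source`, detached
    x_batch = torch.cat([x_adv, x[half:]], dim=0)  # adversarial + clean halves
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_batch), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the adversarial examples are detached from the trained model's graph, the model cannot influence their "strength", which is the decoupling argument made in Section 3.4.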
The negative impact on the robustness to white-box attacks is large, for only a minor gain in robustness to transferred samples. Thus it appears that while increasing the diversity of adversarial examples seen during training can provide some marginal improvement, the main benefit of Ensemble Adversarial Training is in decoupling the attacks from the model being trained, which was the goal we stated in Section 3.4. Ensemble Adversarial Training is not robust to white-box Iter-LL and R+Step-LL samples: the error rates are similar to those for the v3_adv model, and omitted for brevity (see BID20 for Iter-LL attacks and TAB2 for R+Step-LL attacks). BID20 conjecture that larger models are needed to attain robustness to such attacks. Yet, against black-box adversaries, these attacks are only a concern insofar as they reliably transfer between models. Black-box attacks. Ensemble Adversarial Training significantly boosts robustness to all attacks transferred from the holdout models. For the IRv2_adv-ens model, the accuracy loss (compared to IRv2's accuracy on clean data) is 7.4% (top 1) and 3.1% (top 5). We find that the strongest attacks in our test suite (i.e., with highest transfer rates) are the FGSM attacks. Black-box R+Step-LL or iterative attacks are less effective, as they do not transfer with high probability (see BID20 and TAB11). Attacking an ensemble of all three holdout models, as in BID23, did not lead to stronger black-box attacks than when attacking the holdout models individually. Our results have little variance with respect to the attack parameters (e.g., smaller ε) or to the use of other holdout models for black-box attacks (e.g., we obtain similar results by attacking the v3_adv-ens3 and v3_adv-ens4 models with the IRv2 model). We also find that v3_adv-ens3 is not vulnerable to perturbations transferred from v3_adv-ens4. We obtain similar results on MNIST (see Appendix C.2), thus demonstrating the applicability of our approach to different datasets and model architectures. The NIPS 2017 competition on adversarial examples. Our Inception ResNet v2 model was included as a baseline defense in the NIPS 2017 competition on Adversarial Examples BID21. Participants of the attack track submitted non-interactive black-box attacks that produce adversarial examples with bounded ℓ∞ norm. Models submitted to the defense track were evaluated on all attacks over a subset of the ImageNet test set. The score of a defense was defined as the average accuracy of the model over all adversarial examples produced by all attacks. Our IRv2_adv-ens model finished 1st among 70 submissions in the first development round, with a score of 95.3% (the second placed defense scored 89.9%). The test data was intentionally chosen as an "easy" subset of ImageNet. Our model achieved 97.9% accuracy on the clean test data. After the first round, we released our model publicly, which enabled other users to launch white-box attacks against it. Nevertheless, a majority of the final submissions built upon our released model. (Figure: The dimensionality of the adversarial cone. For 500 correctly classified points x, and for ε ∈ {4, 10, 16}, we plot the probability that we find at least k orthogonal vectors r_i such that ‖r_i‖_∞ = ε and x + r_i is misclassified. For ε ≥ 10, model v3_adv shows a bimodal phenomenon: most points x either have 0 adversarial directions or more than 90.) The winning submission (team "liaofz" with a score of 95.3%) made use of a novel adversarial
denoising technique. The second placed defense (team "cihangxie" with a score of 92.4%) prepends our IRv2_adv-ens model with random padding and resizing of the input image BID42. It is noteworthy that the defenses that incorporated Ensemble Adversarial Training fared better against the worst-case black-box adversary. Indeed, although very robust on average, the winning defense achieved as low as 11.8% accuracy on some attacks. The best defense under this metric (team "rafaelmm", which randomly perturbed images before feeding them to our IRv2_adv-ens model) achieved at least 53.6% accuracy against all submitted attacks, including the attacks that explicitly targeted our released model in a white-box setting. Decreasing gradient masking. Ensemble Adversarial Training decreases the magnitude of the gradient masking effect described previously. For the v3_adv-ens3 and v3_adv-ens4 models, we find that the loss incurred on a Step-LL attack gets within respectively 13% and 18% of the optimum loss (we recall that for models v3 and v3_adv, the approximation ratio was respectively 19% and 7%). Similarly, for the IRv2_adv-ens model, the ratio improves from 8% (for IRv2_adv) to 14%. As expected, not solely training on a white-box single-step attack reduces gradient masking. We also verify that after Ensemble Adversarial Training, a two-step iterative attack outperforms the R+Step-LL attack from Section 4.1, thus providing further evidence that these models have meaningful gradients. Finally, we revisit the "Gradient-Aligned Adversarial Subspace" (GAAS) method of BID40. Their method estimates the size of the space of adversarial examples in the vicinity of a point, by finding a set of orthogonal perturbations of norm ε that are all adversarial. We note that adversarial perturbations do not technically form a "subspace" (e.g., the 0 vector is not adversarial). Rather, they may form a "cone", the dimension of which varies as we increase ε. By linearizing the loss function, estimating the dimensionality of this cone reduces to finding vectors r_i that are strongly aligned with the model's gradient g = ∇_x L(h(x), y_true). BID40 give a method that finds k orthogonal vectors r_i that satisfy g⊤r_i ≥ ε · ‖g‖₂ · 1/√k (this bound is tight). We extend this result to the ℓ∞ norm, an open question in BID40. In Section E, we give a randomized combinatorial construction BID11 that finds k orthogonal vectors r_i satisfying ‖r_i‖_∞ = ε and E[g⊤r_i] ≥ ε · ‖g‖₁ · 1/√k. We show that this result is tight as well. For models v3, v3_adv and v3_adv-ens3, we select 500 correctly classified test points. For each x, we search for a maximal number of orthogonal adversarial perturbations r_i with ‖r_i‖_∞ = ε. We limit our search to k ≤ 100 directions per point. The results are in FIG2. For ε ∈ {4, 10, 16}, we plot the proportion of points that have at least k orthogonal adversarial perturbations. For a fixed ε, the value of k can be interpreted as the dimension of a "slice" of the cone of adversarial examples near a data point. For the standard Inception v3 model, we find over 50 orthogonal adversarial directions for 30% of the points. The v3_adv model shows a curious bimodal phenomenon for ε ≥ 10: for most points (≈ 80%), we find no adversarial direction aligned with the gradient, which is consistent with the gradient masking effect. Yet, for most of the remaining points, the adversarial space is very high-dimensional (k ≥ 90). Ensemble Adversarial Training yields a more robust model, with only a small fraction of points near a large adversarial space.
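To give a flavor of such constructions, the sketch below builds mutually orthogonal ±ε sign vectors from a standard Hadamard matrix (via `scipy.linalg.hadamard`). This is a simplified illustration, not the paper's Regular-Hadamard construction: it assumes k is a power of two that divides d and that g has no zero entries, and it does not achieve the stated ε·‖g‖₁/√k alignment guarantee.

```python
import numpy as np
from scipy.linalg import hadamard

def orthogonal_linf_perturbations(g, eps, k):
    """Build k mutually orthogonal vectors r_i with ||r_i||_inf = eps.

    Rows of a Hadamard matrix are orthogonal sign vectors; tiling them to
    length d and flipping signs coordinate-wise by sign(g) preserves
    orthogonality (each sign flip multiplies both factors of every inner
    product term by the same +/-1).
    """
    d = g.size
    H = hadamard(k)                  # k x k matrix with +/-1 entries
    tiled = np.tile(H, (1, d // k))  # extend each row to length d
    return eps * tiled * np.sign(g)  # k orthogonal rows in {-eps, +eps}^d
```

Each candidate r_i would then be tested for whether x + r_i is misclassified, as in the experiment described above.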
Previous work on adversarial training at scale has produced encouraging results, showing strong robustness to (single-step) adversarial examples BID15 BID20. Yet, these results are misleading, as the adversarially trained models remain vulnerable to simple black-box and white-box attacks. Our results, generic with respect to the application domain, suggest that adversarial training can be improved by decoupling the generation of adversarial examples from the model being trained. Our experiments with Ensemble Adversarial Training show that the robustness attained to attacks from some models transfers to attacks from other models. We did not consider black-box adversaries that attack a model via other means than by transferring examples from a local model. For instance, generative techniques BID3 might provide an avenue for stronger attacks. Yet, a recent work by BID41 found Ensemble Adversarial Training to be resilient to such attacks on MNIST and CIFAR10, and often attaining higher robustness than models that were adversarially trained on iterative attacks. Moreover, interactive adversaries (see Appendix A) could try to exploit queries to the target model's prediction function in their attack, as demonstrated in prior work. If queries to the target model yield prediction confidences, an adversary can estimate the target's gradient at a given point (e.g., using finite differences) and fool the target with our R+FGSM attack. Note that if queries only return the predicted label, the attack does not apply. Exploring the impact of these classes of black-box attacks and evaluating their scalability to complex tasks is an interesting avenue for future work. We provide formal definitions for the threat model introduced in Section 3.1. In the following, we explicitly identify the hypothesis space H that a model belongs to as describing the model's architecture. We consider a target model h ∈ H trained over inputs (x, y_true) sampled from a data distribution D. More precisely, we write h ← train(H, X_train, Y_train, r), where train is a randomized training procedure that takes in a description of the model architecture H, a training set X_train, Y_train sampled from D, and randomness r. Given a set of test inputs X, Y = {(x_1, y_1), ..., (x_m, y_m)} from D and a budget ε > 0, an adversary A produces adversarial examples X_adv = {x_adv^(1), ..., x_adv^(m)} such that ‖x_adv^(i) − x_i‖_∞ ≤ ε for all i. We evaluate success of the attack as the error rate of the target model over X_adv: (1/m) · Σ_{i=1}^{m} 1[argmax h(x_adv^(i)) ≠ y_i]. We assume A can sample inputs according to the data distribution D. We define three adversaries. Definition 2 (White-Box Adversary). For a target model h ∈ H, a white-box adversary is given access to all elements of the training procedure, that is train (the training algorithm), H (the model architecture), the training data X_train, Y_train, the randomness r and the parameters h. The adversary can use any attack (e.g., those in Section 3.2) to find adversarial inputs. White-box access to the internal model weights corresponds to a very strong adversarial model. We thus also consider the following relaxed and arguably more realistic notion of a black-box adversary. Definition 3 (Non-Interactive Black-Box Adversary). For a target model h ∈ H, a non-interactive black-box adversary only gets access to train (the target model's training procedure) and H (the model architecture).
The adversary can sample from the data distribution D, and uses a local algorithm to craft adversarial examples X_adv. Attacks based on transferability BID36 fall in this category, wherein the adversary selects a procedure train and model architecture H, trains a local model h over D, and computes adversarial examples on its local model h using white-box attack strategies. Most importantly, a black-box adversary does not learn the randomness r used to train the target, nor the target's parameters h. The black-box adversaries in our paper are actually slightly stronger than the ones defined above, in that they use the same training data X_train, Y_train as the target model. We provide A with the target's training procedure train to capture knowledge of defensive strategies applied at training time, e.g., adversarial training BID36 BID15 or ensemble adversarial training (see Section 4.2). For ensemble adversarial training, A also knows the architectures of all pre-trained models. In this work, we always mount black-box attacks that train a local model with a different architecture than the target model. We actually find that black-box attacks on adversarially trained models are stronger in this case (see TAB1). The main focus of our paper is on non-interactive black-box adversaries as defined above. For completeness, we also formalize a stronger notion of interactive black-box adversaries that additionally issue prediction queries to the target model. We note that in cases where ML models are deployed as part of a larger system (e.g., a self driving car), an adversary may not have direct access to the model's query interface. Definition 4 (Interactive Black-Box Adversary). For a target model h ∈ H, an interactive black-box adversary only gets access to train (the target model's training procedure) and H (the model architecture). The adversary issues (adaptive) oracle queries to the target model. That is, for arbitrary inputs x ∈ [0, 1]^d, the adversary obtains y = argmax h(x) and uses a local algorithm to craft adversarial examples (given knowledge of H, train, and tuples (x, y)). Prior work shows that such attacks are possible even if the adversary only gets access to a small number of samples from D. Note that if the target model's prediction interface additionally returns class scores h(x), interactive black-box adversaries could use queries to the target model to estimate the model's gradient (e.g., using finite differences), and then apply the attacks in Section 3.2. We further discuss interactive black-box attack strategies in Section 5. We provide a formal statement of Theorem 1 in Section 3.4, regarding the generalization guarantees of Ensemble Adversarial Training. For simplicity, we assume that the model is trained solely on adversarial examples computed on the pre-trained models (i.e., we ignore the clean training data and the adversarial examples computed on the model being trained). Our results are easily extended to also consider these data points. Let D be the data distribution and A_1, ..., A_k, A* be adversarial distributions where a sample (x, y) is obtained by sampling (x, y_true) from D, computing an x_adv such that ‖x_adv − x‖_∞ ≤ ε, and returning (x_adv, y_true). We assume the model is trained on N data points Z_train, where N/k data points are sampled from each distribution A_i, for 1 ≤ i ≤ k. We denote A_train = {A_1, ..., A_k}.
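The finite-difference gradient estimation available to an interactive black-box adversary (Definition 4, when class scores are returned) might be sketched as follows. `query_loss` is a hypothetical oracle that evaluates a loss from the target's returned scores; no access to model internals is assumed.

```python
import torch

def estimate_gradient(query_loss, x, delta=1e-3, n_dirs=100):
    """Two-point finite-difference estimate of grad_x query_loss(x),
    averaged over random probe directions."""
    g = torch.zeros_like(x)
    for _ in range(n_dirs):
        u = torch.randn_like(x)
        u = u / u.norm()
        diff = query_loss(x + delta * u) - query_loss(x - delta * u)
        g += (diff / (2 * delta)) * u
    return g / n_dirs
```

Such an estimate could then stand in for the true gradient in a single-step attack like R+FGSM, as suggested in the discussion above.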
At test time, the model is evaluated on adversarial examples from A*. For a model h ∈ H we define the empirical risk R̂(h) = (1/N) · Σ_{(x_adv, y_true) ∈ Z_train} L(h(x_adv), y_true), and the risk over the target distribution (or future adversary) R(h) = E_{(x_adv, y_true) ∼ A*} [L(h(x_adv), y_true)]. We further define the average discrepancy distance BID26 between distributions A_i and A* with respect to a hypothesis space H as DISPLAYFORM2. This quantity characterizes how "different" the future adversary is from the train-time adversaries. Intuitively, the distance disc(A_train, A*) is small if the difference in robustness between two models to the target attack A* is somewhat similar to the difference in robustness between these two models to the attacks used for training (e.g., if the static black-box attacks A_i induce much higher error on some model h_1 than on another model h_2, then the same should hold for the target attack A*). In other words, the ranking of the robustness of models h ∈ H should be similar for the attacks in A_train as for A*. Finally, let R_N(H) be the average Rademacher complexity of the distributions A_1, ..., A_k BID43. Note that R_N(H) → 0 as N → ∞. The following theorem is a corollary of Zhang et al. (2012, Theorem 5.2): Theorem 5. Assume that H is a function class consisting of bounded functions. Then, with probability at least 1 − δ, DISPLAYFORM3. Compared to the standard generalization bound for supervised learning, the generalization bound for Domain Adaptation incorporates the extra term disc_H(A_train, A*) to capture the divergence between the target and source distributions. In our context, this means that the model h* learned by Ensemble Adversarial Training has guaranteed generalization bounds with respect to future adversaries that are not "too different" from the ones used during training. Note that A* need not restrict itself to perturbations with bounded ℓ∞ norm for this to hold. We re-iterate our ImageNet experiments on MNIST. For this simpler task, prior work shows that training on iterative attacks conveys robustness to white-box attacks with bounded ℓ∞ norm. Our goal is not to attain similarly strong white-box robustness on MNIST, but to show that our observations on the limitations of single-step adversarial training extend to other datasets than ImageNet. The MNIST dataset is a simple baseline for assessing the potential of a defense, but the obtained results do not always generalize to harder tasks. We suggest that this is because achieving robustness to ℓ∞ perturbations admits a simple "closed-form" solution, given the near-binary nature of the data. Indeed, for an average MNIST image, over 80% of the pixels are in {0, 1} and only 6% are in the range [0.2, 0.8]. Thus, for a perturbation with ε ≤ 0.3, binarized versions of x and x_adv can differ in at most 6% of the input dimensions. By binarizing the inputs of a standard CNN trained without adversarial training, we obtain a model that enjoys robustness similar to an adversarially trained model. Concretely, for a white-box I-FGSM attack, we get at most 11.4% error. The existence of such a simple robust representation begs the question of why learning a robust model with adversarial training takes so much effort. Finding techniques to improve the performance of adversarial training, even on simple tasks, could provide useful insights for more complex tasks such as ImageNet, where we do not know of a similarly simple "denoising" procedure. These positive results on MNIST for the ℓ∞ norm also leave open the question of defining a general norm for adversarial examples.
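The binarization baseline discussed above amounts to a one-line input wrapper; the threshold of 0.5 is our assumption, consistent with rounding near-binary MNIST pixels.

```python
import torch

class BinarizedInputModel(torch.nn.Module):
    """Round inputs to {0, 1} before classification; for near-binary data
    this removes most l_inf perturbations with eps < 0.5."""
    def __init__(self, base_model, threshold=0.5):
        super().__init__()
        self.base_model = base_model
        self.threshold = threshold

    def forward(self, x):
        return self.base_model((x > self.threshold).float())
```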
Let us motivate the need for such a definition: we find that if we first rotate an MNIST digit by 20°, and then use the I-FGSM, our rounding model and the adversarially trained model achieve only 65% accuracy (on "clean" rotated inputs, the error is < 5%). If we further randomly "flip" 5 pixels per image, the accuracy of both models drops to under 50%. Thus, we successfully evade the model by slightly extending the threat model (see FIG3). Of course, we could augment the training set with such perturbations (see BID13). An open question is whether we can enumerate all types of "adversarial" perturbations. In this work, we focus on the ℓ∞ norm to illustrate our findings on the limitations of single-step adversarial training on ImageNet and MNIST, and to showcase the benefits of our Ensemble Adversarial Training variant. Our approach can easily be extended to consider multiple perturbation metrics. We leave such an evaluation to future work. We repeat experiments from Section 4 on MNIST. We use the architectures in TAB6. We train a standard model for 6 epochs, and an adversarial model with the FGSM (ε = 0.3) for 12 epochs. During adversarial training, we avoid the label leaking effect described by BID20 by using the model's predicted class argmax h(x) instead of the true label y_true in the FGSM. We first analyze the "degenerate" minimum of adversarial training, described in Section 3.3. For each trained model, we compute the approximation-ratio of the FGSM for the inner maximization problem in Equation (1). That is, we compare the loss produced by the FGSM with the loss of a strong iterative attack. The results appear in TAB7. As we can see, for all model architectures, adversarial training degraded the quality of a linear approximation to the model's loss. We find that input dropout BID35 (i.e., randomly dropping a fraction of input features during training) as used in architecture B limits this unwarranted effect of adversarial training. If we omit the input dropout (we call this architecture B*), the single-step attack degrades significantly. We discuss this effect in more detail below. For the fully connected architecture D, we find that the learned model is very close to linear and thus also less prone to the degenerate solution to the min-max problem, as we postulated in Section 3.3. Attacks. TAB8 compares error rates of undefended and adversarially trained models on white-box and black-box attacks, as in Section 4.1. Again, model B presents an anomaly. For all other models, we corroborate our findings on ImageNet for adversarial training: black-box attacks trump white-box single-step attacks; white-box single-step attacks are significantly stronger if prepended by a random step. For model B_adv, the opposite holds true. We believe this is because input dropout increases the diversity of attack samples similarly to Ensemble Adversarial Training. While training with input dropout helps avoid the degradation of the single-step attack, it also significantly delays convergence of the model. Indeed, model B_adv retains relatively high error on white-box FGSM examples. Adversarial training with input dropout can be seen as comparable to training with a randomized single-step attack, as discussed in Section 4.1. The positive effect of input dropout is architecture and dataset specific: adding an input dropout layer to models A, C and D confers only marginal benefit, and is outperformed by Ensemble Adversarial Training, discussed below. Moreover, BID27 find that input dropout significantly degrades accuracy on ImageNet.
We thus did not incorporate it into our models on ImageNet. Ensemble Adversarial Training uses 3 pre-trained models ({A, C, D} or {B, C, D}). We train all models for 12 epochs. We evaluate our models on black-box attacks crafted on models A, B, C, D (for a fair comparison, we do not use the same pre-trained models for evaluation, but retrain them with different random seeds). The attacks we consider are the FGSM, I-FGSM and the PGD attack with the loss function from BID7, all with ε = 0.3. The results appear in TAB10. For each model, we report the worst-case and average-case error rate over all black-box attacks. Ensemble Adversarial Training significantly increases robustness to black-box attacks, except for architecture B, which we previously found to not suffer from the same overfitting phenomenon that affects the other adversarially trained networks. Nevertheless, model B_adv-ens achieves slightly better robustness to white-box and black-box attacks than B_adv. In the majority of cases, we find that using a single pre-trained model produces good results, but that the extra diversity of including three pre-trained models can sometimes increase robustness even further. Our experiments confirm our conjecture that robustness to black-box attacks generalizes across models. Indeed, we find that when training with three external models, we attain very good robustness against attacks initiated from models with the same architecture (as evidenced by the average error on our attack suite), but also increased robustness to attacks initiated from the fourth holdout model. D TRANSFERABILITY OF RANDOMIZED SINGLE-STEP PERTURBATIONS. In Section 4.1, we introduced the R+Step-LL attack, an extension of the Step-LL method that prepends the attack with a small random perturbation. In TAB11, we evaluate the transferability of R+Step-LL adversarial examples on ImageNet. We find that the randomized variant produces perturbations that transfer at a much lower rate (see TAB1 for the deterministic variant). BID40 consider the following task for a given model h: for a (correctly classified) point x, find k orthogonal vectors {r_1, ..., r_k} such that ‖r_i‖₂ ≤ ε and all the x + r_i are adversarial (i.e., argmax h(x + r_i) ≠ y_true). By linearizing the model's loss function, this reduces to finding k orthogonal vectors r_i that are maximally aligned with the model's gradient g = ∇_x L(h(x), y_true). BID40 left a construction for the ℓ∞ norm as an open problem. We provide an optimal construction for the ℓ∞ norm, based on Regular Hadamard Matrices BID11. Given the ℓ∞ constraint, we find orthogonal vectors r_i that are maximally aligned with the signed gradient, sign(g). We first prove an analog of (BID40, Lemma 1). Lemma FORMULA5. Then, we have DISPLAYFORM0, from which we obtain α ≤ 1/√k. This bounds the number of orthogonal perturbations we can expect to find, for a given alignment with the signed gradient. As a warm-up, consider the following trivial construction of k orthogonal vectors in {−1, 1}^d that are "somewhat" aligned with sign(g). We split sign(g) into k "chunks" of size
Adversarial training with single-step methods overfits, and remains vulnerable to simple black-box and white-box attacks. We show that including adversarial examples from multiple sources helps defend against black-box attacks.
1,126
scitldr
Adversarial feature learning (AFL) is a powerful framework to learn representations invariant to a nuisance attribute, which uses an adversarial game between a feature extractor and a categorical attribute classifier. It is theoretically sound in that it maximizes the conditional entropy between the attribute and the representation. However, as shown in this paper, AFL often causes unstable behavior that slows down the convergence. We propose {\em attribute perception matching} as an alternative approach, based on the reformulation of conditional entropy maximization as {\em pair-wise distribution matching}. Although the naive approach for realizing the pair-wise distribution matching requires a significantly larger number of parameters, the proposed method requires the same number of parameters as AFL but has a better convergence property. Experiments on both toy and real-world datasets show that our proposed method converges to a better invariant representation significantly faster than AFL. How to learn representations invariant to a nuisance attribute a is a technical challenge raised in domain generalization BID1 BID9 BID4 BID8, fair classification, privacy-protection BID2 BID5, and many other areas. Assume that we are given a training dataset made of tuples S = {(x_i, y_i, a_i)}_{i=1}^{N}, where x is an observation, y is a target of x, and a is a corresponding intrinsic attribute of a K-way categorical variable A. The goal of invariant representation learning is to obtain an encoder E that reduces information about attribute a while maintaining information about y. An adversarial game between a feature extractor and an attribute classifier, called adversarial feature learning BID11, is a powerful framework for this purpose. The key of AFL is to measure the invariance by leveraging the discriminative power of neural networks beyond pre-defined metrics such as the l2 distance or maximum mean discrepancy. That is, if the external network (also referred to as a discriminator) can predict a from z = E(x), AFL regards z as having considerable information about a. Formally, AFL solves the following optimization problem: min_{E,M} max_D E_{(x,y,a)∈S} [ −log q_M(y|z=E(x)) + λ log q_D(a|z=E(x)) ], where q_M and q_D are the conditional probabilities that M and D give a correct estimation of y and a, respectively. As BID11 explained, this alternating procedure can be regarded as a way to maximize the conditional entropy H(A|Z) = Σ_{a∈A, z∈Z} −p(a, z) log p(a|z), where A and Z are the support sets of a and z. BID11 also showed that the min-max game has an equilibrium, in which E maximizes the conditional entropy H(A|Z). It has shown superior performance in fair classification, privacy-protection, and domain generalization tasks BID3 BID2 BID11 BID5, compared to the predefined metric approaches BID12 BID7 BID8.
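A minimal sketch of the alternating AFL updates implied by the optimization problem above; the network definitions, λ, and optimizer choices are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def afl_step(encoder, task_head, discriminator, opt_model, opt_disc, x, y, a, lam):
    """One alternating AFL update: D learns to predict the attribute a from
    z = E(x); then E and M minimize the task loss while fooling D.
    opt_model is assumed to cover the parameters of both E and M."""
    # 1) Discriminator step on detached features.
    opt_disc.zero_grad()
    d_loss = F.cross_entropy(discriminator(encoder(x).detach()), a)
    d_loss.backward()
    opt_disc.step()

    # 2) Encoder / task-head step against the (now frozen) discriminator.
    opt_model.zero_grad()
    z = encoder(x)
    loss = F.cross_entropy(task_head(z), y) - lam * F.cross_entropy(discriminator(z), a)
    loss.backward()
    opt_model.step()
    return d_loss.item(), loss.item()
```

The negated discriminator term is exactly the source of the instability discussed next: once D classifies a well, its gradient with respect to E becomes small and only pushes the representations away from the "predictable" point.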
However, the Wasserstein distance is defined over two distributions, and applying it to our setting (consisting of multiple distributions) is not trivial. This paper makes the following contributions to the invariant feature learning problem. First, we empirically show that AFL suffers from practical issues that significantly slow down the convergence. We then reformulate the optimization problem of AFL as pair-wise distribution matching and derive a parameter-efficient practical realization of pairwise distribution matching, while inheriting the merit of AFL of leveraging the discriminative power to measure the invariance. It is worth mentioning that the reformulation enables us to use the Wasserstein metric in theory; however, it is still computationally infeasible in practice because a naive way to calculate the Wasserstein distance between all pairs of distributions requires O(K^2) discriminators, where K = |A|, which raises computational issues both in terms of parameter size and forward/backward time. Finally, we empirically validate the superior performance of our proposed method on both an artificial dataset and real-world datasets. 2 CONVERGENCE ISSUES OF AFL. FIG1-(a-e) visualizes the behavior of AFL optimization on synthesized data. Each figure corresponds to a different timestep of the alternating optimization. The dataset consists of samples from three Gaussian distributions with different means (one for each attribute i ∈ {1, 2, 3}) and the same variance, assuming that each distribution corresponds to a different attribute. In each figure, dots represent the data points, color represents the attribute (domain id), and the contour plot represents the discriminator's decision boundary. A float value on the top of the figures is the negative log-likelihood (NLL) of the dataset measured by the discriminator D (a multi-layer perceptron with 100 hidden units followed by a ReLU activation). Similarly, a float value in parentheses on the top of the figures is the NLL of a post-hoc classifier D_eval that has the same architecture as D. To be more specific, we first train the discriminator 100 times with a batch size of 128, and then train D and E iteratively with stochastic gradient descent with learning rate 0.1. FIG1-(f, g) shows the gradient vector fields at different time steps for a = blue, where the arrow represents the direction of the gradient, and the norm represents its magnitude. For simplicity, we only show the vector fields of a = blue, but the observations are quite similar for the other a. The figure reveals two practical issues in AFL optimization. (1) The distribution alignment is initially quite slow (compare with the behavior of the proposed method shown in Figure 2). This is because the gradient is small when the discriminator correctly distinguishes a. (2) AFL behavior is unstable. The distributions somewhat align after 40 steps (giving 0.683 NLL with the post-hoc classifier), but it catastrophically fails five steps later because the discriminator did not capture the true conditional entropy (implied by the mostly similar contour plot of D) and therefore gave a false gradient, as shown in (f) and (g). The intuitive reason for this phenomenon is that AFL's loss essentially tries to pull a distribution apart from the non-desired point, i.e., the point where we can correctly predict the label. The problem of AFL is that it only keeps away a distribution from the non-desired point, but does not ensure that it approaches the desired invariant point.
After several steps, D starts to follow the change of the distribution (as shown in FIG1). The instability of AFL also appears in the significant gap between the NLL of D and that of D_eval. Note that the second issue may be alleviated if D has a sufficiently large capacity and is trained many times at each iteration. However, this is not a realistic assumption, since it is fair to say that real datasets are more complicated than this toy situation, making it more challenging to find the supremum. The next question we must answer is how to maximize conditional entropy while avoiding the issues mentioned above. Assume that A is a categorical random variable drawn from a uniform discrete distribution, and denote the support sets of a and z as A and Z; then, the following theorem holds: Theorem 1. The maximum conditional entropy H(A|Z) is −log(1/K), and H(A|Z) is maximized if and only if p(z|a=i) = p(z|a=j) for all i ≠ j ∈ {1, · · ·, K} and z ∈ Z. The proof is given in the appendix. One implication of the theorem is that empirical measurements of the conditional entropy should be bounded. It suggests another problematic point of the AFL objective, since it is not lower bounded. Another implication is that this theorem permits us to rethink the problem of conditional entropy maximization as a problem of aligning all pairs of distributions p(z|a = i) and p(z|a = j). This reformulation is significant since we can now use any conventional measurement of two distributions. However, the naive approach requires matching all pairs of distributions (K(K−1)/2 pairs), which would be cumbersome to compute. We therefore propose a computationally efficient way to realize pairwise distribution matching, which we call attribute perception matching (APM). As with AFL, APM is based on the alternating training of an attribute classifier D and a feature extractor E, but APM deceives D differently. Formally, we propose the following APM matching objective: DISPLAYFORM0 where k_D is some distance function defined over either a hidden representation of the discriminator D or the output probability q_D(a|E(x)) itself, and C_{a_j} is the moving average of the centroid for attribute a_j in the attribute perception. Although there are many valid choices for k_D(., .), including the simple l2 norm, we primarily use the well-known Kullback-Leibler (KL) divergence. We initialize C^0_{a_j} by computing the centroids using all training data points. The key of the proposal is that it only requires the same number of parameters as AFL, but it ensures that the representations of an attribute approach the representations of the other attributes. Also, it inherits the merit of AFL of leveraging the discriminative power to measure the invariance. Figure 2-(a-c) shows the behavior of APM under the exact same experimental settings shown in FIG1. The proposed method maximizes the conditional entropy significantly faster than AFL: 1.061 after only five iterations, whereas AFL gives 0.683 after 40 iterations. Note that this value nearly matches the theoretical maximum value of the conditional entropy (log 3 ≈ 1.099). Another merit of the proposed method is that we can enforce semantic alignment with a simple modification. Specifically, semantic alignment can be carried out by merely computing the centroids for each (attribute, label) tuple and aligning the perceptions of {x, y, a} between only centroids of the same label y = y′ but different attributes a ≠ a′.
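The APM regularizer could be realized roughly as below: moving-average centroids of the discriminator's attribute perception, with each sample pulled toward the centroids of the *other* attributes via KL divergence. The exact objective placeholder above did not survive extraction, so this is our reading of the surrounding description, with the decay rate γ = 0.7 taken from the experimental setup.

```python
import torch
import torch.nn.functional as F

def apm_loss(encoder, discriminator, centroids, x, a, gamma=0.7):
    """centroids[j]: moving-average attribute perception for attribute j."""
    p = F.softmax(discriminator(encoder(x)), dim=1)  # attribute perception
    loss = x.new_zeros(())
    for j in range(len(centroids)):
        mask = (a == j)
        if mask.any():  # update the moving-average centroid of attribute j
            centroids[j] = gamma * centroids[j] + (1 - gamma) * p[mask].mean(0).detach()
        other = ~mask
        if other.any():  # pull the other attributes' perceptions toward C_j
            loss = loss + F.kl_div(p[other].log(), centroids[j].expand_as(p[other]),
                                   reduction='batchmean')
    return loss
```

Unlike the AFL term, this loss gives the encoder an explicit target (the other attributes' centroids) to move toward, rather than only a point to move away from.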
Although this modification does not help to maximize the conditional entropy, it prevents performance degradation in predicting y. Since most applications of invariant feature learning require keeping the information about y, we use this modification for all the later-described experiments. We use three domain generalization tasks and two user-anonymization tasks. MNISTR BID4 and PACS BID6 are well-known image-based datasets designed for the purpose of domain generalization. We also test the methods in noise-robust speech recognition scenarios using the Google Speech Command Dataset (Speech). Regarding user-anonymization, we use two user-anonymization tasks on the data of wearables, OppG and USC BID5. The neural networks are required to learn representations that help activity classification and, at the same time, prevent access to information about users (userID). As baselines, we use: (1) a CNN trained on the aggregation of data from all source domains; (2) AFL BID11, which was explained in Section 3.1; (3) RevGrad BID3, a slightly modified version of AFL, which uses the gradient reversal layer to train all the networks; (4) CrossGrad BID10, which is regarded as a state-of-the-art method in domain generalization tasks; (5) Activation Matching (AM), which trains the encoder with a regularization of the l2 distance on the feature space; and (6) APM, our proposal. For all datasets and methods, we used RMSprop for each optimization. For all datasets except PACS, we set the learning rate to 0.001 and the batch size to 128. For PACS, we set the learning rate to 5e−5 and the batch size to 64. For a fair comparison, hyperparameters were tuned on a validation set for each baseline. For the adversarial-training-based methods, we optimized the weighting parameter λ from {0.001, 0.01, 0.1, 1.0}, except for MNISTR, for which it was optimized from {0.01, 0.1, 1.0, 2.0}. The value of α for CrossGrad was selected from {0.1, 0.25, 0.5, 0.75, 0.9}. We set the decay rate γ to 0.7 for all experiments. In all the experiments, we selected the data of one or several domains for the test set and used the data of the disjoint domains as the training/validation data. Specifically, we split the data of the disjoint domains into groupings of 80% and 20%. We denote the test domain by a suffix (e.g., MNISTR-M0). We measured the label classification accuracy and the level of invariance. We measured the level of invariance by training a post-hoc classifier f_eva, following previous studies BID11 BID5. FIG2 compares the invariance of the representations across methods. For each method, we used the largest weighting parameter λ on the condition that the label classification accuracy did not significantly decrease. The results show that the proposed method stably achieves better invariant representations, except for the Speech dataset. For example, APM achieved 20% of the A-Acc in MNISTR, which is nearly perfect invariance as there are five domains in the validation data, while RevGrad and AFL achieved 30% at best. On the Speech dataset, the performance of AFL and APM is mostly similar. These results confirm that the proposed method stably achieves more invariant representations compared with AFL. TAB0 summarizes the methods' classification performance on three different datasets: MNISTR, Speech, and PACS. The leftmost column of each table represents the test domain. We report the mean accuracy. We can make the following observations.
(1) APM demonstrates the best or comparable performance on all three datasets, even against CrossGrad, which is regarded as the state-of-the-art method in domain generalization tasks. (2) RevGrad and AFL often fail to improve performance even when compared with a standard CNN. These results suggest that the previous adversarial-training-based methods suffer from the lack of semantic alignment when applied to domain generalization. (3) The Wilcoxon rank-sum test shows that APM is statistically better than CNN, RevGrad, and AFL with p < 0.01, and better than CrossGrad with p < 0.05. This paper proposes a new approach to incorporating desired invariance into representation learning, based on the observation that the current state-of-the-art AFL has practical issues. Empirical results on both toy and real-world datasets support the stable performance of the proposed method in learning invariant features and its superior performance on domain generalization tasks. A PROOF OF THEOREM 1. Proof. Using the Lagrange multiplier method, the derivative of the Lagrangian H(A|Z) + λ(Σ_{i=1}^{K} p(a=i|z) − 1) with respect to each p(a=i|z) is set equal to zero at the maximum of H(A|Z). Solving the resulting simultaneous equations gives p(a=1|z) = p(a=2|z) = · · · = p(a=K|z) = 1/K for all z ∈ Z when the conditional entropy is maximized and, by definition, the conditional entropy then becomes −log(1/K). Finally, by Bayes' rule and the uniformity of p(a), this condition holds if and only if p(z|a=i) = p(z|a=j) for all i ≠ j ∈ A and z ∈ Z.
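A quick numerical check of Theorem 1 (a NumPy sketch; the p(z|a) tables below are made up for illustration): when the conditionals p(z|a=i) coincide, H(A|Z) attains −log(1/K) = log K, and when they are disjoint it drops to zero.

```python
import numpy as np

def conditional_entropy(p_z_given_a):
    """H(A|Z) for uniformly distributed A, given a (K, |Z|) table p(z|a)."""
    K = p_z_given_a.shape[0]
    p_az = p_z_given_a / K                    # joint p(a, z), since p(a) = 1/K
    p_z = p_az.sum(0, keepdims=True)          # marginal p(z)
    p_a_given_z = p_az / p_z
    return -(p_az * np.log(p_a_given_z + 1e-12)).sum()

K = 3
aligned = np.tile([[0.2, 0.5, 0.3]], (K, 1))    # p(z|a=i) identical for all i
print(conditional_entropy(aligned), np.log(K))  # both ~1.0986
print(conditional_entropy(np.eye(K)))           # fully separable -> ~0.0
```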
This paper proposes a new approach to incorporating desired invariance to representations learning, based on the observations that the current state-of-the-art AFL has practical issues.
1,127
scitldr
We study the properties of common loss surfaces through their Hessian matrix. In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: the bulk centered near zero, and outliers away from the bulk. We present numerical evidence and mathematical justifications for the following conjectures laid out by Sagun et al.: Fixing the data, increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data (for instance adding more clusters or making the data less separable) only affects the outliers. We believe that our observations have striking implications for non-convex optimization in high dimensions. First, the *flatness* of such landscapes (which can be measured by the singularity of the Hessian) implies that classical notions of basins of attraction may be quite misleading. And the discussion of wide/narrow basins may be in need of a new perspective around over-parametrization and redundancy that are able to create *large* connected components at the bottom of the landscape. Second, the dependence of a small number of large eigenvalues on the data distribution can be linked to the spectrum of the covariance matrix of gradients of model outputs. With this in mind, we may reevaluate the connections within the data-architecture-algorithm framework of a model, hoping that this would shed light on the geometry of high-dimensional and non-convex spaces in modern applications. In particular, we present a case that links the two observations: small- and large-batch gradient descent appear to converge to different basins of attraction, but we show that they are in fact connected through their flat region and so belong to the same basin. In this paper, we study the geometry of the loss surface of supervised learning problems through the lens of their second-order properties. To introduce the framework, suppose we are given data in the form of input-label pairs, D = {(x_i, y_i)}_{i=1}^N, where x ∈ R^d and y ∈ R are sampled i.i.d. from a possibly unknown distribution ν, and a model that is parametrized by w ∈ R^M; so the number of examples is N and the number of parameters of the system is M. Suppose also that there is a predictor f(w, x). The supervised learning process aims to solve for w so that f(w, x) ≈ y. To make the '≈' precise, we use a non-negative loss function ℓ(f(w, x), y) that measures how close the predictor is to the true label. We wish to find a parameter w* such that w* = arg min_w L(w), where L(w) = (1/N) Σ_{i=1}^N ℓ(f(w, x_i), y_i). In particular, one is curious about the relationship between L(w) and L̄(w) := ∫ ℓ(f(w, x), y) dν(x, y). By the law of large numbers, at a given point w, L(w) → L̄(w) almost surely as N → ∞ for fixed M. However, in modern applications, especially in deep learning, the number of parameters M is comparable to the number of examples N (if not much larger), and the behaviour of the two quantities may be drastically different (for a recent analysis with provable estimates see BID18). A classical algorithm to find w* is gradient descent (GD), in which the optimization process is carried out using the gradient of L. A new parameter is found iteratively by taking a step in the direction of the negative gradient, whose size is scaled by a step size η that can be chosen by line-search minimization. Two problems emerge: (1) gradient computation can be expensive, and (2) line-search can be expensive.
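As a concrete reading of the objective above, here is a minimal NumPy sketch of the empirical loss and a plain gradient-descent step; the least-squares toy at the end is ours, purely for illustration.

```python
import numpy as np

def empirical_loss(w, X, Y, f, loss):
    """L(w) = (1/N) * sum_i loss(f(w, x_i), y_i), the training objective."""
    return np.mean([loss(f(w, x), y) for x, y in zip(X, Y)])

def gd_step(w, grad_L, eta):
    """One gradient-descent update with step size eta."""
    return w - eta * grad_L(w)

# toy usage: least squares, f(w, x) = w @ x, loss(p, y) = 0.5 * (p - y)^2
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)); Y = X @ np.ones(5)
w = np.zeros(5)
for _ in range(200):
    w = gd_step(w, lambda v: X.T @ (X @ v - Y) / len(Y), eta=0.1)
print(empirical_loss(w, X, Y, f=np.dot, loss=lambda p, y: 0.5 * (p - y) ** 2))
```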
More involved algorithms, such as Newton-type methods, make use of second-order information BID19. Under sufficient regularity conditions we may write L(w + Δw) ≈ L(w) + Δw^T ∇L(w) + (1/2) Δw^T ∇²L(w) Δw. A third problem emerges beyond the even more expensive computational cost of the Hessian: (3) most methods require the Hessian to be non-degenerate to a certain extent. When the gradients are computationally expensive, one can alternatively use their stochastic version (SGD), which replaces the above gradient with the gradient of the average of losses over subsets (such a subset will be called the mini-batch) of D (see BID5 for a classical reference). The benefit of SGD under real-life time limits is obvious, and GD may be impractical in many problems. In any case, the stochastic gradient can be seen as an approximation to the true gradient, and hence it is important to understand how the two directions are related to one another. Therefore, the discussion around the geometry of the loss surface can be enlightening in the comparison of the two algorithms: Does SGD locate solutions of a different nature than GD? Do they follow different paths? If so, which one is better in terms of generalization performance? For the second problem of expensive line-search, there are two classical solutions: using a small, constant step size, or scheduling the step size according to a certain rule. In practice, in the context of deep learning, the values for both approaches are determined heuristically, by trial and error. More involved optimal step-size choices involve some kind of second-order information that can be obtained from the Hessian of the loss function BID24. From a computational point of view, obtaining the full Hessian is extremely expensive, but obtaining some of its largest and smallest eigenvalues and eigenvectors is not (a short sketch of this computation is given below). Is it enough to know only those eigenvalues and eigenvectors that are large in magnitude? How do they change through training? Would such a method work with SGD as well as it would with GD? For the third problem, let's look at the Hessian a little closer. A critical point is a w such that ||∇L(w)|| = 0, and its nature can be determined by looking at the signs of the eigenvalues of the Hessian. If all eigenvalues are positive the point is called a local minimum; if r of them are negative and the rest are positive, then it is called a saddle point of index r. At the critical point, the eigenvectors indicate the directions in which the value of the function locally changes, and the changes are proportional to the corresponding (signed) eigenvalue. Under sufficient regularity conditions, it is rather straightforward to show that gradient-based methods converge to points where the gradient is zero. Recently, BID15 showed that they indeed converge to minimizers. However, a significant and untested assumption needed to establish these convergence results is that the Hessian of the loss is non-degenerate. A relaxation of the above convergence results to the case of non-isolated critical points can be found in BID20. What about the critical points of machine learning loss functions? Do they satisfy the non-degeneracy assumptions? If they don't, can we still apply the results of provable theorems to gain intuition? One of the first instances of the comparison of GD and SGD in the context of neural networks dates back to the late eighties and early nineties.
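To illustrate the point above that a few extreme eigenvalues are cheap to obtain, here is a minimal sketch of power iteration on a Hessian-vector-product oracle; the `hvp` callable is assumed to be supplied (e.g. by automatic differentiation), and the full M × M Hessian is never formed.

```python
import numpy as np

def top_eigenpair(hvp, dim, iters=200, seed=0):
    """Largest-|eigenvalue| eigenpair from Hessian-vector products only."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(v)
        v = hv / (np.linalg.norm(hv) + 1e-12)
    return v @ hvp(v), v  # Rayleigh quotient and eigenvector

# toy check; the smallest eigenvalue can be found the same way on
# (lam_max * I - H) once the largest one is known.
H = np.diag([5.0, 1.0, -0.5])
lam, _ = top_eigenpair(lambda x: H @ x, 3)
print(lam)  # ~5.0
```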
BID4 points out that large eigenvalues of the Hessian of the loss can create the illusion of the existence of local minima where GD can get stuck; it further claims that the inherent noise in SGD may help escape this obstacle. The origin of this observation, along with numerical justifications, is due to BID6. However, given the computational limits of the time, these experiments relied on low-dimensional neural networks with few hidden units. The picture may be drastically different in higher dimensions. In fact, provable results in statistical physics tell us that, for certain real-valued non-convex functions, the local minima concentrate at an error level near that of the global minima. A theoretical review of this can be found in BID0, while BID22 and BID2 provide an experimental simulation as well as a numerical study for neural networks. They notably find that high-error local-minima traps do not appear when the model is over-parametrized. These concentration results can help explain why we find that the solutions attained by different optimizers like GD and SGD often have comparable training accuracies. However, while these methods find comparable solutions in terms of training error, there is no guarantee they generalize equally. A recent work in this direction compares the generalization performance of small-batch and large-batch methods BID13. They demonstrate that the large-batch methods always generalize a little bit worse even when they have similar training accuracies. The paper further makes the observation that the basins found by small-batch methods are wider, thereby contributing to the claim that wide basins, as opposed to narrow ones, generalize better. The final part of the historical account is devoted to the observation of the flatness of the landscape in neural networks and its consequences through the lens of the Hessian. In the early nineties, BID11 remarked that there are parts of the landscape in which the weights can be perturbed without significantly changing the loss value. Such regions at the bottom of the landscape are called flat minima, which can be considered another way of saying very wide minima. It is further noted that such minima have better generalization properties, and a new loss function that makes use of the Hessian of the loss and targets flat minima has been proposed. Attempts have been made to resolve the computational complexity issues using the R-operator; however, the new loss requires all the entries of the Hessian, and even with the R-operator it is unimaginably slow for today's large networks. More recently, an exact numerical calculation of the Hessian has been carried out. It turns out that the Hessian can have near-zero eigenvalues even at a given random initial point, and that its spectrum is composed of two parts: the bulk, and the outliers. The bulk is mostly full of zero eigenvalues with a fast-decaying tail, and the outliers are only a handful, which appear to depend on the data. This implies that, locally, most directions in the weight space are flat and lead to little or no change in the loss value, except for the directions of eigenvectors that correspond to the large eigenvalues of the Hessian. In this work, we present a phenomenological study in which we provide various observations on the local geometry at the bottom of the landscape and discuss their implications for certain features of solution spaces, such as the connectedness of basins found by large- and small-batch methods:
1. Flatness at the bottom of the landscape: At the bottom, most of the eigenvalues in the spectrum of the Hessian are near zero, except for a small number of relatively larger ones. 2. A possible explanation through over-parametrization: The Hessian decomposes as a sum of two matrices, where the first one is the sample covariance matrix of the gradients of the model outputs and the second one is the Hessian of the function that describes the model outputs. We argue that the second term can be ignored as training progresses, which leaves us with the covariance term; this leads to degeneracy in the Hessian when there are more parameters than samples. 3. Dependence of the eigenvalues on model, data, and algorithm: We empirically examine the spectrum to uncover intricate dependencies within the data-architecture-algorithm triangle: more complex data produce more outliers, increasing the network size doesn't affect the density of large eigenvalues, large-batch methods produce the same number of outliers but larger in magnitude, and finally there are negative eigenvalues even after the training process appears to show no further progress, though their magnitude is much smaller than that of the outliers. 4. Connectedness of basins: Recent works, such as BID12, appear to draw a big picture of isolated basins at the bottom of the landscape. One tool that is commonly used to demonstrate such claims is to evaluate the loss on a line that connects two solutions found by different methods. First of all, the notion of the basin itself can be misleading given the negative eigenvalues pointed out in the previous item. Moreover, we claim that this idea of isolated basins may be misleading based on the above observations of the dramatic level of flatness of the local geometry. In particular, we show that two solutions with different qualities can be shown to be in the same basin, even with a loss evaluation on a straight line connecting the two. Remark 1: Throughout this paper we use the notions of data complexity and over-parametrization vaguely. The complexity of data can be defined in various ways, and further research is required to determine the precise notion that links complexity to the spectrum. Over-parametrization, similarly, can be defined in various ways: M >> N, M → ∞ for fixed N, or M/N → c for a certain constant, etc. However, more realistic notions of both the complexity of the data and over-parametrization should take the architecture into account, and a detailed treatment of this should follow in another work. Remark 2: The notion of the basin, too, can be defined precisely. However, it is unclear whether any algorithm used in practice actually locates the bottom of a basin described in classical ways. For instance, the norms of the gradients are small but not at machine precision, and the eigenvalues of the Hessian still have a negative part even after SGD continues a long while without a meaningful decrease in the loss value. This is presumably the fault of the algorithm itself; however, it requires further study, and hence notions of sharp vs. wide minima in various recent works should be taken with a grain of salt. Remark 3: Even when one has a way to measure the width of a 'basin', such ways of measuring the approximate width are all relative. In a recent study, BID8 shows how 'sharp minima' can still generalize with proper modifications to the loss function. We note that it takes only a non-linear transformation to deform the relative widths of basins.
And, in this work, we focus on relative values as opposed to absolute values to get a consistent comparison across different setups. 2 SOME PROPERTIES OF THE HESSIAN. We begin with an exact calculation of the spectrum of the Hessian at a random initial point, and at the end of training. Note that the plots are arranged so that they show the eigenvalues on the y-axis and the rank of the eigenvalue on the x-axis. This choice is necessary to indicate the scale of the degeneracy while still showing all the eigenvalues in the same plot. FIG2 shows the full spectrum of the Hessian at the random initial point of training and at the final training point. The model of this example is a two-hidden-layer network with a total of 5K parameters that is trained using gradient descent. Also note that, throughout the paper, the exact full Hessian is computed via Hessian-vector products BID21 up to machine precision. (Figure: ordered eigenvalues at a random initial point, and at the final point of GD; the x-axis is the rank of the eigenvalue, and the y-axis the value of the eigenvalue.) In order to study its spectrum, we will describe how the Hessian can be decomposed into two meaningful matrices BID14 BID17. Suppose the loss function is given as a composition of two functions: the model function f_• : R^M → R is the real-valued output of a network that depends on the parameters, and the loss function ℓ_• : R → R^+ is a convex function. Here, the subscript • refers to the given example. Examples include regression, the mean-square loss composed with a real-valued output function, and classification, the negative log-likelihood loss composed with the dot product of the output of a softmax layer with the label vector. For ease of reading, we indicate the dependence of the functions ℓ and f on the data by the index of the example, or omit it altogether when it is not necessary, and unless noted otherwise the gradients are taken with respect to w. The gradient and the Hessian of the loss for a given example are given by ∇ℓ(f(w)) = ℓ′(f(w)) ∇f(w) and ∇²ℓ(f(w)) = ℓ′′(f(w)) ∇f(w) ∇f(w)^T + ℓ′(f(w)) ∇²f(w), (4) where (·)^T denotes the transpose operation (here the gradient is a column vector). Note that since ℓ is convex, ℓ′′(s) ≥ 0, and we can take its square root, which allows us to rewrite the Hessian of the loss as follows: ∇²L(w) = (1/N) Σ_i [√(ℓ′′(f_i)) ∇f_i] [√(ℓ′′(f_i)) ∇f_i]^T + (1/N) Σ_i ℓ′(f_i) ∇²f_i. In general, it isn't straightforward to discuss the spectrum of sums of matrices by looking at the individual terms. Nevertheless, looking at the decomposition, we can still infer what we should expect. At a point close to a local minimum, the average gradient is close to zero. However, this doesn't necessarily imply that the gradients for individual samples are also zero. Nevertheless, if ℓ′(f(ŵ)) and ∇²f(ŵ) are not correlated, then we can ignore the second term, and the Hessian can be approximated by the first term: ∇²L(w) ≈ (1/N) Σ_i ℓ′′(f_i) ∇f_i ∇f_i^T. (5) Here, Equation (5) is a sum of rank-one matrices (the outer products of the gradients of f, each multiplied by a non-negative number); therefore, the sum can be written as the product of an M × N matrix with its transpose, where the columns of the matrix are formed by the scaled gradients of f. Immediately, this implies that there are at least M − N trivial eigenvalues of the right-hand side of Equation (5) when M > N. From a theoretical point of view, the tool required for the above problem is a mapping of the eigenvalues of the population matrix to those of the sample covariance matrix. Recent provable results on this can be found in BID3 (please refer to the appendix for a review of this approach).
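As a minimal sanity check of this decomposition (a sketch: binary logistic regression with a linear f, so the second term vanishes exactly and the zero eigenvalues predicted above appear):

```python
import numpy as np

def gauss_newton_hessian(w, X):
    """First term of Eq. (5) for logistic loss with linear f(w, x) = w.x.

    For this loss, l''(f) = sigma(f)(1 - sigma(f)) regardless of the label,
    and grad^2 f = 0, so this term *is* the full Hessian.
    """
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # sigma(f_i)
    g = np.sqrt(p * (1.0 - p))[:, None] * X   # rows: sqrt(l'') * grad f_i
    return g.T @ g / X.shape[0]

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))             # N = 20 samples, M = 50 params
eigs = np.linalg.eigvalsh(gauss_newton_hessian(np.zeros(50), X))
print((np.abs(eigs) < 1e-10).sum())           # at least M - N = 30 zeros
```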
We emphasize that these random-matrix results require independent inputs, and extensions to correlated data appear to be unavailable, to the best of our knowledge. In this section, we leave the decomposition behind and focus on experimental results on the spectrum of the full Hessian obtained through exact Hessian-vector products. We discuss how the data, the model, and the algorithm affect the spectrum of the Hessian of the loss. In many cases of practical interest, the data contain redundancies. In such cases, the number of non-trivial eigenvalues can be even smaller than N. For instance, if one deals with a classification problem where the training data have k classes with relatively small deviation within each class, it is reasonable to expect an order of k non-trivial eigenvalues for the first term of the above decomposition of the Hessian in Equation (5). Then, if the second term is small (for instance when all the per-example gradients are zero), we would expect to see k outliers in the spectrum of the Hessian of the loss. To test this idea, we used a feed-forward neural network with a 100-dimensional input layer, two hidden layers each with 30 hidden units, and a k-dimensional output layer combined with softmax for k-class classification. We randomly sampled from k Gaussian clusters in the input space and normalized the data globally. Then we carried out the training using SGD on a ReLU network for the following numbers of clusters: k ∈ {2, 5, 10, 20, 50}. The number of large eigenvalues above the gap in FIG3 matches exactly the number of classes in the dataset. This experiment is repeated for various different setups; please refer to Table 1 in the appendix for more experiments on this. We test the effect of growing the size of the network when the data, architecture, and algorithm are fixed; in some sense, we make the system more and more over-parametrized. Based on the intuitions developed above, we should not observe a change in the size and number of the large eigenvalues of the Hessian at the bottom. To test this, we sample 1K examples from the MNIST dataset and fix them as the training set. Then we form four different networks, each of which has a different number of nodes in its hidden layer (n_hidden ∈ {10, 30, 50, 70}). All four networks are trained with the same step size and the same number of iterations, and the exact Hessian is computed at the end. Figure 3 shows the largest 120 eigenvalues of each of the four Hessians, ranked in increasing order. For the right edge of the spectrum (that is, for the large positive eigenvalues), the shape of the plot remains invariant as the number of parameters increases (Figure 3). Finally, we turn to describing the nature of the solutions found by the large- and small-batch methods on the training landscape. We train a convnet composed of 2 convolution layers with ReLU-maxpool followed by two fully-connected layers. The training set is a subsampled MNIST with 1K training examples. The small-batch method uses a mini-batch size of 10, and the large-batch one uses 512. A learning rate for which both algorithms converge is fixed for both LB and SB. Note that the stochastic gradients are averaged over the mini-batch; therefore, fixing the learning rate allows the algorithms to take steps whose lengths are proportional to the norms of the corresponding stochastic gradients averaged over the mini-batch. This way we ensure that both methods are compared fairly when we look at them after a fixed number of iterations.
We train until the same number of iterations has been reached. Then we calculate the Hessian and show its spectrum in FIG0. The large-batch method locates points with larger positive eigenvalues. This observation is consistent with BID13, as the way they measure flatness takes into account the local rates of increase in a neighborhood of the solution, which is intimately linked to the size of the large eigenvalues. Lastly, we observe that the negative eigenvalues at the end of training are orders of magnitude smaller than the large positive ones. The very existence of negative eigenvalues indicates that the algorithm hasn't located a local minimum yet. Note that the stopping criterion in most practical cases is arbitrary: training is stopped once there is no meaningful decrease in the loss value or increase in the test accuracy. In our experiments, the training ran well beyond the point of this saturation. On this time-scale, the loss decays by much smaller amounts, and we may expect convergence to a local minimum at large (possibly exponentially long) time-scales. However, from a practical point of view, it appears that the properties of the landscape at this fine-grained scale are less relevant in terms of test performance. In any case, we observe that there are a number of negative eigenvalues, but their magnitude is much smaller compared to the positive eigenvalues. The reason we look at the ranked negative eigenvalues in percentages rather than by order is the result of the experiment in Section 3.2: adding more weights to the system scales the small eigenvalues proportionally and leaves the number of outliers unchanged, whereas the ratio of negative eigenvalues remains the same. (Figure: the x-axis indicates the order of the eigenvalue in percentages, the y-axis the eigenvalues.) Moreover, the negative eigenvalues appear to be converging to the same shape (Figure 6). Also, note that the negative eigenvalues can only come from the second term of the decomposition in Equation (5). Unless the second term contributes to the spectrum in an asymmetrical way, the observation that the negative eigenvalues are small confirms our earlier suspicion that the effect of the ignored term in the decomposition is small. Finally, we revisit the issue through the lens of the following question: what does over-parametrization imply for the discussion around GD vs. SGD (or large batch vs. small batch), especially for their generalization properties? In this final section, we will argue that, contrary to what is believed in BID13 and BID7, the two algorithms do not have to be falling into different basins. As noted in the introduction, for a while the common-sense explanation of why SGD works well (in fact better) than GD (or large-batch methods) was that the non-convex landscape had local minima at high energies which would trap large-batch or full-batch methods, something that SGD with a small batch shouldn't suffer from due to the inherent noise in the algorithm. However, various experiments carried out in the past show that, for reasonably large systems, this is not the case. For instance, BID22 demonstrate that a two-hidden-layer fully connected network on MNIST can be trained by GD to reach the same level of loss values as SGD. In fact, when the step size is fixed to the same value for both algorithms, they reach the same loss value at the same number of iterations.
The training accuracies for both algorithms are the same, and the gap between the test accuracies diminishes as the size of the network increases, with GD falling ever so slightly behind. It is also shown in BID13 that the training accuracies for both large- and small-batch methods are comparably good. Furthermore, BID26 demonstrates that the training landscape is easy to optimize even when there is no clear notion of generalization. Such observations are consistent with ours: over-parametrization (due to the architecture of the model) leads to flatness at the bottom of the landscape, which is easy to optimize. When we turn our attention to generalization, BID13 note that LB methods find a basin that is different from the one found by SB methods, and the basins are characterized by how wide they are. As noted in FIG0, the large eigenvalues are indeed larger for LB than for SB, but is that enough to justify that they are in different basins, especially given that the number of flat directions is enormous? The observation that LB converges to sharper basins, separated by wells from the wider basins found by SB, has attracted a lot of attention. In this section, we present two solutions with different qualities, as measured by the generalization error, and we show that they are in fact in the same 'basin' by showing that the evaluation of the loss doesn't go through a barrier between the two solutions. We start with two common pitfalls that one may fall into when testing this: The problem with epoch-based time scales: A common way to plot training profiles in larger-scale neural networks is to stop every epoch to reserve extra computational power to calculate various statistics of the model at its current position. This becomes problematic when one compares training with different batch sizes, primarily because the larger-batch model takes fewer steps in a given epoch. Recall that the overall loss is averaged; therefore, for a fixed point in the weight space, the empirical average of the gradients is an unbiased estimator of the expected gradient. Hence it is reasonable to expect the gradient norms of the large-batch method to match those of the small-batch one, and for a fair comparison one should use the same learning rate for both training procedures. This suggests that a better comparison between GD and SGD (or LB and SB) should be scaled by the number of steps, so that, on average, both algorithms are able to take a similar number of steps of comparable sizes. The experiments we present use the number of iterations as the time-scale. The problem with line interpolations in the weight space: The architectures of typical neural networks have many internal symmetries, one of which is the flip symmetry: when one swaps two nodes (along with the weights connected to them) at a given layer, the resulting network is identical to the original one. Therefore, when one trains two systems to compare, it may well be that the two fall into different flip-symmetric configurations that would look more similar if they were reordered. Hence, training two systems with different levels of randomness (seed, batch size, choice of the initial point, etc.) may result in two points in the weight space that present a barrier only because of such symmetries. In an attempt to partially alleviate this problem, we switch the dynamics of an already trained system: 1. Part I: Train on the full CIFAR10 data a bare AlexNet (bare meaning no momentum, no dropout, and no batch normalization) with a batch size of 1,000.
Record the state every 100 steps, 250 times. 2. Part II: Continue training from the endpoint of the previous step with a smaller batch size of 32. Everything else, including the constant learning rate, is kept the same, and we train for another 250 periods of 100 steps each. The key observation is the jump in the training and test losses, and a drop in the corresponding accuracies (Figure 8). Toward the end of Part II the small batch reaches a slightly better accuracy (about 1%). This looks in line with the observations in BID13, in that the LB and SB solutions appear to be separated by a barrier, with the latter generalizing better. Moreover, the line interpolations extending away from either endpoint appear to confirm the sharpness of the LB solution. However, we find that the straight-line interpolation connecting the endpoints of Part I and Part II turns out to not contain any barriers (Figure 9). This suggests that while Part I and Part II converge to two solutions with different properties, these solutions have been in the same basin all along. This raises the striking possibility that other seemingly different solutions may be similarly connected by a flat region to form a larger basin (modulo internal symmetries). Another interpretation of this experiment goes through the Gauss-Newton decomposition introduced in Equation (5). When we decrease the batch size, we increase the noise in the covariance of the gradients, and hence the first term starts to dominate. Even when the weight space has large flat regions, the fluctuations of the stochastic noise should be precisely in the directions of the large eigenvalues. Therefore, at the beginning of Part II, the loss increases because the point fluctuates in the directions corresponding to the large eigenvalues, and it eventually settles at a point that lies in the interior of the same level set, essentially staying in the same basin. (Figure 9: loss and accuracy evaluation on the straight line that contains the LB and SB solutions. The accuracy of the SB solution is about 1% better than that of the LB solution, but there is no barrier between the two points.) One of the most striking implications of flatness may be the connected structure of the solution space. We may wonder whether two given solutions can be connected by a continuous path of solutions. This question has been explored in a recent work: in BID9 it is shown that for one-hidden-layer rectified neural networks the solution space is connected, which is consistent with the flatness of the landscape. The classical notion of basins of attraction may not give the suitable objects to study for neural networks; rather, we may look at the exploration of the interiors of the level sets of the landscape. We may be tempted to speculate that such an exploration may indeed result in a point that generalizes better. However, the flat space itself is very high dimensional, which comes with its own computational issues. The training curve can be seen as composed of two parts: a high-gain part where the norms of the gradients are large, and a later part where the noise of the gradients is large relative to the size of the stochastic gradients (see BID25 for a recent reference). We speculate that the first part is relatively easy, and even a large-batch method can locate a large level set that contains points that generalize better than what is initially found. From a practical point of view, using larger batches with larger step sizes can, in fact, accelerate training.
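The straight-line evaluation used above is simple to reproduce; a minimal sketch, assuming the two solutions are given as flattened parameter vectors and `loss_fn` evaluates the training loss at a parameter vector:

```python
import numpy as np

def line_interpolation(w_a, w_b, loss_fn, n=50):
    """Loss along the segment (1 - t) * w_a + t * w_b for t in [0, 1].

    A profile without a bump between the endpoints indicates the two
    solutions are connected through a flat region of the same basin.
    """
    ts = np.linspace(0.0, 1.0, n)
    return ts, np.array([loss_fn((1 - t) * w_a + t * w_b) for t in ts])
```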
An example of such large-batch acceleration can be found in BID10, where Imagenet is trained with a mini-batch size of 8192 while matching small-batch performance. On a final note for further consideration, we remark that we used the standard pre-processing and initialization methods that are commonly used in practice. Fixing these two aspects, we modified the data, the model, and the algorithm in order to study their relative effects. However, the effects of pre-processing and initialization on the Hessian are highly non-trivial and deserve separate attention. We have shown that the level of singularity of the Hessian cannot be ignored in theoretical considerations. Furthermore, we use the generalized Gauss-Newton decomposition of the Hessian to argue that the cluster of zero eigenvalues is to be expected in practical applications. This allows us to reconsider the division between the initial fast decay and the final slow progress of training. We see that even large-batch methods are able to get to the same basin where small-batch methods go. As opposed to the common intuition, the observed generalization gap between the two is not due to the small batch finding a different, better, wider basin; instead, the two solutions appear to be in the same basin. This lack of a barrier between solutions is demonstrated by finding paths between the two points that lie in the same level set. To conclude, we propose a major shift in perspective on considerations of the energy landscape in deep learning problems. In the subsequent experiments, we used a feed-forward neural network with a 100-dimensional input layer, two hidden layers each with 30 hidden units, and a k-dimensional output layer combined with softmax for k-class classification. We sampled k random Gaussian clusters in the input space and normalized the data globally. Then we carried out the training for the following sets of parameters: k ∈ {2, 5, 10, 20, 50}, algorithm ∈ {GD, SGD}, non-linearity ∈ {tanh, ReLU}, initial multiplier for the covariance of the input distribution ∈ {1, 10}. Then we counted the number of large eigenvalues according to three different cutoff methods: the largest consecutive gap, the largest consecutive ratio, and a heuristic method that determines the threshold by searching for the elbow in the scree plot (see FIG3); the results are presented in Table 1. In this section, we will show that the spectrum of the Generalized Gauss-Newton matrix can be characterized theoretically under some conditions. Suppose that we can express the scaled gradient g_i = √(ℓ′′(f_i)) ∇f(w, x_i) as g = T x, with the matrix T of size M × d depending only on the parameters w, which is the case for linear models. Then we can write G := (1/N) Σ_i g_i g_i^T = T [(1/N) Σ_i x_i x_i^T] T^T. Furthermore, without loss of generality, we assume that the examples are normalized such that the entries of X are independent with zero mean and unit variance. One of the first steps in studying G goes through understanding its principal components. In particular, we would like to understand how the eigenvalues and eigenvectors of G are related to those of Σ, where Σ := E[g g^T] = T T^T. In the simplest case, we have Σ = Id, so that the gradients are uncorrelated and the eigenvalues of G are distributed according to the Marčenko-Pastur law in the limit where N, M → ∞ with α := M/N. This result dates back to the sixties and can be found in BID16. Note that if M > N then there are M − N trivial eigenvalues of G at zero. Also, the width of the non-trivial distribution essentially depends on the ratio α. Clearly, setting the expected covariance to the identity is very limiting. One of the earliest relaxations appears in BID1.
They prove a phase transition for the largest eigenvalues of the sample covariance matrix, which has become known as the BBP phase transition. A case that may be useful for our setup is as follows: Theorem 1 (BID1). Suppose Σ = diag(ℓ, 1, . . ., 1) with ℓ > 1, and M, N → ∞ with M/N = α ≥ 1. Let c = 1 + √α, and call the top eigenvalue of the sample covariance matrix λ_max. Then: • If 1 ≤ ℓ < c, then λ_max sticks to the right edge of the spectrum, with Tracy-Widom fluctuations. • If c < ℓ, then λ_max is an outlier away from the bulk, centered at ℓ(1 + α(ℓ − 1)^{-1}), with Gaussian fluctuations. Typically, due to the correlations in the problem, Σ is neither the identity matrix nor a diagonal matrix with spikes, which makes the analysis of the spectrum a lot more difficult. A solution for this slightly more general case with non-trivial correlations has been provided only recently by BID3. We briefly review these results here and see how they are related to the first term of the above decomposition. Theorem 2 (BID3). If d = M, Σ − Id has bounded rank, log N is comparable to log M, and the entries of X are independent with mean zero and variance one, then the spectrum of Σ can be precisely mapped to that of G as M, N → ∞ for fixed α = M/N. Let K = min{M, N}; the decomposition of the spectrum can be described as follows: • Zeros: M − K eigenvalues located at zero (if M > N). • Bulk: Order K eigenvalues distributed according to the Marčenko-Pastur law. • Right outliers: All eigenvalues of Σ that exceed a certain value produce large positive outlier eigenvalues to the right of the spectrum of G. • Left outliers: All eigenvalues of Σ that are close to zero produce small outlier eigenvalues between 0 and the left edge of the bulk of G. Moreover, the eigenvectors of the outliers of G are close to the corresponding ones of Σ. This theorem essentially describes the way in which one obtains outlier eigenvalues in the sample covariance matrix assuming the population covariance is known. Here is an example. (Figure 10: spectrum of the logistic regression loss with a tanh unit: when the data has a single Gaussian blob (left), and when the data has two Gaussian blobs (right). In the latter case, the spectrum has outlier eigenvalues at 454.4, 819.5, and 92.7 for α = 1, 2, 0.02, respectively.) Example 1 (Logistic regression). Consider the log-loss ℓ(s, y) = −y log(1/(1 + e^{−s})) − (1 − y) log(1 − 1/(1 + e^{−s})) and a single neuron with the sigmoid non-linearity. Note that ℓ(s, y) is convex in s for fixed y, and we can apply the decomposition using f(w, x) = ⟨w, x⟩. In this case we have M = d; also, note that the second part of the Hessian in Equation (4) is zero since ∇²f(w, x) = 0, so the Hessian of the loss is just the first term. It is straightforward to calculate that the per-sample gradient is of the form g = c(w, x) x for a positive constant c = c(w, x) that doesn't depend on y. This case falls into the classical Marčenko-Pastur law (left pane of FIG2). Example 2. Once we have more than one class, this picture fails to hold. For ℓ(s) = −log(s) and f(w; (x, y)) = exp(⟨w_y, x⟩) / Σ_k exp(⟨w_k, x⟩), the spectrum changes. It turns out that in that case the weights have one large outlier eigenvalue and a bulk that is close to zero (right pane of FIG2).
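These spectra are easy to simulate; a minimal NumPy sketch of the spiked-covariance picture (the values of M, N, and the spike are ours, for illustration):

```python
import numpy as np

# Sigma = diag(ell, 1, ..., 1): one population spike. For ell > 1 + sqrt(alpha)
# the top sample eigenvalue detaches from the Marcenko-Pastur bulk.
M, N, ell = 500, 1000, 10.0                 # alpha = M / N = 0.5
rng = np.random.default_rng(0)
X = rng.standard_normal((M, N))
X[0] *= np.sqrt(ell)                        # inject the spike
eigs = np.linalg.eigvalsh(X @ X.T / N)      # sample covariance spectrum
print(eigs[-1])                             # outlier, ~ ell * (1 + alpha / (ell - 1))
print((1 + np.sqrt(M / N)) ** 2)            # right edge of the bulk, ~2.91
```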
The loss surface is *very* degenerate, and there are no barriers between large batch and small batch solutions.
1,128
scitldr
The allocation of computation resources in the backbone is a crucial issue in object detection. However, the classification allocation pattern is usually adopted directly for object detectors, which has been shown to be sub-optimal. In order to reallocate the engaged computation resources in a more efficient way, we present CR-NAS (Computation Reallocation Neural Architecture Search), which can learn computation reallocation strategies across different feature resolutions and spatial positions directly on the target detection dataset. A two-level reallocation space is proposed for both stage and spatial reallocation. A novel hierarchical search procedure is adopted to cope with the complex search space. We apply CR-NAS to multiple backbones and achieve consistent improvements. Our CR-ResNet50 and CR-MobileNetV2 outperform the baselines by 1.9% and 1.7% COCO AP respectively, without any additional computation budget. The models discovered by CR-NAS can be equipped with other powerful detection necks/heads and be easily transferred to other datasets, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation. Our CR-NAS can be used as a plugin to improve the performance of various networks, which is in high demand. Object detection is one of the fundamental tasks in computer vision. The backbone feature extractor is usually taken directly from the classification literature. However, compared with classification, object detection aims to know not only what but also where the object is. Directly taking the backbone of a classification network for object detectors is sub-optimal, which has been observed in previous work. To address this issue, there are many approaches that either manually or automatically modify the backbone network. One line of work proposes a neural architecture search (NAS) framework for the detection backbone to avoid expert effort and design trials. However, previous works rely on prior knowledge from the classification task, either inheriting the backbone for classification, or designing a search space similar to NAS on classification. This raises a natural question: how to design an effective backbone dedicated to detection tasks? To answer this question, we first draw a link between the Effective Receptive Field (ERF) and the computation allocation of the backbone. The ERF is only a small Gaussian-like portion of the theoretical receptive field (TRF), but it dominates the output. The ERF of the image classification task can be easily fulfilled, e.g. the input size is 224×224 for the ImageNet data, while the ERF of the object detection task needs more capacity to handle scale variance across instances, e.g. the input size is 800×1333 and the sizes of objects vary from 32 to 800 for the COCO dataset. Lin et al. (2017a) allocate objects of different scales to different feature resolutions to capture the appropriate ERF in each stage. Here we conduct an experiment to study the differences between the ERFs of several FPN features. As shown in Figure 1, we notice that the allocation of computation across different resolutions has a great impact on the ERF. Furthermore, appropriate computation allocation across spatial positions boosts the performance of the detector by affecting the ERF.
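The ERF maps referenced above can be reproduced with the standard gradient-based recipe (backpropagating a unit signal from the center of the output feature map); a minimal PyTorch sketch, with the toy two-layer model in the comment being our own illustrative stand-in for a backbone:

```python
import torch

def effective_receptive_field(model, in_shape, n_samples=32):
    """Numerical ERF (sketch): mean |d out_center / d input| over inputs."""
    erf = torch.zeros(in_shape[-2:])
    for _ in range(n_samples):
        x = torch.randn(1, *in_shape, requires_grad=True)
        out = model(x)                          # (1, C, H', W') feature map
        h, w = out.shape[-2] // 2, out.shape[-1] // 2
        out[0, :, h, w].sum().backward()        # unit signal at the center
        erf += x.grad.abs().sum(dim=1)[0]       # aggregate input channels
    return erf / n_samples

# toy usage with a stand-in "backbone" (ours, for illustration):
# erf = effective_receptive_field(
#     torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
#                         torch.nn.Conv2d(8, 8, 3, padding=1)),
#     in_shape=(3, 64, 64))
```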
(Figure 1: Following the instructions in prior work, we draw the ERFs of FPN features at different resolutions. The size of the base plate is 512×512, with the respective anchor boxes ({64, 128, 256} for {p3, p4, p5}) drawn in. The classification CNN ResNet50 tends to have a redundant ERF for the high-resolution features p3 and a limited ERF for the low-resolution features p5. After stage reallocation, our SCR-ResNet50 has a more balanced ERF across all resolutions, which leads to high performance.) Based on the above observation, in this paper we aim to automatically design the computation allocation of the backbone for object detectors. Different from existing detection NAS works, which achieve accuracy improvements by introducing higher computational complexity, we reallocate the engaged computation cost in a more efficient way. We propose Computation Reallocation NAS (CR-NAS) to search for the allocation strategy directly on the detection task. A two-level reallocation space is constructed to reallocate the computation across different resolutions and spatial positions. At the stage level, we search for the best strategy to distribute the computation among different resolutions. At the operation level, we reallocate the computation by introducing a powerful search space designed specially for object detection. The details of the search space can be found in Sec. 3.2. We propose a hierarchical search algorithm to cope with the complex search space. In particular, for stage reallocation, we exploit a reusable search space to reduce the stage-level searching cost and adapt to different computational requirements. Extensive experiments show the effectiveness of our approach. Our CR-NAS offers improvements for both fast mobile models and accurate models, such as ResNet, MobileNetV2, and ResNeXt. On the COCO dataset, our CR-ResNet50 and CR-MobileNetV2 achieve 38.3% and 33.9% AP, outperforming the baselines by 1.9% and 1.7% respectively, without any additional computation budget. Furthermore, we transfer our CR-ResNet and CR-MobileNetV2 to another ERF-sensitive task, instance segmentation, by using the Mask RCNN framework. Our CR-ResNet50 and CR-MobileNetV2 yield 1.3% and 1.2% COCO segmentation AP improvements over the baselines. To summarize, the contributions of our paper are three-fold: • We propose Computation Reallocation NAS (CR-NAS) to reallocate engaged computation resources. To our knowledge, we are the first to dig into the computation allocation across different resolutions. • We develop a two-level reallocation space and a hierarchical search paradigm to cope with the complex search space. In particular, for stage reallocation, we exploit a reusable model to reduce the stage-level searching cost and adapt to different computational requirements. • Our CR-NAS offers significant improvements for various types of networks. The discovered models show great transferability to other detection necks/heads, e.g. NAS-FPN, other datasets, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation. Neural Architecture Search (NAS). Neural architecture search focuses on automating network architecture design, which otherwise requires great expert knowledge and tremendous trials. Early NAS approaches are computationally expensive due to the full evaluation of each candidate. Recently, weight-sharing strategies have been proposed to reduce the searching cost. One-shot NAS methods build a directed acyclic graph G (a.k.a. supernet) to subsume all architectures in the search space and decouple weight training from architecture search. Existing NAS works only search for the operation at each layer; our work differs from them by searching for the computation allocation across different resolutions. Computation allocation across feature resolutions is an obvious issue that has not been studied by NAS, and we carefully design a search space that facilitates the use of existing search algorithms for finding good solutions.
NAS on object detection. There are some works that use NAS methods on the object detection task. One searches for scalable feature pyramid architectures, and another searches for the feature pyramid network and the prediction heads together while fixing the architecture of the backbone CNN. These two works both introduce additional computation budget, and their search spaces are directly inherited from the classification task, which is suboptimal for object detection. Another work searches for the dilation rate at the channel level in the CNN backbone. These approaches assume a fixed number of blocks in each resolution, while we search for the number of blocks in each stage, which is important for object detection and complementary to these approaches. Our search method is based on Faster RCNN with FPN for its excellent performance. We only reallocate the computation within the backbone, while fixing the other components for fair comparison. For more efficient search, we adopt the idea of one-shot NAS methods. In one-shot NAS, a directed acyclic graph G (a.k.a. supernet) is built to subsume all architectures in the search space and is trained only once. Each architecture g is a subgraph of G and can inherit weights from the trained supernet. For a specific subgraph g ∈ G, its corresponding network can be denoted as N(g, w) with network weights w. We propose Computation Reallocation NAS (CR-NAS) to distribute the computation resources in two dimensions: stage allocation across different resolutions, and convolution allocation across spatial positions. The backbone aims to generate intermediate-level features C with increasing downsampling rates 4×, 8×, 16×, and 32×, which can be regarded as 4 stages. The blocks in the same stage share the same spatial resolution. Note that the FLOPs of a single block in two adjacent spatial resolutions remain the same, because a downsampling/pooling layer doubles the number of channels. So given the total number of blocks N of a backbone, we can reallocate the number of blocks for each stage while keeping the total FLOPs the same. Figure 2 shows our stage reallocation space. In this search space, each stage contains several branches, and each branch has a certain number of blocks. The numbers of blocks in different branches are different, corresponding to different computational budgets for the stage. For example, there are 5 branches for stage 1 in Figure 2, and the numbers of blocks for these 5 branches are, respectively, 1, 2, 3, 4, and 5. We consider the whole network as a supernet T = {T_1, T_2, T_3, T_4}, where T_i at the ith stage has K_i branches, i.e. T_i = {t_i^1, . . ., t_i^{K_i}}. An allocation strategy can then be represented as τ = [τ_1, τ_2, τ_3, τ_4], where τ_i denotes the number of blocks of the chosen branch at the ith stage. All blocks in the same stage have the same structure, e.g. residual blocks. We impose the constraint that each stage has at least one convolutional block. We would like to find the best allocation strategy for ResNet101 among the (32 choose 3) possible choices. Since validating a single detection architecture requires hundreds of GPU-hours, it is not realistic to find the optimal architecture by human trials. On the other hand, we would like to learn stage reallocation strategies for different computation budgets simultaneously. Different applications require CNNs with different numbers of layers to meet different latency requirements; this is why we have ResNet18, ResNet50, ResNet101, etc. We build a search space to cover all the candidate instances in a certain series, e.g. the ResNet series.
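To see the scale of the stage space concretely, here is a small sketch that enumerates FLOPs-preserving allocations; the uniform 1-10 branch sets used here are a simplification of the paper's actual per-stage branch sets, which are given in the next paragraph.

```python
from itertools import product

def stage_allocations(total_blocks, branches):
    """All tau = [t1, t2, t3, t4] with sum(tau) fixed (sketch).

    A block costs the same FLOPs in every stage, so keeping the total
    block count fixed keeps the backbone's FLOPs fixed.
    """
    return [t for t in product(*branches) if sum(t) == total_blocks]

# e.g. a ResNet50-like budget of 16 blocks over 4 stages
print(len(stage_allocations(16, [range(1, 11)] * 4)))  # 415 candidates
```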
After considering the trade-off between granularity and range, we set the numbers of blocks for T_1 and T_2 as {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, the numbers of blocks for T_3 as {2, 3, 5, 6, 9, 11, 14, 17, 20, 23}, and for T_4 as {2, 3, 4, 6, 7, 9, 11, 13, 15, 17} for the ResNet series. The stage reallocation spaces of MobileNetV2 and ResNeXt can be found in Appendix A.2. To reallocate the computation across spatial positions, we utilize dilated convolution. Dilated convolution affects the ERF by performing convolution at sparsely sampled locations. Another good property of dilated convolution is that dilation introduces no extra parameters and no extra computation. We define a choice block to be a basic unit which consists of multiple dilation options, and we search for the best computation allocation. For the ResNet Bottleneck, we modify the center 3×3 convolution. For the ResNet BasicBlock, we only modify the second 3×3 convolution to reduce the search space and the searching time. We have three candidates in our operation set O: {3×3 dilated convolution with dilation rate i | i = 1, 2, 3}. Across the entire ResNet50 search space, there are therefore 3^16 ≈ 4 × 10^7 possible architectures. We propose a hierarchical search procedure to cope with the complex reallocation space. First, the stage space is explored to find the best computation allocation across resolutions. Then, the operation space is explored to further improve the architecture with a better spatial allocation. To reduce the side effect of weight coupling, we adopt uniform sampling in supernet training (a.k.a. single-path one-shot). After the supernet training, we can validate the allocation strategies τ ∈ T directly on the target detection task. Model accuracy (COCO AP) is defined as AP_val(N(τ, w)). We set the block number constraint N and find the best allocation strategy as τ* = argmax_{τ ∈ T, Σ_i τ_i = N} AP_val(N(τ, w)). (Figure 3: evaluation of a choice in the block operation search approach; "operations to be sampled" refers to the unselected blocks. As shown in the figure, we have the partial architecture of block 1 and block 2, and we now need to evaluate the performance of the convolution with dilation rate 3 in the third block. We uniformly sample the operations of the remaining blocks to generate a temporary architecture, and we evaluate the choice through several such temporary architectures.) By introducing the operation allocation space as in Sec. 3.2.2, we can reallocate the computation across spatial positions. As with the stage reallocation search, we train an operation supernet, adopting random sampling in each choice block. For the architecture search process, previous one-shot works use random search or evolutionary search. In our approach, we propose a greedy algorithm that makes sequential decisions to obtain the final result. We decode a network architecture o as a sequence of choices o = [o_1, . . ., o_B], where B is the number of choice blocks. In each choice step, the top K partial architectures are maintained to shrink the search space. We evaluate each candidate operation from the first choice block to the last. The greedy operation search is shown in Algorithm 1, and the hyper-parameter K is set to 3 in our experiments. We first extend the partial architecture at the first block choice, which yields three partial architectures in p_extend. Then we expand the top 3 partial architectures to the whole length B, which means that there are 3 × 3 = 9 partial architectures at each of the other block choices. For a specific partial architecture arch, we sample the operations of the unselected blocks uniformly to obtain c temporary architectures, where c denotes the number of mini-batches of D_val. We validate each architecture on a mini-batch and combine the results to generate evaluate(arch); we finally choose the best architecture to obtain o*.
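A minimal sketch of this greedy top-K procedure (function and parameter names are ours; `evaluate` is assumed to score a complete architecture on validation mini-batches with weights inherited from the operation supernet):

```python
import random

def greedy_operation_search(num_blocks, ops, evaluate, K=3, c=20):
    """Greedy top-K operation search (sketch of Algorithm 1)."""
    def score(partial):
        # score a partial architecture by averaging c random completions,
        # mirroring the uniform-sampling evaluation described above
        total = 0.0
        for _ in range(c):
            tail = [random.choice(ops) for _ in range(num_blocks - len(partial))]
            total += evaluate(tuple(partial) + tuple(tail))
        return total / c

    beam = [[]]                                  # partial architectures
    for _ in range(num_blocks):
        candidates = [p + [o] for p in beam for o in ops]
        candidates.sort(key=score, reverse=True)
        beam = candidates[:K]                    # keep the top-K partials
    return tuple(beam[0])                        # o*, the best architecture
```

Keeping only K partials per step turns an exponential 3^B enumeration into a linear pass over the B choice blocks at the price of a greedy approximation.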
Dataset. We evaluate our method on the challenging MS COCO benchmark. We split the 135K training images of trainval135 into 130K images (archtrain) and 5K images (archval). First, we train the supernet using archtrain and evaluate architectures using archval. After the architecture is obtained, we follow other standard detectors in using ImageNet to pre-train the weights of this architecture. The final model is fine-tuned on the whole COCO trainval135 and validated on COCO minival. Another detection dataset, VOC, is also used: we use VOC trainval2007+trainval2012 as our training dataset and VOC test2007 as our validation dataset. The supernet training details can be found in Appendix A.1. For the training of our searched models, the input images are resized to have a short side of 800 pixels or a long side of 1333 pixels. We use stochastic gradient descent (SGD) as the optimizer, with 0.9 momentum and 0.0001 weight decay. For fair comparison, all our models are trained for 13 epochs, known as the 1× schedule. We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at epochs 8 and 11. Warm-up and synchronized BatchNorm (SyncBN) are adopted for both the baselines and our searched models. We denote an architecture obtained by our computation reallocation with the prefix 'CR-', e.g. CR-ResNet50. Our final architectures have almost the same FLOPs as the original networks (the negligible difference in FLOPs comes from the BatchNorm and activation layers). As shown in Table 1, our CR-ResNet50 and CR-ResNet101 outperform the baselines by 1.9% and 1.6% respectively. It is worth mentioning that many milestone backbone improvements also bring only around 1.5% gain; for example, the gain is 1.5% from ResNet50 to ResNeXt50-32x4d, as indicated in Table 4. In addition, we run the baselines and the searched models under the longer 2× setting (results shown in Appendix A.4); the improvement from our approach is consistent. Our CR-ResNet50 and CR-ResNet101 are especially effective for large objects (3.5% and 4.8% improvement in AP_l). To understand these improvements, we depict the architecture sketches in Figure 4. We find that at the stage level, our Stage-CR-ResNet50 reallocates more capacity to the deep stages. This reveals that the budget in the shallow stages is redundant, while the resources in the deep stages are limited; the pattern is consistent with the ERF, as in Figure 1. At the operation level, dilated convolutions with large rates tend to appear in the deep stages. Our explanation is that the shallow stages need denser sampling to gather exact information, while the deep stages aim to recognize large objects by sparser sampling. The dilated convolutions in the deep stages further unlock the network's potential to detect large objects; this is an adaptive way to balance the ERF. For light backbones, our CR-ResNet18 and CR-MobileNetV2 both improve over the baselines by 1.7% AP, with all-round improvements from AP_s to AP_l. For light networks, it is more efficient to allocate the limited capacity to the deep stages, for the discriminative features captured there can benefit small objects at shallower levels through the FPN top-down pathway. Different dataset. We transfer our searched models to another object detection dataset, VOC. Training details can be found in Appendix A.3.
We denote the VOC metric [email protected] as AP_50 for consistency. As shown in Table 2, our CR-ResNet50 and CR-ResNet101 achieve AP_50 improvements of 1.0% and 0.7% over the already strong baselines. Different task. Segmentation is another task that is highly sensitive to the ERF. Therefore, we transfer our computation reallocation networks to the instance segmentation task using the Mask R-CNN framework. The experimental results on COCO are shown in Table 3: the instance segmentation AP of our CR-MobileNetV2, CR-ResNet50, and CR-ResNet101 outperforms the baselines by 1.2%, 1.3%, and 1.1% absolute AP, respectively, and we also achieve bounding-box AP improvements of 1.5%, 1.5%, and 1.8%, respectively. Different head/neck. Our work is orthogonal to other improvements in object detection. We exploit the SOTA detector Cascade Mask R-CNN for further verification. The detector equipped with our CR-ResNet101 achieves 44.5% AP, better than the regular ResNet101 baseline of 43.3% by a significant 1.2% gain. Additionally, we evaluate replacing the original FPN with a searched NAS-FPN neck to strengthen our results: ResNet50 with a NAS-FPN neck achieves 39.6% AP, while our CR-ResNet50 with NAS-FPN achieves 41.0% AP under the same 1× setting. More detailed results can be found in Appendix A.4. Our design includes two parts, stage reallocation search and block operation search. In this section, we analyse the effectiveness of the stage reallocation search alone. Table 4 shows the performance comparison between the baselines and the baselines with our stage reallocation search. From the light MobileNetV2 to the heavy ResNeXt101, our stage reallocation brings a solid average 1.0% AP improvement. Figure 5 shows that our Stage-CR network series yields overall improvements over the baselines with negligible differences in computation. The stage reallocations for more models are shown in Appendix A.2. There is a trend to reallocate computation from the shallow stages to the deep stages. The intuitive explanation is that reallocating more capacity to the deep stages results in a balanced ERF, as Figure 1 shows, and enhances the ability to detect medium and large objects. Often, a large AP increase can be obtained by simply replacing the backbone with a stronger network, e.g., from ResNet50 to ResNet101 and then to ResNeXt101. The underlying assumption is that a strong network performs well on both classification and detection tasks. We further explore the performance correlation between these two tasks through extensive experiments. We plot ImageNet top-1 accuracy versus COCO AP in Figure 6 for different architectures of the same FLOPs, where each dot is a single network architecture. We find that, although the correlation between the two tasks is broadly positive, better classification accuracy does not always lead to better detection accuracy. This study further exposes the gap between the two tasks. In this paper, we present CR-NAS (Computation Reallocation Neural Architecture Search), which learns computation reallocation strategies across different resolutions and spatial positions. We design a two-level reallocation space and a novel hierarchical search procedure to cope with the complex search space. Extensive experiments show the effectiveness of our approach. The discovered models transfer well to other detection necks/heads, other datasets, and other vision tasks. CR-NAS can be used as a plugin for other detection backbones to further boost performance under given computation resources.
A.1 SUPERNET TRAINING Both the stage and operation supernets use exactly the same settings. The supernet training process adopts the 'pre-training and fine-tuning' paradigm, and for ResNet and ResNeXt the supernet uses a fixed channel distribution. Supernet pre-training. We use ImageNet-1k for supernet pre-training. We use stochastic gradient descent (SGD) as the optimizer, with 0.9 momentum and 0.0001 weight decay. The supernet is trained for 150 epochs with a batch size of 1024. To smooth the jittering in the training process, we adopt cosine learning-rate decay with an initial learning rate of 0.4. Warm-up and synchronized BatchNorm are adopted to help convergence. Supernet fine-tuning. We fine-tune the pre-trained supernet on archtrain. The input images are resized to have a short side of 800 pixels or a long side of 1333 pixels. We use SGD as the optimizer, with 0.9 momentum and 0.0001 weight decay. The supernet is trained for 25 epochs (known as the 2× schedule). We use multi-GPU training over 8 1080Ti GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at epochs 16 and 22. Warm-up and synchronized BatchNorm (SyncBN) are adopted to help convergence. A.2 STAGE ALLOCATION SPACE For ResNeXt, the stage allocation space is exactly the same as for the ResNet series. For MobileNetV2, we start from the original per-stage block numbers n and build our allocation space on the bottleneck operator, fixing the stem and tail components; an architecture is then represented by its per-stage block numbers. We then search for the spatial allocation by adopting dilated convolutions with different rates and encoding the chosen operation of each block as an operation code, so that our final model can be represented as a series of allocation codes. A.3 VOC TRAINING DETAILS We use VOC trainval2007+trainval2012 as our whole training set and report results on VOC test2007. An ImageNet pre-trained model is adopted. The input images are resized to have a short side of 600 pixels or a long side of 1000 pixels. We use SGD as the optimizer, with 0.9 momentum and 0.0001 weight decay, and train all models for 18 epochs. We use multi-GPU training over 8 1080Ti GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at epochs 15 and 17. Warm-up and synchronized BatchNorm (SyncBN) are adopted to help convergence. A.4 LONGER SCHEDULE AND STRONGER DETECTORS The 2× schedule means training for 25 epochs in total; the initial learning rate is 0.00125 per image and is divided by 10 at epochs 16 and 22, and the other training settings are exactly the same as in the 1× schedule. Powerful detector. The Cascade Mask R-CNN is a SOTA multi-stage object detector. The detector is trained for 20 epochs; the initial learning rate is 0.00125 per image and is divided by 10 at epochs 16 and 19. Warm-up and synchronized BatchNorm are adopted to help convergence. Powerful searched neck. NAS-FPN is a powerful, scalable feature pyramid architecture searched for object detection. We reimplement NAS-FPN (7 @ 384) in Faster R-CNN (the original paper implements it in RetinaNet). The detector is trained under the 1× setting as described in Sec. 4.1.
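To make the operation space of A.2 concrete, the following is a minimal PyTorch sketch of a choice block over the operation set O = {3 × 3 convolution with dilation rate 1, 2, 3}. The class name is illustrative and the block is simplified (no normalization or activation); during supernet training one branch is sampled uniformly per forward pass, and after the search only the selected branch would be kept.

```python
# Sketch of a dilated-convolution choice block (single-path one-shot style).
import random
import torch.nn as nn

class DilatedChoiceBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # padding == dilation keeps the spatial size for a 3x3 kernel
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, bias=False)
            for d in (1, 2, 3)
        ])

    def forward(self, x, choice=None):
        if choice is None:                       # supernet training
            choice = random.randrange(len(self.ops))
        return self.ops[choice](x)               # fixed choice at search/eval
```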
We propose CR-NAS to reallocate engaged computation resources across different resolutions and spatial positions.
Uncertainty is a very important feature of intelligence and helps the brain become a flexible, creative, and powerful intelligent system. Crossbar-based neuromorphic computing chips, in which the computing is mainly performed by analog circuits, exhibit this uncertainty and can be used to imitate the brain. However, most current deep neural networks do not take the uncertainty of the neuromorphic computing chip into consideration; therefore, their performance on neuromorphic computing chips is not as good as on the original platforms (CPUs/GPUs). In this work, we propose the uncertainty adaptation training scheme (UATS), which informs the neural network of the uncertainty during the training process. The experimental results show that neural networks can achieve inference performance on an uncertain neuromorphic computing chip comparable to the results on the original platforms, and much better than the performance without this training scheme. Uncertainty reasoning is the essence of human thinking activities and a key aspect of intelligence. There are two kinds of uncertainty in intelligent systems: fuzziness and stochasticity. The fuzziness helps the brain deal with the real world efficiently by ignoring the enormous amount of redundant information. When we try to distinguish a cat from a dog, we do not need to know the expressions or the number of legs; although such information can easily be captured by our visual system at a glance, it is ignored for efficiency. The stochasticity endows the brain with creativity and keeps us from always failing in unfamiliar fields; our decisions may change when we are not sure. These characteristics are not available in most existing artificial intelligence (AI) systems, such as a classifier based on a deep neural network (DNN). 32-bit or 64-bit floating-point numbers are used to describe the weights and activations, although researchers have found that 8-bit integers are enough for many applications. Moreover, after the training procedure, the result will be the same no matter how many times inference is performed, even when the margin is very small and the answer is wrong. There are some methods to address these issues, such as network quantization and Bayesian networks. In addition, the neuromorphic computing chip provides a hardware approach to supplement the missing uncertainty in DNNs. In recent years, neuromorphic computing chips based on emerging nanotechnology devices and the crossbar structure have developed rapidly. Ohm's law and Kirchhoff's law make the crossbar structure very efficient at performing vector-matrix multiplication (VMM), and the emerging nanoscale nonvolatile memory (NVM) device at each cross point provides additional storage capability (Figure 1). The crossbar holds the devices' conductances as memory in peacetime and performs the computing function when voltages are applied. This so-called computing-in-memory (CIM) architecture can relieve the memory bottleneck, which is the most serious problem in the von Neumann architecture, and makes neuromorphic computing chips more energy- and area-efficient. Therefore, neuromorphic computing has become a promising approach to realizing AI applications, which are full of VMMs and have large memory requirements. Besides energy and area efficiency, uncertainty is also an important and intrinsic feature of neuromorphic computing chips and is not yet well utilized. Figure 1: The crossbar structure.
V is the applied voltage corresponding to the input x, G is the conductance of the devices corresponding to the weights W, and I is the output current, which indicates the output y according to Ohm's law and Kirchhoff's law. The uncertainty in neuromorphic computing chips comes from two aspects: the fuzziness is mainly caused by the analog-to-digital converters (ADCs), and the stochasticity is mainly induced by the NVM devices. According to Kirchhoff's law, the VMM result is obtained as the summation of currents, which is an analog output, so it is necessary to use an ADC to convert the analog currents into digital voltages for data transfer. The function of the ADC is similar to activation quantization in the network. The stochasticity of the NVM device is due to its intrinsic physical mechanisms: the random movement of particles in the device makes the conductance vary, and the output current will differ even when the same voltage is applied. The stochasticity of the device is usually simulated as a non-ideal factor that makes the network perform worse. In this work, we instead propose a training scheme that utilizes the stochasticity to improve the performance of neuromorphic computing chips. There are several varieties of NVM devices, including phase change memories, filamentary migrating oxide devices, ferroelectric tunnel junction synapses, and so on. The stochasticity of each type of device differs owing to the different intrinsic physical mechanisms. Without loss of generality, we use a Gaussian distribution to model the device stochasticity. Although the Gaussian distribution may not fit the stochasticity of every type of device well, it has the basic characteristics: there is a stable state, and the farther the conductance is from the stable state, the lower the probability. The mean of the Gaussian distribution is the conductance value of the stable state. The variance of the Gaussian distribution usually depends on the mean, according to experimental results. It is hard to use a single model to capture the relation between the mean and the variance for all devices; for simplicity and without loss of generality, we assume that the standard deviation of the distribution is linearly and positively correlated with the mean. Furthermore, the real conductance of a device is a positive value; therefore, conductances sampled from the distribution that are less than a given value G_min are cut off to G_min, where G_min > 0 denotes the minimum conductance that all devices can reach. The model of the device stochasticity is thus G_s = max(G, G_min) with G ∼ N(G_0, (αG_0)²), where G_s denotes the stochastic conductance, G denotes the value sampled from the Gaussian distribution N(µ, σ²) with mean µ = G_0 and standard deviation σ = αG_0, G_0 denotes the conductance of the device in the stable state, and α > 0 denotes the level of the device stochasticity. The device fuzziness is a by-product of the writing process. Writing the conductance of each device in the crossbar is an essential step when using a neuromorphic computing chip to realize an AI application. Before writing, we need to determine the target conductance of each device according to the weights of the neural network; this is what the mapping process does. Besides scaling the weights into the working range [G_low, G_high] of the device conductance, we use the difference of two devices' conductances, G_pos − G_neg, to express one weight w, which can be positive or negative.
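A minimal NumPy sketch of the stochasticity model and the differential read-out just described; the function names and the default G_min are illustrative, and real devices would add further non-idealities.

```python
# Sketch of the device stochasticity model and differential crossbar read-out.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_conductance(G0, alpha, G_min=1e-6):
    """G_s = max(G, G_min) with G ~ N(G_0, (alpha * G_0)^2)."""
    G = rng.normal(loc=G0, scale=alpha * G0)
    return np.maximum(G, G_min)

def crossbar_vmm(V, G_pos, G_neg, alpha):
    """Noisy y = (G_pos - G_neg)^T V: per Ohm's law each device contributes
    the current G * V, and Kirchhoff's law sums currents along a column."""
    Gp = stochastic_conductance(G_pos, alpha)
    Gn = stochastic_conductance(G_neg, alpha)
    return Gp.T @ V - Gn.T @ V
```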
In order to achieve higher energy efficiency, it is better to use lower conductances to express the same weight; moreover, in order to make full use of the entire conductance working range, the mapping algorithm is as follows: for w ≥ 0 we set G_pos = G_low + (|w| / max_w |w|)(G_high − G_low) and G_neg = G_low, and for w < 0 the roles of G_pos and G_neg are exchanged. Here |w| denotes the absolute value of a single weight w, max_w |w| denotes the maximum over all |w|, and G_pos and G_neg are the target conductances that we want to write. However, the conductance cannot be written accurately, due to the stochasticity of the device and the fuzziness of the circuit: there is a variation when manipulating the conductance of a device, and the measurement of the conductance is not accurate. The conductance value is obtained by dividing the read current by the applied voltage, which is obviously affected by the stochasticity of the device and the precision of the ADC. Therefore, we need a model to describe the fuzziness. Without loss of generality, we also use a Gaussian distribution, G_f ∼ N(G_target, (βG_target)²), where G_f denotes the fuzzy target conductance actually written, G_target denotes the target conductance determined in the mapping process, and β denotes the level of the device fuzziness, which depends on the level of the device stochasticity, the precision of the ADC, and the writing strategy. For the sake of simplicity, we assume β to be constant in this work. If we program a well-trained DNN into a neuromorphic computing chip directly, the uncertainty will make the performance of the DNN worse, e.g., decrease the classification accuracy (Fig. 2). However, the decrease in accuracy can be alleviated by the proposed uncertainty adaptation training scheme (UATS), and in some cases the accuracy even improves. The core idea of UATS is to tell the neural network about the uncertainty during the training process and guide it to learn how to deal with this situation. The stochasticity model is introduced in every feed-forward (FF) pass: when a weight w participates in the calculation of an FF pass, we use a sample of the random variable w_s instead of w, defined as w_s = (G_ps − G_ns) · max_w |w| / (G_high − G_low). Both G_ps and G_ns are obtained from the stochasticity model, and the conductances of the stable states, G_p0 and G_n0, are calculated from w by the mapping rule above. In fact, when G is significantly larger than G_min, the cutoff can be ignored and w_s can be approximated by a Gaussian centered at w, with a noise coefficient c = 2α (up to the conductance-to-weight scaling). The fuzziness model is introduced during the training process: after every k epochs of training, every weight w is replaced by a sample of the random variable w_f, defined as w_f = (G_pf − G_nf) · max_w |w| / (G_high − G_low). Both G_pf and G_nf are obtained from the fuzziness model, and the target conductances G_ptarget and G_ntarget are calculated from w by the mapping rule. Likewise, w_f can be approximated by a Gaussian centered at w when G is significantly larger than G_min. Besides making the network train in an uncertain way, we also teach it a better way to evaluate the network's performance in the presence of uncertainty, that is, how to calculate the loss function: the loss is computed not from the output of a single FF pass, but from the average output of n FF passes on the same input batch.
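The following is a simplified PyTorch (version 2.0 or later, for torch.func) sketch of the two UATS ingredients. It operates directly on the weights via the Gaussian approximations of w_s and w_f above, rather than on conductances, and ignores the G_min cutoff; all names are illustrative.

```python
# Simplified UATS sketch: noisy forward passes plus periodic weight refresh.
import torch
import torch.nn.functional as F
from torch.func import functional_call

def uats_loss(model, x, y, alpha, n=5):
    """Average the logits of n noisy forward passes, then compute the loss.
    Multiplicative reparameterized noise (std proportional to |w|) keeps
    gradients flowing to the underlying weights w."""
    params = dict(model.named_parameters())
    logits = 0.0
    for _ in range(n):
        noisy = {k: v * (1.0 + alpha * torch.randn_like(v))
                 for k, v in params.items()}
        logits = logits + functional_call(model, noisy, (x,))
    return F.cross_entropy(logits / n, y)

@torch.no_grad()
def refresh_weights(model, beta):
    """Every k epochs, simulate a chip write/read: w <- w_f."""
    for p in model.parameters():
        p.add_(beta * p.abs() * torch.randn_like(p))
```

A training loop would call uats_loss at every step and refresh_weights once every k epochs.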
We evaluate the ideas of UATS on multiple models and datasets. We first investigate the effect of the uncertainty without UATS on the MNIST dataset, using two multilayer perceptron (MLP) models (28 × 28-100-10 and 28 × 28-300-10) and a convolutional neural network (CNN) model (LeNet-5). There are 60,000 images in the training set of MNIST; we randomly selected 50,000 images for training and the other 10,000 for validation. The 10,000 images in the test set were used to calculate the test error. G_min = 1 µS, G_low = 5 µS, and G_high = 50 µS were used in the experiments. The models were trained in the normal way and then tested with different levels of uncertainty: the fuzziness model was first used to generate the weights that are actually written to the chip, and then the stochasticity model was used to simulate the read variation. We ran 20 trials for every model and uncertainty level, and we report the average test error and its standard deviation. As shown in Figure 2, without UATS the uncertainty increases the test errors of both the MLP and CNN models, and the higher the level of uncertainty, the higher the test error. The CNN model (LeNet-5) has the best performance without uncertainty, but it is also the most affected by the uncertainty (t-test, p < 0.01). This is because the average width of LeNet-5 is smaller than that of the two MLP models, just as the 'mlp2' model is more robust to the uncertainty than 'mlp1'. We then validated the power of UATS, using it both to fine-tune the weights of the pre-trained models and to retrain the models from their initializations. We used k = 5 and n = 5 in the fine-tuning experiment, with 25 epochs, and k = 10 and n = 5 in the retraining experiment, with 100 epochs. As Figure 3 shows, UATS significantly improves the accuracies at the same level of uncertainty in both the retraining and the fine-tuning experiments, and retraining is in most cases better than fine-tuning with UATS. When the uncertainty level is small (α = 0.1, β = 0.1), UATS achieves results comparable to the ideal case. We also validated the power of UATS on the CIFAR-10 dataset with a more complicated DNN model, ResNet-44. All the training settings were the same as in previous work, except that we used UATS from the beginning; α = 0.1, β = 0.1, and k = 10 were used in these experiments. The results show that UATS can even achieve a lower error rate than the ideal case with proper hyper-parameters (Figure 4). UATS performs better when the neural network has more layers; it can be seen as a regularization method that makes the training of DNNs easier.
Uncertainty is very important in intelligent systems. The Bayesian network is a very useful approach to building an uncertain neural network; however, it usually requires that the distribution of each weight be controllable. This is hard to realize on a neuromorphic computing chip, because the distribution is determined by the devices. Although there may be methods to manipulate the conductance distribution of a device, they are not as convenient as UATS, which requires no additional circuitry. We have tried a series of distributions for modeling the device stochasticity besides the Gaussian distribution, such as the Laplacian distribution, the uniform distribution, and asymmetric distributions such as the lognormal distribution, the asymmetric Laplacian distribution, and the Bernoulli distribution for devices that have two stable states or exhibit random telegraph noise (RTN). Although the modeled behavior of the device differs significantly across these distributions, the performance of the network is similar for each type of distribution with the same mean and variance. This is because the VMM transforms the individual distribution of each device into a summation over a large number of random parameters. The computational cost of UATS may be somewhat high due to the requirement for a large number of random numbers. There are several ways to reduce this requirement, such as sampling the weights for every input or every batch instead of for every VMM, or using an uncertainty model of the whole VMM instead of the individual weights. The simulation can thus be accelerated while achieving similar results.
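A small NumPy simulation of this averaging effect, under assumed conductance and voltage ranges: two per-device noise distributions with matched mean and variance yield nearly identical output-current statistics once hundreds of devices are summed.

```python
# Sketch: VMM output statistics are nearly distribution-agnostic when the
# per-device noise has matched mean and variance (a central-limit effect).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_trials = 512, 10000
G = rng.uniform(5e-6, 50e-6, size=n_in)       # stable-state conductances (S)
V = rng.uniform(0.0, 0.2, size=n_in)          # applied read voltages (V)
alpha = 0.2
std = alpha * G                               # per-device noise std

gauss = ((G + rng.normal(0, std, (n_trials, n_in))) * V).sum(axis=1)
unif = ((G + rng.uniform(-np.sqrt(3) * std, np.sqrt(3) * std,
                         (n_trials, n_in))) * V).sum(axis=1)
print(gauss.mean(), gauss.std())   # the two output distributions nearly match
print(unif.mean(), unif.std())
```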
A training method that can make deep learning algorithms work better on neuromorphic computing chips with uncertainty
An important property of image classification systems in the real world is that they both accurately classify objects from target classes ("knowns") and safely reject unknown objects ("unknowns") that belong to classes not present in the training data. Unfortunately, although the strong generalization ability of existing CNNs ensures their accuracy when classifying known objects, it also causes them to often assign an unknown to a target class with high confidence. As a result, simply using low-confidence detections as a way to detect unknowns does not work well. In this work, we propose an Unknown-aware Deep Neural Network (UDN for short) to solve this challenging problem. The key idea of UDN is to enhance existing CNNs to support a product operation that models the product relationship among the features produced by the convolutional layers. This way, missing a single key feature of a target class greatly reduces the probability of assigning an object to this class. UDN uses a learned ensemble of these product operations, which allows it to balance the contradictory requirements of accurately classifying known objects and correctly rejecting unknowns. To further improve the performance of UDN at detecting unknowns, we propose an information-theoretic regularization strategy that incorporates the objective of rejecting unknowns into the learning process of UDN. We experiment on benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN, adding unknowns by injecting one dataset into another. Our results demonstrate that UDN significantly outperforms state-of-the-art methods at rejecting unknowns, with a 25 percentage point improvement in accuracy, while still preserving the classification accuracy. Motivation. In recent years, Convolutional Neural Networks (CNNs) have been used with great success for a rich variety of classification problems, particularly when dealing with high-dimensional, complex data such as images or time series. A CNN classifier typically classifies test objects as one of the target classes supplied in the training set. In doing so, state-of-the-art classifiers make the implicit assumption that all testing objects belong to one of the target classes. However, this assumption is rarely true in real-world deployments of CNN classifiers. Consider, for example, an autonomous car or a healthcare system: it is extremely likely that the system will be exposed to objects that were not in its training set. We call such objects "unknowns". Clearly, blindly assigning these unknowns to one of the target classes degrades the prediction accuracy. Worse yet, it can lead to serious safety concerns. For example, in a collaboration with a top hospital in the US (name removed due to anonymity), we have been developing a seizure detector that classifies patients into different types of seizures based on EEG signals collected during the clinical observation of 4,000 patients. The detector was trained on the 6 types of seizures observed in the training data. However, when deployed, the CNN classifier may encounter patients who have types of seizures that do not exist in the training data because they are rare or even unknown to the medical community. Misclassifying these patients into the existing types of seizures brings serious risks and potential harm due to the possibility of mistreating these patients. Ideally, in this case, the unknowns would be recognized and rejected by the classifier.
In this work, we focus on this important problem, describing a deep neural network that not only accurately classifies test objects into known target classes, but also correctly rejects unknowns. State-of-the-Art. In a typical CNN, the output of the last fully connected layer is fed into a softmax layer to generate a class probability in [0, 1] for each target class. An object is then assigned to the class with the maximal probability. Intuitively, unknowns could be detected by leveraging this confidence, as was done in prior work: since unknowns should not exhibit as many features of a target class as known objects, the CNN should report a lower confidence for them. In prior work, the maximal probability or the largest value in the input vector to the softmax layer (the maximal weighted sum) is used as the confidence for detecting unknowns; in particular, an object is rejected as an unknown if its confidence is smaller than a predetermined cutoff threshold ct. However, as shown in our experiments (Sec. 5), these state-of-the-art methods are not particularly effective at rejecting unknowns. This is because CNNs achieve high classification accuracy through a strong ability to generalize, allowing them to overcome the gap between the training and testing data. Unfortunately, this strength here is also a weakness, because it increases the chance of erroneously assigning an unknown to some target class even if it is quite different from the training objects of any target class. More specifically, the maximal probability (or maximal weighted sum) in a CNN is computed by a weighted sum operation over the multiple features produced by the convolutional layers. Because of this sum operation, an unknown can be classified into a target class with high confidence even if it matches some key features of that class only by chance. Therefore, the requirements of accurately classifying the knowns and correctly rejecting the unknowns conflict with each other. Proposed Approach and Contributions. In this work we propose an Unknown-aware Deep Neural Network (UDN for short) to overcome this problem. The key intuition of UDN is to modify the CNN to use a product operation that models the product relationship among the features produced by the convolutional layers. This way, similar to the product rule in probability theory, if just one feature indicative of a target class is not matched, the probability of assigning an object to this class is greatly reduced. Since an unknown is unlikely to match most of the features of a target class, the chance of assigning an unknown to a target class with high confidence is reduced. Therefore, the confidence produced by UDN should detect unknowns more effectively than the typical maximal probability or maximal weighted sum produced by classical CNNs. In UDN, the product operations are learned as a set of product relationship (PR) subnets leveraging the hierarchical nature of the binary tree structure. The strong bias of the classification decisions made via the product operations and the generalization ability introduced by the ensemble nature of multiple PR subnets together balance the contradictory requirements of accurately classifying known objects and correctly rejecting unknowns. In addition, we propose an information-theoretic regularization strategy that actively incorporates the objective of unknown rejection into the learning process of UDN. This further improves the accuracy of UDN at rejecting unknowns by enlarging the confidence gap between unknown and known objects.
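Both the baselines discussed above and UDN reject via the same thresholding rule; they differ only in how the confidence score is computed (maximal softmax probability for the baselines, maximal path probability for UDN, Sec. 3). A minimal sketch, with illustrative names and an illustrative threshold:

```python
# Sketch of confidence-based rejection: assign the argmax class when the
# confidence clears the cutoff threshold ct, otherwise reject as unknown.
import torch
import torch.nn.functional as F

def classify_or_reject(logits, ct=0.9, unknown=-1):
    conf, pred = F.softmax(logits, dim=-1).max(dim=-1)
    return torch.where(conf >= ct, pred, torch.full_like(pred, unknown))
```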
We further show that the final loss function of UDN is fully differentiable; therefore, UDN can be learned by following the common practice of back-propagation in deep neural networks. We demonstrate the effectiveness of UDN on a rich variety of benchmark datasets, including MNIST, CIFAR-10, CIFAR-100, and SVHN. UDN outperforms the state of the art by up to 20 points in the accuracy of unknown rejection, while preserving the accuracy of the underlying CNN at classifying objects from the target classes. Out-of-Distribution Detection. In prior work, CNNs were adapted to discover unknowns by adding one additional unknown class to the softmax layer. This method, called OpenMax, measures the distance between the maximal-weighted-sum vector produced for a test object and the mean maximal-weighted-sum vectors of the target classes to which this test object is most likely assigned. This distance is translated into a probability by the OpenMax layer, and a testing object is assigned to the unknown class if this probability is larger than a pre-defined cutoff threshold. Prior Networks (PNs) detect out-of-distribution objects by parameterizing a prior distribution over predictive distributions, while other work directly uses softmax probabilities to detect out-of-distribution objects. ODIN detects unknowns using a two-pass inference strategy: each test image goes through the inference stage twice, and during the second round each input is perturbed based on the gradient of its loss acquired in the first round, with the goal of making the maximal softmax probability more effective at separating unknowns. One problem with ODIN is that it introduces two extra hyper-parameters to control the level of perturbation, which are hard to tune. As shown in our experiments, our UDN method significantly outperforms OpenMax and ODIN, because UDN uses the maximal path probability as its confidence measure, which, as the product of multiple probabilities w.r.t. a set of nodes, is more effective at rejecting unknowns than the maximal weighted sum or the maximal probability. MC-Dropout uses a Bayesian model to reason about model uncertainty by casting dropout training in deep neural networks as approximate Bayesian inference in deep Gaussian processes; it rejects a testing object as unknown if the uncertainty about this object is large. As shown in our experiments, MC-Dropout in many cases outperforms OpenMax and ODIN at rejecting unknowns, although it is still worse than our UDN; furthermore, it sacrifices accuracy at classifying the known classes. Other methods have been proposed to detect objects that do not belong to the known classes of a "clean" training dataset; however, these methods rely on a "contaminated" training set that contains a fraction of unknowns, i.e., objects that were not in the original clean training data. In other words, they solve the problem of rejecting "known unknowns", while our UDN instead focuses on the problem of rejecting "unknown unknowns"; UDN does not require any labeled unknowns during training. Outlier detection. Orthogonal approaches include one-class methods (Schölkopf et al., 2001) and neighbor-based methods. However, unlike our work, which focuses on enhancing CNNs to reject unknown objects during the inference process, these outlier detection methods do not take the classification objective into consideration. Instead, they detect outliers in a given dataset as the objects that significantly deviate from the majority of the dataset, without using any labeled outliers or normal objects.
Deep Neural Decision Forest. Similar to the deep neural decision forest (DNDF), nodes in the final FC layers of UDN are connected to the split nodes of multiple trees. However, UDN differs from DNDF in several important ways. First, conceptually, UDN corresponds to a variation of the CNN model in which the softmax layer is replaced with tree structures, so that the CNN can model the product relationship among the features it learns, while DNDF instead enhances a random forest classifier with the feature learning ability of a CNN. Second, UDN incorporates the objective of unknown rejection into the learning process, while DNDF only considers the classification of known classes. Third, DNDF estimates the decision node parametrization θ and the leaf predictions π using a two-step optimization strategy, in which θ and π are updated alternately to minimize the log-loss, while UDN is fully differentiable and can therefore be optimized in one step. Fourth, in DNDF different trees share the FC layers (except the final FC layer) of the CNN, while UDN divides the FC layers into M independent components, each of which is connected to one individual tree; this ensures independence among the trees and improves the classification accuracy owing to the excellent generalization capability. In this section, we first introduce the structure of the unknown-aware deep neural network (UDN); next, we show how UDN distinguishes unknown from known objects. A regularization strategy is introduced in Sec. 4 to further improve the accuracy of UDN at rejecting unknowns. Figure 1: UDN Architecture. As depicted in Fig. 1, UDN is composed of a convolutional module and M independent product relationship (PR) subnets, where M is a user-definable hyper-parameter. PR Subnet. The PR subnet is designed to model the product relationship among the features produced by the convolutional layers. Each PR subnet contains one fully connected (FC) component connected to a binary tree structure. Within each individual tree T, each split node of T is connected to one output node of the final layer of the FC component. The mapping between the FC output nodes and the split nodes can be arbitrary, and any FC output node can be connected to the root node of the tree. The set of split nodes of T is denoted as N, and the set of leaf nodes is denoted as L. Each split node n_i ∈ N converts the output x_i of the FC node it consumes data from into a value in the range [0, 1] by applying the sigmoid function σ(x_i) = 1 / (1 + e^{−x_i}). Each leaf node l ∈ L is parameterized by a C-dimensional probability distribution π_l, where C denotes the number of classes. The element π_l^i of π_l w.r.t. the i-th class is computed as the softmax output of a learned parameter w_l^i, π_l^i = exp(w_l^i) / Σ_{i'} exp(w_l^{i'}) (Eq. 1). Product Relationship. Next, we show how a PR subnet models the product relationship using the path probability µ_l(x) defined in Eq. 2, µ_l(x) = Π_{n_i ∈ N_l} d(x_i), where x denotes an input object, l denotes a path from the root node to a leaf node l, and N_l denotes the set of split nodes on this path. Given a node n_i ∈ N_l, we set d(x_i) = σ(x_i) if path l continues to the left child of n_i, and d(x_i) = 1 − σ(x_i) if it continues to the right child. For example, as shown in Fig. 1, on the path n_11 → n_12 → n_15 → l_13, d_12 w.r.t. n_12 is set to 1 − σ(x_12), because n_12 is connected to its right child n_15 on this path. Therefore, d(x_i) indicates the probability that input x is routed from node n_i down to the next node on path l. Accordingly, µ_l(x) models the probability of input x reaching leaf l, i.e., the probability of path l.
Since the d(x_i) of each split node n_i corresponds to the output x_i produced by the FC node w.r.t. n_i, the path probability µ_l(x) is essentially determined jointly by the outputs of multiple FC nodes. Therefore, µ_l(x) successfully models the product relationship among the features produced by the CNN: a single FC node that leads to a small d(x_i) makes the probability of the whole path l small. Prediction. At each leaf node l of T, a PR subnet produces a prediction for a given input object x using Eq. 3, P_PR[y | x, Θ, π] = Σ_{l∈L} π_ly · µ_l(x), where π = {π_l | l ∈ L}, π_ly denotes the probability that leaf l believes an input sample x belongs to class y, and µ_l(x) denotes the path probability of l. Generally speaking, the prediction is given by the probability produced at each leaf node l weighted by the probability of input x reaching leaf l. Finally, as an ensemble of a set of PR subnets PR = {PR_1, ..., PR_M}, UDN produces a prediction for an input x by averaging the outputs of the subnets, i.e., P[y | x] = (1/M) Σ_{m=1}^{M} P_PR_m[y | x, Θ, π] (Eq. 4). Similar to typical ensemble structures like random forests, the ensemble of multiple PR subnets results in good generalization performance when classifying objects from the target classes, even if each individual PR subnet may be overfit to the training examples. Note that when we set up the network structure of UDN, the mapping between the FC output nodes and the split nodes of the binary tree is arbitrary; the parameters w.r.t. each node are learned in an end-to-end fashion through back-propagation. In the training process that minimizes the loss, the important features for a known class are automatically learned and mapped to split nodes on the same path. In other words, we do not have to group features explicitly; instead, the grouping of the features is learned automatically. Fig. 1 shows the final result of the training. For example, since nodes f_11, f_12, and f_16 are not on the same path, these nodes are not considered to correspond to the key features of any class. Max Path. Based on the above architecture, given a testing object x, each PR subnet in UDN produces a probability µ_l(x) for each path from the root to a leaf l. As shown in Eq. 2, the probability of a path is computed as the product of the probabilities d(x_n) produced by all split nodes on that path. Given an object x, a path will have an extremely small probability if x does not fit the features represented by the split nodes on this path (hence small d(x_n)), because the product of multiple small probabilities diminishes quickly. One path stands out when all of its split nodes produce large probabilities on x; it is marked as a bold line in Figure 1. We call this path the max path, because it has the maximal probability among all paths. Since the learned π_l at each leaf is invariant w.r.t. the input x, it is essentially the max path that determines the class of x by Eq. 3. Therefore, the probability of the max path (or max path probability) can be used to measure how confident the classifier is about its classification decision for object x: the larger the max path probability, the more confident the classifier is about the object. More specifically, given an object x and a PR subnet PR, the confidence is measured as conf_PR(x) = max{µ_l(x) | l ∈ L} (Eq. 5), where max{µ_l(x) | l ∈ L} denotes the max path probability of subnet PR for the given object x. Since UDN is an ensemble of a set of PR subnets, the final confidence for object x is measured as the average of the subnet confidences, conf(x) = (1/M) Σ_{m=1}^{M} conf_PR_m(x) (Eq. 6).
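A minimal PyTorch sketch of the routing just defined (Eqs. 2-6) for a single PR subnet with a full binary tree; the function names, the heap-style node indexing, and the convention that the sigmoid output denotes the left-child probability follow the reconstruction above and are illustrative.

```python
# Sketch of path probabilities, prediction, and max-path confidence for one
# PR subnet with a full binary tree of the given depth.
import itertools
import torch

def path_probs(split_logits, depth):
    """split_logits: (batch, 2**depth - 1) FC outputs, one per split node."""
    d = torch.sigmoid(split_logits)                 # go-left probabilities
    probs = []
    for path in itertools.product([0, 1], repeat=depth):  # 0=left, 1=right
        node, p = 0, torch.ones_like(d[:, 0])
        for go_right in path:
            p = p * (1 - d[:, node] if go_right else d[:, node])
            node = 2 * node + 1 + go_right          # heap-style child index
        probs.append(p)
    return torch.stack(probs, dim=1)                # (batch, 2**depth) = mu_l

def predict_and_confidence(split_logits, leaf_pi, depth):
    mu = path_probs(split_logits, depth)
    pred = mu @ leaf_pi            # Eq. 3: sum_l mu_l(x) * pi_l -> (batch, C)
    conf = mu.max(dim=1).values    # Eq. 5: maximal path probability
    return pred, conf
```

The UDN ensemble then averages `pred` and `conf` over the M subnets (Eqs. 4 and 6).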
Effectiveness of Using Max Path to Reject Unknowns. Intuitively, this max path probability can be expected to be more effective at detecting unknowns than the maximal weighted sum in a CNN. Typically, an unknown will not receive a large probability at every split node on the max path, and a single low-probability node limits the max path probability because of the product operation used in its computation. In contrast, the maximal weighted sum in a CNN tends to fall off much more slowly, because the score is computed with a (weighted) sum operation over multiple features, so that a single matching feature can make the score high. This is confirmed in our experiments (Sec. 5.2, Appendix B, Appendix C). To ensure the effectiveness of using the max path probability to reject unknowns, we further incorporate the objective of unknown rejection into the learning process of UDN by introducing a regularization term. The key idea is to use an information-theoretic approach to prevent the generation of a PR subnet whose path probabilities follow a near-uniform distribution. This ensures that the max path probability of each subnet is generally much larger than the probabilities of the other paths, making it more effective at rejecting unknowns. To achieve this, we penalize paths whose probability distribution has a large entropy and hence is close to uniform. Given a subnet PR with |L| paths, each input x ∈ X can use any of the |L| paths to reach a leaf node. The entropy of the path probability distribution of input x is given by h(x) = −Σ_{l∈L} µ_l(x) log µ_l(x) (Eq. 7). We then apply a softmax function to the path probabilities as a normalization; the revised entropy of the path probability distribution is h̃(x) = −Σ_{l∈L} µ̃_l(x) log µ̃_l(x) (Eq. 8), where µ̃_l(x) = exp(µ_l(x)) / Σ_{l'∈L} exp(µ_{l'}(x)) is the softmax transformation of µ_l(x). To penalize a subnet whose path probability distribution is close to uniform, we add the entropy w.r.t. each training sample to the log-loss term. Given the training set X and the outputs Y, the penalized log-loss of one subnet PR is L_PR(X, Y) = −Σ_{(x,y)} log P_PR[y | x, Θ, π] + β Σ_{x∈X} h̃(x) (Eq. 9), where β controls the strength of the penalty and P_PR[y | x, Θ, π] is defined in Eq. 3; Θ represents the learned parameters at the convolutional layers and FC layers. The total log-loss for a UDN composed of |PR| subnets is then L(X, Y) = Σ_{i=1}^{|PR|} L_PR_i(X, Y) (Eq. 10). Training a UDN requires finding a set of parameters Θ and π that minimizes the total log-loss defined in Eq. 10. To minimize Eq. 10, we can independently minimize the penalized loss (Eq. 9) of each individual subnet. In Appendix A, we show that the loss function is fully differentiable. As a result, we are able to employ SGD to minimize the loss w.r.t. Θ and π, following the common practice of back-propagation in deep neural networks.
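A minimal sketch of the penalized per-subnet loss (Eqs. 7-9), reusing the hypothetical `path_probs` routine sketched above; the small epsilon inside the logarithms is a numerical-stability assumption.

```python
# Sketch of the entropy-penalized log-loss of one PR subnet.
import torch
import torch.nn.functional as F

def penalized_loss(split_logits, leaf_pi, targets, depth, beta=0.1):
    mu = path_probs(split_logits, depth)            # (batch, num_paths)
    pred = mu @ leaf_pi                             # Eq. 3
    log_loss = F.nll_loss(torch.log(pred + 1e-12), targets)
    mu_tilde = F.softmax(mu, dim=1)                 # Eq. 8 normalization
    entropy = -(mu_tilde * torch.log(mu_tilde + 1e-12)).sum(dim=1)
    return log_loss + beta * entropy.mean()         # Eq. 9
```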
Datasets. We demonstrate the effectiveness of UDN on several benchmark image datasets. Specifically, we train models on the CIFAR-10, CIFAR-100, and SVHN datasets. Given a model trained on one dataset, we treat examples from the other datasets as unknowns when testing the model. In addition, we also sample some classes from the training data as unknowns and test these samples on a model trained on the remaining classes. Due to length constraints, we present the results for the CIFAR-100 and SVHN models in Appendix B and Appendix C. Methodology. We evaluate: CNNs with the weighted sum as a baseline, where the weighted-sum score is used as the confidence measure; OpenMax, ODIN, Softmax, and MC-Dropout, the state-of-the-art unknown rejection methods described in the related work (Sec. 2); our UDN model without the regularization term in the loss function; and UDN-Penalty, our UDN with the regularization term (Eq. 9). The results show that UDN and UDN-Penalty significantly outperform OpenMax, ODIN, MC-Dropout, and Softmax across a variety of unknown rejection experiments, while still preserving the accuracy of classifying objects from the target classes. Experimental Setup. We ran the experiments on 4 P100 GPU instances on Google Cloud. All models are implemented in PyTorch. Hyper-parameter Settings. All networks are trained with mini-batches of size 128. The momentum is set to 0.9 for all models, and the weight decay is set to 0.0001. When testing MNIST images on models trained for other datasets, we increase their number of color channels from 1 to 3 by copying the original gray image 3 times. Evaluation Metrics. Following the unknown rejection literature, we use two metrics to measure how effectively UDN distinguishes known and unknown images, namely the true negative rate (TNR) at 95% true positive rate (TPR), and the AUROC. TNR at 95% TPR can be interpreted as the probability that an unknown image (negative) is correctly recognized when the TPR is 95%. The true positive rate is computed as TPR = TP/(TP + FN), where TP and FN denote true positives (knowns correctly classified as knowns) and false negatives (knowns misclassified as unknowns), respectively. The true negative rate is computed as TNR = TN/(FP + TN), where FP and TN denote false positives (unknowns misclassified as knowns) and true negatives (unknowns correctly recognized), respectively. AUROC is the Area Under the Receiver Operating Characteristic curve, a threshold-independent metric; it can be interpreted as the probability that a positive example is assigned a higher detection score than a negative example. In addition, we measure the accuracy of all methods at classifying knowns into the target classes. We tested all approaches on DenseNet, with all methods using the same DenseNet architecture as ODIN. For our UDN and UDN-Penalty, the output is connected to 10 depth-5 trees. Specifically, the output of the convolutional layers is broadcast to 10 different sets of FC layers; each set contains 3 FC layers, and the final FC layer, with 63 (= 2^(5+1) − 1) hidden nodes, is connected to one tree. The training time of UDN is 9.4 hours, slightly longer than training a DenseNet model (9.1 hours). For the evaluation of ODIN, we directly use the model published by its authors. The temperature parameter T and the perturbation magnitude η used by ODIN are set to 1000 and 0.0014, respectively. We set the dropout rate of MC-Dropout to 0.2 and the number of forward passes to 100. These parameters were set following the authors' suggestions or by parameter tuning.
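For concreteness, a small NumPy sketch of the TNR at 95% TPR metric defined above; the confidence arrays are assumed to hold per-image scores for knowns and unknowns.

```python
# Sketch of TNR at 95% TPR: pick the confidence threshold that keeps 95% of
# known (positive) images, then measure the fraction of unknown (negative)
# images that score below it.
import numpy as np

def tnr_at_tpr95(conf_known, conf_unknown, tpr=0.95):
    # Threshold = the (1 - tpr)-quantile of the knowns' confidences, so that
    # a fraction `tpr` of knowns scores above it.
    threshold = np.quantile(conf_known, 1.0 - tpr)
    return float(np.mean(conf_unknown < threshold))
```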
As shown in Table 1, in almost all cases UDN and UDN-Penalty outperform Softmax, Weighted-Sum, OpenMax, ODIN, and MC-Dropout at rejecting unknowns, without giving up the ability to correctly classify the CIFAR-10 images. Specifically, UDN and UDN-Penalty outperform the other methods by up to 10 points in TNR, and they also significantly outperform the other methods in AUROC. In particular, UDN and UDN-Penalty achieve about 98% AUROC when detecting SVHN images as unknowns, which indicates that our methods can effectively separate knowns and unknowns under a wide range of parameter settings. The performance gain comes from the UDN architecture, in which the confidence for each image is computed as the product of the probabilities produced at the split nodes along a path; this multiplication of probabilities enlarges the confidence gap, making UDN better at rejecting unknowns than the alternative approaches. The only exception is that MC-Dropout performs best at rejecting MNIST images as unknowns, suggesting that MC-Dropout is good at separating unknowns that are simplistic yet very different from the known objects. In addition, UDN achieves slightly better classification accuracy than Softmax, Weighted-Sum, OpenMax, and ODIN, which use the classical DenseNet. The classification accuracy of MC-Dropout is worse than that of the other methods because of the Bayesian inference. Our UDN-Penalty outperforms UDN at rejecting unknowns in all cases. This is because, by penalizing path probability distributions with large entropy, UDN-Penalty leads to larger maximal path probabilities for inliers. At the same time, its classification accuracy decreases slightly, because the regularization introduces overfitting; however, this is effectively alleviated by the ensemble of PR subnets, so the drop in classification accuracy is very small. In this work, we proposed an augmentation of CNNs, UDN, which effectively rejects unknown objects that do not belong to any class seen in the training data. UDN achieves this by replacing the softmax layer of traditional CNNs with a novel tree ensemble that takes products of feature values, balancing the contradictory requirements of accurately classifying knowns and correctly rejecting unknowns in one network structure. A regularization strategy is proposed for UDN to further enhance its unknown rejection capacity. Appendix A: Gradient computation. Given a decision tree, the gradient of the loss L with respect to Θ can be decomposed by the chain rule as ∂L/∂Θ = Σ_n (∂L/∂f_n(x; Θ)) · (∂f_n(x; Θ)/∂Θ), where f_n(x; Θ) denotes the FC output feeding split node n. The derivative of the second part, ∂f_n(x; Θ)/∂Θ, is identical to the back-propagation process of traditional CNN modules and is thus omitted here. The first part, ∂L/∂f_n(x; Θ), is obtained by applying the chain rule again through the routing probabilities d(x_n) and the path probabilities µ_l(x). Similarly, the gradient of the loss L w.r.t. the weights w of the prediction nodes (defined in Eq. 1) can be decomposed by the chain rule through the softmax leaf distributions π_l. Appendix B: Results on CIFAR-100. Similar to the CIFAR-10 experiments, all methods use the same DenseNet architecture as ODIN. For our UDN and UDN-Penalty, the output is connected to 10 trees, each of depth 6. Again, for the evaluation of ODIN, we directly use the model published by its authors; the temperature parameter T and the perturbation magnitude η are set to 1000 and 0.0014, as recommended by the authors. We set the dropout rate of MC-Dropout to 0.2 after parameter tuning, and the number of forward passes to 100, as suggested by the authors. In this set of experiments, we use the images in CIFAR-10 and SVHN as unknowns. Moreover, we also randomly pick 10 classes from the CIFAR-100 training data, use their samples as unknowns, and test them on the model trained on the rest of the CIFAR-100 training data. We run this process 10 times and report the average TNR and AUROC. As shown in Table 2, our UDN significantly outperforms the other methods in all cases, by at least 9 points in TNR and 5 points in AUROC. Appendix C: Results on SVHN. As in the CIFAR-100 experiments above, we tested all approaches based on the same DenseNet architecture. For our UDN and UDN-Penalty, the output is connected to 10 trees, each of depth 4. For ODIN, the temperature parameter T and the perturbation magnitude η are set to 1000 and 0.0014 after parameter tuning. We set the dropout rate of MC-Dropout to 0.5 and the number of forward passes to 100, as suggested by the authors.
We use the images in CIFAR-10 and CIFAR-100 as unknowns. At the same time, we also randomly pick 1 class from the SVHN training data, use its samples as unknowns, and test them on the model trained on the rest of the SVHN training data. We run this process 10 times and report the average TNR and AUROC. As shown in Table 3, in all cases our UDN outperforms the other methods in both TNR and AUROC; in particular, UDN outperforms all other methods by at least 12 points in TNR. In our UDN, the key hyper-parameters are the number of PR subnets and the depth of the binary trees. In general, the depth of the trees depends on the number of classes: the more classes the dataset has, the deeper the trees should be. In our experiments, our method works well in general when the depth of the trees is no smaller than 6 and the number of PR subnets is above 10. When the number of classes in the training data is small, as for MNIST and CIFAR-10, setting the depth of the trees to a smaller value such as 4 also works. A larger tree depth or more subnets does not harm the effectiveness of our UDN method; therefore, these parameters are easy to tune.
A CNN architecture that effectively rejects the unknowns among test objects
Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years. This raises the question whether deep learning methodologies can outperform classical data imputation methods in this domain. However, naive applications of deep learning fall short of giving reliable confidence estimates and lack interpretability. We propose a new deep sequential latent variable model for dimensionality reduction and data imputation. Our modeling assumption is simple and interpretable: the high-dimensional time series has a lower-dimensional representation which evolves smoothly in time according to a Gaussian process. The non-linear dimensionality reduction in the presence of missing data is achieved using a VAE approach with a novel structured variational approximation. We demonstrate that our approach outperforms several classical and deep learning-based data imputation methods on high-dimensional data from the domains of computer vision and healthcare, while additionally improving the smoothness of the imputations and providing interpretable uncertainty estimates. Multivariate medical time series, consisting of multiple correlated univariate time series or channels, give rise to two distinct ways of imputing missing information: by exploiting temporal correlations within each channel, and by exploiting correlations across channels, for example by using lower-dimensional representations of the data. An ideal imputation model for medical time series should take both of these sources of information into account. Another desirable property of such models is a probabilistic interpretation, allowing for uncertainty estimation. Unfortunately, current imputation approaches fall short with respect to at least one of these desiderata. While there are many time-tested statistical methods for multivariate time series analysis (e.g., Gaussian processes), these methods are generally not applicable when features are missing. On the other hand, classical methods for time series imputation often do not take the potentially complex interactions between the different channels into account. Finally, recent work has explored the use of non-linear dimensionality reduction using variational autoencoders for i.i.d. data points with missing values, but this work has not considered temporal data or strategies for sharing statistical strength across time. A more comprehensive analysis of existing approaches and their shortcomings is deferred to the appendix (Sec. A). In this paper, we propose an architecture that combines deep variational autoencoders (VAEs) with Gaussian processes (GPs) to efficiently model the latent dynamics at multiple time scales. Moreover, our inference approach makes use of efficient structured variational approximations, where we fit another multivariate Gaussian process in order to approximate the intractable true posterior. We make the following contributions:
• A new model. We propose a VAE architecture for multivariate time series imputation with a GP prior in the latent space to capture temporal dynamics.
• Efficient inference. We use a structured variational approximation that models posterior correlations in the time domain.
• Benchmarking on real-world data. We carry out extensive comparisons to classical imputation methods as well as state-of-the-art deep learning approaches, and perform experiments on data from different domains.
We propose a novel architecture for missing value imputation in medical time series. Our model can be seen as a way to perform amortized approximate inference on a latent Gaussian process model. Specifically, we combine ideas from VAEs, GPs, Cauchy kernels (Jähnichen et al., 2018), structured variational distributions with efficient inference, and a special ELBO for missing data, and synthesize these ideas into a general framework for missing data imputation on time series. In the following, we outline the assumed generative model and derive our proposed inference scheme. We use standard notation, which is detailed in the appendix (Sec. B.1). In this work, we overcome the problem of defining a suitable GP kernel in the data space with missing observations by instead applying the GP in the latent space of a variational autoencoder, where the encoded feature representations are complete. That is, we assign a latent variable z_t ∈ R^k to every x_t and model temporal correlations in this reduced representation using a GP, z(τ) ∼ GP(m_z(·), k_z(·, ·)). In this way, we decouple the step of filling in missing values and capturing instantaneous correlations between the different feature dimensions from the modeling of dynamical aspects. The graphical model is depicted in the appendix (Fig. S2). In order to model data that varies at multiple time scales, we consider the Cauchy kernel, which has previously been used successfully in the context of robust dynamic topic modeling, where similar multi-scale time dynamics occur (Jähnichen et al., 2018); it corresponds to an infinite mixture of RBF kernels with different length scales. Given the latent time series z_{1:T}, the observations x_t are generated time-point-wise by p(x_t | z_t) = N(g_θ(z_t), σ² I), where g_θ(·) is a potentially nonlinear function parameterized by the parameter vector θ. In our experiments, the function g_θ is implemented by a deep neural network. In order to learn the parameters of the deep generative model described above, and in order to efficiently infer its latent state, we are interested in the posterior distribution p(z_{1:T} | x_{1:T}). Since the exact posterior is intractable, we use variational inference. Furthermore, to avoid inference over per-datapoint (local) variational parameters, we apply inference amortization. To make our variational distribution more expressive and capture the temporal correlations of the data, we employ a structured variational distribution with efficient inference, which leads to an approximate posterior that is also a GP. We approximate the true posterior p(z_{1:T,j} | x_{1:T}) with a multivariate Gaussian variational distribution q(z_{1:T,j} | x_{1:T}) = N(m_j, Λ_j⁻¹), where j indexes the dimensions in the latent space. Our approximation implies that the variational posterior can reflect correlations in time, but breaks the dependencies across the different dimensions in z-space (which is typical in VAE training). We choose the variational family to be the family of multivariate Gaussian distributions in the time domain, where the precision matrix Λ_j is parameterized as a tridiagonal matrix.

Table 1: Performance of the models on Healing MNIST (NLL, MSE, and downstream AUROC; lower NLL/MSE and higher AUROC are better) and SPRITES (MSE).
Model              | NLL           | MSE           | AUROC         | MSE (SPRITES)
Forward imputation | --            | 0.177 ± 0.000 | 0.935 ± 0.000 | 0.028 ± 0.000
VAE                | 0.599 ± 0.002 | 0.232 ± 0.000 | 0.922 ± 0.000 | 0.034 ± 0.000
HI-VAE             | 0.372 ± 0.008 | 0.134 ± 0.003 | 0.962 ± 0.001 | 0.035 ± 0.000
GP-VAE (proposed)  | 0.341 ± 0.007 | 0.117 ± 0.002 | 0.960 ± 0.002 | 0.002 ± 0.000

Samples from q can thus be generated in O(T) time, as opposed to the O(T³) time complexity for a full-rank matrix.
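A minimal SciPy sketch of why the tridiagonal parameterization permits O(T) sampling: the Cholesky factor of a tridiagonal precision matrix is bidiagonal, so a banded triangular solve turns white noise into a sample from N(m, Λ⁻¹). The banded-storage layout follows SciPy's conventions; positive definiteness of Λ is assumed, and the names are illustrative.

```python
# Sketch of O(T) sampling from N(m, Lambda^{-1}) with tridiagonal Lambda.
import numpy as np
from scipy.linalg import cholesky_banded, solve_banded

def sample_tridiag_precision(m, diag, off_diag, rng=np.random.default_rng()):
    """m: mean (T,); diag: main diagonal of Lambda (T,); off_diag: (T-1,)."""
    T = len(m)
    ab = np.zeros((2, T))
    ab[0, 1:] = off_diag        # superdiagonal band
    ab[1, :] = diag             # main diagonal
    U = cholesky_banded(ab)     # Lambda = U^T U with bidiagonal U
    eps = rng.standard_normal(T)
    x = solve_banded((0, 1), U, eps)  # U x = eps  =>  x ~ N(0, Lambda^{-1})
    return m + x
```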
Moreover, compared to a fully factorized variational approximation, the number of variational parameters is merely doubled. Note that while the precision matrix is sparse, the covariance matrix can still be dense, allowing it to reflect long-range dependencies in time. We amortize the inference over m_j and Λ_j using an inference network q_ψ(·). As in standard VAE training, the parameters of the generative model and of the inference network can be jointly trained by optimizing the evidence lower bound (ELBO). As detailed in the appendix (Sec. A), we evaluate the ELBO only on the observed features of the data, since the remaining features are unknown, and set these missing features to a fixed value (zero) during inference. We also include an additional tradeoff parameter β in our ELBO, similar to the β-VAE. This parameter balances the influence of the likelihood on the observed data features against that of the latent prior. Our training objective is thus the right-hand side of this bound.

We performed experiments on the benchmark data set Healing MNIST, which combines the classical MNIST data set with properties common to medical time series, on the SPRITES data set, and on a real-world medical data set from the 2012 Physionet Challenge. We compared our model against conventional single imputation methods, GP-based imputation, VAE-based methods that are not specifically designed to handle temporal data, and modern state-of-the-art deep learning methods for temporal data imputation. We found strong quantitative (Tab. 1, 2) and qualitative (Fig. 1, 2) evidence that our proposed model outperforms most baseline methods in terms of imputation quality on all three tasks and performs comparably to the state of the art (BRITS) on the medical data. This extends even to different missingness mechanisms, as described in the appendix (Tab. S1).

Figure 1 (partial caption): BRITS (red) and forward imputation (green) yield single imputations, while the GP-VAE (blue) allows drawing samples from the posterior. The GP-VAE produces smoother curves, reducing noise from the original input, and exhibits an interpretable posterior uncertainty.

Table 2: Performance of the different models on the Physionet data set in terms of AUROC of a logistic regression trained on the imputed time series. We observe that the proposed model performs comparably to the state of the art.
Mean imputation: 0.703 ± 0.000
Forward imputation: 0.710 ± 0.000
GP: 0.704 ± 0.007
VAE: 0.677 ± 0.002
HI-VAE: 0.686 ± 0.010
GRUI-GAN: 0.702 ± 0.009
BRITS: 0.742 ± 0.008
GP-VAE (proposed): 0.730 ± 0.006

For the real medical time series task, no ground-truth data exists, so we cannot report the mean squared error (MSE) or the negative log-likelihood (NLL). Following prior work, we instead use a downstream classifier as a proxy measure. We use a linear SVM to predict mortality based on the imputed time series, since this was also one of the original tasks in the 2012 Physionet Challenge. We find that this proxy measure correlates well with the likelihood in cases where ground-truth data is available (see Healing MNIST AUROC in Tab. 1), lending credence to the metric. More details about these experiments can be found in the appendix (Sec. C).

Classical statistical approaches. The problem of missing values has been a long-standing challenge in many time series applications, especially in the field of medicine. The earliest approaches to deal with this problem often relied on heuristics, such as mean imputation or forward imputation.
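For concreteness, a minimal sketch of these two heuristics for a multivariate series with an observation mask (function names and the zero-initialization before the first observation are our own choices):

```python
import numpy as np

def forward_impute(x, mask):
    """Carry the last observed value forward, per channel (zeros before the
    first observation); x: (T, d), mask: True where observed."""
    out = x.copy()
    last = np.zeros(x.shape[1])
    for t in range(x.shape[0]):
        last = np.where(mask[t], x[t], last)
        out[t] = np.where(mask[t], x[t], last)
    return out

def mean_impute(x, mask):
    """Replace missing entries by each channel's observed mean."""
    observed = np.where(mask, x, np.nan)
    mu = np.nanmean(observed, axis=0)
    return np.where(mask, x, mu)
```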
Despite their simplicity, these methods are still widely applied today due to their efficiency and interpretability. Orthogonal to these ideas, methods along the lines of expectation-maximization (EM) have been proposed, but they often require additional modeling assumptions.

Bayesian methods. When it comes to estimating likelihoods and uncertainties relating to the imputations, Bayesian methods, such as Gaussian processes (GPs), have a clear advantage over non-Bayesian methods such as single imputation. There has been much recent work in making these methods more expressive and incorporating prior knowledge from the domain (e.g., medical time series) (Fortuin and Rätsch, 2019), or adapting them to work on discrete domains, but their widespread adoption is hindered by their limited scalability and the challenges in designing kernels that are robust to missing values. Our latent GP prior bears certain similarities to the GP latent variable model (GP-LVM), but in contrast to this line of work, we propose an efficient amortized inference scheme.

Deep learning techniques. Another avenue of research in this area uses deep learning techniques, such as variational autoencoders (VAEs) or generative adversarial networks (GANs). It should be noted that VAEs allow for tractable likelihoods, while GANs generally do not and have to rely on additional optimization processes to find latent representations of a given input. Unfortunately, none of these models explicitly take the temporal dynamics of time series data into account. Conversely, there are deep probabilistic models for time series (e.g., Krishnan et al., 2017), but those do not explicitly handle missing data. There are also some VAE-based imputation methods that are designed for a setting where the data is complete at training time and the missingness only occurs at test time. We do not consider this setting in our work.

HI-VAE. Our approach borrows some ideas from the HI-VAE. This model deals with missing data by defining an ELBO whose reconstruction error term only sums over the observed part of the data. For inference, the incomplete data are filled with arbitrary values (e.g., zeros) before they are fed into the inference network, which induces an unavoidable bias. The main difference to our approach is that the HI-VAE was not formulated for sequential data and therefore does not exploit temporal information in the imputation task.

Deep learning for time series imputation.
While the mentioned deep learning approaches are very promising, most of them do not take the time series nature of the data directly into account; that is, they do not model the temporal dynamics of the data when dealing with missing values.

[Figure S2: Graphical model, with a plate over t ∈ {1, . . ., T}.]

To the best of our knowledge, the only deep generative model for missing value imputation that does account for the time series nature of the data is the GRUI-GAN, which we describe in Sec. 3. Another deep learning model for time series imputation is BRITS, which uses recurrent neural networks (RNNs). It is trained in a self-supervised way, predicting the observations in a time series sequentially. We compare against both of these models in our experiments.

Other related work. Our proposed model combines several ideas from the domains of Bayesian deep learning and classical probabilistic modeling; thus, removing elements from our model naturally relates it to other approaches. For example, removing the latent GP for modeling dynamics as well as our proposed structured variational distribution results in the HI-VAE described above. Furthermore, our idea of using a latent GP in the context of a deep generative model bears similarities to the GPPVAE, but note that the GPPVAE was not proposed to model time series data and does not take missing values into account. Lastly, the GP prior with the Cauchy kernel is reminiscent of Jähnichen et al. (2018), and the structured variational distribution is similar to the one used by Bamler and Mandt (2017b) in the context of modeling word embeddings over time; none of these works used amortized inference.

We choose the variational family to be the family of multivariate Gaussian distributions in the time domain, where the precision matrix Λ_j is parameterized in terms of a product of bidiagonal matrices,

Λ_j := B_j^T B_j, with {B_j}_{tt'} = b^j_{tt'} if t' ∈ {t, t + 1}, and 0 otherwise.

Above, the b^j_{tt'} are local variational parameters and B_j is an upper triangular band matrix. Similar structured distributions were also employed by Bamler and Mandt (2017a). This parameterization automatically leads to Λ_j being positive definite, symmetric, and tridiagonal. Samples from q can thus be generated in time linear in T, as opposed to the cubic time complexity for a full-rank matrix. Moreover, compared to a fully factorized variational approximation, the number of variational parameters is merely doubled. Note that while the precision matrix is sparse, the covariance matrix can still be dense, allowing it to reflect long-range dependencies in time. Instead of optimizing m and B separately for every data point, we amortize the inference through an inference network with parameters ψ that computes the variational parameters based on the inputs as (m, B) = h_ψ(x^o_{1:T}). In the following, we accordingly denote the variational distribution as q_ψ(·).
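A sketch of one way the amortized output (m, B) can be unpacked per latent dimension; the (T × 3k) layout it assumes is described in the architecture paragraph below, while the softplus used to keep the diagonal positive is our own illustrative choice:

```python
import numpy as np

T, k_dim = 30, 4
h = np.random.randn(T, 3 * k_dim)    # stand-in for the inference network output

m = h[:, :k_dim]                                    # per-step means m_t, (T, k)
diag_b = np.log1p(np.exp(h[:, k_dim:2 * k_dim]))    # softplus keeps b_{t,t} > 0
off_b = h[:, 2 * k_dim:]                            # b_{t,t+1}; last row unused

# Assemble one upper-bidiagonal matrix B_j per latent dimension j.
B = np.zeros((k_dim, T, T))
for j in range(k_dim):
    B[j] = np.diag(diag_b[:, j]) + np.diag(off_b[:-1, j], k=1)
```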
Following standard VAE training, the parameters of the generative model θ and of the inference network ψ can be jointly trained by optimizing the evidence lower bound (ELBO). As above (Sec. A), we evaluate the ELBO only on the observed features of the data, since the remaining features are unknown, and set these missing features to a fixed value (zero) during inference. Our training objective is thus the right-hand side of this bound.

Neural network architectures. We use a convolutional neural network (CNN) as an inference network and a fully connected multilayer perceptron (MLP) as a generative network. The inference network convolves over the time dimension of the input data and allows for sequences of variable lengths. It consists of a number of convolutional layers that integrate information from neighboring time steps into a joint representation using a fixed receptive field (see Figure S1). The CNN outputs a tensor of size R^{T×3k}, where k is the dimensionality of the latent space. Every row corresponds to a time step t and contains 3k parameters, which are used to predict the mean vector m_t as well as the diagonal and off-diagonal elements {b^j_{t,t}, b^j_{t,t+1}}_{j=1:k} that characterize B at the given time step. More details about the network structure are given in the appendix (Sec. C).

Appendix C. Experimental details

C.1. Baseline methods

Forward imputation and mean imputation. Forward and mean imputation are so-called single imputation methods, which means that they do not attempt to fit a distribution over possible values for the missing features, but only predict one estimate. Forward imputation always predicts the last observed value for any given feature, while mean imputation predicts the mean of all the observations of the feature in a given time series.

Gaussian process in data space. One option to deal with missingness in multivariate time series is to fit independent Gaussian processes to each channel. As discussed previously (Sec. 2.1), this ignores the correlation between channels. The missing values are then imputed by taking the mean of the respective posterior of the GP for that feature.

VAE and HI-VAE. The VAE and HI-VAE are fit to the data using the same training procedure as the proposed GP-VAE model. The VAE uses a standard ELBO that is defined over all the features, while the HI-VAE uses the masked ELBO described above, which is only evaluated on the observed part of the feature space. During inference, missing features are filled with constant values, such as zero.

GRUI-GAN. The GRUI-GAN uses a recurrent neural network (RNN), namely a gated recurrent unit (GRU). Once the network is trained, a time series is imputed by optimizing the latent vector in the input space of the generator, such that the generator's output on the observed features is closest to the true values.

Healing MNIST. Time series with missing values play a crucial role in the medical field, but are often hard to obtain. Prior work generated a data set called Healing MNIST, which is designed to reflect many properties that one also finds in real medical data. We benchmark our method on a variant of this data set, which consists of short sequences of moving MNIST digits that rotate randomly between frames. The analogy to healthcare is that every frame may represent the collection of measurements that describe a patient's health state, which contains many missing measurements at each moment in time. The temporal evolution represents the non-linear evolution of the patient's health state.
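A hedged sketch of generating such corrupted sequences, under the ~60% missingness and normally distributed rotation increments quantified in the next paragraph (the rates, rotation spread, and zero-fill convention here are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import rotate

def healing_mnist_like(frames, missing_frac=0.6, rot_std=15.0, seed=0):
    """Rotate each frame by a cumulative random angle and drop ~60% of pixels.
    frames: (T, 28, 28) array in [0, 1]; returns corrupted frames and masks."""
    rng = np.random.default_rng(seed)
    out, masks = [], []
    angle = 0.0
    for f in frames:
        angle += rng.normal(0.0, rot_std)          # normally distributed increments
        g = rotate(f, angle, reshape=False, order=1)
        mask = rng.random(g.shape) > missing_frac  # True = observed
        out.append(np.where(mask, g, 0.0))         # zero-fill missing pixels
        masks.append(mask)
    return np.stack(out), np.stack(masks)
```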
The image frames contain around 60% missing pixels, and the rotations between two consecutive frames are normally distributed. The benefit of this data set is that we know the ground truth of the imputation task. We compare our model against a standard VAE (no latent GP and standard ELBO over all features), the HI-VAE, as well as mean imputation and forward imputation. The models were trained on time series of digits from the Healing MNIST training set (50,000 time series) and tested on digits from the Healing MNIST test set (10,000 time series). Negative log-likelihoods on the ground truth values of the missing pixels and mean squared errors (MSE) are reported in Table 1, and qualitative results are shown in Figure 1. To assess the usefulness of the imputations for downstream tasks, we also trained a linear classifier on the imputed MNIST digits to predict the digit class and measured its performance in terms of area under the receiver operating characteristic curve (AUROC) (Tab. 1).

Our approach outperforms the baselines in terms of likelihood and MSE. The reconstructions (Fig. 1) reveal the benefits of the GP-VAE approach: related approaches yield unstable reconstructions over time, while our approach offers more stable reconstructions, using temporal information from neighboring frames. Moreover, our model also yields the most useful imputations for downstream classification in terms of AUROC. The downstream classification performance correlates well with the test likelihood on the ground truth data, supporting the intuition that it is a good proxy measure in cases where the ground truth likelihood is not available. We also observe that our model outperforms the baselines on different missingness mechanisms (Tab. S1).

Table S1: Performance of different models on Healing MNIST data with artificial missingness and different missingness mechanisms. We report mean squared error (lower is better). The reported values are means and their respective standard errors over the test set.
Mechanism | Mean imp. | Forward imp. | VAE | HI-VAE | GP-VAE (proposed)
Random | 0.069 ± 0.000 | 0.099 ± 0.000 | 0.066 ± 0.000 | 0.042 ± 0.000 | 0.037 ± 0.000
Spatial | 0.069 ± 0.000 | 0.099 ± 0.000 | 0.101 ± 0.000 | 0.060 ± 0.000 | 0.052 ± 0.000
Temporal+ | 0.091 ± 0.000 | 0.116 ± 0.000 | 0.065 ± 0.000 | 0.042 ± 0.000 | 0.037 ± 0.000
Temporal− | 0.064 ± 0.000 | 0.093 ± 0.000 | 0.066 ± 0.000 | 0.042 ± 0.000 | 0.037 ± 0.000
MNAR | 0.178 ± 0.000 | 0.174 ± 0.000 | 0.152 ± 0.001 | 0.088 ± 0.000 | 0.078 ± 0.000

To assess our model's performance on more complex data, we applied it to the SPRITES data set, which has previously been used with sequential autoencoders. The data set consists of 9,000 sequences of animated characters with different clothes, hair styles, and skin colors, performing different actions. Each frame has a size of 64 × 64 pixels, and each time series features 8 frames. We again introduced about 60% of missing pixels and compared the same methods as above. The results are reported in Table 1, and example reconstructions are shown in Figure 1. As in the previous experiment, our model outperforms the baselines in terms of likelihood and MSE and also yields the most convincing reconstructions. The HI-VAE seems to suffer from posterior collapse in this setting, which might be due to the large dimensionality of the input data.

We also applied our model to the data set from the 2012 Physionet Challenge. The data set contains 12,000 patients who were monitored in the intensive care unit (ICU) for 48 hours each.
At each hour, there is a measurement of 36 different variables (heart rate, blood pressure, etc.), any number of which might be missing. We again compare our model against the standard VAE and HI-VAE, as well as a GP fit feature-wise in the data space and the GRUI-GAN model, which reported state-of-the-art imputation performance.

The main challenge is the absence of ground truth data for the missing values. This cannot easily be circumvented by introducing additional missingness, since the mechanism by which measurements were omitted is not random, and the data set is already very sparse, with about 90% of the features missing. To overcome this issue, the authors of the GRUI-GAN proposed a downstream task as a proxy for the imputation quality. They chose the task of mortality prediction, which was one of the main tasks of the Physionet Challenge on this data set, and measured the performance in terms of AUROC. In this paper, we adopt this measure.

For the sake of interpretability, we used a linear support vector machine (SVM) as a downstream classification model. This model tries to optimally separate the whole time series in the input space using a linear hyperplane. The choice of model follows the intuition that under a perfect imputation similar patients should be located close to each other in the input space, while that is not necessarily the case when features are missing, or when the imputation is poor. Note that it is unrealistic to ask for high accuracies in this task, as the clean data are unlikely to be perfectly separable. As seen in Table 1, this proxy measure correlates well with the ground truth likelihood. The performances of the different methods under this measure are reported in Table 2. Our model outperforms all baselines except BRITS, including the GRUI-GAN, which provides strong evidence that our model is well suited for real-world medical time series imputation.
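A minimal sketch of this proxy evaluation with a linear SVM (the data here is a random stand-in, and the train/test split, regularization, and flattening are illustrative choices):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score

# Toy stand-in: imputed time series flattened to vectors, binary mortality labels.
X = np.random.randn(200, 48 * 36)      # 48 hours x 36 variables per patient
y = np.random.randint(0, 2, size=200)

clf = LinearSVC(C=1.0, max_iter=5000).fit(X[:150], y[:150])
scores = clf.decision_function(X[150:])          # signed distance to hyperplane
print("proxy AUROC:", roc_auc_score(y[150:], scores))
```

The intuition above is that the better the imputation, the more linearly separable the patients become, so the held-out AUROC tracks imputation quality.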
We perform amortized variational inference on a latent Gaussian process model to achieve superior imputation performance on multivariate time series with missing data.
1,132
scitldr
In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations. Our experiments survey thousands of models with different architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets. We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the input-output Jacobian of the network, and that this correlates well with generalization. We further establish that factors associated with poor generalization -- such as full-batch training or using random labels -- correspond to higher sensitivity, while factors associated with good generalization -- such as data augmentation and ReLU non-linearities -- give rise to more robust functions. Finally, we demonstrate how the input-output Jacobian norm can be predictive of generalization at the level of individual test points.

The empirical success of deep learning has thus far eluded interpretation through existing lenses of computational complexity BID2, numerical optimization BID4 BID8 BID5, and classical statistical learning theory: neural networks are highly non-convex models with extreme capacity that train fast and generalize well. In fact, not only do large networks demonstrate good test performance, but larger networks often generalize better, counter to what would be expected from classical measures, such as VC dimension. This phenomenon has been observed in targeted experiments BID29, historical trends of Deep Learning competitions BID3, and in the course of this work (Figure 1).

This observation is at odds with Occam's razor, the principle of parsimony, as applied to the intuitive notion of function complexity (see §A.2 for extended discussion). One resolution of the apparent contradiction is to examine the complexity of functions in conjunction with the input domain. f(x) = x^3 sin(x) may seem decisively more complex than g(x) = x. But restricted to a narrow input domain of [−0.01, 0.01] they appear differently: g remains a linear function of the input, while f(x) = O(x^4) resembles the constant 0. In this work we find that such intuition applies to neural networks, which behave very differently close to the data manifold than away from it (§4.1).

We therefore analyze the complexity of models through their capacity to distinguish different inputs in the neighborhood of datapoints, or, in other words, their sensitivity. We study two simple metrics presented in §3 and find that one of them, the norm of the input-output Jacobian, correlates with generalization in a very wide variety of scenarios.

Figure 1: 2160 networks trained to 100% training accuracy on CIFAR10 (see §A.5.5 for experimental details). Left: while increasing capacity of the model allows for overfitting (top), very few models do, and a model with the maximum parameter count yields the best generalization (bottom right). Right: train loss does not correlate well with generalization, and the best model (minimum along the y-axis) has training loss many orders of magnitude higher than models that generalize worse (left).
This observation rules out underfitting as the reason for poor generalization in low-capacity models. See BID29 for similar findings in the case of achievable zero training loss.

This work considers sensitivity only in the context of image classification tasks. We interpret the observed correlation with generalization as an expression of a universal prior on (natural) image classification functions that favors robustness (see §A.2 for details). While we expect a similar prior to exist in many other perceptual settings, care should be taken when extrapolating our findings to tasks where such a prior may not be justified (e.g. weather forecasting).

We first define sensitivity metrics for fully-connected neural networks in §3. We then relate them to generalization through a sequence of experiments of increasing level of nuance:
• In §4.1 we begin by comparing the sensitivity of trained neural networks on and off the training data manifold, i.e. in the regions of best and typical (over the whole input space) generalization.
• In §4.2 we compare sensitivity of identical trained networks that differ in a single hyper-parameter which is important for generalization.
• Further, §4.3 associates sensitivity and generalization in an unrestricted manner, i.e. comparing networks of a wide variety of hyper-parameters such as width, depth, non-linearity, weight initialization, optimizer, learning rate and batch size.
• Finally, §4.4 explores how predictive sensitivity (as measured by the Jacobian norm) is for individual test points.

The novelty of this work can be summarized as follows:
• Study of the behavior of trained neural networks on and off the data manifold through sensitivity metrics (§4.1).
• Evaluation of sensitivity metrics on trained neural networks in a very large-scale experimental setting and finding that they correlate with generalization (§4.2, §4.3, §4.4).

§2 puts our work in the context of related research studying complexity, generalization, or sensitivity metrics similar to ours. We analyze complexity of fully-connected neural networks for the purpose of model comparison through the following sensitivity measures (see §3 for details):
• estimating the number of linear regions a network splits the input space into;
• measuring the norm of the input-output Jacobian within such regions.

A few prior works have examined measures related to the ones we consider. In particular, BID33 and BID24 have investigated the expressive power of fully-connected neural networks built out of piecewise-linear activation functions. Such functions are themselves piecewise-linear over their input domain, so that the number of linear regions into which input space is divided is one measure of how nonlinear the function is. A function with many linear regions has the capacity to build complex, flexible decision boundaries. It was argued in BID33 BID24 that an upper bound to the number of linear regions scales exponentially with depth but polynomially in width, and a specific construction was examined. Later work derived a tight analytic bound and considered the number of linear regions for generic networks with random weights, as would be appropriate, for instance, at initialization. However, the evolution of this measure after training has not been investigated before. We examine a related measure, the number of hidden unit transitions along one-dimensional trajectories in input space, for trained networks.
Further motivation for this measure is discussed in §3.

Another perspective on function complexity can be gained by studying robustness to perturbations of the input. Indeed, BID36 demonstrate on a toy problem how complexity as measured by the number of parameters may be of limited utility for model selection, while measuring the output variation allows the invocation of Occam's razor. In this work we apply related ideas to a large-scale practical context of neural networks with up to a billion free parameters (§4.2, §4.3) and discuss potential ways in which sensitivity permits the application of Occam's razor to neural networks (§A.2). Prior work provides theoretical support for the relevance of robustness, as measured by the input-output Jacobian, to generalization, deriving bounds for the generalization gap in terms of the Jacobian norm within the framework of algorithmic robustness. Our results provide empirical support for these findings through an extensive number of experiments. Several other recent papers have also focused on deriving tight generalization bounds for neural networks BID1 BID6 BID31. We do not propose theoretical bounds in this paper but establish a correlation between our metrics and generalization in a substantially larger experimental setting than undertaken in prior works.

In the context of regularization, increasing robustness to perturbations is a widely-used strategy: data augmentation, noise injection BID15, weight decay BID19, and max-pooling all indirectly reduce sensitivity of the model to perturbations, while BID37 explicitly penalize the Frobenius norm of the Jacobian in the training objective. In this work we relate several of the above-mentioned regularizing techniques to sensitivity, demonstrating through extensive experiments that improved generalization is consistently coupled with better robustness as measured by a single metric, the input-output Jacobian norm (§4.2). While some of these findings confirm common-sense expectations (random labels increase sensitivity, Figure 4, top row), others challenge our intuition of what makes a neural network robust (ReLU-networks, with unbounded activations, tend to be more robust than saturating HardSigmoid-networks, Figure 4, third row). One of our findings demonstrates an inductive bias towards robustness in stochastic mini-batch optimization compared to full-batch training (Figure 4, bottom row). Interpreting this regularizing effect in terms of some measure of sensitivity, such as curvature, is not new (BID13; BID16), yet we provide a novel perspective by relating it to reduced sensitivity to inputs instead of parameters.

The inductive bias of SGD ("implicit regularization") has been previously studied in BID29, where it was shown through rigorous experiments how increasing the width of a single-hidden-layer network improves generalization, and an analogy with matrix factorization was drawn to motivate constraining the norm of the weights instead of their count. BID30 further explored several weight-norm based measures of complexity that do not scale with the size of the model. One of our measures, the Frobenius norm of the Jacobian, is of a similar nature (since the Jacobian matrix size is determined by the task and not by a particular network architecture). However, this particular metric was not considered, and, to the best of our knowledge, we are the first to evaluate it in a large-scale setting (e.g. our networks are up to 65 layers deep and up to 2^16 units wide).
Sensitivity to inputs has attracted a lot of interest in the context of adversarial examples. Several attacks locate points of poor generalization in the directions of high sensitivity of the network BID32 BID25, while certain defences regularize the model by penalizing sensitivity BID10 or employing decaying (hence more robust) non-linearities BID20. However, work on adversarial examples relates highly specific perturbations to a similarly specific kind of generalization (i.e. performance on a very small, adversarial subset of the data manifold), while this paper analyzes average-case sensitivity (§3) and typical generalization.

We propose two simple measures of sensitivity for a fully-connected neural network (without biases) f: R^d → R^n with respect to its input x ∈ R^d (the output being unnormalized logits of the n classes). Assume f employs a piecewise-linear activation function, like ReLU. Then f itself, as a composition of linear and piecewise-linear functions, is a piecewise-linear map, splitting the input space R^d into disjoint regions, implementing a single affine mapping on each. Then we can measure two aspects of sensitivity by answering:
1. How does the output of the network change as the input is perturbed within the linear region?
2. How likely is the linear region to change in response to change in the input?

We quantify these qualities as follows:

1. For a local sensitivity measure we adopt the Frobenius norm of the class probabilities Jacobian

J(x) = ∂f_σ(x)/∂x^T,

where f_σ = σ ∘ f, with σ being the softmax function BID39. (The norm of the Jacobian with respect to the logits, ∂f(x)/∂x^T, experimentally turned out less predictive of test performance (not shown); see §A.3 for discussion of why the softmax Jacobian is related to generalization.) Given points of interest x_test, we estimate the sensitivity of the function in those regions with the average Jacobian norm,

E_{x ∈ x_test} [ ||J(x)||_F ],

that we will further refer to as simply the "Jacobian norm". Note that this does not require the labels for x_test.

Interpretation. The Frobenius norm ||J(x)||_F = (Σ_{ij} J_{ij}(x)²)^{1/2} estimates the average-case sensitivity of f_σ around x. Indeed, consider an infinitesimal Gaussian perturbation ∆x ∼ N(0, I): the expected magnitude of the output change is then

E_{∆x} [ ||J(x) ∆x||₂² ] = ||J(x)||_F².

2. To detect a change in linear region (further called a "transition"), we need to be able to identify it. We do this analogously to prior work on counting linear regions. For a network with piecewise-linear activations, we can, given an input x, assign a code to each neuron in the network f that identifies the linear region of the pre-activation of that neuron. E.g. each ReLU unit will have 0 or 1 assigned to it if the pre-activation value is less or greater than 0 respectively. Similarly, a ReLU6 unit (see definition in §A.4) will have a code of 0, 1, or 2 assigned, since it has 3 linear regions. Then, a concatenation of codes of all neurons in the network (denoted by c(x)) uniquely identifies the linear region of the input x (see §A.1.1 for discussion of edge cases). Given this encoding scheme, we can detect a transition by detecting a change in the code.
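A minimal sketch of this encoding and of the resulting transition count for a bias-free ReLU network (the trajectory and weights are toy stand-ins; the count it returns is formalized as t(x) in the next paragraph):

```python
import numpy as np

def activation_code(x, weights):
    """Concatenated 0/1 code over all ReLU units of a bias-free network;
    weights: list of hidden-layer matrices. The code identifies the linear
    region that x falls into."""
    code, h = [], x
    for W in weights:
        h = W @ h
        code.append(h > 0)
        h = np.maximum(h, 0.0)
    return np.concatenate(code)

def count_transitions(trajectory, weights):
    """Summed Hamming distance between codes of consecutive samples on a
    closed trajectory (assumes sampling dense enough that codes change by at
    most one unit between neighbors)."""
    codes = [activation_code(z, weights) for z in trajectory]
    k = len(codes)
    return sum(int(np.sum(codes[i] != codes[(i + 1) % k])) for i in range(k))
```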
We then sample k equidistant points z_0, ..., z_{k−1} on a closed one-dimensional trajectory T(x) (generated from a data point x and lying close to the data manifold; see below for details) and count transitions t(x) along it to quantify the number of linear regions:

t(x) = Σ_{i=0}^{k−1} ||c(z_{(i+1) mod k}) − c(z_i)||₁ ≈ ∮_{T(x)} ||∂c(z)/∂z · dz||₁,

where the norm of the directional derivative ||∂c(z)/∂z · dz||₁ amounts to a Dirac delta function at each transition (see §A.1.2 for further details). By sampling multiple such trajectories around different points, we estimate the sensitivity metric

E_{x ∈ x_test} [ t(x) ],

that we will further refer to as simply "transitions" or "number of transitions." To assure the sampling trajectory T(x_test) is close to the data manifold (since this is the region of interest), we construct it through horizontal translations of the image x_test in pixel space (Figure App.7, right). We similarly augment our training data with horizontal and vertical translations in the corresponding experiments (Figure 4, second row). As earlier, this metric does not require knowing the label of x_test.

Interpretation. We can draw a qualitative parallel between the number of transitions and the curvature of the function. One measure of curvature of a function f is the total norm of the change of its first derivative f' along a path:

∮_{T(x)} ||df'(z)||_F.

A piecewise-linear function f has a constant first derivative f' everywhere except for the transition boundaries. Therefore, for a sufficiently large k, curvature can be expressed as

∮_{T(x)} ||df'(z)||_F ≈ Σ_{i=0}^{k−1} ||f'(z_{(i+1) mod k}) − f'(z_i)||_F,

where z_0, ..., z_{k−1} are equidistant samples on T(x). This sum is similar to t(x) as defined in Equation 1, but quantifies the amount of change in between two linear regions in a non-binary way. However, estimating it on a densely sampled trajectory is a computationally-intensive task, which is one reason we instead count transitions. As such, on a qualitative level, the two metrics (Jacobian norm and number of transitions) track the first and second order terms of the Taylor expansion of the function.

Above we have defined two sensitivity metrics to describe the learned function around the data, on average. In §4.1 we analyze these measures on and off the data manifold by simply measuring them along circular trajectories in input space that intersect the data manifold at certain points, but generally lie away from it (Figure 2, left). In the following subsections (§4.2, §4.3) each study analyzes performance of a large number (usually thousands) of fully-connected neural networks having different hyper-parameters and optimization procedures. Except where specified, we include only models which achieve 100% training accuracy. This allows us to study generalization disentangled from properties like expressivity and trainability, which are outside the scope of this work. In order to efficiently evaluate the compute-intensive metrics (§3) in a very wide range of hyper-parameter settings (see e.g. §A.5.5) we only consider fully-connected networks. Extending the investigation to more complex architectures is left for future work.

We analyze the behavior of a trained neural network near and away from training data. We do this by comparing sensitivity of the function along 3 types of trajectories:
1. A random ellipse. This trajectory is extremely unlikely to pass anywhere near the real data, and indicates how the function behaves in random locations of the input space that it never encountered during training.
2. An ellipse passing through three training points of different class (Figure 2, left).
This trajectory does pass through the three data points, but in between it traverses images that are linear combinations of different-class images, and are expected to lie outside of the natural image space. Sensitivity of the function along this trajectory allows comparison of its behavior on and off the data manifold, as it approaches and moves away from the three anchor points.
3. An ellipse through three training points of the same class. This trajectory is similar to the previous one, but, given the dataset used in the experiment (MNIST), is expected to traverse overall closer to the data manifold, since linear combinations of the same digit are more likely to resemble a realistic image. Comparing transition density along this trajectory to the one through points of different classes allows further assessment of how sensitivity changes in response to approaching the data manifold.

We find that, according to both the Jacobian norm and transitions metrics, functions exhibit much more robust behavior around the training data (Figure 2, center and right). We further visualize this effect in 2D in Figure 3, where we plot the transition boundaries of the last (pre-logit) layer of a neural network before and after training. After training we observe that training points lie in regions of low transition density. The observed contrast between the neural network behavior near and away from data further strengthens the empirical connection we draw between sensitivity and generalization in §4.2, §4.3 and §4.4; it also confirms that, as mentioned in §1, if a certain quality of a function is to be used for model comparison, the input domain should always be accounted for.

In §4.1 we established that neural networks implement more robust functions in the vicinity of the training data manifold than away from it. We now consider the more practical context of model selection. Given two perfectly trained neural networks, does the model with better generalization implement a less sensitive function?

Figure 2: A 100%-accurate (on training data) MNIST network implements a function that is much more stable near training data than away from it. Left: depiction of a hypothetical circular trajectory in input space passing through three digits of different classes, highlighting the training point locations (π/3, π, 5π/3). Center: Jacobian norm as the input traverses an elliptical trajectory. Sensitivity drops significantly in the vicinity of training data while remaining uniform along random ellipses. Right: transition density behaves analogously. According to both metrics, as the input moves between points of different classes, the function becomes less stable than when it moves between points of the same class. This is consistent with the intuition that linear combinations of different digits lie further from the data manifold than those of same-class digits (which need not hold for more complex datasets). See §A.5.2 for experimental details.

Figure 3: Transition boundaries of the last (pre-logits) layer over a 2-dimensional slice through the input space defined by 3 training points (indicated by inset squares). Left: boundaries before training. Right: after training, transition boundaries become highly non-isotropic, with training points lying in regions of lower transition density. See §A.5.3 for experimental details.
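The Jacobian-norm comparisons in the sections that follow boil down to computations like this sketch for a bias-free two-layer ReLU network with softmax outputs (the architecture, widths, and initialization are illustrative stand-ins, not the experimental configuration):

```python
import numpy as np

def softmax(f):
    e = np.exp(f - f.max())
    return e / e.sum()

def jacobian_frobenius(x, W1, W2):
    """||d softmax(f(x)) / dx||_F for a bias-free two-layer ReLU network."""
    h = W1 @ x
    a = np.maximum(h, 0.0)
    f = W2 @ a
    sigma = softmax(f)
    dfdx = W2 @ (np.diag((h > 0).astype(float)) @ W1)     # logit Jacobian, (n, d)
    J = (np.diag(sigma) - np.outer(sigma, sigma)) @ dfdx  # softmax chain rule
    return np.linalg.norm(J, "fro")

d, width, n = 784, 100, 10
W1 = np.random.randn(width, d) / np.sqrt(d)
W2 = np.random.randn(n, width) / np.sqrt(width)
x = np.random.randn(d)
print(jacobian_frobenius(x, W1, W2))
```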
We study approaches in the machine learning community that are commonly believed to influence generalization (Figure 4, top to bottom):
• random labels;
• data augmentation;
• ReLUs;
• full-batch training.
We find that in each case, the change in generalization is coupled with the respective change in sensitivity (i.e. lower sensitivity corresponds to smaller generalization gap) as measured by the Jacobian norm (and almost always for the transitions metric).

Figure 4: Improvement in generalization (left column) due to using correct labels, data augmentation, ReLUs, mini-batch optimization (top to bottom) is consistently coupled with reduced sensitivity as measured by the Jacobian norm (center column). Transitions (right column) correlate with generalization in all considered scenarios except for comparing optimizers (bottom right). Each point on the plot corresponds to two neural networks that share all hyper-parameters and the same optimization procedure, but differ in a certain property as indicated by axes titles. The coordinates along each axis reflect the values of the quantity in the title of the plot in the respective setting (i.e. with true or random labels). All networks have reached 100% training accuracy on CIFAR10 in both settings (except for the data-augmentation study, second row; see §A.5.4 for details). See §A.5.5 for experimental details (§A.5.4 for the data-augmentation study) and §4.2.1 for plot interpretation.

In Figure 4, for many possible hyper-parameter configurations, we train two models that share all parameters and optimization procedure, but differ in a single binary setting (i.e. trained on true or random labels; with or without data augmentation; etc.). Out of all such network pairs, we select only those where each network reached 100% training accuracy on the whole training set (apart from the data augmentation study). The two generalization or sensitivity values are then used as the x and y coordinates of a point corresponding to this pair of networks (with the plot axes labels denoting the respective value of the binary parameter considered). The position of the point with respect to the diagonal y = x visually demonstrates which configuration has a smaller generalization gap / lower sensitivity.

We now perform a large-scale experiment to establish direct relationships between sensitivity and generalization in a realistic setting. In contrast to §4.1, where we selected locations in the input space, and §4.2, where we varied a single binary parameter impacting generalization, we now sweep simultaneously over many different architectural and optimization choices (§A.5.5). Our main result is presented in FIG3, indicating a strong relationship between the Jacobian norm and generalization. In contrast, Figure App.8 demonstrates that the number of transitions is not alone sufficient to compare networks of different sizes, as the number of neurons in the networks has a strong influence on transition count.

In §4.3 we established a correlation between sensitivity (as measured by the Jacobian norm) and generalization averaged over a large test set (10^4 points). We now investigate whether the Jacobian norm can be predictive of generalization at individual points. As demonstrated in FIG4 (top), the Jacobian norm at a point is predictive of the cross-entropy loss, but the relationship is not a linear one, and not even bijective (see §A.3 for analytic expressions explaining it).
In particular, certain misclassified points (right sides of the plots) have a Jacobian norm many orders of magnitude smaller than that of the correctly classified points (left sides). However, we do remark a consistent tendency for points having the highest values of the Jacobian norm to be mostly misclassified. A similar yet noisier trend is observed in networks trained using the ℓ2-loss, as depicted in FIG4 (bottom). These observations make the Jacobian norm a promising quantity to consider in the contexts of active learning and confidence estimation in future research.

We have investigated sensitivity of trained neural networks through the input-output Jacobian norm and linear regions counting in the context of image classification tasks. We have presented extensive experimental evidence indicating that the local geometry of the trained function as captured by the input-output Jacobian can be predictive of generalization in many different contexts, and that it varies drastically depending on how close to the training data manifold the function is evaluated. We further established a connection between the cross-entropy loss and the Jacobian norm, indicating that it can remain informative of generalization even at the level of individual test points. Interesting directions for future work include extending our investigation to more complex architectures and other machine learning tasks.

The way of encoding a linear region c(z) of a point z described in §3 guarantees that different regions obtain different codes, but different codes might be assigned to the same region if all the neurons in any layer of the network are saturated (or if weights leading from the transitioning unit to active units are exactly zero, or exactly cancel). However, the probability of such an arrangement drops exponentially with width and hence is ignored in this work. The equality between the discrete and continuous versions of t(x) in Equation 1 becomes exact with a high-enough sampling density k such that there are no narrow linear regions missed in between consecutive points (precisely, the encoding c(z) has to change at most once on the line between two consecutive points z_i and z_{i+1}). For computational efficiency we also assume that no two neurons transition simultaneously, which is extremely unlikely in the context of random initialization and stochastic optimization.

Figure App.7: Depiction of a trajectory in input space used to count transitions as defined in §3. An interpolation between 28 horizontal translations of a single digit results in a complex trajectory that constrains all points to lie close to the translation-augmented data, and allows for a tractable estimate of transition density around the data manifold. This metric is used to compare models in §4.2 and §4.3. Straight lines indicate boundaries between different linear regions (straight-line boundaries between linear regions are accurate for the case of a single-layer piecewise-linear network; the partition into linear regions is more complex for deeper networks).

FIG4: each plot shows 5 random networks that fit the respective training set to 100% accuracy, with each network having a unique color. See §A.5.6 for experimental details.

Here we briefly discuss the motivation of this work in the context of Occam's razor. Occam's razor is a heuristic for model comparison based on their complexity. Given a dataset D, Occam's razor gives preference to simpler models H.
In the Bayesian interpretation of the heuristic BID14, simplicity is defined as the evidence P[D|H] and is often computed using the Laplace approximation. Under further assumptions BID22, this evidence can be shown to be inversely proportional to the number of parameters in the model. Therefore, given a uniform prior P[H] on two competing hypothesis classes, the class posterior P[H|D] ∼ P[D|H] P[H] is higher for a model with fewer parameters. An alternative, qualitative justification of the heuristic is through considering the evidence as a normalized probability distribution over the whole dataset space,

Σ_D P[D|H] = 1,

and remarking that models with more parameters have to spread the probability mass more evenly across all the datasets by virtue of being able to fit more of them (Figure App.10, left). This similarly suggests (under a uniform prior on competing hypothesis classes) preferring models with fewer parameters, assuming that evidence is unimodal and peaks close to the dataset of interest.

Occam's razor for neural networks. As seen in Figure 1, the above reasoning does not apply to neural networks: the best achieved generalization is obtained by a model that has around 10^4 times as many parameters as the simplest model capable of fitting the dataset (within the evaluated search space). On one hand, BID26 demonstrate on concrete examples that a high number of free parameters in the model doesn't necessarily entail high complexity. On the other hand, a large body of work on the expressivity of neural networks BID33 BID24 shows that their ability to compute complex functions increases rapidly with size, while empirical work validates that they also easily fit complex (even random) functions with stochastic optimization. Classical metrics like VC dimension or Rademacher complexity increase with the size of the network as well. This indicates that the weights of a neural network may actually correspond to its usable capacity, and hence "smear" the evidence P[D|H] along a very large space of datasets D, making the dataset of interest D less likely.

Potential issues. We conjecture that the Laplace approximation of the evidence P[D|H] and the simplified estimation of the "Occam's factor" in terms of the accessible volume of the parameter space might not hold for neural networks in the context of stochastic optimization, and, in particular, do not account for the combinatorial growth of the accessible volume of parameter space as width increases BID23. Similarly, when comparing evidence as probability distributions over datasets, the difference between two neural networks may not be as drastic as in Figure App.10 (left), but more nuanced as depicted in Figure App.10 (right), with the evidence ratio being highly dependent on the particular dataset. We interpret our work as defining hypothesis classes based on sensitivity of the hypothesis (which yielded promising results in BID36 on a toy task) and observing a strongly non-uniform prior on these classes that enables model comparison. Indeed, at least in the context of natural image classification, putting a prior on the number of parameters or Kolmogorov complexity of the hypothesis is extremely difficult. However, a statement that the true classification function is robust to small perturbations in the input is much easier to justify. As such, a prior P[H] in favor of robustness over sensitivity might fare better than a prior on specific network hyper-parameters.
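A toy numeric illustration of the normalized-evidence argument above (all counts and probabilities are arbitrary stand-ins): a model whose evidence concentrates on few datasets beats a diffuse one on any dataset both can fit.

```python
import numpy as np

# Each model's evidence sums to one over dataset space, but spreads differently.
n_datasets = 1000
p_small = np.zeros(n_datasets)
p_small[:10] = 1.0 / 10                           # small model: mass on 10 datasets
p_large = np.full(n_datasets, 1.0 / n_datasets)   # large model: diffuse mass

D = 3                                    # index of the observed dataset
prior = 0.5                              # uniform P(H) over the two models
ratio = (p_small[D] * prior) / (p_large[D] * prior)
print(ratio)                             # 100.0: the razor favors the small model
```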
Above is one way to interpret the correlation between sensitivity and generalization that we observe in this work. It does not explain why large networks tend to converge to less sensitive functions. We conjecture that large networks have access to a larger space of robust solutions due to solving a highly-underdetermined system when fitting a dataset, while small models converge to more extreme weight values due to being overconstrained by the data. However, further investigation is needed to confirm this hypothesis.

Figure App.10 (right): in reality, a more complex model might nonetheless concentrate the majority of probability mass on simple functions, and the evidence curves might intersect at a small angle. In this case, while a dataset D lying close to the intersection can be fit by both models, the Bayesian evidence ratio depends on its exact position with respect to the intersection. DISPLAYFORM0

Here we analyze the relationship between the Jacobian norm and the cross-entropy loss at individual test points as studied in §4.4.

Target class Jacobian. We begin by relating the derivative of the target class probability J_y(x) to the per-point cross-entropy loss l(x) = −log [f_σ(x)]_{y(x)} (where y(x) is the correct integer class). We will denote f_σ(x) by σ and drop the x argument to de-clutter notation (i.e. write f instead of f(x)). Then the Jacobian can be expressed as DISPLAYFORM0 where ⊙ is the Hadamard element-wise product. Then indexing both sides of the equation at the correct class y yields DISPLAYFORM1 where e_y is a vector of zeros everywhere except for its y-th entry, which is 1. Taking the norm of both sides results in DISPLAYFORM2 We now assume that the magnitudes of the individual logit derivatives vary little between logits and over the input space: DISPLAYFORM3 Since σ lies on the (n − 1)-simplex Δ^{n−1}, under these assumptions we can bound DISPLAYFORM4 and finally DISPLAYFORM5 or, in terms of the cross-entropy loss l = −log σ_y: DISPLAYFORM6 We validate these approximate bounds in Figure App.11 (top).

Full Jacobian. Equation 5 establishes a close relationship between J_y and the loss l = −log σ_y, but of course, at test time we do not know the target class y. This allows us to only bound the full Jacobian norm from below: DISPLAYFORM7 For the upper bound, we assume the maximum-entropy case of σ_y: σ_i ≈ (1 − σ_y)/(n − 1) for i ≠ y. The Jacobian norm is then DISPLAYFORM8 where the first summand becomes DISPLAYFORM9

All reported values, when applicable, were evaluated on the whole training and test sets of sizes 50000 and 10000, respectively. E.g., the "generalization gap" is defined as the difference between train and test accuracies evaluated on the whole train and test sets. When applicable, all trajectories/surfaces in input space were sampled with 2^20 points. All figures except for 6 and App.11 are plotted with (pale) error bars (when applicable). The reported quantity was usually evaluated 8 times with random seeds from 1 to 8, unless specified otherwise. E.g., if a network is said to be 100%-accurate on the training set, it means that each of the 8 randomly-initialized networks is 100%-accurate after training. The error bar is centered at the mean value of the quantity and spans the standard error of the mean in each direction. If the bar appears to not be visible, it may be smaller than the mean value marker. Weight initialization, training set shuffling, data augmentation, picking anchor points of data-fitted trajectories, and selecting axes of a zero-centered elliptic trajectory all depend on the random seed.
A random zero-centered ellipse was obtained by generating two axis vectors with normally-distributed entries of zero mean and unit variance (as such making points on the trajectory have an expected norm equal to that of the training data) and sampling points on the ellipse with the given axes. A random data-fitted ellipse was generated by projecting three arbitrary input points onto a plane where they fall into the vertices of an equilateral triangle, and then projecting their circumcircle back into the input space.

Relevant figure: 3. A 15-layer ReLU6-network of width 300 was trained on MNIST for 2^18 steps using SGD with momentum BID38; images were randomly translated with wrapping by up to 4 pixels in each direction, horizontally and vertically, as well as randomly flipped along each axis, and randomly rotated by 90 degrees clockwise and counter-clockwise. The sampling grid in input space was obtained by projecting three arbitrary input points into a plane as described in §A.5.2, such that the resulting triangle was centered at 0 and its vertices were at a distance of 0.8 from the origin. Then, a sampling grid of points in the [−1, 1]^2 square was projected back into the input space.

Relevant figures: 4 (second row) and 5 (bottom). All networks were trained for 2^18 steps with a batch size of 256 using SGD with momentum. The learning rate was set to 0.005 and the momentum term coefficient to 0.9. Data augmentation consisted of random translation of the input by up to 4 pixels in each direction with wrapping, horizontally and vertically. The input was also flipped horizontally with probability 0.5. When applying data augmentation (second row of Figure 4), the network is unlikely to encounter the canonical training data, hence few configurations achieved 100% training accuracy. However, we verified that all networks trained with data augmentation reached a higher test accuracy than their analogues without, ensuring that the generalization gap shrinks not simply because of lower training accuracy.

For each dataset, networks of width {100, 200, 500, 1000, 2000, 3000}, depth {2, 3, 5, 10, 15, 20} and activation function {ReLU, ReLU6, HardTanh, HardSigmoid} were evaluated on 8 random seeds from 1 to 8. Relevant figures: 1, 4 (except for the second row), 5 (top), App.8. 335671 networks were trained for 2^19 steps with random hyper-parameters; if training did not complete, a checkpoint at step 2^18 was used instead, if available. When using L-BFGS, the maximum number of iterations was set to 2684. The space of available hyper-parameters included:
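Since the input-output Jacobian norm is the central quantity in the experiments above, a minimal sketch of how it can be estimated per test point may be useful. This is an illustrative reconstruction assuming a PyTorch classifier; the model, input size, and function name are placeholders rather than the authors' code.

```python
# Per-point Frobenius norm of the Jacobian of the softmax output w.r.t. the input.
import torch

def jacobian_norm(model, x):
    x = x.clone().detach().requires_grad_(True)
    probs = torch.softmax(model(x.unsqueeze(0)), dim=-1).squeeze(0)
    rows = []
    for c in range(probs.shape[0]):       # one backward pass per output class
        g, = torch.autograd.grad(probs[c], x, retain_graph=True)
        rows.append(g.flatten())
    return torch.stack(rows).norm()       # ||J||_F

# Toy usage with a small fully-connected network on a random "image".
model = torch.nn.Sequential(torch.nn.Linear(784, 100), torch.nn.ReLU(),
                            torch.nn.Linear(100, 10))
print(jacobian_norm(model, torch.randn(784)))
```

Sorting held-out points by this quantity would reproduce the kind of per-point analysis described above, e.g. checking whether the highest-norm points are mostly misclassified.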
We perform massive experimental studies characterizing the relationships between Jacobian norms, linear regions, and generalization.
1,133
scitldr
Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows us to combine the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks.

Exploration remains a key challenge in contemporary deep reinforcement learning (RL). Its main purpose is to ensure that the agent's behavior does not converge prematurely to a local optimum. Enabling efficient and effective exploration is, however, not trivial, since it is not directed by the reward function of the underlying Markov decision process (MDP). Although a plethora of methods have been proposed to tackle this challenge in high-dimensional and/or continuous-action MDPs, they often rely on complex additional structures such as counting tables BID6, density modeling of the state space BID22, learned dynamics models BID0 BID29, or self-supervised curiosity BID23.

An orthogonal way of increasing the exploratory nature of these algorithms is through the addition of temporally-correlated noise, for example as done in bootstrapped DQN BID20. Along the same lines, it was shown that the addition of parameter noise leads to better exploration by obtaining a policy that exhibits a larger variety of behaviors BID32. We discuss these related approaches in greater detail in Section 5. Their main limitation, however, is that they are either only proposed and evaluated for the on-policy setting with relatively small and shallow function approximators (Rückstieß et al., 2008) or disregard all temporal structure and gradient information (; BID17 BID28). This paper investigates how parameter space noise can be effectively combined with off-the-shelf deep RL algorithms such as DQN BID19, DDPG BID18, and TRPO (b) to improve their exploratory behavior. Experiments show that this form of exploration is applicable to both high-dimensional discrete environments and continuous control tasks, using on- and off-policy methods. Our results indicate that parameter noise outperforms traditional action space noise-based baselines, especially in tasks where the reward signal is extremely sparse.

We consider the standard RL framework consisting of an agent interacting with an environment. To simplify the exposition we assume that the environment is fully observable. An environment is modeled as a Markov decision process (MDP) and is defined by a set of states S, a set of actions A, a distribution over initial states p(s_0), a reward function r: S × A → R, transition probabilities p(s_{t+1}|s_t, a_t), a time horizon T, and a discount factor γ ∈ [0, 1]. We denote by π_θ a policy parametrized by θ, which can be either deterministic, π: S → A, or stochastic, π: S → P(A). The agent's goal is to maximize the expected discounted return η(π_θ) = E_τ[Σ_{t=0}^T γ^t r(s_t, a_t)], where τ = (s_0, a_0, …, s_T) denotes a trajectory with s_0 ∼ p(s_0), a_t ∼ π_θ(a_t|s_t), and s_{t+1} ∼ p(s_{t+1}|s_t, a_t). Experimental evaluation is based on the undiscounted return E_τ[Σ_{t=0}^T r(s_t, a_t)].
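As a small illustration of the two objectives just defined, the following toy sketch computes the discounted and undiscounted return of a trajectory; the reward values are placeholders.

```python
# Return of a trajectory: sum_t gamma^t * r_t, computed right-to-left.
def discounted_return(rewards, gamma=0.99):
    ret = 0.0
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret

rewards = [0.0, 0.0, 1.0, 0.5]
print(discounted_return(rewards))             # training objective (gamma < 1)
print(discounted_return(rewards, gamma=1.0))  # undiscounted evaluation return
```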
Off-policy RL methods allow learning based on data captured by arbitrary policies. This paper considers two popular off-policy algorithms, namely Deep Q-Networks (DQN, BID19) and Deep Deterministic Policy Gradients (DDPG, BID18).

Deep Q-Networks (DQN). DQN uses a deep neural network as a function approximator to estimate the optimal Q-value function, which conforms to the Bellman optimality equation: Q(s_t, a_t) = r(s_t, a_t) + γ max_{a′∈A} Q(s_{t+1}, a′). The policy is implicitly defined by Q as π(s_t) = argmax_{a′∈A} Q(s_t, a′). Typically, a stochastic ε-greedy or Boltzmann policy is derived from the Q-value function to encourage exploration, which relies on sampling noise in the action space. The Q-network predicts a Q-value for each action and is updated using off-policy data from a replay buffer.

DDPG is an actor-critic algorithm, applicable to continuous action spaces. Similar to DQN, the critic estimates the Q-value function using off-policy data and the recursive Bellman equation: DISPLAYFORM0 where π_θ is the actor or policy. The actor is trained to maximize the critic's estimated Q-values by back-propagating through both networks. For exploration, DDPG uses a stochastic policy of the form π̂_θ(s_t) = π_θ(s_t) + w, where w is either w ∼ N(0, σ^2 I) (uncorrelated) or w ∼ OU(0, σ^2) (correlated). Again, exploration is realized through action space noise.

In contrast to off-policy algorithms, on-policy methods require updating function approximators according to the currently followed policy. In particular, we will consider Trust Region Policy Optimization (TRPO, Schulman et al. (2015a)), an extension of traditional policy gradient methods (b) using the natural gradient direction BID24 BID13.

Trust Region Policy Optimization (TRPO). TRPO improves upon REINFORCE (b) by computing an ascent direction that ensures a small change in the policy distribution. More specifically, TRPO solves the following constrained optimization problem: DISPLAYFORM0 where ρ_θ = ρ_{π_θ} denotes the discounted state-visitation frequencies induced by π_θ, A(s, a) denotes the advantage function estimated by the empirical return minus the baseline, and δ_KL is a step size parameter which controls how much the policy is allowed to change per iteration.

This work considers policies that are realized as parameterized functions, which we denote as π_θ, with θ being the parameter vector. We represent policies as neural networks, but our technique can be applied to arbitrary parametric models. To achieve structured exploration, we sample from a set of policies by applying additive Gaussian noise to the parameter vector of the current policy: θ̃ = θ + N(0, σ^2 I). Importantly, the perturbed policy is sampled at the beginning of each episode and kept fixed for the entire rollout. For convenience and readability, we denote this perturbed policy as π̃ := π_θ̃ and analogously define π := π_θ.

State-dependent exploration. As pointed out by Rückstieß et al., there is a crucial difference between action space noise and parameter space noise. Consider the continuous action space case. When using Gaussian action noise, actions are sampled according to some stochastic policy, generating a_t = π(s_t) + N(0, σ^2 I). Therefore, even for a fixed state s, we will almost certainly obtain a different action whenever that state is sampled again in the rollout, since action space noise is completely independent of the current state s_t (notice that this is equally true for correlated action space noise).
In contrast, if the parameters of the policy are perturbed at the beginning of each episode, we get ã_t = π̃(s_t). In this case, the same action will be taken every time the same state s_t is sampled in the rollout. This ensures consistency in actions, and directly introduces a dependence between the state and the exploratory action taken.

Perturbing deep neural networks. It is not immediately obvious that deep neural networks, with potentially millions of parameters and complicated nonlinear interactions, can be perturbed in meaningful ways by applying spherical Gaussian noise. However, as recently shown by , a simple reparameterization of the network achieves exactly this. More concretely, we use layer normalization BID2 between perturbed layers. Due to this normalization across activations within a layer, the same perturbation scale can be used across all layers, even though different layers may exhibit different sensitivities to noise.

Adaptive noise scaling. Parameter space noise requires us to pick a suitable scale σ. This can be problematic since the scale will strongly depend on the specific network architecture, and is likely to vary over time as parameters become more sensitive to noise as learning progresses. Additionally, while it is easy to intuitively grasp the scale of action space noise, it is far harder to understand the scale in parameter space. We propose a simple solution that resolves all aforementioned limitations in an easy and straightforward way. This is achieved by adapting the scale of the parameter space noise over time and relating it to the variance in action space that it induces. More concretely, we can define a distance measure between the perturbed and non-perturbed policy in action space and adaptively increase or decrease the parameter space noise depending on whether it is below or above a certain threshold: DISPLAYFORM0 where α ∈ R_{>0} is a scaling factor and δ ∈ R_{>0} a threshold value. The concrete realization of d(·, ·) depends on the algorithm at hand, and we describe appropriate distance measures for DQN, DDPG, and TRPO in Appendix C.

Parameter space noise for off-policy methods. In the off-policy case, parameter space noise can be applied straightforwardly since, by definition, data that was collected off-policy can be used. More concretely, we only perturb the policy for exploration and train the non-perturbed network on this data by replaying it.

Parameter space noise for on-policy methods. Parameter noise can be incorporated in an on-policy setting, using an adapted policy gradient, as set forth by Rückstieß et al. Policy gradient methods optimize DISPLAYFORM1 Given a stochastic policy π_θ(a|s) with θ ∼ N(φ, Σ), the expected return can be expanded using likelihood ratios and the re-parametrization trick BID16 as DISPLAYFORM2 for N samples ε_i ∼ N(0, I) and τ_i ∼ (π_θ̃_i, p) (see Appendix B for a full derivation). Rather than updating Σ according to the previously derived policy gradient, we fix its value to σ^2 I and scale it adaptively as described in Appendix C.

This section answers the following questions:
(i) Do existing state-of-the-art RL algorithms benefit from incorporating parameter space noise?
(ii) Does parameter space noise aid in exploring sparse reward environments more effectively?
(iii) How does parameter space noise exploration compare against evolution strategies for deep policies with respect to sample efficiency?
Reference implementations of DQN and DDPG with adaptive parameter space noise are available online.
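To make the scheme above concrete, here is a minimal sketch of episode-level parameter perturbation together with the adaptive scaling rule, assuming a PyTorch policy network; the distance argument is a stand-in for the algorithm-specific d(π, π̃) from Appendix C, and all names are illustrative rather than the reference implementation.

```python
import copy
import torch

def perturb(policy, sigma):
    """Sample a perturbed copy once per episode: theta~ = theta + N(0, sigma^2 I)."""
    perturbed = copy.deepcopy(policy)
    with torch.no_grad():
        for p in perturbed.parameters():
            p.add_(torch.randn_like(p) * sigma)
    return perturbed

def adapt_sigma(sigma, distance, delta, alpha=1.01):
    """Grow sigma while the induced action-space change stays below delta."""
    return sigma * alpha if distance < delta else sigma / alpha
```

The perturbed copy would act for an entire rollout while the non-perturbed network is trained on the replayed data, matching the off-policy recipe described above.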
The added value of parameter space noise over action space noise is measured on both high-dimensional discrete-action environments and continuous control tasks. For the discrete environments, comparisons are made using DQN, while DDPG and TRPO are used on the continuous control tasks.

Discrete-action environments. For discrete-action environments, we use the Arcade Learning Environment (ALE, BID3) benchmark along with a standard DQN implementation. We compare a baseline DQN agent with ε-greedy action noise against a version of DQN with parameter noise. We linearly anneal ε from 1.0 to 0.1 over the first 1 million timesteps. For parameter noise, we adapt the scale using a simple heuristic that increases the scale if the KL divergence between the perturbed and non-perturbed policy is less than the KL divergence between the greedy and ε-greedy policy, and decreases it otherwise (see Section C.1 for details). By using this approach, we achieve a fair comparison between action space noise and parameter space noise, since the magnitude of the noise is similar, and we also avoid the introduction of an additional hyperparameter.

For parameter perturbation, we found it useful to reparametrize the network in terms of an explicit policy that represents the greedy policy π implied by the Q-values, rather than perturbing the Q-function directly. To represent the policy π(a|s), we add a single fully connected layer after the convolutional part of the network, followed by a softmax output layer. Thus, π predicts a discrete probability distribution over actions, given a state. We find that perturbing π instead of Q results in more meaningful changes since we now define an explicit behavioral policy. In this setting, the Q-network is trained according to standard DQN practices. The policy π is trained by maximizing the probability of outputting the greedy action according to the current Q-network. Essentially, the policy is trained to exhibit the same behavior as running greedy DQN. To rule out that this double-headed version of DQN alone exhibits significantly different behavior, we always compare our parameter space noise approach against two baselines, regular DQN and two-headed DQN, both with ε-greedy exploration. We furthermore randomly sample actions for the first 50 thousand timesteps in all cases to fill the replay buffer before starting training. Moreover, we found that parameter space noise performs better if it is combined with a bit of action space noise (we use an ε-greedy behavioral policy with ε = 0.01 for the parameter space noise experiments). Full experimental details are described in Section A.1.

We chose 21 games of varying complexity, according to the taxonomy presented by BID4. The learning curves are shown in FIG2 for a selection of games (see Appendix D for full results). Each agent is trained for 40 M frames. The overall performance is estimated by running each configuration with three different random seeds, and we plot the median return (line) as well as the interquartile range (shaded area). Note that performance is evaluated on the exploratory policy, since we are especially interested in its behavior. Overall, our results show that parameter space noise often outperforms action space noise, especially on games that require consistency (e.g. Enduro, Freeway), and performs comparably on the remaining ones. Additionally, learning progress usually starts much sooner when using parameter space noise.
Finally, we also compare against a double-headed version of DQN with ε-greedy exploration to ensure that this change in architecture is not responsible for improved exploration, which our results confirm. Full results are available in Appendix D.

That being said, parameter space noise is unable to sufficiently explore in extremely challenging games like Montezuma's Revenge. More sophisticated exploration methods like BID4 are likely necessary to successfully learn these games. However, such methods often rely on some form of "inner" exploration method, which is usually traditional action space noise. It would be interesting to evaluate the effect of parameter space noise when combined with such exploration methods. On a final note, proposed improvements to DQN like double DQN, prioritized experience replay, and dueling networks are orthogonal to our improvements and would therefore likely improve results further. We leave the experimental validation of this theory to future work.

We now compare parameter noise with action noise on the continuous control environments implemented in OpenAI Gym BID6. We use DDPG BID18 as the RL algorithm for all environments, with similar hyperparameters as outlined in the original paper, except for the fact that layer normalization BID2 is applied after each layer before the nonlinearity, which we found to be useful in either case and especially important for parameter space noise. We compare the performance of the following configurations: (a) no noise at all, (b) uncorrelated additive Gaussian action space noise (σ = 0.2), (c) correlated additive Gaussian action space noise (Ornstein-Uhlenbeck process with σ = 0.2), and (d) adaptive parameter space noise. In the case of parameter space noise, we adapt the scale so that the resulting change in action space is comparable to our baselines with uncorrelated Gaussian action space noise (see Section C.2 for full details).

We evaluate the performance on several continuous control tasks. FIG3 depicts the results for three exemplary environments. Each agent is trained for 1 M timesteps, where 1 epoch consists of 10 thousand timesteps. To make results comparable between configurations, we evaluate the performance of the agent every 10 thousand steps by using no noise for 20 episodes. On HalfCheetah, parameter space noise achieves significantly higher returns than all other configurations. We find that, in this environment, all other exploration schemes quickly converge to a local optimum (in which the agent learns to flip on its back and then "wiggles" its way forward). Parameter space noise behaves similarly initially but still explores other options and quickly learns to break out of this sub-optimal behavior. Also notice that parameter space noise vastly outperforms correlated action space noise on this environment, clearly indicating that there is a significant difference between the two. On the remaining two environments, parameter space noise performs on par with other exploration strategies. Notice, however, that even if no noise is present, DDPG is capable of learning good policies. We find that this is representative of the remaining environments (see Appendix E for full results), which indicates that these environments do not require a lot of exploration to begin with, due to their well-shaped reward functions. The results for TRPO are depicted in FIG4. Interestingly, in the Walker2D environment, we see that adding parameter noise decreases the performance variance between seeds. This indicates that parameter noise aids in escaping local optima.
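For reference, a toy sketch of the two action-noise baselines compared above, uncorrelated Gaussian noise and a temporally correlated Ornstein-Uhlenbeck process; the theta and dt values are common illustrative defaults, not values taken from the paper.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: each sample depends on the previous one."""
    def __init__(self, dim, sigma=0.2, theta=0.15, dt=1e-2):
        self.dim, self.sigma, self.theta, self.dt = dim, sigma, theta, dt
        self.state = np.zeros(dim)

    def sample(self):
        self.state += (-self.theta * self.state * self.dt
                       + self.sigma * np.sqrt(self.dt) * np.random.randn(self.dim))
        return self.state

def gaussian_noise(dim, sigma=0.2):
    """Uncorrelated noise: independent across timesteps."""
    return sigma * np.random.randn(dim)
```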
The environments in the previous section required relatively little exploration. In this section, we evaluate whether parameter noise enables existing RL algorithms to learn on environments with very sparse rewards, where uncorrelated action noise generally fails BID20 BID0.

A scalable toy example. We first evaluate parameter noise on a well-known toy problem, following the setup described by BID20 as closely as possible. The environment consists of a chain of N states, and the agent always starts in state s_2, from where it can either move left or right. In state s_1, the agent receives a small reward of r = 0.001, and a larger reward r = 1 in state s_N. Obviously, it is much easier to discover the small reward in s_1 than the large reward in s_N, with increasing difficulty as N grows. The environment is described in greater detail in Section A.3 and sketched in code below.

We compare adaptive parameter space noise DQN, bootstrapped DQN, and ε-greedy DQN. The chain length N is varied, and for each N three different seeds are trained and evaluated. After each episode, we evaluate the performance of the current policy by performing a rollout with all noise disabled (in the case of bootstrapped DQN, we perform majority voting over all heads). The problem is considered solved if one hundred subsequent rollouts achieve the optimal return. We plot the median number of episodes before the problem is considered solved (we abort if the problem is still unsolved after 2 thousand episodes). Full experimental details are available in Section A.3. In FIG5, green indicates that the problem was solved, whereas blue indicates that no solution was found within 2 K episodes; note that fewer episodes before solving is better.

FIG5 shows that parameter space noise clearly outperforms action space noise (which completely fails for moderately large N) and even outperforms the more computationally expensive bootstrapped DQN. However, it is important to note that this environment is extremely simple, in the sense that the optimal strategy is to always go right. In a case where the agent needs to select a different optimal action depending on the current state, parameter space noise would likely work less well, since weight randomization of the policy is less likely to yield this behavior. Our results thus only highlight the difference in exploration behavior compared to action space noise in this specific case. In the general case, parameter space noise does not guarantee optimal exploration.

Continuous control with sparse rewards. We now make the continuous control environments more challenging for exploration. Instead of providing a reward at every timestep, we use environments that only yield a non-zero reward after significant progress towards a goal. More concretely, we consider the following environments from rllab, modified according to: (a) SparseCartpoleSwingup, which only yields a reward if the paddle is raised above a given threshold, (b) SparseDoublePendulum, which only yields a reward if the agent reaches the upright position, (c) SparseHalfCheetah, which only yields a reward if the agent crosses a target distance, (d) SparseMountainCar, which only yields a reward if the agent drives up the hill, and (e) SwimmerGather, which yields a positive or negative reward upon reaching targets. For all tasks, we use a time horizon of T = 500 steps before resetting. We consider both DDPG and TRPO to solve these environments (the exact experimental setup is described in Section A.2).
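A minimal sketch of the chain environment described above; the class name and interface are illustrative, while the reward structure follows the textual description.

```python
class ChainEnv:
    """Chain of N states; start in s_2, small reward in s_1, large reward in s_N."""
    def __init__(self, n):
        self.n = n
        self.state = 2

    def reset(self):
        self.state = 2
        return self.state

    def step(self, action):
        # action 0 moves left, action 1 moves right, clipped to [1, N]
        if action == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(self.n, self.state + 1)
        if self.state == 1:
            reward = 0.001
        elif self.state == self.n:
            reward = 1.0
        else:
            reward = 0.0
        return self.state, reward
```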
FIG6 shows the performance of DDPG, while the results for TRPO have been moved to Appendix F. The overall performance is estimated by running each configuration with five different random seeds, after which we plot the median return (line) as well as the interquartile range (shaded area).

For DDPG, SparseDoublePendulum seems to be easy to solve in general, with even no noise finding a successful policy relatively quickly. The results for SparseCartpoleSwingup and SparseMountainCar are more interesting: here, only parameter space noise is capable of learning successful policies, since all other forms of noise, including correlated action space noise, never find states with non-zero rewards. For SparseHalfCheetah, DDPG at least finds the non-zero reward but never learns a successful policy from that signal. On the challenging SwimmerGather task, all configurations of DDPG fail. Our results clearly show that parameter space noise can be used to improve the exploration behavior of these off-the-shelf algorithms. However, it is important to note that improvements in exploration are not guaranteed for the general case. It is therefore necessary to evaluate the potential benefit of parameter space noise on a case-by-case basis.

Evolution strategies (ES) are closely related to our approach, since both explore by introducing noise in the parameter space, which can lead to improved exploration behavior. However, ES disregards temporal information and uses black-box optimization to train the neural network. By combining parameter space noise with traditional RL algorithms, we can include temporal information as well as rely on gradients computed by back-propagation for optimization, while still benefiting from improved exploratory behavior. We now compare ES and traditional RL with parameter space noise directly. We compare performance on the 21 ALE games that were used in Section 4.1. The performance is estimated by running 10 episodes for each seed using the final policy with exploration disabled and computing the median returns. For ES, we use the results reported by , which were obtained after training on 1,000 M frames. For DQN, we use the same parameter space noise for exploration that was previously described and train on 40 M frames. Even though DQN with parameter space noise has been exposed to 25 times less data, it outperforms ES on 15 out of 21 Atari games (full results are available in Appendix D). Combined with the previously described results, this demonstrates that parameter space noise combines the desirable exploration properties of ES with the sample efficiency of traditional RL.

The problem of exploration in reinforcement learning has been studied extensively. A range of algorithms BID14 BID5 BID1 have been proposed that guarantee near-optimal solutions after a number of steps that is polynomial in the number of states, the number of actions, and the horizon time. However, in many real-world reinforcement learning problems both the state and action space are continuous and high-dimensional, so that, even with discretization, these algorithms become impractical. In the context of deep reinforcement learning, a large variety of techniques have been proposed to improve exploration BID29 BID6 BID20 BID22 BID30 BID21. However, all of them are non-trivial to implement and often computationally expensive. The idea of perturbing the parameters of a policy has been proposed by Rückstieß et al. for policy gradient methods.
The authors show that this form of perturbation generally outperforms random exploration and evaluate their exploration strategy with the REINFORCE (a) and Natural Actor-Critic BID24 algorithms. However, their policies are relatively low-dimensional compared to modern deep architectures, they use environments with low-dimensional state spaces, and their contribution is strictly limited to the policy gradient case. In contrast, our method is applied and evaluated in both the on- and off-policy settings, we use high-dimensional policies, and environments with large state spaces. Our work is also closely related to evolution strategies (ES, BID26 BID27). In the context of policy optimization, our work is closely related to BID17 and BID28. More recently, showed that ES can work for high-dimensional environments like Atari and OpenAI Gym continuous control problems. However, ES generally disregards any temporal structure that may be present in trajectories and typically suffers from sample inefficiency. Bootstrapped DQN BID20 has been proposed to aid with more directed and consistent exploration by using a network with multiple heads, where one specific head is selected at the beginning of each episode. In contrast, our approach perturbs the parameters of the network directly, thus achieving similar yet simpler (and as shown in Section 4.2, sometimes superior) exploration behavior. Concurrently to our work, BID8 have proposed a similar approach that utilizes parameter perturbations for more efficient exploration.

In this work, we propose parameter space noise as a conceptually simple yet effective replacement for traditional action space noise like ε-greedy and additive Gaussian noise. This work shows that parameter perturbations can successfully be combined with contemporary on- and off-policy deep RL algorithms such as DQN, DDPG, and TRPO, and often result in improved performance compared to action noise. Experimental results further demonstrate that using parameter noise allows solving environments with very sparse rewards, in which action noise is unlikely to succeed. Our results indicate that parameter space noise is a viable and interesting alternative to action space noise, which is still the de facto standard in most reinforcement learning applications.

A EXPERIMENTAL SETUP

For ALE BID3, the network architecture as described in BID19 is used. This consists of 3 convolutional layers (32 filters of size 8 × 8 and stride 4, 64 filters of size 4 × 4 and stride 2, 64 filters of size 3 × 3 and stride 1) followed by 1 hidden layer with 512 units, followed by a linear output layer with one unit for each action. ReLUs are used in each layer, while layer normalization BID2 is used in the fully connected part of the network. For parameter space noise, we also include a second head after the convolutional stack of layers. This head determines a policy network with the same architecture as the Q-value network, except for a softmax output layer. The target networks are updated every 10 K timesteps. The Q-value network is trained using the Adam optimizer BID15 with a learning rate of 10^-4 and a batch size of 32. The replay buffer can hold 1 M state transitions. For the ε-greedy baseline, we linearly anneal ε from 1 to 0.1 over the first 1 M timesteps. For parameter space noise, we adaptively scale the noise to have a similar effect in action space (see Section C.1 for details), effectively ensuring that the maximum KL divergence between the perturbed and non-perturbed π is softly enforced.
The policy is perturbed at the beginning of each episode, and the standard deviation is adapted as described in Appendix C every 50 timesteps. Notice that we only perturb the policy head after the convolutional part of the network (i.e. the fully connected part, which is also why we only include layer normalization in this part of the network). To avoid getting stuck (which can potentially happen for a perturbed policy), we also use ε-greedy action selection with ε = 0.01. In all cases, we perform 50 K random actions to collect initial data for the replay buffer before training starts. We set γ = 0.99, clip rewards to be in [−1, 1], and clip gradients for the output layer of Q to be within [−1, 1]. For observations, each frame is down-sampled to 84 × 84 pixels, after which it is converted to grayscale. The actual observation to the network consists of a concatenation of 4 subsequent frames. Additionally, we use up to 30 noop actions at the beginning of the episode. This setup is identical to what is described by BID19.

For DDPG, we use a similar network architecture as described by BID18: both the actor and critic use 2 hidden layers with 64 ReLU units each. For the critic, actions are not included until the second hidden layer. Layer normalization BID2 is applied to all layers. The target networks are soft-updated with τ = 0.001. The critic is trained with a learning rate of 10^-3, while the actor uses a learning rate of 10^-4. Both actor and critic are updated using the Adam optimizer BID15 with batch sizes of 128. The critic is regularized using an L2 penalty with coefficient 10^-2. The replay buffer holds 100 K state transitions and γ = 0.99 is used. Each observation dimension is normalized by an online estimate of the mean and variance. For parameter space noise with DDPG, we adaptively scale the noise to be comparable to the respective action space noise (see Section C.2). For dense environments, we use action space noise with σ = 0.2 (and a comparable adaptive noise scale). Sparse environments use action space noise with σ = 0.6 (and a comparable adaptive noise scale).

TRPO uses a step size of δ_KL = 0.01, a policy network of 2 hidden layers with 32 tanh units for the non-locomotion tasks, and 2 hidden layers of 64 tanh units for the locomotion tasks. The Hessian calculation is subsampled with a factor of 0.1, γ = 0.99, and the batch size per epoch is set to 5 K timesteps. The baseline is a learned linear transformation of the observations.

The following environments from OpenAI Gym BID6 are used:
• DISPLAYFORM0
• Swimmer (S ⊂ R^8, A ⊂ R^2), and
• DISPLAYFORM1

For the sparse tasks, we use the following environments from rllab, modified as described by:
• DISPLAYFORM2, which only yields a reward if the paddle is raised above a given threshold,
• DISPLAYFORM3, which only yields a reward if the agent crosses a distance threshold,
• DISPLAYFORM4, which only yields a reward if the agent drives up the hill,
• SparseDoublePendulum (S ⊂ R^6, A ⊂ R), which only yields a reward if the agent reaches the upright position, and
• DISPLAYFORM5, which yields a positive or negative reward upon reaching targets.

We follow the state encoding proposed by BID20 and use φ(s_t) = (1{x ≤ s_t}) as the observation, where 1 denotes the indicator function. DQN is used with a very simple network to approximate the Q-value function, consisting of 2 hidden layers with 16 ReLU units. Layer normalization BID2 is used for all hidden layers before applying the nonlinearity. Each agent is then trained for up to 2 K episodes.
The chain length N is varied, and for each N three different seeds are trained and evaluated. After each episode, the performance of the current policy is evaluated by sampling a trajectory with noise disabled (in the case of bootstrapped DQN, majority voting over all heads is performed). The problem is considered solved if one hundred subsequent trajectories achieve the optimal episode return. Figure 6 depicts the environment. DISPLAYFORM0

Figure 6: Simple and scalable environment to test for exploratory behavior BID20.

We compare adaptive parameter space noise DQN, bootstrapped DQN BID20 (with K = 20 heads and Bernoulli masking with p = 0.5), and ε-greedy DQN (with ε linearly annealed from 1.0 to 0.1 over the first one hundred episodes). For adaptive parameter space noise, we only use a single head and perturb Q directly, which works well in this setting. Parameter space noise is adaptively scaled so that δ ≈ 0.05. In all cases, γ = 0.999, the replay buffer holds 100 K state transitions, learning starts after 5 initial episodes, the target network is updated every 100 timesteps, and the network is trained using the Adam optimizer BID15 with a learning rate of 10^-3 and a batch size of 32.

DISPLAYFORM1 Given a stochastic policy π_θ(a|s) with θ ∼ N(φ, Σ), the expected return can be expanded using likelihood ratios and the reparametrization trick BID16 as DISPLAYFORM2 DISPLAYFORM3 for N samples ε_i ∼ N(0, I) and τ_i ∼ (π_θ̃_i, p). This also allows us to subtract a variance-reducing baseline b_t^i, leading to DISPLAYFORM5 In our case, we set Σ := σ^2 I and use our proposed adaptation method to re-scale it as appropriate.

Parameter space noise requires us to pick a suitable scale σ. This can be problematic since the scale will depend strongly on the specific network architecture, and is likely to vary over time as parameters become more sensitive as learning progresses. Additionally, while it is easy to intuitively grasp the scale of action space noise, it is far harder to understand the scale in parameter space. We propose a simple solution that resolves all aforementioned limitations in an easy and straightforward way. This is achieved by adapting the scale of the parameter space noise over time, thus using a time-varying scale σ_k. Furthermore, σ_k is related to the action space variance that it induces, and is updated accordingly. Concretely, we use the following simple heuristic to update σ_k every K timesteps: DISPLAYFORM0 where d(·, ·) denotes some distance between the non-perturbed and perturbed policy (thus measuring in action space), α ∈ R_{>0} is used to rescale σ_k, and δ ∈ R_{>0} denotes some threshold value. This idea is based on the Levenberg-Marquardt heuristic BID25. The concrete distance measure and appropriate choice of δ depend on the policy representation. In the following sections, we outline our choice of d(·, ·) for methods that do (DDPG and TRPO) and do not (DQN) use behavioral policies. In our experiments, we always use α = 1.01.

For DQN, the policy is defined implicitly by the Q-value function. Unfortunately, this means that a naïve distance measure between Q and Q̃ has pitfalls. For example, assume that the perturbed policy has only changed the bias of the final layer, thus adding a constant value to each action's Q-value. In this case, a naïve distance measure like the norm ‖Q − Q̃‖_2 would be non-zero, although the policies π and π̃ (implied by Q and Q̃, respectively) are exactly equal. This equally applies to the case where DQN has two heads, one for Q and one for π.
We therefore use a probabilistic formulation for both the non-perturbed and perturbed policies, π, π̃ : S × A → [0, 1], by applying the softmax function over the predicted Q-values: DISPLAYFORM0, where Q_i(·) denotes the Q-value of the i-th action. π̃ is defined analogously but uses the perturbed Q̃ instead (or the perturbed head for π). Using this probabilistic formulation of the policies, we can now measure the distance in action space: DISPLAYFORM1 where D_KL(· ‖ ·) denotes the Kullback-Leibler (KL) divergence. This formulation effectively normalizes the Q-values and therefore does not suffer from the problem previously outlined.

We can further relate this distance measure to ε-greedy action space noise, which allows us to fairly compare the two approaches and also avoids the need to pick an additional hyperparameter δ. More concretely, the KL divergence between a greedy policy, with π(s, a) = 1 for a = argmax_a′ Q(s, a′) and π(s, a) = 0 otherwise, and an ε-greedy policy, with π̃(s, a) = 1 − ε + ε/|A| for a = argmax_a′ Q(s, a′) and π̃(s, a) = ε/|A| otherwise, is D_KL(π ‖ π̃) = −log(1 − ε + ε/|A|), where |A| denotes the number of actions (this follows immediately from the definition of the KL divergence for discrete probability distributions). We can use this distance measure to relate action space noise and parameter space noise to have similar distances, by adaptively scaling σ so that it matches the KL divergence between the greedy and ε-greedy policy, thus setting δ := −log(1 − ε + ε/|A|).

For DDPG, we relate noise induced by parameter space perturbations to noise induced by additive Gaussian noise. To do so, we use the following distance measure between the non-perturbed and perturbed policy: DISPLAYFORM0 where E_s[·] is estimated from a batch of states from the replay buffer and N denotes the dimension of the action space (i.e. A ⊂ R^N). It is easy to show that d(π, π + N(0, σ^2 I)) = σ. Setting δ := σ as the adaptive parameter space threshold thus results in effective action space noise that has the same standard deviation as regular Gaussian action space noise.

In order to scale the noise for TRPO, we adapt the sampled noise vectors σ by computing a natural step H^{-1}σ. We essentially compute a trust region around the noise direction to ensure that the perturbed policy π̃ remains sufficiently close to the non-perturbed version via DISPLAYFORM0 Concretely, this is computed through the conjugate gradient algorithm, combined with a line search along the noise direction to ensure constraint conformance, as described in Appendix C of Schulman et al. (2015b).

FIG8 provides the learning curves for all 21 Atari games. TAB2 compares the final performance of ES after 1,000 M frames to the final performance of DQN with ε-greedy exploration and parameter space noise exploration after 40 M frames. In all cases, the performance is estimated by running 10 episodes with exploration disabled. We use the numbers reported by for ES and report the median return across three seeds for DQN.

The results for the remaining environments are depicted in FIG9. The results for InvertedPendulum and InvertedDoublePendulum are very noisy, due to the fact that a small change in policy can easily degrade performance significantly, and are thus hard to read. Interestingly, adaptive parameter space noise achieves the most stable performance on InvertedDoublePendulum. Overall, performance is comparable to other exploration approaches. Again, using no noise in either the action or the parameter space achieves comparable results, indicating that these environments, combined with DDPG, are not well-suited to test for exploration.
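A minimal sketch of the two distance measures just described, assuming PyTorch tensors; the function names and batch handling are illustrative, not the reference implementation.

```python
import math
import torch
import torch.nn.functional as F

def dqn_distance(q, q_perturbed):
    """Mean KL(pi || pi~) between softmax policies over a batch of Q-values."""
    log_pi = F.log_softmax(q, dim=-1)
    log_pi_tilde = F.log_softmax(q_perturbed, dim=-1)
    return (log_pi.exp() * (log_pi - log_pi_tilde)).sum(-1).mean()

def dqn_threshold(eps, num_actions):
    """Threshold matching an eps-greedy policy: -log(1 - eps + eps/|A|)."""
    return -math.log(1 - eps + eps / num_actions)

def ddpg_distance(actions, actions_perturbed):
    """Root mean squared action difference; equals sigma for N(0, sigma^2 I) noise."""
    return (actions - actions_perturbed).pow(2).mean().sqrt()
```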
The performance of TRPO with noise scaled according to the parameter curvature, as defined in Section C.3, is shown in FIG10. The TRPO baseline uses only action noise, by using a policy network that outputs the mean of a Gaussian distribution, while the variance is learned. These results show that adding parameter space noise aids in learning much more consistently on these challenging sparse environments.
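Putting the pieces together, a hedged sketch of how the adaptive scale could evolve during training; perturb, ddpg_distance, and adapt_sigma refer to the illustrative helpers sketched earlier, and the policy and replay batch here are stand-ins.

```python
import torch

policy = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 2))
sigma, delta = 0.1, 0.2  # initial scale and target action-space deviation

for episode in range(100):
    perturbed = perturb(policy, sigma)  # fixed for the whole rollout
    # ... collect a rollout with `perturbed`, train `policy` off-policy ...
    states = torch.randn(128, 4)        # stand-in for a replay batch
    d = ddpg_distance(policy(states), perturbed(states))
    sigma = adapt_sigma(sigma, d, delta)
```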
Parameter space noise allows reinforcement learning algorithms to explore by perturbing parameters instead of actions, often leading to significantly improved exploration performance.
1,134
scitldr
Pre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks. However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices. In particular, the input word embedding matrix accounts for a significant proportion of the model's memory footprint, due to the large input vocabulary and embedding dimensions. Knowledge distillation techniques have had success at compressing large neural network models, but they are ineffective at yielding student models with vocabularies different from the original teacher models. We introduce a novel knowledge distillation technique for training a student model with a significantly smaller vocabulary as well as lower embedding and hidden state dimensions. Specifically, we employ a dual-training mechanism that trains the teacher and student models simultaneously to obtain optimal word embeddings for the student vocabulary. We combine this approach with learning shared projection matrices that transfer layer-wise knowledge from the teacher model to the student model. Our method is able to compress the BERT-BASE model by more than 60x, with only a minor drop in downstream task metrics, resulting in a language model with a footprint of under 7MB. Experimental results also demonstrate higher compression efficiency and accuracy when compared with other state-of-the-art compression techniques.

Recently, contextual-aware language models such as ELMo, GPT, BERT and XLNet have been shown to greatly outperform traditional word embedding models, including Word2Vec and GloVe, in a variety of NLP tasks. These pre-trained language models, when fine-tuned on downstream language understanding tasks such as sentiment classification, natural language inference and reading comprehension, have achieved state-of-the-art performance. However, the large number of parameters in these models, often above hundreds of millions, makes it impossible to host them for resource-constrained tasks such as real-time inference on mobile and edge devices. Besides model quantization techniques, which aim to reduce the floating-point precision of the parameters, significant recent research has focused on knowledge distillation techniques. Here, the goal is to train a small-footprint student model by borrowing knowledge, such as a soft predicted label distribution, from a larger pre-trained teacher model. However, a significant bottleneck that has been overlooked by previous efforts is the input vocabulary size and its corresponding word embedding matrix, often accounting for a significant proportion of all model parameters. For instance, the embedding table of the BERT BASE model, comprising over 30K WordPiece tokens (b), accounts for over 21% of the model size. While there has been existing work on reducing NLP model vocabulary sizes, distillation techniques cannot utilize these, since they require the student and teacher models to share the same vocabulary and output space. This profoundly limits their potential to further reduce model sizes.

In this paper, we present two novel ideas to improve the effectiveness of knowledge distillation, in particular for BERT, with the focus on bringing model sizes down to as little as a few megabytes. Our model is among the first to propose using a significantly smaller vocabulary for the student model, learned during distillation.
In addition, instead of distilling solely on the teacher model's final-layer outputs, our model leverages layer-wise teacher model parameters to directly optimize the parameters of the corresponding layers in the student model. Specifically, our contributions are:

• Dual Training: Our teacher and student models have different vocabularies and incompatible tokenizations for the same sequence. To address this during distillation, we feed the teacher model a mix of teacher vocabulary-tokenized and student vocabulary-tokenized words within a single sequence. Coupled with the masked language modeling task, this encourages an implicit alignment of the teacher and student WordPiece embeddings, since the student vocabulary embedding may be used as context to predict a word tokenized by the teacher vocabulary, and vice versa.

• Shared Variable Projections: To minimize the loss of information from reducing the hidden state dimension, we introduce a separate loss to align the teacher and student models' trainable variables. This allows for more direct layer-wise transfer of knowledge to the student model.

Using the combination of dual training and shared variable projections, we train a 12-layer highly-compressed student BERT model, achieving a maximum compression ratio of ∼61.94x (with dimension size 48) compared to the teacher BERT BASE model. We conduct experiments measuring both general language modeling performance and performance on downstream tasks, demonstrating competitive results at high compression ratios for both families of tasks.

Research in neural network model compression has been concomitant with the rise in popularity of neural networks themselves, since these models have often been memory-intensive for the hardware of their time. Work in model compression for NLP applications falls broadly into four categories: matrix approximation, parameter pruning/sharing, weight quantization and knowledge distillation. A family of approaches seeks to compress the matrix parameters of the models by low-rank approximation, i.e., the full-rank matrix parameter is approximated using multiple low-rank matrices, thereby reducing the effective number of model parameters. Another line of work explores parameter pruning and sharing-based methods (; ; ;), which explore the redundancy in model parameters and try to remove redundant weights as well as neurons, for a variety of neural network architectures. Model weight quantization techniques (; a;) focus on mapping model weights to lower-precision integers and floating-point numbers. These can be especially effective with hardware supporting efficient low-precision calculations. More recently, apply quantization to BERT-based transformer models.

Knowledge distillation differs from the other discussed approaches: the smaller student model may be parametrized differently from the bigger teacher model, affording more modeling freedom. Teaching a student model to match the soft output label distributions from a larger model, alongside the hard ground-truth distribution, works well for many tasks, such as machine translation and language modeling. Not limited to the teacher model outputs, some approaches perform knowledge distillation via attention transfer, or via feature maps or intermediate model outputs (; ;). More relevant to the current work, and apply variants of these techniques to BERT model compression by reducing the number of transformer layers.
However, as explained before, these approaches are not immediately applicable to our setting due to incompatible teacher and student model vocabularies, and they do not focus sufficiently on the embedding matrix size.

Our knowledge distillation approach is centered around reducing the number of WordPiece tokens in the model vocabulary. In this section, we first discuss the rationale behind this reduction and the challenges it introduces, followed by our techniques, namely dual training and shared projection.

Figure 1: (Right) A student BERT model trained from scratch with a smaller vocabulary (5K) and hidden state dimension (e.g., 48). During distillation, the teacher model randomly selects a vocabulary to segment each input word. The red and green squares next to the transformer layers indicate trainable parameters for both the student and teacher models; note that our student models have smaller model dimensions. The projection matrices U and V, shown as having representative shapes, are shared across all layers for model parameters that have the same dimensions.

We follow the general knowledge distillation paradigm of training a small student model from a large teacher model. Our teacher model is a 12-layer uncased BERT BASE, trained with 30522 WordPiece tokens and 768-dimensional embeddings and hidden states. We denote the teacher model parameters by θ_t. Our student model consists of an equal number of transformer layers with parameters denoted by θ_s, but with a smaller vocabulary as well as smaller embedding/hidden dimensions, as illustrated in Figure 1. Using the same WordPiece algorithm and training corpus as BERT, we obtain a vocabulary of 4928 WordPieces, which we use for the student model. WordPiece tokens (b) are sub-word units obtained by applying a greedy segmentation algorithm to the training corpus: a desired number (say, D) of WordPieces are chosen such that the segmented corpus is minimal in the number of WordPieces used. A cursory look at both vocabularies reveals that 93.9% of the WordPieces in the student vocabulary also exist in the teacher vocabulary, suggesting room for a reduction in the WordPiece vocabulary size from 30K tokens.

Since we seek to train a general-purpose student language model, we elect to reuse the teacher model's original training objective to optimize the student model, i.e., masked language modeling and next sentence prediction, before any fine-tuning. In the former task, words in context are randomly masked, and the language model needs to predict those words given the masked context. In the latter task, given a pair of sentences, the language model predicts whether the pair is consistent. However, since the student vocabulary is not a complete subset of the teacher vocabulary, the two vocabularies may tokenize the same words differently. As a result, the outputs of the teacher and student model for the masked language modeling task may not align. Even with the high overlap between the two vocabularies, the need to train the student embedding from scratch and the change in embedding dimension preclude existing knowledge distillation techniques, which rely on the alignment of both models' output spaces. As a result, we explore two alternative approaches that enable implicit transfer of knowledge to the student model, which we describe below.
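Before turning to these techniques, a toy sketch of greedy longest-match-first WordPiece segmentation illustrates how the same word can tokenize differently under the two vocabularies; the tiny vocabularies below are invented for illustration and are not the actual 30K and 5K vocabularies.

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first segmentation into WordPieces."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                pieces.append(piece)
                break
            end -= 1
        if end == start:          # no piece matched
            return [unk]
        start = end
    return pieces

teacher_vocab = {"learn", "##ing"}
student_vocab = {"le", "##arn", "##ing"}
print(wordpiece_tokenize("learning", teacher_vocab))  # ['learn', '##ing']
print(wordpiece_tokenize("learning", student_vocab))  # ['le', '##arn', '##ing']
```

The mismatch in the last two lines is exactly why the teacher and student output spaces cannot be aligned directly.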
During distillation, for a given training sequence input to the teacher model, we propose to mix the teacher and student vocabularies by randomly selecting (with a probability p_DT, a hyperparameter) tokens from the sequence to segment using the student vocabulary, with the other tokens segmented using the teacher vocabulary. As illustrated in Figure 1, given the input context ['I', 'like', 'machine', 'learning'], the words 'I' and 'machine' are segmented using the teacher vocabulary (in green), while 'like' and 'learning' are segmented using the student vocabulary (in blue). Similar to cross-lingual training in , this encourages alignment of the representations of the same word as per the teacher and student vocabularies. This is effected through the masked language modeling task: the model now needs to learn to predict words from the student vocabulary using context words segmented using the teacher vocabulary, and vice versa. The expectation is that the student embeddings can be learned effectively this way from the teacher embeddings as well as the model parameters θ_t. Note that we perform dual training only for the teacher model inputs: the student model receives words segmented exclusively using the student vocabulary. Also, during masked language modeling, the model uses different softmax layers for the teacher and the student vocabularies, depending on which one was used to segment the word in question.

Relying solely on teacher model outputs to train the student model may not generalize well. Therefore, some approaches try to align the student model's intermediate predictions to those of the teacher. In our setting, however, since the student and teacher model output spaces are not identical, intermediate model outputs may prove hard to align. Instead, we seek to directly minimize the loss of information from the teacher model parameters θ_t to the student parameters θ_s with smaller dimensions. We achieve this by projecting the model parameters into the same space, to encourage alignment. More specifically, as in Figure 1, we project each trainable variable in θ_t to the same shape as the corresponding variable in θ_s. For example, for all the trainable variables in θ_t with shape 768×768, we learn two projection matrices U ∈ R^{d×768} and V ∈ R^{768×d} to project them into the corresponding space of the student model variable θ_s, where d is the student model's hidden dimension. U and V are common to all BERT model parameters of that dimensionality; in addition, U and V are not needed for fine-tuning or inference after distillation. In order to align the student variable and the teacher variable's projection, we introduce a separate mean square error loss, defined in Equation 1, where ↓ stands for down projection (since the projection is to a lower dimension). The above loss function aligns the trainable variables in the student space. Alternatively, we can project the trainable variables in θ_s to the same shape as those in θ_t. This way, the loss function in Equation 2 (where ↑ denotes up projection) compares the trainable variables in the teacher space.

3.4 OPTIMIZATION OBJECTIVE

Our final loss function includes, in addition to an optional projection loss, masked language modeling cross-entropy losses for the student as well as the teacher models, since the teacher model is trained with dual-vocabulary inputs and is not static. P(y_i = c|θ_s) and P(y_i = c|θ_t) denote the student and teacher model prediction probabilities for class c, respectively, and 1 denotes an indicator function.
Equations 3 and 4 below define the final loss L_final, where the weight on the projection loss is a hyperparameter. To evaluate our knowledge distillation approach, we design two classes of experiments. First, we evaluate the distilled student language models using the masked word prediction task on an unseen evaluation corpus, for an explicit evaluation of the language model. Second, we fine-tune the language model by adding a task-specific affine layer on top of the student language model outputs, on a suite of downstream sentence and sentence-pair classification tasks. This is meant to be an implicit evaluation of the quality of the representations learned by the student language model. We describe these experiments, along with details on training, implementation and our baselines, below.

During the distillation of the teacher BERT model to train the student BERT language model, we utilize the same corpus as was used to train the teacher, i.e., BooksCorpus and English Wikipedia, with whitespace used to tokenize the text into words. We only use the masked language modeling task to calculate the overall distillation loss from Section 3.3, since the next sentence prediction loss hurt performance slightly. Dual training is enabled for teacher model inputs, with p_DT, the probability of segmenting a teacher model input word using the student vocabulary, set to 0.5. For experiments including shared projection, the projection matrices U and V use Xavier initialization. The loss weight coefficient is set to 1 after tuning. It is worth noting that, in contrast to a number of existing approaches, we directly distill the teacher BERT language model, not yet fine-tuned on a downstream task, to obtain a student language model that is task-agnostic. For downstream tasks, we fine-tune this distilled student language model. Distillation is carried out on Cloud TPUs in a 4x4 pod configuration (32 TPU cores overall). We optimize the loss using LAMB for 250K steps, with a learning rate of 0.00125 and a batch size of 4096. Depending on the student model dimension, training took between 2 and 4 days.

We evaluate three variants of our distilled student models: with only dual training of the teacher and student vocabularies (DualTrain), and with dual training along with down-projection (DualTrain + SharedProjDown) or up-projection (DualTrain + SharedProjUp) of the teacher model parameters. For each of these configurations, we train student models with embedding and hidden dimensions 48, 96 and 192, for 9 variants in total, each using a compact 5K-WordPiece vocabulary. Table 1 presents some statistics on these models' sizes: our smallest model contains two orders of magnitude fewer parameters, and requires only 1% of the floating-point operations, when compared to the BERT BASE model. For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as with Patient Knowledge Distillation (PKD), which distills the 12-layer BERT BASE model into 3- and 6-layer BERT models by using the teacher model's hidden states.

Table 1: A summary of our student models' sizes compared to BERT BASE. #Params indicates the number of parameters in the student model, model size is measured in megabytes, and the FLOPS ratio measures the relative number of floating-point operations required for inference on the model.
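The shared-projection and final losses can be written down compactly. The sketch below is our reading of Equations 1-4, not the authors' released code: reusing a single pair U, V for both projection directions is an assumption on our part, the function names are ours, and eps stands for the loss-weight coefficient (set to 1 in the paper).

```python
import torch
import torch.nn.functional as F

d, D = 192, 768                               # student / teacher hidden sizes
U = torch.nn.Parameter(torch.empty(d, D))     # shared across all layers
V = torch.nn.Parameter(torch.empty(D, d))
torch.nn.init.xavier_uniform_(U)
torch.nn.init.xavier_uniform_(V)

def proj_loss_down(theta_t, theta_s):
    # Eq. 1 (our reading): teacher variable projected into the student space.
    return torch.mean((U @ theta_t @ V - theta_s) ** 2)

def proj_loss_up(theta_t, theta_s):
    # Eq. 2 (our reading): student variable projected into the teacher space.
    return torch.mean((V @ theta_s @ U - theta_t) ** 2)

def final_loss(student_logits, teacher_logits, student_labels, teacher_labels,
               theta_pairs, eps=1.0):
    # Masked-LM cross-entropy for both models (the teacher is not static),
    # plus the optional projection term weighted by eps (Eqs. 3-4).
    ce = F.cross_entropy(student_logits, student_labels) \
       + F.cross_entropy(teacher_logits, teacher_labels)
    return ce + eps * sum(proj_loss_down(t, s) for t, s in theta_pairs)
```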
For an explicit evaluation of the generalization of the distilled student language models, we use the Reddit dataset to measure the word mask prediction accuracy of the student models, since the language used on Reddit is different from that in the training corpora. The dataset is preprocessed similarly to the training corpora, except that we do not need to tokenize it using the teacher vocabulary, since we only run and evaluate the student models. For implicit evaluation on downstream language understanding tasks, we fine-tune and evaluate the distilled student models on three tasks from the GLUE benchmark:

• Stanford Sentiment Treebank (SST-2), a two-way sentence sentiment classification task with 67K training instances,
• Microsoft Research Paraphrase Corpus (MRPC), a two-way sentence pair classification task to identify paraphrases, with 3.7K training instances, and
• Multi-Genre Natural Language Inference (MNLI), a three-way sentence pair classification task with 393K training instances, to identify premise-hypothesis relations. There are separate development and test sets for genre-matched and genre-mismatched premise-hypothesis pairs; we tune our models solely on the genre-matched development set.

For all downstream task evaluations, we fine-tune for 10 epochs using LAMB with a learning rate of 0.0002 and a batch size of 32. Since our language models are trained with a maximum sequence length of 128 tokens, we do not evaluate on reading comprehension datasets such as SQuAD or RACE, which require models supporting longer sequences.

Table 2: Masked language modeling task accuracy for the distilled student models and a trained-from-scratch baseline. We observe consistently better performance for our proposed approaches.

Table 2 contains masked word prediction accuracy figures for the different models and the NoKD baseline. We observe that dual training significantly improves over the baseline for all model dimensions, and that both shared projection losses added to dual training further improve the word prediction accuracy. It is interesting to note that for all model dimensions, SharedProjUp, projecting into the teacher space, outperforms SharedProjDown, significantly so for dimension 48. Expectedly, there is a noticeable performance drop going from 192- to 96- to 48-dimensional hidden state models.

Table 3: Results of the distilled models, the teacher model and baselines on the downstream language understanding task test sets, obtained from the GLUE server, along with the size parameters and compression ratios of the respective models compared to the teacher BERT BASE. MNLI-m and MNLI-mm refer to the genre-matched and genre-mismatched test sets for MNLI.

Note that because of the differing teacher and student model vocabularies, masked word prediction accuracy for the teacher BERT BASE model is not directly comparable with that of the student models. Table 3 shows results on the downstream language understanding tasks, as well as model sizes, for our approaches, the BERT BASE teacher model, and the PKD and NoKD baselines. We note that models trained with our proposed approaches perform strongly and consistently improve upon the identically parameterized NoKD baselines, indicating that the dual training and shared projection techniques are effective, without incurring significant losses against the BERT BASE teacher model.
Comparing with the PKD baseline, our 192-dimensional models, which achieve a higher compression rate than either of the PKD models, perform better than the 3-layer PKD baseline and are competitive with the larger 6-layer baseline on task accuracy while being nearly 5 times smaller. Another observation we make is that the performance drop from 192-dimensional to 96-dimensional models is minimal (less than 2% for most tasks). For the MRPC task, in fact, the 96-dimensional model trained with dual training achieves an accuracy of 80.5%, which is higher than even the 6-layer PKD baseline with nearly 12 times as many parameters. Finally, our highly compressed 48-dimensional models also perform respectably: the best 48-dimensional models are in a similar performance bracket as the 3-layer PKD model, a model 25 times larger by memory footprint.

Shared projections and model performance: We see that for downstream task performance, dual training still consistently improves upon the direct fine-tuning approach for virtually all experiments. The effect of shared variable projection, however, is less pronounced, with consistent improvements visible only for MRPC and for the 48-dimensional models, i.e., the smallest dataset and the smallest models in our experiments, respectively. This aligns with our intuition for variable projection as a more direct way to provide a training signal from the teacher model internals, which can assume more importance in a low-data or small-model scenario. However, for larger models and more data, the linear projection of parameters may be reducing the degrees of freedom available to the model, since a linear projection is a fairly simple function for aligning the teacher and student parameter spaces. A related comparison of interest is between up-projection and down-projection of the model variables: we note that up-projection does visibly better on the language modeling task and slightly better on the downstream tasks. The parameters of a well-trained teacher model represent a high-quality local minimum in the teacher space, which may be easier to search for during up-projection.

Vocabulary size tradeoffs: Issues with input vocabulary size are peculiar to problems in natural language processing: they do not always apply to other areas such as computer vision, where a small fixed number of symbols can encode most inputs. There has been some work on reducing input vocabulary sizes for NLP, but typically not targeting model compression. One concern with reducing the vocabularies of NLP models is that it pushes the average tokenized sequence length up, making model training harder. In this work, however, we consider classification tasks on shorter texts, which are not as affected by input sequence lengths as, say, tasks such as machine translation are. Furthermore, many real-world applications revolve around short text inputs, which is why a better trade-off between vocabulary size and sequence length may be worthwhile for such applications.

Order of distillation and fine-tuning: Most of the existing work on distilling language models such as BERT and reporting results on downstream tasks, including some of the baselines in this work, first fine-tunes a teacher model on the downstream tasks, and then distills this model. Our goal in this work, however, is to explore the limits of BERT's language modeling capacity itself, and how much of it is driven by its large WordPiece vocabulary. We leave experiments on distilling fine-tuned teacher models, potentially yielding better results on downstream tasks, to future work.
We proposed two novel ideas to improve the effectiveness of knowledge distillation for BERT, centered on using a significantly smaller vocabulary, as well as smaller embedding and hidden dimensions, for the student BERT language models. Our dual training mechanism encourages implicit alignment of the teacher and student WordPiece embeddings, and shared variable projection allows for faster and more direct layer-wise transfer of knowledge to the student BERT model. Combining the two techniques, we trained a series of highly compressed 12-layer student BERT models. Experiments on these models, evaluating both generalized language modeling performance and four standardized downstream tasks, demonstrate the effectiveness of our proposed methods in terms of both model accuracy and compression efficiency. One future direction of interest is to combine our approach with existing work to reduce the number of layers in the student models, and to explore other approaches, such as low-rank matrix factorization, to transfer model parameters from the teacher space to the student space. In addition, taking into account the frequency distribution of the WordPiece tokens while training embeddings may help optimize the model size further.
We present novel distillation techniques that enable training student models with different vocabularies and compress BERT by 60x with a minor performance drop.
1,135
scitldr
Human reasoning involves recognising common underlying principles across many examples by utilising variables. The by-products of such reasoning are invariants that capture patterns across examples, such as "if someone went somewhere then they are there", without mentioning specific people or places. Humans learn what variables are and how to use them at a young age, and the question this paper addresses is whether machines can also learn and use variables solely from examples, without requiring human pre-engineering. We propose Unification Networks, which incorporate soft unification into neural networks to learn variables and, by doing so, lift examples into invariants that can then be used to solve a given task. We evaluate our approach on four datasets to demonstrate that learning invariants captures patterns in the data and can improve performance over baselines.

Humans have the ability to process symbolic knowledge and maintain symbolic thought. When reasoning, humans do not require combinatorial enumeration of examples but instead utilise invariant patterns with placeholders replacing specific entities. Symbolic cognitive models embrace this perspective, with the human mind seen as an information processing system operating on formal symbols, such as reading a stream of tokens in natural language. The language of thought hypothesis frames human thought as a structural construct with varying sub-components, such as "X went to Y". By recognising what varies across examples, humans are capable of lifting examples into invariant principles that account for other instances. This symbolic thought with variables is learned at a young age through symbolic play. For instance, a child learns that a sword can be substituted with a stick and engage in pretend play. Although variables are inherent in models of computation and symbolic formalisms, as in first-order logic, they are pre-engineered and used to solve specific tasks by means of unification or assignments that bind variables to given values. However, when learning from data only, being able to recognise when and which symbols should take on different values, i.e. symbols that can act as variables, is crucial for lifting examples into general principles that are invariant across multiple instances. Figure 1 shows the invariant learned by our approach: if someone is the same thing as someone else then they have the same colour. With this invariant, our approach can solve all of the training and test examples in task 16 of the bAbI dataset.

In this paper we address the question of whether a machine can learn and use the notion of a variable, i.e. a symbol that can take on different values. For instance, given an example of the form "bernhard is a frog", the machine would learn that the token "bernhard" could be someone else and the token "frog" could be something else. If we consider unification a selection of the most appropriate value for a variable given a choice of values, we can reframe it as a form of attention. Attention models allow neural networks to focus on, i.e. attend to, certain parts of the input, often for the purpose of selecting a relevant portion. Since attention mechanisms are also differentiable, they are often jointly learned within a task. This perspective motivates our idea of a unification mechanism that utilises attention and is therefore fully differentiable, which we refer to as soft unification.
Hence, we propose an end-to-end differentiable neural network approach for learning and utilising the notion of a variable that in return can lift examples into invariants used by the network to perform reasoning tasks. Specifically, we (i) propose a novel architecture capable of learning and using variables by lifting a given example through soft unification, (ii) present the empirical results of our approach on four datasets and (iii) analyse the learned invariants that capture the underlying patterns present in the tasks. Our implementation using Chainer is publicly available at [link removed] (anonymous link provided with submission).

Reasoning with variables involves identifying what variables are, the setting in which they are used, and the process by which they are assigned values. When the varying components, i.e. variables, of an example are identified, the remaining structure can be lifted into an invariant which then accounts for multiple other instances.

Definition 1 (Variable). Given a set of symbols S, a variable X is defined as a pair X := (x, s_d), where s_d ∈ S is the default symbol of the variable and x is a discrete random variable whose support is S. The representation of a variable φ_V(X:s_d) is equal to the expected value of the corresponding random variable x given the default symbol s_d:

φ_V(X:s_d) = E[φ(x) | s_d] = Σ_{s∈S} P(x = s | s_d) φ(s)    (1)

For example, φ could be an embedding, and φ_V(X:s_d) would then become a weighted sum of symbol embeddings as in conventional attention models. The default symbol of a variable is intended to capture the variable's bound meaning, following the idea of referents. We denote variables using X, Y, A, etc., such as X:bernhard, where X is the name of the variable and bernhard the default symbol, as shown in Figure 1.

Definition 2 (Invariant). Given a structure (e.g. list, grid) G over S, an invariant is a pair I := (G, ψ), where G ∈ G is the invariant example, such as a tokenised story with tokens as symbols, and ψ: S → [0, 1] is a function representing the degree to which a symbol is considered a variable. Thus, the final representation φ_I(s) of a symbol s included in G is

φ_I(s) = (1 − ψ(s)) φ(s) + ψ(s) φ_V(X:s)    (2)

the linear interpolation between its representation φ(s) and its variable-bound value with itself as the default symbol, φ_V(X:s). We adhere to the term invariant and refrain from mentioning rules, unground rules, etc. used in logic-based formalisms, e.g. Muggleton & de Raedt, since neither does the invariant structure need to be rule-like nor do the variables carry logical semantics. This distinction is clarified in Section 6.

Definition 3 (Unification). Given an invariant I and an example K ∈ G, unification binds the variables in I to symbols in K. Defined as a function g: I × G → G, unification binds variables by computing the probability mass functions, P in equation 1, and returns the unified representation using equation 2. The probability mass function of a variable X:s_d is

P(x | s_d) = softmax(φ_U(K) φ_U(s_d))    (3)

where φ_U: S → R^d is the unifying feature of a symbol and φ_U(K) ∈ R^{|K|×d} is applied element-wise to the symbols in K. If g is differentiable, it is referred to as soft unification. We distinguish φ from φ_U to emphasise that the unifying properties of the symbols might be different from their representations. For example, φ(bernhard) could represent a specific person, whereas φ_U(bernhard) the notion of someone. Overall, soft unification incorporates 3 learnable components: φ, ψ and φ_U, which denote the base features, variableness and unifying features of a symbol respectively.
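A minimal sketch of Equations 1-3 for a single variable, under our reading of the definitions; the tensor shapes and the function name are illustrative and not taken from the released Chainer code.

```python
import torch
import torch.nn.functional as F

def soft_unify(phi_K, phiU_K, phi_sd, phiU_sd, psi_sd):
    """One variable X:s_d unified against an example K.
    phi_K:  |K| x d symbol embeddings of K
    phiU_K: |K| x d unifying features of K
    phi_sd, phiU_sd: d-dim embedding / unifying feature of the default symbol
    psi_sd: scalar variableness in [0, 1]"""
    p = F.softmax(phiU_K @ phiU_sd, dim=0)          # Eq. 3: attention over K
    phi_V = p @ phi_K                               # Eq. 1: expected value
    return (1 - psi_sd) * phi_sd + psi_sd * phi_V   # Eq. 2: interpolation
```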
Given an upstream, potentially task-specific, network f: G → S, an invariant I ∈ I and an input example K ∈ G with a corresponding desired output a ∈ S, the following holds:

f(g(I, K)) = a    (4)

where f now predicts based on the unified representation produced by g. In this work, we focus on g and the invariants it produces, together with the interaction of f • g. Since soft unification is end-to-end differentiable, it can be incorporated into existing task-specific upstream architectures. We present 3 architectures that model f • g using multi-layer perceptrons (MLP), convolutional neural networks (CNN) and memory networks, to demonstrate the flexibility of our approach. In all cases, the d-dimensional representations of symbols are learnable embeddings φ(s) = O[s]^T E, with E ∈ R^{|S|×d} randomly initialised from a normal distribution and O[s] the one-hot encoding of the symbol. The variableness of a symbol is a learnable weight ψ_w(s) = σ(w_s), where w ∈ R^{|S|} and σ is the sigmoid function. We consider every symbol independently a variable, irrespective of its surrounding context, and leave further contextualised formulations as future work. The underlying intuition of this configuration is that a symbol useful for a correct prediction might need to take on other values for different inputs. This usefulness can be viewed as the inbound gradient to the corresponding w_s parameter, with ψ_w(s) acting as a gate. For further model details, including the size of the embeddings, please refer to Appendix A.

Unification MLP (UMLP) (f: MLP, g: RNN) We combine soft unification with a multi-layer perceptron to process fixed-length inputs. In this case, the structure G is a sequence of symbols with a fixed length l, e.g. a sequence of digits 4234. Given an embedded input φ(k) ∈ R^{l×d}, the upstream MLP computes the output symbol based on the flattened representations, f(φ(k)) = softmax(h E^T), where h ∈ R^d is the output of the last layer. To compute the unifying features φ_U of Definition 3, however, g uses a bi-directional GRU: φ_U(k) = W_U Φ(k), where Φ(k) is the hidden state of the GRU at symbol k and W_U ∈ R^{d×d} is a learnable parameter. This model emphasises the flexibility around the boundary of f • g and that the unifying features can be computed in any differentiable manner.

Unification CNN (UCNN) (f: CNN, g: CNN) Given a grid of embedded symbols φ(K) ∈ R^{w×h×d}, where w is the width and h the height, we use a convolutional neural network such that the final prediction is f(φ(K)) = softmax((W h + b) E^T), where h this time is the result of global max pooling and W, b are learnable parameters. We also model g using a separate convolutional network with the same architecture as f, and set φ_U(k) = c_2(relu(c_1(k))), where c_1, c_2 are the convolutional layers. The grid is padded with 0s to obtain w × h × d after each convolution, such that every symbol has a unifying feature. This model conveys how soft unification can be adapted to the specifics of the domain, for example by using convolutions on a spatially structured input.

Unification Memory Networks (UMN) (f: MemNN, g: RNN) Soft unification does not need to happen prior to f in an f • g fashion, but can also be incorporated at any intermediate stage, multiple times. To demonstrate this ability, we unify the symbols at different memory locations at each iteration of a memory network. Memory networks can handle a list-of-lists structure, such as a tokenised story, as shown in Figure 2.
The memory network f uses the final hidden state of a bi-directional GRU (outer squares in Figure 2) as the sentence representation to compute a context attention. At each iteration, we unify the words between the attended sentences using the same approach as in UMLP, with another bi-directional GRU (inner diamonds in Figure 2) for the unifying features, φ_U(bernhard) = W_U Φ(bernhard). Following equation 2, the new unified representation of the memory slot is computed, and f uses it to perform the next iteration. Concretely, g produces a unification tensor U ∈ R^{M×m×N×d}, where M and m are the number of sentences and words in the invariant respectively, and N is the number of sentences in the example, such that after the context attentions are applied over M and N, we obtain φ(k) ∈ R^{m×d} as the unified sentence at that iteration. Note that, unlike in the UMLP case, the sentences can be of varying length. The prediction is then softmax(W h_I^J + b), where h_I^J is the hidden state of the invariant after J iterations. This setup, however, requires pre-training f such that the context attentions match the correct sentences.

A task might contain different questions, such as "Where is X?" and "Why did X go to Y?". To let the models differentiate between questions and potentially learn different invariants, we extend them with a repository of invariants I ∈ I and aggregate the predictions from each invariant. One simple approach, used in UMLP and UCNN, is to sum the predictions of the invariants, Σ_{I∈I} f • g(I, K). Another approach could be to use features from the invariants, such as memory representations in the case of UMN. For UMN, we weigh the predictions using a bilinear attention η based on the hidden states at the first iteration. To initially form the repository of invariants, we use the bag-of-words representations of the questions and find the most dissimilar ones based on their cosine similarity, as a heuristic to obtain varied examples.

We use 4 datasets consisting of context, query and answer triples (C, q, a): fixed-length sequences of symbols, shapes of symbols in a grid, story-based natural language reasoning with the bAbI dataset, and logical reasoning represented as logic programs; examples are shown in Table 1, with further samples in Appendix B. In each case we use an appropriate model: UMLP for fixed-length sequences, UCNN for grids and UMN for iterative reasoning. We use synthetic datasets, for which the data generating distributions are known, to evaluate not only the quantitative performance but also the quality of the invariants learned by our approach.

Fixed Length Sequences We generate sequences of length l = 4 with 8 unique symbols represented as digits, to predict (i) a constant, (ii) the head of the sequence, (iii) the tail and (iv) the duplicate symbol. We randomly generate 1000 triples and then only take the unique ones, to ensure the test split contains unseen examples. The training is then performed over a 5-fold cross-validation.

Grid To spatially organise symbols, we generate a grid of size 3×3 with 8 unique symbols organised into a 2×2 box of identical symbols, a vertical, diagonal or horizontal sequence of length 3, a cross or a plus shape, and a triangle. In each task we predict (i) the identical symbol, (ii) the head of the sequence, (iii) the centre of the cross or plus and (iv) the corner of the triangle, respectively. We follow the same procedure as for sequences and randomly generate 1000 triples, discarding duplicates.
bAbI The bAbI dataset has become a standard benchmark for evaluating memory-based networks. It consists of 20 synthetically generated natural language reasoning tasks (refer to the original bAbI paper for task details). We take the 1k English set and use 0.1 of the training set as validation. Each token is lower-cased and considered a unique symbol. Following previous works, we also take multiple-word answers to be a unique symbol in S.

Logical Reasoning To demonstrate the flexibility of our approach and distinguish our notion of a variable from that used in logic-based formalisms, we generate logical reasoning tasks in the form of logic programs, using a previously published generation procedure. The tasks involve learning f(C, Q) = True ⟺ C ⊨ Q over 12 classes of logic programs exhibiting varying paradigms of logical reasoning, including negation by failure. We generate 1k and 10k logic programs per task for training, with 0.1 as validation, and another 1k for testing. We set the arity of literals to 1 or 2, using one random character from the English alphabet for predicates and constants, e.g. p(p), and an upper-case character for logical variables, e.g. p(X).

We probe three aspects of soft unification: the impact of unification on performance over unseen data, the effect of multiple invariants, and data efficiency. To that end, we train UMLP and UCNN with and without unification, and UMN with pre-training using 1 or 3 invariants, over either the entire training set or only 50 examples. Every model is trained 3 times via back-propagation using Adam on an Intel Core i7-6700 CPU, using the following objective function:

L = λ_K L_nll(f(K), a) + λ_U L_nll(f(g(I, K)), a) + τ Σ_{s∈S} ψ(s)    (5)

where L_nll is the negative log-likelihood, and the sparsity regularisation over ψ, with τ = 0.1, discourages the models from utilising a spurious number of variables. For UMLP and UCNN, we set λ_K = 0, λ_U = 1 to train just the unified output, and the converse for the non-unifying versions. To pre-train the UMN, we start with λ_K = 1, λ_U = 0 for 40 epochs, then set λ_U = 1 to jointly train the unified output. For iterative tasks, the mean squared error between the hidden states h_I^j and h_K^j at each iteration j and, in the strongly supervised cases, the negative log-likelihood for the context attentions, using the provided supporting facts, are also added to the objective function. Further details, such as batch size and total number of epochs, are available in Appendix C.

Figure 3: Test accuracy over iterations for Unification MLP and Unification CNN models with 1 invariant versus no unification. We observe that with soft unification the models achieve higher accuracy with fewer iterations than their plain counterparts for both per-task training sizes.

Figure 3 portrays how soft unification generalises better to unseen examples in the test sets (the same sequence or grid never appears in both the training and test sets, as outlined in Section 4) than plain models do. Despite f • g having more trainable parameters than f alone, the models with unification not only maintain higher accuracy at each iteration and solve the tasks in as few as 250 iterations with ≤ 1000 training examples, but also improve accuracy by ≈ 0.3 when trained with only ≤ 50 examples per task. We believe soft unification architecturally biases the models towards learning structural patterns, which in return achieves better results in recognising common patterns of symbols across examples.
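Our reading of the training objective above (Eq. 5) in code form; the function and argument names are ours, and psi is assumed to be the vector of variableness scores σ(w_s).

```python
import torch.nn.functional as F

def unification_objective(f, g, I, K, answer, psi, lam_K, lam_U, tau=0.1):
    """Eq. 5: weighted NLL terms plus L1-style sparsity on psi in [0, 1].
    UMLP/UCNN use (lam_K, lam_U) = (0, 1); UMN pre-training uses (1, 0)."""
    loss = lam_K * F.cross_entropy(f(K), answer)
    loss = loss + lam_U * F.cross_entropy(f(g(I, K)), answer)
    return loss + tau * psi.sum()
```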
Results with multiple invariants are identical, and the models seem to ignore the extra invariants, owing to the fact that the tasks can be solved with a single invariant and the regularisation applied to ψ zeroes out unnecessary variables; further results are in Appendix D. The fluctuations in accuracy around iterations 750 to 1000 in UCNN are also caused by penalising ψ, which forces the model to relearn the task with fewer variables halfway through training. Following Tables 2 and 3, we observe a trend of better performance with strong supervision, more data per task and using only 1 invariant. We believe strong supervision aids with selecting the correct sentences to unify; in a weak setting, the model attempts to unify arbitrary context sentences, often failing to follow the iterative reasoning chain. The increase in performance with more data and strong supervision is consistent with previous work, reflecting how f • g can be bounded by the efficacy of f modelled as a memory network. As a result, only in the supervised case do we observe a minor improvement over MemNN, by 0.7 in Table 2, and no improvement in the weak case over comparable memory-based networks in Table 3. This dependency on f also limits the ability of f • g to learn from 50 examples per task, failing 17/20 and 12/12 of the bAbI and logical reasoning tasks respectively. The increase in error rate with 3 invariants, we speculate, stems from having more parameters and more pathways in the model, rendering training more difficult.

6 ANALYSIS

After training, we can extract the learned invariants by applying a threshold, ∀s ∈ S : ψ(s) > t, indicating whether a symbol is used as a variable or not. We set t = 0.0 for all datasets except bAbI, for which we use t = 0.1. The magnitude of this threshold seems to depend on the amount of regularisation τ (equation 5) and the number of training steps, along with the batch size, all of which control how much ψ is pushed towards 0. The sample invariants shown in Figure 4 describe the common patterns present in the tasks, with the parts that contribute towards the final answer becoming variables. Extra symbols such as "is" or "travelled" do not emerge as variables, as shown in Figure 4a; we attribute this behaviour to the fact that changing the token "travelled" to "went" does not influence the prediction, but changing the action, e.g. the value of Z:left to 'picked', does. However, depending on the random initialisation, our approach can convert an arbitrary symbol into a variable and let f compensate for the unifications it produces. For example, the invariant "X:8 5 2 2" could predict the tail of another example by unifying the head with the tail using the unifying features φ_U (equation 3) of those symbols. Further examples are shown in Appendix D. Pre-training f, as done in UMN, seems to produce more robust and consistent invariants compared to immediately training f • g since, we speculate, by equation 4 f might encourage g(I, K) ≈ K.

Interpretability versus Ability A desired property of interpretable models is transparency. A novel outcome of the learned invariants in our approach is that they provide an approximation of the underlying general principle present in the data, such as the structure of multi-hop reasoning shown in Figure 4e. However, certain aspects regarding the ability of the model, such as how it performs temporal reasoning, are still hidden inside f. In Figure 4b, although we observe Z:morning as a variable, the overall learned invariant captures nothing about how changing the value of Z:morning alters the behaviour of f.
The model might look before or after a certain time point at which X:bill went somewhere, depending on what Z:morning binds to.

Figure 4: Invariants learned across the four datasets using the three architectures. For the iterative reasoning datasets, bAbI and logical reasoning, the invariants are taken from the strongly supervised UMN. Panel (e) shows logical reasoning task 5 with arity 1: the model captures how S:n could entail X:i in a chain.

Without the regularising term on ψ(s), we initially noticed the models using what one might call extra symbols as variables and binding them to the same value, occasionally producing unifications such as "bathroom bathroom to the bathroom", with f still predicting, perhaps unsurprisingly, the correct answer as bathroom. Hence, regularising ψ with the right amount τ in equation 5, to reduce the capacity of unification, seems critical for extracting not just any invariant but one that represents the common structure. Soft unification, from equation 3, reveals three main patterns: one-to-one, one-to-many and many-to-one bindings, as shown in Figure 5; further examples are in Appendix D. Figure 5a captures what one might expect unification to look like, with variables unifying with their corresponding counterparts, e.g. X:bernhard with brian and Y:frog with lion. However, occasionally the model can optimise to use fewer variables and squeeze the required information into a single variable, for example by binding Y:bathroom to john and kitchen, as shown in Figure 5b. We believe this occurs because the sparsity constraint on ψ(s) encourages the model to be as conservative as possible. Finally, if there are more variables than needed, as in Figure 5c, we observe a many-to-one binding, with Y:w and Z:e mapping to the same constant q. This behaviour begs the question of how the model differentiates between p(q) and p(q, q). We speculate the model uses the magnitudes of ψ(w) = 0.037 and ψ(e) = 0.042 to encode the difference, despite both variables unifying with the same constant.

Learning an underlying general principle in the form of an invariant is often the means for arguing generalisation in neural networks. For example, Neural Turing Machines are tested on previously unseen sequences to support the view that the model might have captured the underlying pattern or algorithm. In fact, the authors of memory networks claim that "MemNNs can discover simple linguistic patterns based on verbal forms such as (X, dropped, Y), (X, took, Y) or (X, journeyed to, Y) and can successfully generalise the meaning of their instantiations." However, this claim is based on the output of f, and unfortunately it is unknown whether the model has truly learned such a representation or indeed is utilising it. Our approach sheds light on this ambiguity and presents these linguistic patterns explicitly as invariants, ensuring their utility through g without solely analysing the output of f on previously unseen symbols. Although we may associate these invariants with our existing understanding of the task and mistakenly anthropomorphise the machine, for example by thinking it has learned X:mary as someone, it is important to acknowledge that these are just symbolic patterns. In these cases, our interpretations do not necessarily correspond to any understanding of the machine, relating to the Chinese room argument made by Searle. Learning invariants by lifting ground examples is related to least common generalisation, by which inductive inference is performed on facts, such as generalising went(mary,kitchen) and went(john,garden) to went(X,Y).
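A small sketch of the invariant-extraction step described in Section 6, with illustrative names; ψ is assumed to be available as a dictionary of per-symbol scores.

```python
def extract_invariant(symbols, psi, t=0.1):
    """Render symbols with variableness above threshold t as variables,
    e.g. 'mary' -> 'X:mary', the way the Figure 4 invariants are printed."""
    names = "XYZABC"
    out, i = [], 0
    for s in symbols:
        if psi.get(s, 0.0) > t:
            out.append(f"{names[i % len(names)]}:{s}")
            i += 1
        else:
            out.append(s)
    return " ".join(out)

psi = {"mary": 0.9, "kitchen": 0.8}
print(extract_invariant("mary went to the kitchen".split(), psi))
# X:mary went to the Y:kitchen
```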
Unlike in a predicate logic setting, our approach allows for soft alignment and therefore generalisation between varying-length sequences. Existing neuro-symbolic systems focus on inducing rules that adhere to given logical semantics of what variables and rules are. For example, δILP constructs a network by rigidly following the given semantics of first-order logic. Similarly, Lifted Relational Neural Networks ground first-order logic rules into a neural network, while Neural Theorem Provers build neural networks using backward-chaining on a given knowledge base with templates. However, in these systems the notion of a variable is pre-defined rather than learned, with a focus on presenting a practical approach to solving certain problems, whereas our motivation stems from a cognitive perspective.

At first it may seem that the learned invariants, Section 6, make the model more interpretable; however, this transparency is not of the model f but of the data. The invariant captures patterns that potentially approximate the data generating distribution, but we still do not know how the model f uses them upstream. Thus, from the perspective of explainable artificial intelligence (XAI), learning invariants or interpreting them does not constitute an explanation of the reasoning model f, even though "if someone goes somewhere then they are there" might look like one. Instead, it can be perceived as causal attribution, in which someone being somewhere is attributed to them going there. This perspective also relates to gradient-based model explanation methods such as Layer-Wise Relevance Propagation and Grad-CAM. Consequently, a possible view on ψ, equation 2, is as a gradient-based usefulness measure, such that a symbol utilised upstream by f to determine the answer becomes a variable, similar to how a group of pixels in an image contributes more to its classification. Finally, one can argue that our model maintains a form of counterfactual thinking, in which soft unification g creates counterfactuals on the invariant example to alter the output of f towards the desired answer, equation 4. The question of where Mary would have been if Mary had gone to the garden instead of the kitchen is the process by which an invariant is learned through multiple examples during training. This view relates to methods of causal inference, in which counterfactuals are vital, as demonstrated in structural causal models.

We presented a new approach for learning variables and lifting examples into invariants through the use of soft unification. Evaluating on four datasets, we analysed how Unification Networks perform comparatively to existing similar architectures while having the benefit of lifting examples into invariants that capture the underlying patterns present in the tasks. Since our approach is end-to-end differentiable, we plan to apply this technique to multi-modal tasks in order to yield multi-modal invariants, for example in visual question answering.

A MODEL DETAILS

Unification MLP (UMLP) To model f as a multi-layer perceptron, we take symbol embeddings of size d = 16 and flatten sequences of length l = 4 into an input vector φ(k) ∈ R^{64}. The MLP consists of 2 hidden layers with tanh non-linearity, of sizes 2d and d respectively. To process the query, we concatenate the one-hot encoding of the task id to φ(k), yielding a final input of size 64 + 4 = 68. For the unification features φ_U, we use a bi-directional GRU with hidden size d and an initial state of 0.
The hidden state at each symbol is passed through a linear transformation to give φ_U(s) = W_U Φ(s), where Φ(s) is the hidden state of the biGRU. The variable assignment is then computed as an attention over the example symbols, according to equation 3.

Unification CNN (UCNN) We take symbol embeddings of size d = 32 to obtain an input grid φ(K) ∈ R^{3×3×32}. Similar to UMLP, for each symbol we append the task id as a one-hot vector to get an input of shape 3 × 3 × (32 + 4). Then f consists of 2 convolutional layers with d filters each, a kernel size of 3 and a stride of 1. We use relu non-linearity between the layers. We pad the grid with 2 columns and 2 rows to 5 × 5, so that the output of the convolutions again yields a hidden output H ∈ R^{3×3×d} of the same shape. As the final hidden output h, we take a global max pool over H to obtain h ∈ R^d. The unification function g is modelled identically to f without the max pooling, such that φ_U(K_ij) = H_ij, where H is the hidden output of the convolutional layers.

Unification Memory Networks (UMN) Unlike the previous architectures, with UMN we interleave g into f. We use embedding sizes of d = 32 and model f with an iterative memory network. We take the final hidden state of a bi-directional GRU with initial state 0, Φ_M, to represent the sentences of the context C and the query q as d-dimensional vectors, M_i = Φ_M(C_i) and m_q = Φ_M(q). The initial state of the memory network is h^0 = m_q. At each iteration j, the context attention β^j over the memory slots is computed from the features ρ(M_i, h^j), where Φ_A is another d-dimensional bi-directional GRU used in this computation and ρ(x, y) = [x; y; x ⊙ y; (x − y)²], with ⊙ the element-wise multiplication and [;] the concatenation of vectors. Taking β^j as the context attention, we obtain the next state of the memory network, and iterate J times, with J fixed in advance. The final prediction becomes f(C, q) = softmax(W h^J + b). All weight matrices W and bias vectors b are independent but are tied across iterations.

B GENERATED DATASET SAMPLES

Table 5: Training sizes for the randomly generated fixed-length sequence and grid tasks with 8 unique symbols. The reason Grid task (i) is smaller is that there are at most 32 combinations of 2 × 2 boxes in a 3 × 3 grid with 8 unique symbols.

Task  Sequences       Grid
i     704.7 ± 12.8     25.6 ± 1.8
ii    709.4 ± 13.8    623.7 ± 14.1
iii   709.7 ± 14.0    768.2 ± 12.5
iv    624.8 ± 12.4    795.2 ± 10.3

C TRAINING DETAILS

C.1 UNIFICATION MLP & CNN Both unification models are trained with a 5-fold cross-validation over the generated datasets, for 2000 iterations with a batch size of 64. We don't use any weight decay, and we save the training and test accuracies every 10 iterations, as presented in Figure 3.

For UMN, we again use a batch size of 64 and pre-train f for 40 epochs, then f together with g for another 260 epochs. We use epochs for UMN since the dataset sizes are fixed. To learn g alongside f, we combine the error signals from the unification of the invariant and the example. Following equation 4, the objective function incorporates not only the negative log-likelihood L_nll of the answer but also the mean squared error between the intermediate states h_I^j and h_K^j at each iteration, as an auxiliary loss. We pre-train by setting λ_U = 0 for 40 epochs and then set λ_U = 1. For strong supervision, we also compute the negative log-likelihood L_nll for the context attention β^j, described in Appendix A, at each iteration, using the supporting facts of the tasks. We apply a dropout of 0.1 to all recurrent neural networks used and, only for the bAbI dataset, weight decay with a coefficient of 0.001.

Figure 6: Results of Unification MLP and CNN for an increasing number of invariants.
There is no impact on performance when more invariants per task are given. Upon closer inspection, we noticed the models ignore the extra invariants and only use 1. We speculate the regularisation ψ encourages the models to use a single 1 invariant. Xsandra went back to the Y:bathroom is X:sandra in the Y:bathroom yes Figure 9: Invariants learned that do not match the data generating distribution from UMLP and UCNN using ≤ 1000 examples to train. In these instances the unification still bind to the the correct symbols in order to predict the desired answer; quantitatively we get the same . Variable default symbols are omitted for clarity. (a) bAbI task 2. When a variable is unused in the next iteration, e.g. Z:football, it unifies with random tokens often biased by position. (b) Logical reasoning task 1. A one-to-one alignment is created between predicates and constants. (c) Logical reasoning task 3. Arity 1 atom forced to bind with arity 2 creates a one-to-many mapping.
End-to-end learning of invariant representations with variables across examples, such as "if someone went somewhere then they are there".
1,136
scitldr
We propose a new learning-based approach to solve ill-posed inverse problems in imaging. We address the case where ground truth training samples are rare and the problem is severely ill-posed---both because of the underlying physics and because we can only get few measurements. This setting is common in geophysical imaging and remote sensing. We show that in this case the common approach of directly learning the mapping from the measured data to the reconstruction becomes unstable. Instead, we propose to first learn an ensemble of simpler mappings from the data to projections of the unknown image into random piecewise-constant subspaces. We then combine the projections to form a final reconstruction by solving a deconvolution-like problem. We show experimentally that the proposed method is more robust to measurement noise and to corruptions not seen during training than a directly learned inverse.

A variety of imaging inverse problems can be discretized to a linear system y = Ax + η, where y ∈ R^M is the measured data, A ∈ R^{M×N} is the imaging or forward operator, x ∈ X ⊂ R^N is the object being probed by applying A (often called the model), and η is the noise. Depending on the application, the set of plausible reconstructions X could model natural, seismic, or biomedical images. In many cases the resulting inverse problem is ill-posed, either because of the poor conditioning of A (a consequence of the underlying physics) or because M ≪ N. A classical approach to solve ill-posed inverse problems is to minimize an objective functional regularized via a certain norm (e.g. ℓ1, ℓ2, or the total variation (TV) seminorm) of the model. These methods promote general properties such as sparsity or smoothness of reconstructions, sometimes in combination with learned synthesis or analysis operators, or dictionaries (BID44).

In this paper, we address situations with very sparse measurement data (M ≪ N), so that even a coarse reconstruction of the unknown model is hard to get with traditional regularization schemes. Unlike artifact-removal scenarios, where applying a regularized pseudoinverse of the imaging operator already brings out considerable structure, we look at applications where standard techniques cannot produce a reasonable image (Figure 1). This highly unresolved regime is common in geophysics and requires alternative, more involved strategies (BID12).

An appealing alternative to classical regularizers is to use deep neural networks. For example, generative models (GANs) based on neural networks have recently achieved impressive results in the regularization of inverse problems (BID7; BID29). However, a difficulty in geophysical applications is that there are very few examples of ground truth models available for training (sometimes none at all). Since GANs require many, they cannot be applied to such problems. This suggests looking for methods that are not very sensitive to the training dataset. Conversely, it means that the sought reconstructions are less detailed than what is expected in data-rich settings; for an example, see the reconstructions of the Tibetan plateau (BID51).

Figure 1: We reconstruct an image x from its tomographic measurements. In moderately ill-posed problems, conventional methods based on the pseudoinverse and regularized non-negative least squares (x ∈ R^N, with N the image dimension) give correct structural information. In fact, total variation (TV) approaches give very good results. A neural network (BID23) can be trained to directly invert and remove the artifacts (NN).
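As a point of reference, the two conventional baselines from Figure 1 can be sketched in a few lines; this is a toy illustration where the random A merely stands in for a discretized imaging operator, and the variable names are our own.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
M, N = 300, 1024                        # few measurements, many unknowns
A = rng.standard_normal((M, N))         # stand-in for the imaging operator
x_true = np.abs(rng.standard_normal(N))
y = A @ x_true + 0.01 * rng.standard_normal(M)

x_pinv = np.linalg.pinv(A) @ y          # pseudoinverse reconstruction
x_nnls, _ = nnls(A, y)                  # regularized-by-positivity least squares
```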
In a severely ill-posed problem, on the other hand (explained in FIG2), with insufficient ground truth training data, neither the classical techniques nor a neural network recover salient geometric features.

In this paper, we propose a two-stage method to solve ill-posed inverse problems using random low-dimensional projections and convolutional neural networks. We first decompose the inverse problem into a collection of simpler learning problems of estimating projections into random (but structured) low-dimensional subspaces of piecewise-constant images. Each projection is easier to learn in terms of generalization error (BID10), thanks to its lower Lipschitz constant. In the second stage, we solve a new linear inverse problem that combines the estimates from the different subspaces. We show that this converts the original problem with possibly non-local (often tomographic) measurements into an inverse problem with localized measurements, and that in fact, in expectation over random subspaces, the problem becomes a deconvolution. Intuitively, projecting into piecewise-constant subspaces is equivalent to estimating local averages, a simpler problem than estimating individual pixel values. Combining the local estimates lets us recover the underlying structure. We believe that this technique is of independent interest in addressing inverse problems. We test our method on linearized seismic traveltime tomography (BID8; BID20) with sparse measurements and show that it outperforms learned direct inversion in the quality of achieved reconstructions, robustness to measurement errors, and (in)sensitivity to the training data. The latter is essential in domains with insufficient ground truth images.

Figure 2: Regularization by Λ random projections: 1) each orthogonal projection is approximated by a convolutional neural network which maps a non-negative least squares reconstruction of an image to its projection onto a lower-dimensional subspace of Delaunay triangulations; 2) the projections are combined to estimate the original image using regularized least squares.

Although neural networks have long been used to address inverse problems (BID32; BID21; BID42), the past few years have seen the number of related deep learning papers grow exponentially. The majority address biomedical imaging (BID16; BID22), with several special issues and review papers (BID28; BID30) dedicated to the topic. All these papers address reconstruction from subsampled or low-quality data, often motivated by reduced scanning time or lower radiation doses. Beyond biomedical imaging, machine learning techniques are emerging in geophysical imaging (BID3; BID25; BID5), though at a slower pace, perhaps partly due to the lack of standard open datasets. Existing methods can be grouped into non-iterative methods that learn a feed-forward mapping from the measured data y (or some standard manipulation of it, such as the adjoint or a pseudoinverse) to the model, and iterative methods with
Generative models A rather different take was proposed in the context of compressed sensing where the reconstruction is constrained to lie in the range of a pretrained generative network BID6 ). Their scheme achieves impressive on random sensing operators and comes with theoretical guarantees. However, training generative networks requires many examples of ground truth and the method is inherently subject to dataset bias. Here, we focus on a setting where ground-truth samples are very few or impossible to obtain. There are connections between our work and sketching BID15; BID35 ) where the learning problem is also simplified by random low-dimensional projections of some object-either the data or the unknown reconstruction itself BID54 ). This also exposes natural connections with learning via random features BID39 ). The two stages of our method are (i) decomposing a "hard" learning task of directly learning an unstable operator into an ensemble of "easy" tasks of estimating projections of the unknown model into low-dimensional subspaces; and (ii) combining these projection estimates to solve a reformulated inverse problem for x. The two stages are summarized in Figure 2. While our method is applicable to continuous and non-linear settings, we focus on linear finite-dimensional inverse problems. Statistical learning theory tells us that the number of samples required to learn an M -variate LLipschitz function to a given sup-norm accuracy is O(L M) BID10 ). While this is proved for scalar-valued multivariate maps, it is reasonable to expect the same scaling in L to hold for vector-valued maps. This motivates us to study Lipschitz properties of the projected inverse maps. We wish to reconstruct x, an N -pixel image from X ⊂ R N where N is large (we think of x as an √ N × √ N discrete image). We assume that the map from x ∈ X to y ∈ R M is injective so that it is invertible on its range, and that there exists an L-Lipschitz (generally non-linear) inverse G, DISPLAYFORM0 In order for the injectivity assumption to be reasonable, we assume that X is a low-dimensional manifold embedded in R N of dimension at most M, where M is the number of measurements. Since we are in finite dimension, injectivity implies the existence of L BID45 ). Due to ill-posedness, L is typically large. Consider now the map from the data y to a projection of the model x into some K-dimensional subspace S, where K N. Note that this map exists by construction (since A is injective on X), and that it must be non-linear. To see this, note that the only consistent 2 linear map acting on y is an oblique, rather than an orthogonal projection on S (cf. Section 2.4 in BID48). We explain this in more detail in Appendix A.Denote the projection by P S x and assume S ⊂ R N is chosen uniformly at random. 3 We want to evaluate the expected Lipschitz constant of the map from y to P S x, noting that it can be written as P S • G: DISPLAYFORM1 where the first inequality is Jensen's inequality, and the second one follows from DISPLAYFORM2 and the observation that E P S P S = K N I N. In other words, random projections reduce the Lipschitz constant by a factor of K/N on average. Since learning requires O(L K) samples, this allows us to work with exponentially fewer samples and makes the learning task easier. Conversely, given a fixed training dataset, it gives more accurate estimates. The above example uses unstructured random subspaces. 
In many inverse problems, such as inverse scattering BID4; Di Cristo and Rondi FORMULA8 ), a judicious choice of subspace family can give exponential improvements in Lipschitz stability. Particularly, it is favorable to use piecewiseconstant images: x = K k=1 x k χ k, with χ k being indicator functions of some domain subset. Motivated by this observation, we use piecewise-constant subspaces over random Delaunay triangle meshes. The Delaunay triangulations enjoy a number of desirable learning-theoretic properties. For function learning it was shown that given a set of vertices, piecewise linear functions on Delaunay triangulations achieve the smallest sup-norm error among all triangulations .We sample Λ sets of points in the image domain from a uniform-density Poisson process and construct Λ (discrete) Delaunay triangulations with those points as vertices. Let S = {S λ | 1 ≤ λ ≤ Λ} be the collection of Λ subspaces of piecewise-constant functions on these triangulations. Let further G λ be the map from y to the projection of the model into subspace S λ, G λ y = P S λ x. Instead of learning the "hard" inverse mapping G, we propose to learn an ensemble of simpler mappings {G λ} Λ λ=1. We approximate each G λ by a convolutional neural network, Γ θ(λ) (y): R N → R N, parameterized by a set of trained weights θ(λ). Similar to Jin et al. FORMULA3, we do not use the measured data y ∈ R M directly as this would require the network to first learn to map y back to the image domain; we rather warm-start the reconstruction by a non-negative least squares reconstruction, y ∈ R N, computed from y. The weights are chosen by minimizing empirical risk: DISPLAYFORM0 where DISPLAYFORM1 is a set of J training models and non-negative least squares measurements. By learning projections onto random subspaces, we transform our original problem into that of estimating DISPLAYFORM0. To see how this can be done, ascribe to the columns of B λ ∈ 2 Consistent meaning that if x already lives in S, then the map should return x.3 One way to construct the corresponding projection matrix is as P S = W W †, where W ∈ R N ×K is a matrix with standard iid Gaussian entries. N ×K a natural orthogonal basis for the subspace S λ, B λ = [χ λ,1, . . ., χ λ,K], with χ λ,k being the indicator function of the kth triangle in mesh λ. Denote by q λ def = q λ (y) the mapping from the data y to an estimate of the expansion coefficients of x in the basis for S λ: DISPLAYFORM0.., q Λ ∈ R KΛ; then we can estimate x using the following reformulated problem: DISPLAYFORM1 and the corresponding regularized reconstruction: DISPLAYFORM2 with ϕ(x) chosen as the TV-seminorm x TV. The regularization is not essential. As we show experimentally, if KΛ is sufficiently large, ϕ(x) is not required. Note that solving the original problem directly using x TV regularizer fails to recover the structure of the model (Figure 1). Since the true inverse map G has a large Lipschitz constant, it would seem reasonable that as the number of mesh subspaces Λ grows large (and their direct sum approaches the whole ambient space R N), the Lipschitz properties of G should deteriorate as well. Denote the unregularized inverse mapping in y → x by G. Then we have the following estimate: DISPLAYFORM0 with σ min (B) the smallest (non-zero) singular value of B and L K the Lipschitz constant of the stable projection mappings q λ. 
Indeed, we observe empirically that σ_min(B)^{-1} grows large as the number of subspaces increases, which reflects the fact that although individual projections are easier to learn, the full-resolution reconstruction remains ill-posed. Estimates of individual subspace projections give correct local information: they convert possibly non-local measurements (e.g. integrals along curves in tomography) into local ones. The key is that these local averages (subspace projection coefficients) can be estimated accurately (see Section 4).

To further illustrate what we mean by correct local information, consider a simple numerical experiment with our reformulated problem, q = B^T x, where x is an all-zero image with a few pixels "on". For the sake of clarity we assume the coefficients q are perfect. Recall that B is a block matrix comprising Λ subspace bases stacked side by side. It is a random matrix because the subspaces are generated at random, and therefore the reconstruction x̃ = (B^T)^† q is also random. We approximate E x̃ by simulating a large number of Λ-tuples of meshes and averaging the obtained reconstructions. Results are shown in FIG1 for different numbers of triangles per subspace, K, and subspaces per reconstruction, Λ. As Λ or K increases, the expected reconstruction becomes increasingly localized around the non-zero pixels. The following proposition (proved in Appendix B) tells us that this phenomenon can be modeled by convolution.

Proposition 1. Let x̃ be the solution to q = B^T x given as x̃ = (B^T)^† q. Then there exists a kernel κ(u), with u a discrete index, such that E x̃ = x * κ. Furthermore, κ(u) is isotropic.

While FIG1 suggests that more triangles are better, we note that this increases the subspace dimension, which makes getting correct projection estimates harder. Instead we choose to stack more meshes with a smaller number of triangles. Intuitively, since every triangle average depends on many measurements, estimating each average is more robust to measurement corruptions, as evidenced in Section 4. Accurate estimates of local averages enable us to recover the geometric structure while being more robust to data errors. A Monte-Carlo version of this experiment is sketched below.
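The following NumPy/SciPy sketch (illustrative sizes, not the paper's code; it repackages the indicator-basis construction from the earlier sketch as a helper) averages minimum-norm reconstructions (B^T)^† q over random mesh tuples for an image with a single "on" pixel, mimicking the experiment behind Proposition 1.

```python
import numpy as np
from scipy.spatial import Delaunay

def random_mesh_basis(n, n_pts, rng):
    # Indicator basis of a random Delaunay mesh over an n x n pixel grid.
    tri = Delaunay(rng.uniform(0, n, size=(n_pts, 2)))
    yy, xx = np.mgrid[0:n, 0:n]
    labels = tri.find_simplex(
        np.stack([xx.ravel() + 0.5, yy.ravel() + 0.5], axis=1))
    B = np.zeros((n * n, tri.nsimplex))
    for k in range(tri.nsimplex):
        B[labels == k, k] = 1.0
    return B

rng = np.random.default_rng(0)
n, Lam, trials = 32, 20, 100
x = np.zeros(n * n)
x[n * n // 2 + n // 2] = 1.0                    # single "on" pixel

acc = np.zeros(n * n)
for _ in range(trials):
    B = np.hstack([random_mesh_basis(n, 12, rng) for _ in range(Lam)])
    q = B.T @ x                                 # perfect coefficients
    acc += np.linalg.pinv(B.T) @ q              # min-norm reconstruction
mean_rec = (acc / trials).reshape(n, n)         # localizes around the pixel
print(mean_rec.max())
```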
4 NUMERICAL RESULTS

To demonstrate our method's benefits we consider linearized traveltime tomography (BID20; BID8), but we note that the method applies to any inverse problem with scarce data. In traveltime tomography, we measure N(N − 1)/2 wave travel times between N sensors, as in FIG2. Travel times depend on the medium property called slowness (the inverse of speed), and the task is to reconstruct the spatial slowness map. Image intensities are a proxy for slowness maps: the lower the image intensity, the higher the slowness. In the straight-ray approximation, the problem data is modeled as integrals along line segments,

    y_{ij} = ∫_{[s_i, s_j]} x(u) du,

where x: R^2 → R_+ is the continuous slowness map and s_i, s_j are sensor locations. In our experiments, we use a 128 × 128 pixel grid with 25 sensors (300 measurements) placed uniformly on an inscribed circle, and corrupt the measurements with zero-mean iid Gaussian noise. We generate random Delaunay meshes, each with 50 triangles. The corresponding projector matrices compute average intensity over triangles to yield a piecewise-constant approximation P_{S_λ} x of x. We test two distinct architectures: (i) ProjNet, tasked with estimating the projection into a single subspace; and (ii) SubNet, tasked with estimating the projection over multiple subspaces.

The ProjNet architecture is inspired by the FBPConvNet (BID23) and the U-Net (BID41), as shown in Figure 11a in the appendix. Crucially, we constrain the network output to live in S_λ by fixing the last layer of the network to be a projector, P_{S_λ} (Figure 11a). A similar trick in a different context was proposed in (BID43). We combine projection estimates from many ProjNets by regularized linear least squares to get the reconstructed model (cf. Figure 2), with the regularization parameter determined on five held-out images. A drawback of this approach is that a separate ProjNet must be trained for each subspace. This motivates the SubNet (shown in Figure 11b). Each input to SubNet is the concatenation of a non-negative least squares reconstruction and 50 basis functions, one for each triangle, forming a 51-channel input. This approach scales to any number of subspaces, which allows us to get visually smoother reconstructions without any further explicit regularization. On the other hand, the projections are less precise, which can lead to slightly degraded performance.

As a quantitative figure of merit we use the signal-to-noise ratio (SNR). The input SNR is defined as 10 log10(σ²_signal / σ²_noise), where σ²_signal and σ²_noise are the signal and noise variances; the output SNR is defined as sup_{a,b} 20 log10(‖x‖2 / ‖x − a x̂ − b‖2), with x the ground truth and x̂ the reconstruction (sketched below). 130 ProjNets are trained for 130 different meshes with measurements at various SNRs. Similarly, a single SubNet is trained with 350 different meshes and the same noise levels. We compare the ProjNet and SubNet reconstructions with a direct U-Net baseline convolutional neural network that reconstructs images from their non-negative least squares reconstructions. The direct baseline has the same architecture as SubNet, except that the input is a single-channel non-negative least squares reconstruction, as in ProjNet, and the output is the target reconstruction. Such an architecture was proposed by (BID23), is used as a baseline in recent learning-based inverse problem works (BID29; Ye et al.), and is inspiring other architectures for inverse problems (BID2). We pick the best-performing baseline network from multiple networks with a comparable number of trainable parameters to SubNet. We simulate the lack of training data by testing on a dataset different from the one used for training.
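For concreteness, here is a short NumPy implementation of the output-SNR metric just defined (our own rendering, not the authors' code): the supremum over a and b is attained by a linear least-squares fit of x̂ and a constant to x.

```python
import numpy as np

def output_snr(x, x_hat):
    """Output SNR: sup over a, b of 20*log10(||x|| / ||x - a*x_hat - b||)."""
    # Fit the best affine map a*x_hat + b to the ground truth x.
    A = np.stack([x_hat.ravel(), np.ones(x_hat.size)], axis=1)
    a, b = np.linalg.lstsq(A, x.ravel(), rcond=None)[0]
    resid = x.ravel() - (a * x_hat.ravel() + b)
    return 20 * np.log10(np.linalg.norm(x) / np.linalg.norm(resid))
```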
Robustness to corruption. To demonstrate that our method is robust against arbitrary assumptions made at training time, we consider two experiments. First, we corrupt the data with zero-mean iid Gaussian noise and reconstruct with networks trained at different input noise levels. In FIG3 and Table 1, we summarize the results with reconstructions of geophysics images taken from the BP2004 dataset and x-ray images of metal castings (BID31). The direct baseline and SubNet are trained on a set of 20,000 images from the arbitrarily chosen LSUN bridges dataset (BID53) and tested on the geophysics and x-ray images. ProjNets are trained with 10,000 images from the LSUN dataset. Our method reports better SNRs than the baseline. We note that direct reconstruction is unstable when trained on clean and tested on noisy measurements, as it often hallucinates details that are artifacts of the training data. For applications in geophysics it is important that our method correctly captures the shape of the cavities, unlike the direct inversion, which can produce sharp but wrong geometries (see outlines in FIG3). Second, we corrupt the measurements in ways unseen at training time; unlike with Gaussian noise (FIG3), the direct method here completely fails to recover coarse geometry in all test cases. In our entire test dataset of 102 x-ray images there is not a single example where the direct network captures a geometric feature that our method misses. This demonstrates the strengths of our approach. For more examples of x-ray images please see Appendix E.

FIG4 illustrates the influence of the training data on reconstructions. Training with LSUN, CelebA (BID27) and a synthetic dataset of random overlapping shapes (see FIG3 in the appendix for examples) all give comparable reconstructions, a desirable property in applications where real ground truth is unavailable. We complement our results with reconstructions of checkerboard phantoms (standard resolution tests) and x-rays of metal castings in Figure 7. We note that in addition to better SNR, our method produces more accurate geometry estimates, as per the annotations in the figure. (Figure 7: Reconstructions on checkerboards and x-rays with 10 dB measurement SNR, tested on 10 dB-trained networks. Red annotations highlight where the direct net fails to reconstruct correct geometry.)

We proposed a new approach to regularizing ill-posed inverse problems in imaging, the key idea being to decompose an unstable inverse mapping into a collection of stable mappings which only estimate low-dimensional projections of the model. By using piecewise-constant Delaunay subspaces, we showed that the projections can indeed be accurately estimated. Combining the projections leads to a deconvolution-like problem. Compared to directly learning the inverse map, our method is more robust against noise and corruptions. We also showed that regularizing via projections allows our method to generalize across training datasets. Our reconstructions are better both quantitatively, in terms of SNR, and qualitatively, in the sense that they estimate correct geometric features even when measurements are corrupted in ways not seen at training time. Future work involves getting precise estimates of Lipschitz constants for various inverse problems, regularizing the reformulated problem using modern regularizers (BID46), studying extensions to non-linear problems, and developing concentration bounds for the equivalent convolution kernel.

This work utilizes resources supported by the National Science Foundation's Major Research Instrumentation program, grant #1725729, as well as the University of Illinois at Urbana-Champaign. We gratefully acknowledge the support of NVIDIA Corporation with the donation of one of the GPUs used for this research.

We explain the need for non-linear operators, even in the absence of noise, with reference to FIG5. Projecting x onto a given known subspace is a simple linear operation, so it may not be a priori clear why we use non-linear neural networks to estimate the projections. Alas, we do not know x and only have access to y. Suppose that there exists a linear operator (a matrix) F ∈ R^{N×M} which acts on y and computes the projection of x onto S_λ. A natural requirement on F is consistency: if x already lives in S_λ, then we would like to have F A x = x. This implies that for any x, not necessarily in S_λ, we require F A F A x = F A x, which implies that F A = (F A)^2 is an idempotent operator. Letting the columns of B_λ be a basis for S_λ, it is easy to see that the least-squares minimizer for F is B_λ (A B_λ)^†. However, because R(F) = S_λ ≠ R(A*) (A* is the adjoint of A, simply a transpose for real matrices), in general it will not hold that (F A)* = F A; the short numerical check below illustrates this.
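The following NumPy check (dimensions are illustrative, not from the paper) verifies the three claims: F = B(AB)^† is consistent on S, F A is idempotent, yet F A is not symmetric, so it cannot be an orthogonal projector.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 50, 20, 5
A = rng.standard_normal((M, N))                    # fat forward operator
B = np.linalg.qr(rng.standard_normal((N, K)))[0]   # orthonormal basis for S

F = B @ np.linalg.pinv(A @ B)                      # consistent linear guess
FA = F @ A
print(np.allclose(FA @ FA, FA))                    # True: idempotent
print(np.allclose(FA, FA.T))                       # False: not orthogonal
x_in_S = B @ rng.standard_normal(K)
print(np.allclose(FA @ x_in_S, x_in_S))            # True: consistency on S

x = rng.standard_normal(N)                         # generic model outside S
print(np.linalg.norm(FA @ x - B @ (B.T @ x)))      # gap to orthogonal proj.
```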
Thus, F A is an oblique, rather than an orthogonal, projection onto S. In FIG5 this corresponds to the point P^oblique_{S_λ} x, which can be arbitrarily far from the orthogonal projection P^ortho_{S_λ} x. The nullspace of the oblique projection is precisely N(A) = R(A*)^⊥. Thus, consistent linear operators can at best yield oblique projections, which can be far from the orthogonal one. One can also see this geometrically from FIG5: as the angle between S_λ and R(A*) increases to π/2, the oblique projection point travels to infinity (note that the oblique projection always happens along the nullspace of A, which is orthogonal to R(A*)). Since our subspaces are chosen at random, in general they are not aligned with R(A*). The only subspace onto which we can linearly compute an orthogonal projection from y is R(A*); this is given by the Moore-Penrose pseudoinverse. Therefore, to get the orthogonal projection onto random subspaces, we must use non-linear operators. More generally, for any other ad hoc linear reconstruction operator W, W y = W A x always lives in the column space of W A, which is a subspace whose dimension is at most the number of rows of A. However, we do not have any linear subspace model for x. As shown in the right half of FIG5, as soon as A is injective on X, the existence of this non-linear map is guaranteed by construction: since y determines x, it also determines P_{S_λ} x.

We show the results of numerical experiments in Figures 9 and 10, which further illustrate the performance difference between linear oblique projectors and our non-linear learned operator when estimating the projection of an image onto a random subspace. We refer the reader to the captions below each figure for more details. (Figure 10: We try hard to get the best reconstruction from the linear approach. SNRs are indicated in the bottom-left of each reconstruction. In the linear approach, coefficients are obtained using the linear oblique projection method. Once coefficients are obtained, they are non-linearly reconstructed according to the reformulated problem. Both linear-approach reconstructions use the box constraint (BC) mentioned above. For the 130-subspace reconstruction, total-variation (TV) regularization is also used. Therefore, once the coefficients are obtained using the linear approach, the reconstruction of the final image is done in an identical manner as ProjNet for 130 subspaces and SubNet for 350 subspaces. To give the linear approach the best chance, we also optimized hyperparameters such as the regularization parameter to give the highest SNR.)

Using the definition of the inner product and rearranging, we get

    E x̃(u) = Σ_v E[κ(u, v)] x(v).

Now, the probability distribution of triangles around any point u is both shift- and rotation-invariant, because a Poisson process in the plane is shift- and rotation-invariant. It follows that E κ(u, v) = κ(u − v) for some kernel κ, meaning that

    E x̃(u) = Σ_v κ(u − v) x(v) = (x * κ)(u),

which is a convolution of the original model with a rotationally invariant (isotropic) kernel.

Figure 11 explains the network architecture used for ProjNet and SubNet. The network consists of a sequence of downsampling layers followed by upsampling layers, with skip connections (BID19) between the downsampling and upsampling layers. Each ProjNet output is constrained to a single subspace by applying a subspace projection operator, P_{S_λ}. We train 130 such networks and reconstruct from the projection estimates using the reformulated least-squares problem. SubNet is a single network that is trained over multiple subspaces. To do this, we change its input to be the concatenation [ỹ, B_λ].
Moreover, we apply the same projection operator as in ProjNet to the output of the SubNet. Each SubNet is trained to give projection estimates over 350 random subspaces. This approach allows us to scale to any number of subspaces without training new networks for each. Moreover, it allows us to build an over-constrained system q = B^T x to solve. Even though SubNet has almost as many parameters as the direct net, reconstructing via the projection estimates allows SubNet to get higher SNR and, more importantly, better estimates of the coarse geometry than the direct inversion. All networks are trained with the Adam optimizer. (Figure 11: (a) ProjNet architecture; (b) SubNet architecture. In both cases, the input is a non-negative least squares reconstruction and the network is trained to reconstruct a projection into one subspace. In SubNet, the subspace basis is concatenated to the non-negative least squares reconstruction.)

We showcase more reconstructions of actual geophysics images taken from the BP2004 dataset in Figure 12. Note that all networks were trained on the LSUN bridges dataset. We show additional reconstructions for the largest corruption case, p = 1/8, for x-ray images (FIG1) and geophysics images (FIG2). Our method consistently has better SNR. More importantly, we note that there is not a single instance where the direct reconstruction captures a feature that our methods do not. In a majority of instances, the direct network misses a feature of the image. This is highly undesirable in settings such as geophysical imaging. The shapes dataset was generated using random ellipse, circle and rectangle patches; see FIG3 for examples. This dataset was used in FIG4.

In Section 4 we train multiple ProjNets, each focusing on a different low-dimensional subspace. Here we train an ensemble of direct networks, where each network is as described in Section 4.1.1, and evaluate the robustness of a method where the outputs of these networks are averaged to give a final reconstruction. Once again, we consider scenarios where the model is trained with data at a particular noise level and then tested with data at a different noise level and with erasures that were unseen during training time. We show that our proposed method is more robust to changes in the test scenario. In FIG4, we consider the erasure model with p = 1/8 (described in FIG3). 9 out of 10 randomly chosen direct-network reconstructions fail to capture the key structure of the original image under this corruption mechanism. In TAB8, we summarize this result with the SNRs of reconstructions under the erasure corruption mechanism. In that table we also report SNRs when reconstructing from measurements at different noise levels. The ensemble of direct networks performs well when the training and test data have the same measurement noise level. However, our method is more robust to changes in the test noise level. This further illustrates that direct networks are highly tuned to the training scenario and therefore not as stable as our proposed method (cf. Section 3). (FIG4: Reconstructions of the original image from 10 individually trained direct inversion networks for 10 dB noise under the p = 1/8 erasure corruption model (described in FIG3). 9 out of the 10 reconstructions fail to capture the key structure of the original image.)
We solve ill-posed inverse problems with scarce ground truth examples by estimating an ensemble of random projections of the model instead of the model itself.
1,137
scitldr
Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture. By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run. To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases. We validate our method (SMASH) on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and ImageNet32x32, achieving competitive performance with similarly-sized hand-designed networks.

The high performance of deep neural nets is tempered by the cost of extensive engineering and validation to find the best architecture for a given problem. High-level design decisions such as depth, units per layer, and layer connectivity are not always obvious, and the success of models such as Inception, ResNets (BID12), FractalNets (BID18) and DenseNets (BID14) demonstrates the benefits of intricate design patterns. Even with expert knowledge, determining which design elements to weave together requires ample experimentation. In this work, we propose to bypass the expensive procedure of fully training candidate models by instead training an auxiliary model, a HyperNet (BID11), to dynamically generate the weights of a main model with variable architecture. Though these generated weights are worse than freely learned weights for a fixed architecture, we leverage the observation (BID19) that the relative performance of different networks early in training (i.e. some distance from the eventual optimum) often provides a meaningful indication of performance at optimality. By comparing validation performance for a set of architectures using generated weights, we can approximately rank numerous architectures at the cost of a single training run. To facilitate this search, we develop a flexible scheme based on memory read-writes that allows us to define a diverse range of architectures, with ResNets, DenseNets, and FractalNets as special cases. We validate our one-Shot Model Architecture Search through HyperNetworks (SMASH) for Convolutional Neural Networks (CNNs) on CIFAR-10 and CIFAR-100, ImageNet32x32 (BID6), ModelNet10, and STL-10 (BID7), achieving competitive performance with similarly-sized hand-designed networks.

Modern practical methods for optimizing hyperparameters rely on random search (BID3) or Bayesian Optimization (BO) (BID15), treating the model performance as a black box. While successful, these methods require multiple training runs for evaluation (even when starting with a good initial model) and, in the case of BO, are not typically used to specify variable-length settings such as the connectivity and structure of the model under consideration. Relatedly, bandit-based methods (BID19) provide a framework for efficiently exploring the hyperparameter space by employing an adaptive early-stopping strategy, allocating more resources to models which show promise early in training.
Evolutionary techniques (BID9; Stanley et al.) offer a flexible approach for discovering variegated models from trivial initial conditions, but often struggle to scale to deep neural nets, where the search space is vast, even with enormous compute. Reinforcement learning methods (BID2) have been used to train an agent to generate network definitions using policy gradients. These methods start from trivial architectures and discover models that achieve very high performance, but can require twelve to fifteen thousand full training runs to arrive at a solution. The method that most resembles our own is that of Saxe et al., who propose to efficiently explore various architectures by training only the output layer of convolutional networks with random convolutional weights. While more efficient than fully training an entire network end-to-end, this method does not appear to scale to deeper networks. Our method is conceptually similar, but replaces random weights with weights generated through HyperNets (BID11), which are one of a class of techniques for dynamically adapting weights through use of an auxiliary model (BID8; BID16). In our case we learn a transform from a binary encoding of an architecture to the weight space, rather than learning to adapt weights based on the model input. Our method is explicitly designed to evaluate a wide range of model configurations (in terms of connectivity patterns, depth, and width) but does not address other hyperparameters such as regularization, learning rate schedule, weight initialization, or data augmentation. Unlike the aforementioned evolutionary or RL methods, we explore a somewhat pre-defined design space, rather than starting with a trivial model and designating a set of available network elements. While we still consider a rich set of architectures, our method cannot discover wholly new structures on its own and is constrained in that it only dynamically generates a specific subset of the model parameters. Additionally, although our method is not evolutionary, our encoding scheme is reminiscent of CGP (BID21).

Stochastic regularization techniques such as Dropout, Swapout, DropPath (BID18) and stochastic depth (BID13) superficially resemble our method, in that they obtain variable configurations by randomly dropping connectivity paths in a fixed network architecture. Convolutional neural fabrics, for example, leverage this idea to attempt to train one large network as an implicit ensemble of all subnetworks produced through dropping paths. A key element that sets our method apart is that the weights for each node in our network are dynamically generated, rather than fixed; if a Dropout ensemble were to visit a unit that had not previously been trained, the unit's weights would be completely untuned. Our method generalizes even to previously unseen architectures, and the network we train under stochastic conditions is merely a proxy we use to evaluate network configurations, rather than the final model.

In SMASH (Algorithm 1), our goal is to rank a set of neural network configurations relative to one another based on each configuration's validation performance, which we accomplish using weights generated by an auxiliary network. At each training step, we randomly sample a network architecture, generate the weights for that architecture using a HyperNet, and train the entire system end-to-end through backpropagation.
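A heavily simplified, runnable PyTorch sketch of this training step is below. It is not the paper's memory-bank scheme: here the "architecture" is just a random binary mask over hidden units, and the HyperNet is a small MLP mapping that mask to the weights of a one-layer main network; all sizes are invented for illustration.

```python
import torch

torch.manual_seed(0)
d_in, d_hid, d_out = 16, 32, 10

hyper = torch.nn.Sequential(           # H: architecture code -> main weights
    torch.nn.Linear(d_hid, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, d_hid * d_in),
)
w_out = torch.nn.Linear(d_hid, d_out)  # freely learned output layer
opt = torch.optim.Adam(list(hyper.parameters()) + list(w_out.parameters()))

for step in range(100):
    x = torch.randn(8, d_in)                   # dummy minibatch
    y = torch.randint(0, d_out, (8,))
    c = (torch.rand(d_hid) < 0.5).float()      # sample an "architecture"
    W = hyper(c).view(d_hid, d_in)             # HyperNet-generated weights
    h = torch.relu(x @ W.t()) * c              # only active units contribute
    loss = torch.nn.functional.cross_entropy(w_out(h), y)
    opt.zero_grad()
    loss.backward()                            # trains HyperNet end-to-end
    opt.step()
```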
When the model is finished training, we sample a number of random architectures and evaluate their performance on a validation set, using weights generated by the HyperNet. We then select the architecture with the best estimated validation performance and train its weights normally.

Algorithm 1 (fragment, reformatted):
  loop
    sample a random architecture encoding c; evaluate validation error E_v = f_c(H(c), x_v)
  end loop
  fix the best architecture and train normally with freely-varying weights W

SMASH comprises two core components: the method by which we sample architectures, and the method by which we sample weights for a given architecture. For the former, we develop a novel "memory-bank" view of feed-forward networks that permits sampling complex, branching topologies, and encoding said topologies as binary vectors. For the latter, we employ a HyperNet (BID11) that learns to map directly from the binary architecture encoding to the weight space. We hypothesize that, so long as the HyperNet learns to generate reasonable weights, the validation error of networks with generated weights will correlate with the performance when using normally trained weights, with the difference in architecture being the primary factor of variation. Throughout the paper, we refer to the entire apparatus during the first part of training (the HyperNet, the variable-architecture main network, and any freely learned main network weights) as the SMASH network, and we refer to networks trained with freely learned weights in the second stage as resulting networks.

In order to explore a broad range of architectures with variable depth, connectivity patterns, layer sizes and beyond, we require a flexible mechanism for defining such architectures, which we can also easily encode into a conditioning vector for the HyperNet. To this end, we introduce a "memory-bank" view of feed-forward networks. Rather than viewing a network as a series of operations applied to a forward-propagating signal, we view a network as having a set of memory banks (initially tensors filled with zeros) which it can read and write. Each layer is thus an operation that reads data from a subset of memory, modifies the data, and writes the result to another subset of memory. For a single-branch architecture, the network has one large memory bank it reads and overwrites (or, for a ResNet, adds to) at each op. A branching architecture such as a DenseNet reads from all previously written banks and writes to an empty bank, and a FractalNet follows a more complex read-write pattern, as shown in FIG0.

Our base network structure consists of multiple blocks (FIG1), where each block has a set number of memory banks at a given spatial resolution, with successively halved spatial resolutions as in most CNN architectures. Downsampling is accomplished via a 1x1 convolution followed by average pooling (BID14), with the weights of the 1x1 convolution and the fully-connected output layer being freely learned rather than generated. When sampling an architecture, the number of banks and the number of channels per bank are randomly sampled at each block. When defining each layer within a block, we randomly select the read-write pattern and the definition of the op to be performed on the read data. When reading from multiple banks we concatenate the read tensors along the channel axis, and when writing to banks we add to the tensors currently in each bank. For all reported experiments, we only read and write from banks at one block (i.e. one spatial resolution), although one could employ resizing to allow reading and writing from any block, similar to prior work. (A toy sketch of this read-concatenate / write-add pattern follows.)
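The following toy PyTorch snippet (assumed shapes; not the released SMASH code) illustrates the read-concatenate / write-add convention just described.

```python
import torch

# A bank is a zero-initialized tensor; an op reads a subset of banks,
# concatenates them channel-wise, transforms the result, and adds it
# into each bank it writes.
n_banks, bank_ch, hw = 8, 6, 16
banks = [torch.zeros(1, bank_ch, hw, hw) for _ in range(n_banks)]
banks[0] = banks[0] + torch.randn(1, bank_ch, hw, hw)  # input in bank 0

def apply_op(banks, read_idx, write_idx, conv):
    x = torch.cat([banks[i] for i in read_idx], dim=1)  # read + concat
    out = conv(x)
    for i in write_idx:                                 # write = add
        banks[i] = banks[i] + out

conv = torch.nn.Conv2d(2 * bank_ch, bank_ch, kernel_size=3, padding=1)
apply_op(banks, read_idx=[0, 1], write_idx=[2], conv=conv)
```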
Each op comprises a 1x1 convolution (reducing the number of incoming channels), followed by a variable number of convolutions interleaved with nonlinearities, as shown in FIG1(a). We randomly select which of the four convolutions are active, along with their filter size, dilation factor, number of groups, and the number of output units (i.e. the layer width). The number of output channels of the 1x1 conv is some factor of the width of the op, chosen via the "bottleneck ratio" hyperparameter. The weights for the 1x1 convolution are generated by the HyperNet as described in Section 3.2, while the other convolutions are normally learned parameters. To ensure variable depth, we learn a single set of 4 convolutions for each block and share it across all ops within a block. We limit the max filter size and number of output units, and when a sampled op uses less than the maximum of either, we simply slice the weight tensor to the required size. The fixed transition convolutions and output layer employ this same slicing based on the number of incoming non-empty memory banks.

In designing our scheme, we strive to minimize the number of static learned parameters, placing the majority of the network's capacity in the HyperNet. A notable consequence of this goal is that we only employ BatchNorm (BID15) at downsample layers and before the output layer, as the layer-specific running statistics are difficult to generate dynamically. We experimented with several different normalization schemes, including WeightNorm, LayerNorm (BID1) and NormProp (BID0), but found them to be unstable in training. Instead, we employ a simplified version of WeightNorm where we divide the entirety of each generated 1x1 filter by its Euclidean norm (rather than normalizing each channel separately), which we find to work well for SMASH and to result in only a minor drop in accuracy when employed in fixed-architecture networks. No other convolution within an op is normalized.

A HyperNet (BID11) is a neural net used to parameterize the weights of another network, the main network. For a Static HyperNet with parameters H, the main network weights W are some function (e.g. a multilayer perceptron) of a learned embedding z, such that the number of learned weights is typically smaller than the full number of weights for the main network. For a Dynamic HyperNet, the weights W are generated conditioned on the network input x or, for recurrent networks, on the current input x_t and the previous hidden state h_{t−1}.

We propose a variant of a Dynamic HyperNet which generates the weights W based on a tensor encoding of the main network architecture c. Our goal is to learn a mapping W = H(c) that is reasonably close to the optimal W for any given c, such that we can rank each c based on the validation error using HyperNet-generated weights. We thus adopt a scheme for the layout of c that enables sampling of architectures with wildly variable topologies, compatibility with the toolbox available in standard libraries, and dimensions of c that are as interpretable as possible. Our HyperNet is fully convolutional, such that the dimensionality of the output tensor W varies with the dimensionality of the input c, which we make a 4D tensor of the standard format Batch × Channel × Height × Width; the batch size is set to 1 so that no output elements are wholly independent.
This allows us to vary the depth and width of the main network by increasing the height or width of c. Under this scheme, every slice of the spatial dimensions of W corresponds to a specific subset of c. Information describing the op that uses that subset of W is embedded in the channel dimension of the corresponding slice of c. For example, if an op reads from memory banks 1, 2, and 4, then writes to 2 and 4, then the first, second, and fourth channels of the corresponding slice of c will be filled with 1s (indicating the read pattern), and the sixth and eighth channels of that slice will be filled with 1s (indicating the write pattern). The rest of the op description is encoded in the remaining channels in a similar one-hot fashion. We only encode into the width-wise extent of c based on the number of output units of the op, so elements of c which do not correspond to any elements of W are empty.

A naïve implementation of this scheme might require the size of c to be equal to the size of W, or have the HyperNet employ spatial upsampling to produce more elements. We found these choices poor, and instead employ a channel-based weight-compression scheme that reduces the size of c and keeps the representational power of the HyperNet proportional to that of the main networks. We make the spatial extent of c some fraction k of the size of W, and place k units at the output of the HyperNet. We then reshape the resulting 1 × k × height × width tensor to the required size of W. k is chosen to be DN^2, where N is the minimum memory bank size and D is a "depth compression" hyperparameter that represents how many slices of W correspond to a single slice of c. Complete details regarding this scheme (and the rest of the encoding strategy) are available in Appendix B; a toy rendering of the read-write encoding follows below.
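The snippet below (our own rendering of the description above with illustrative sizes, not the released code) fills in one op's slice of the encoding tensor c.

```python
import numpy as np

# Channels 0..M-1 flag the banks read, M..2M-1 the banks written, and the
# last d_max channels one-hot the dilation factor of the following conv.
M, d_max = 8, 3
width, depth = 12, 20          # spatial extent of this op's slice of c
c_slice = np.zeros((2 * M + d_max, width, depth))

reads, writes, dilation = [0, 1, 3], [1, 3], 2
c_slice[reads, :, :] = 1.0                       # read pattern
c_slice[[M + b for b in writes], :, :] = 1.0     # write pattern
c_slice[2 * M + (dilation - 1), :, :] = 1.0      # one-hot dilation
```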
We apply SMASH to several datasets, both for the purposes of benchmarking against other techniques and to investigate the behavior of SMASH networks. Principally, we are interested in determining whether the validation error of a network using SMASH-generated weights (the "SMASH score") correlates with the validation error of a normally trained network, and if so, the conditions under which the correlation holds. We are also interested in the transferability of the learned architectures to new datasets and domains, and how this relates to normal (weight-wise) transfer learning. Our publicly available code is written in PyTorch (BID22) to leverage dynamic graphs, and explicitly defines each sampled network in line with the memory-bank view, to avoid obfuscating its inner workings behind (potentially more efficient) abstractions. We omit many hyperparameter details for brevity; full details are available in the appendices, along with visualizations of our best-found architectures. (FIG3: SMASH score versus true validation performance; the red line is a least-squares best fit.)

First, we train a SMASH network for 300 epochs on CIFAR-100, using a standard annealing schedule (BID14), then sample 250 random architectures and evaluate their SMASH scores on a held-out validation set formed of 5,000 random examples from the original training set. We then sort the architectures by their SMASH score and select every 5th architecture for full training and evaluation, using an accelerated training schedule of 30 epochs. For these networks, which we deem SMASHv1, the architecture uses a fixed memory bank size (though a variable number of banks in each block), a single fixed 3x3 conv in the main body of the op (rather than the variable 2x2 array of convs), a single group, and a fixed bottleneck ratio of 4. The variable elements comprise the read-write pattern, the number of output units, and the dilation factor of the 3x3 filter. When sampling architectures, we allocate a random, upper-bounded compute budget to each block.

Under these conditions, we observe a correlation (FIG3) between the SMASH score and the true validation performance, suggesting that SMASH-generated weights can be used to rapidly compare architectures. It is critical not to overstate this claim; this test is arguably a single datapoint indicating that the correlation holds in this scenario, but it neither guarantees the correlation's generality nor implies the range of conditions under which it will hold. We thus conduct a more thorough investigation of this correlation.

We expect, based on preliminary experiments detailed in Appendix C, that the two key variables determining the strength of the correlation are the capacity of the HyperNet and the ratio of HyperNet-generated weights to freely learned weights. For the former, we reason that if the HyperNet lacks representational capacity, then it will be unable to learn an acceptable mapping between architectures and weights, and the generated weights will be too far from optimal to permit ranking. For the latter, we reason that if the main network has too many freely learned weights relative to the number of dynamically generated weights, too much of its capacity will be "static" and ill-adapted to the wide range of architectures considered. Following these hypotheses, we repeat the first experiment multiple times, varying the architecture of the HyperNet as well as the g hyperparameter, which controls the maximum layer width and consequently the ratio of freely learned to dynamic weights. A higher value of g corresponds to relatively fewer dynamically generated weights. We consider three different ratios and five different HyperNet configurations.

For each setting of g we begin by training five different SMASH networks, each employing one of the candidate configurations. After training, we sample 500 random architectures as before, evaluate all five SMASH scores, rank the architectures according to their average, and then train every 10th resulting network normally. We train the resulting networks for a full 100 epochs (as opposed to the previous shortened schedule) and repeat training runs with a total of 5 different random seeds. Finally, we evaluate the strength and significance of the correlation between SMASH score and resulting validation performance (averaged for each architecture across runs) using Pearson's R. The results of this study are reported in TAB1. The first column details the choices of growth rate and depth for each of the three blocks of the HyperNet, and the second column the number of parameters for each architecture. The values of g=4, g=8, and g=16 roughly correspond to average freely-learned to dynamic weight ratios of 1:4, 1:2, and 2:1, though these vary somewhat with individual sampled architectures.

Several trends are visible in this table. First, for g=4, we note that the strength of correlation increases with increased HyperNet capacity up to the fourth architecture with 12M parameters, but the correlation breaks down for the largest architecture. This suggests that the strength of correlation can indeed depend on the capacity of the HyperNet, but also that the HyperNet is either potentially susceptible to overfitting or that too large a HyperNet becomes too difficult to train and produces poor weights.
Second, for g=8, we note that the correlation varies little with changes in architecture, but is weaker than the best correlations from g=4. This suggests that for a middling ratio, the capacity of the HyperNet has less effect on the correlation, as there are fewer weights for it to adapt, but the strength of the correlation is weaker, as each set of sampled weights is consequently less optimal than for g=4. Third, we note a complete breakdown of the correlation for g=16, which is in line with our expectation that placing too much capacity in a single set of statically learned weights will prevent the SMASH network from properly adapting to individual architectures.

As an additional test of our method, we examine whether the HyperNet has learned to take into account the architecture definition in c, or whether it ignores c and naively generates an unconditional subspace of weights that happen to work well. We "trick" the HyperNet by sampling one architecture but asking it to generate the weights for a different architecture, by corrupting the encoding tensor c (e.g. by shuffling the dilation values). For a given architecture, we find that SMASH validation performance is consistently highest when using the correct encoding tensor, suggesting that the HyperNet has indeed learned a passable mapping from architecture to weights. Following this, we posit that if the HyperNet learns a meaningful mapping W = H(c), then the classification error E = f(W, x) = f(H(c), x) can be backpropagated to find dE/dc, providing an approximate measure of the error with respect to the architecture itself. If this holds true, then perturbing the architecture according to the dE/dc vector (within the constraints of our scheme) should allow us to guide the architecture search through a gradient descent-like procedure. Our preliminary tests with this idea did not yield better SMASH scores than randomly perturbing the architectural definition, though we suspect that this was in part due to our lack of an intuitively satisfying update rule for the discrete architecture space.

Models with weights initially learned on one large dataset frequently outperform models trained from scratch on a smaller dataset; it follows that architectures might display the same behavior. We test on STL-10 (BID7), a small dataset of 96x96 images similar to the CIFAR datasets. We compare the performance of the best-found architecture from CIFAR-100 (with weights trained from scratch on STL-10) to the best-found architecture from running SMASH on STL-10, and a WRN baseline. For these experiments, we make use of the full 5,000 images in the training set; in the following section we also include comparisons against a WRN baseline using the recommended 10-fold training split. In this case, we find that the best-found architecture from CIFAR-100 outperforms the best-found architecture from STL-10, achieving 17.54% and 20.275% error, respectively. For reference, a baseline WRN28-10 and WRN40-4 achieve respective errors of 15.43% and 16.06%. This presents an interesting phenomenon: on the one hand, one might expect the architecture discovered on STL-10 to be better tuned to STL-10, because it was specifically learned on that dataset. On the other hand, CIFAR-100 has significantly more training examples, potentially making it a better dataset for distinguishing between good architectures, i.e. accuracy on CIFAR-100 is more indicative of generality.
The better performance of the architecture found on CIFAR-100 would seem to favor the latter hypothesis, suggesting that architecture search benefits from larger training sets more so than from domain specificity. We next investigate how well our best-found CIFAR-100 architecture performs on ModelNet10, a 3D object classification benchmark. We train on the voxelated instances of the ModelNet10 training set using the settings of BID4, and report accuracy on the ModelNet10 test set. Our 8M parameter model achieves an accuracy of 93.28%, compared to a 93.61% accuracy from a hand-designed Inception-ResNet (BID4) with 18M parameters trained on the larger ModelNet40 dataset.

We run SMASH on CIFAR-10 and CIFAR-100, augmenting our search space from the initial correlation experiment to include variable filter sizes, variable groups, and the full variable op structure shown in FIG1, and denote the resulting networks SMASHv2. We report the final test performance of the two resulting networks with the highest SMASH scores on CIFAR-10 and CIFAR-100 in TAB2. Next, we take our best-found SMASHv2 architecture from CIFAR-100 and train it on STL-10, using the recommended 10-fold training splits, and on ImageNet32x32 (BID6). We compare against Wide ResNet baselines from our own experiments in TAB3 and results reported by BID6 in TAB4. Noting the better performance of WRN40-4 on STL-10, we also train a variant of our best architecture with only a single main convolution and 3x3 filters, to comparably reduce the number of parameters. Our SMASHv2 nets with 16M parameters achieve final test errors of 20.60% on CIFAR-100 and 4.03% on CIFAR-10. This performance is not quite on par with state-of-the-art hand-designed networks, but compares favorably to other automatic design methods that employ RL (BID2) or evolutionary methods. Our networks outperform Large-Scale Evolution despite requiring significantly less time to discover (though not starting from trivial models) and 10 orders of magnitude less compute. Our method outperforms MetaQNN (BID2) but lags behind Neural Architecture Search, though both methods require vastly more computation time, and, unlike Neural Architecture Search, we do not postprocess our discovered architectures through grid search.

We believe this work opens a number of future research paths. The SMASH method itself has several simplistic elements that might easily be improved upon. During training, we sample each element of the configuration one by one, independently, and uniformly among all possible choices. A more intelligent method might employ Bayesian Optimization or HyperBand (BID19) to guide the sampling with a principled tradeoff between exploring less frequently sampled architectures and exploiting those which are performing well. One might employ a second parallel worker constantly evaluating validation performance throughout training to provide signal to an external optimizer, and change the optimization objective to simultaneously maximize performance while minimizing computational costs. One could also combine this technique with RL methods and use a policy gradient to guide the sampling. Another simple technique (which our code nominally supports) is using the HyperNet-generated weights to initialize the resulting network and accelerate training, similar to Net2Net (BID5). Our architecture exploration is fairly limited, and for the most part involves variable layer sizes and skip connections.
One could envision a multiscale SMASH that also explores low-level design, varying things such as the activation at each layer, the order of operations, the number of convolutions in a given layer, or whether to use convolution, pooling, or more exotic blocks. One might also search over network-wide design patterns rather than randomly selecting the read-write pattern at each layer. Alternatively, one could consider varying which elements of the network are generated by the HyperNet and which are fixed learned parameters, and one might even make use of fixed unlearned parameters such as Gabor filters or wavelets. Our memory-bank view also opens up new possibilities for network design. Each layer's read and write operations could be designed to use a learned softmax attention mechanism, such that the read and write locations are determined dynamically at inference time. We also do not make use of memory in the traditional "memory-augmented" sense, but we could easily add in this capacity by allowing information in the memory banks to persist, rather than zeroing them at every training step. We also only explore one definition of reading and writing, and one might, for example, change the "write" operation to either add to, overwrite, or perhaps even multiply (à la gated networks) the existing tensor in a given bank.

In this work, we explore a technique for accelerating architecture selection by learning a model over network parameters, conditioned on the network's parametric form. We introduce a flexible scheme for defining network connectivity patterns and generating network weights for highly variable architectures. Our results demonstrate a correlation between performance using suboptimal weights generated by the auxiliary model and performance using fully-trained weights, indicating that we can efficiently explore the architectural design space through this proxy model. Our method achieves competitive, though not state-of-the-art, performance on several datasets.

We briefly describe the hyperparameters used for the SMASH network in our experiments. The SMASHv1 network has memory banks with N = 6 channels each, a maximum of 240 memory banks per block (though on average less than half that number), and a depth compression ratio of D = 3. Each layer's number of units is uniformly sampled between 6 and 42 (along even multiples of 6), and its dilation factor is uniformly sampled between 1 and 3 (with 1 representing no dilation and 3 representing 2 zeros inserted between filter elements). We employ a constant bottleneck ratio of 4, as in (BID14), so the output of the HyperNet-generated 1x1 convolution is always 4 times the number of output units. We constrain the main network to have a maximum budget of 16M parameters, though due to our sampling procedure we rarely sample networks with more than 5M parameters.

Our SMASHv2 networks have variable memory bank sizes at each block, which we constrain to be multiples of N = 8 up to Nmax = 64. We sample filter sizes from a fixed set, and sample dilation values such that the max spatial extent of a filter in any direction is 9. We sample convolutional groups as factors of the base N value (so factors of 8 for these networks). We put some hand-designed priors on the choice of op configuration (i.e. which convolutions are active), giving slight preference to having all four convolutions active. (A rough rendering of this sampling procedure appears below.)
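Below is a rough Python rendering of sampling one SMASHv2 op from the ranges just listed. The concrete filter-size set {3, 5, 7} and the 0.7 preference for active convolutions are assumptions for illustration; the released code's exact priors may differ.

```python
import random

random.seed(0)
op = {
    "n_out": random.choice(range(8, 65, 8)),        # multiples of N=8 up to 64
    "filter_sizes": [random.choice([3, 5, 7]) for _ in range(4)],  # assumed set
    "dilations": [random.choice([1, 2, 3]) for _ in range(4)],
    "groups": random.choice([1, 2, 4, 8]),          # factors of N=8
    "active_convs": [random.random() < 0.7 for _ in range(4)],  # assumed prior
}
print(op)
```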
For SMASHv2 nets we employ a slightly more complex bottleneck ratio: the number of output channels of the 1x1 conv is equal to the number of incoming channels while that number is less than twice the number of output units, at which point it is capped (so, a maximum bottleneck ratio of 2). Our HyperNet is a DenseNet, designed ad hoc to resemble the DenseNets in the original paper (BID14) within the confines of our encoding scheme, and to have round numbers of channels. It consists of a standard (non-bottleneck) Dense Block with 8 3x3 convolutional layers and a growth rate of 10, followed by a 1x1 convolution that divides the number of channels in two, a Dense Block with 10 layers and a growth rate of 10, another compressing 1x1 convolution, a Dense Block with 4 layers and a growth rate of 10, and finally a 1x1 convolution with the designated number of output channels. We use Leaky ReLU with a negative slope of 0.02 as a defense against NaNs, as standard ReLU would obfuscate their presence when we had bugs in our early code revisions; we have not experimented with other activations.

We adopt a scheme for the layout of the embedding tensor that facilitates flexibility, compatibility with the convolutional toolbox available in standard libraries, and interpretability of each dimension. First, we place some constraints on the hyperparameters of the main network: each layer's number of output units must be divisible by the memory bank size N and be less than Nmax, and the number of input units must be divisible by D, where N is the number of channels in each memory bank, and Nmax and D are chosen by the user. Applying these constraints allows us to reduce the size of the embedding vector by a factor of DN^2, as we will see shortly. The input to a standard 2D CNN is x ∈ R^{B×C×H×L}, where B, C, H, and L respectively represent the Batch, Channel, Height, and Length dimensions. Our embedding tensor is c ∈ R^{1×(2M+dmax)×(Nmax/N)×(n_ch/D)}, where M is the maximum number of memory banks in a block, dmax is the maximum kernel dilation, and n_ch is the sum total of input channels to the 1x1 convs of the main network. The conditional embedding c is a one-hot encoding of the memory banks we read and write at each layer. It has 2M + dmax channels, where the first M channels represent which banks are being read from, the next M channels represent which banks are being written to, and the final dmax channels are a one-hot encoding of the dilation factor applied to the following 3x3 convolution. The height dimension corresponds to the number of units at each layer, and the length dimension corresponds to the network depth in terms of the total number of input channels. We keep the Batch dimension at 1 so that no signals propagate wholly independently through the HyperNet.

FIG2 shows an example of a small randomly sampled network, its equivalent memory-bank representation, and how the read-write pattern is encoded in c. The dilation encoding is omitted in FIG2 for compactness. Our HyperNet has 4DN^2 output channels, such that its output is a 1 × 4DN^2 × (Nmax/N) × (n_ch/D) tensor, which we reshape into the full weight tensor W. We generate the weights for the entire main network in a single pass, allowing the HyperNet to predict weights at a given layer based on weights at nearby layers. The HyperNet's receptive field represents how far up or down the network it can look to predict parameters at a given layer.
As we traverse the main network, we slice W along its second axis according to the number of incoming channels, and slice along the first axis according to the width of the given layer (a toy rendering of this slicing appears at the end of these implementation details). At each training step, we sample a network architecture block by block, with a random (but upper-bounded) computational budget allocated to each block. For SMASHv1, we use memory banks with N = 6 channels each, constrain the number of incoming memory banks to be a multiple of 3 (D = 3), and constrain the number of output units at each layer to be a multiple of 6 (with Nmax = 42) for compatibility with the memory layout. Our HyperNet is a 26-layer DenseNet, each layer of which comprises a Leaky ReLU activation followed by a 3x3 convolution with simplified WeightNorm and no biases. We do not use bottleneck blocks, dropout, or other normalizers in the HyperNet.

When sampling our SMASHv2 networks for evaluation, we first sample 500 random architectures, then select the architecture with the highest score for further evaluation. We begin by perturbing this architecture, with a 5% chance of any individual element being randomly resampled, and evaluate 100 random perturbations from this base. We then proceed with 100 perturbations in a simple Markov chain, where we only accept an update if it has a better SMASH score on the validation set. When training a resulting network we make all parameters freely learnable and replace simple WeightNorm with standard BatchNorm. We tentatively experimented with using SMASH-generated weights to initialize a resulting net, but found standard initialization strategies to work better, either because of a yet-undiscovered bug in our code, or because of the disparity between the dynamics of the SMASH network using WeightNorm and the resulting network using BatchNorm.

In line with our claim of "one-shot" model search, we keep our exploration of the SMASH design space to a minimum. We briefly experimented with three different settings for N and D, and use a simple, ad-hoc DenseNet architecture for the HyperNet, which we do not tune. We investigated the choice of architecture while examining the SMASH correlation, but stick to the original ad-hoc design for all benchmark experiments. When training SMASH, we use Adam (BID17) with the default initial parameters proposed by its authors. When training a resulting network, we use Nesterov momentum with an initial step size of 0.1 and a momentum value of 0.9. For all tests other than the initial SMASHv1 experiments, we employ a cosine annealing schedule without restarts (BID10).

For the CIFAR experiments, we train the SMASH network for 100 epochs and the resulting networks for 300 epochs, using a batch size of 50 on a single GPU. On ModelNet10, we train for 100 epochs. On ImageNet32x32, we train for 55 epochs. On STL-10, we train for 300 epochs when using the full training set, and 500 epochs when using the 10-fold training splits. For the ModelNet10 tests, we employ 3x3x3 filters (rather than fully variable filter sizes) to enable our network to fit into memory and keep compute costs manageable, hence why our model only has 8M parameters compared to the base 16M parameters. All of our networks are pre-activation, following the order BN-ReLU-Conv if BatchNorm is used, or ReLU-Conv if WeightNorm is used. Our code supports both pre- and post-activation, along with a variety of other options, such as which hyperparameters to vary and which to keep constant.
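The toy PyTorch snippet below (illustrative sizes and a loose rendering of the description above; not the released code) shows the two-axis slicing: the HyperNet emits one large tensor W in a single pass, and each op takes the block matching its own width and fan-in.

```python
import torch

N_max, C_total = 64, 2048                 # assumed maxima, not paper values
W = torch.randn(N_max, C_total)           # stand-in for the generated tensor

def slice_weights(W, width, n_in, offset):
    """Weights for an op with `width` output units and `n_in` input
    channels, whose inputs start at `offset` along the channel axis."""
    return W[:width, offset : offset + n_in]

w = slice_weights(W, width=32, n_in=48, offset=0)   # example usage
print(w.shape)                                      # torch.Size([32, 48])
```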
While investigating the SMASH correlation, we initially conducted two brief experiments to guide the choice of experiments in the remainder of the investigation. In the first experiment, we train a low-budget SMASH network (to permit more rapid testing) with a much smaller HyperNet relative to the main network (though still the standard ratio of generated to freely learned weights). We expect the decreased-capacity HyperNet to be less able to learn to generate good weights for the full range of architectures, and the correlation between SMASH score and true performance to therefore be weak or nonexistent. The results of this study are shown in FIG4(a), where we arguably observe a breakdown of the correlation. In addition to repeat trials for each resulting net, we also train two additional SMASH networks with different random seeds and compare their predicted performance against the initial SMASH net in FIG4, to get a brief sense of how these values can vary across training runs.

Next, we train a high-budget SMASH network and drastically increase the ratio of normally learned parameters to HyperNet-generated parameters, such that the majority of the net's model capacity is in non-generated weights. Under these conditions, the validation errors achieved with SMASH-generated weights are much lower than those achieved with an equivalent SMASH network with the typical ratio, but the resulting top models are not as performant, and we found that (in the very limited number of correlation tests we performed) the SMASH score did not correlate with true performance. This highlights two potential pitfalls. First, if the HyperNet is not responsible for enough of the network capacity, then the aggregate generated and learned weights may not be sufficiently well-adapted to each sampled architecture, and therefore too far from optimal to be used in comparing architectures. Second, comparing SMASH scores for two separate SMASH networks can be misleading, as the SMASH score is a function of both the normally learned and generated weights, and a network with more fixed weights may achieve better SMASH scores even if the resulting nets are no better.
A technique for accelerating neural architecture selection by approximating the weights of each candidate architecture instead of training them individually.
1,138
scitldr
Textual entailment (or NLI) data has proven useful as pretraining data for tasks requiring language understanding, even when building on an already-pretrained model like RoBERTa. The standard protocol for collecting NLI was not designed for the creation of pretraining data, and it is likely far from ideal for this purpose. With this application in mind we propose four alternative protocols, each aimed at improving either the ease with which annotators can produce sound training examples or the quality and diversity of those examples. Using these alternatives and a simple MNLI-based baseline, we collect and compare five new 9k-example training sets. Our primary results are largely negative, with none of these new methods showing major improvements in transfer learning. However, we make several observations that should inform future work on NLI data, such as that the use of automatically provided seed sentences for inspiration improves the quality of the resulting data on most measures, and that all of the interventions we investigated dramatically reduce previously observed issues with annotation artifacts. The task of natural language inference (NLI; also known as textual entailment) has been widely used as an evaluation task when developing new methods for language understanding tasks, but it has recently become clear that high-quality NLI data can be useful in transfer learning as well. Several recent papers have shown that training large neural network models on natural language inference data, then fine-tuning them for other language understanding tasks often yields substantially better results on those target tasks. This holds even when starting from large models like BERT that have already been pretrained extensively on unlabeled data (; ; b). The largest general-purpose corpus for NLI, and the one that has proven most successful in this setting, is the Multi-Genre NLI Corpus. MNLI was designed for use in a benchmark task, rather than as a resource for use in transfer learning, and as far as we know, it was not developed on the basis of any kind of deliberate experimentation. Further, data collected under MNLI's data collection protocol has known issues with annotation artifacts which make it possible to perform much better than chance using only one of the sentences in each pair (; ;). This work begins to ask what would be involved in collecting a similar dataset that is explicitly designed with transfer learning in mind. In particular, we consider four potential changes to the original MNLI data collection protocol that are designed to improve either the ease with which annotators can produce sound examples, or the quality and diversity of those examples, and evaluate their effects on transfer. We collect a baseline dataset of about 10k examples that follows the MNLI protocol with our annotator pool, followed by four additional datasets of the same size which isolate each of our candidate changes. We then compare all five in a set of transfer learning experiments that look at our ability to use each of these datasets to improve performance on the eight downstream language understanding tasks in the SuperGLUE (b) benchmark. All five of our datasets are consistent with the task definition that was used in MNLI, which is in turn based on the definition introduced by. In this task, each example consists of a pair of short texts, called the premise and the hypothesis.
The model is asked to read both texts and make a three-way classification decision: Given the premise, would a reasonable person infer that the hypothesis must be true (entailment), infer that it must be false (contradiction), or decide that there is not enough information to make either inference (neutral). While it is certainly not clear that this framing is optimal for pretraining, we leave a more broad-based exploration of task definitions for future work. Our BASE data collection protocol (Figure 1) follows MNLI closely in asking annotators to read a premise sentence and then write three corresponding hypothesis sentences in empty text boxes corresponding to the three different labels (entailment, contradiction, and neutral). When an annotator follows this protocol, they produce three sentence pairs at once, all sharing a single premise. Our PARAGRAPH protocol tests the effect of supplying annotators with complete paragraphs, rather than sentences, as premises. Longer texts offer the potential for discourse-level inferences, the addition of which should yield a dataset which is more difficult, more diverse, and less likely to contain trivial artifacts. However, reading full paragraphs adds a potential cost in added annotator time and effort, which could potentially be better spent constructing more sentence-level examples. Our EDITPREMISE and EDITOTHER protocols test the effect of pre-filling a single seed text in each of the three text boxes that annotators are asked to fill out. By reducing the raw amount of typing required, this could allow annotators to produce good examples more quickly. By encouraging them to keep the three sentences similar, it could also encourage minimal-pair-like examples that minimize artifacts. We test two variants of this idea: One uses a copy of the premise sentence as a seed text and the second retrieves a new sentence from an existing corpus that is similar to the premise sentence, and uses that. Our CONTRAST protocol tests the effect of adding artificial constraints on the kinds of hypothesis sentences annotators can write. Giving annotators difficult and varying constraints could encourage creativity and prevent annotators from falling into repeating ruts or patterns in their writing that could lead to easier, more repetitive data. However, as with the use of longer contexts in PARAGRAPH, this protocol risks substantially slowing the annotation process. We experiment with a procedure inspired by that used to create the language-and-vision dataset NLVR2, in which annotators must write sentences that are valid entailments (or contradictions) for a given premise, but not valid entailments for a second, similar, distractor premise. In evaluations on transfer learning with the SuperGLUE benchmark, all of these four methods offer substantial improvements in transfer ability over a plain RoBERTa model, but only EDITOTHER and CONTRAST offer consistent improvements over BASE, and only by very small margins. While this is largely a negative result for our primary focus on transfer, we also observe that all four of these methods are able to produce data of comparable subjective quality while significantly reducing the incidence of previously reported annotation artifacts, and that PARAGRAPH, EDITPREMISE, and EDITOTHER all accomplish this without significantly increasing the time cost of annotation. The observation that NLI data can be effective in pretraining was first reported for SNLI and MNLI by on models pretrained from scratch on NLI data.
This finding was replicated in the setting of multitask pretraining by. This was later extended to the context of intermediate training-where a model is pretrained on unlabeled data, then on relatively abundant labeled data (MNLI), and finally scarce task-specific labeled data-by, , Liu et al. (2019a), , and Liu et al. (2019b) across a range of large pretrained models and target language understanding tasks. Similar results have been observed with transfer from the SocialIQA corpus to target tasks centered on common sense. A small body of work including, Bingel and Søgaard, and Wang et al. (2019a) explored the empirical landscape of which supervised NLP tasks can offer effective pretraining for other supervised NLP tasks. Table 1: Randomly selected examples from the datasets under study. Neither the MNLI training set nor any of our collected data are filtered for quality in any way, and errors or debatable judgments are common in both. Existing NLI datasets have been built using a wide range of strategies: FraCaS and several targeted evaluation sets were constructed manually by experts from scratch. The RTE challenge corpora (, et seq.) primarily used expert annotations on top of existing premise sentences. SICK was created using a structured pipeline centered on asking crowdworkers to edit sentences in prescribed ways. MPE uses a similar strategy, but constructs unordered sets of sentences for use as premises. SNLI introduced the method, used in MNLI, of asking crowdworkers to compose labeled hypotheses for a given premise. SciTail and SWAG used domain-specific resources to pair existing sentences as potential entailment pairs, with SWAG additionally using trained models to identify examples worth annotating. There has been little work directly evaluating and comparing these many methods. In that absence, we focus on the SNLI/MNLI approach, because it has been shown to be effective for the collection of pretraining data and because its reliance on only crowdworkers and unstructured source text makes it simple to scale. Two recent papers have investigated other methods that could augment the base MNLI protocol we study here. ANLI collects new examples following this protocol, but adds an incentive for crowdworkers to produce sentence pairs on which a baseline system will perform poorly. introduce a method for expanding an already-collected dataset by making small edits to existing examples that change their labels, motivated by the same desire that motivates our EDITPREMISE and EDITOTHER protocols: to produce minimally-different minimal pairs with differing labels. Both of these papers offer methodological changes that are potentially complementary to the changes we investigate here, and neither evaluates the impact of their methods on transfer learning. Since ANLI is large and roughly comparable with MNLI, we include it in our transfer evaluations here. The basic interface for our tasks is similar to that used for SNLI and MNLI: We provide a premise from a preexisting text source and ask human annotators to provide three hypothesis sentences: one that says something true about the fact or situation in the prompt (entailment), one that says something that may or may not be true about the fact or situation in the prompt (neutral), and one that definitely does not say something true about the fact or situation in the prompt (contradiction).
BASE In this baseline, modeled closely on the protocol used for MNLI, we show annotators a premise sentence and ask them to compose one new sentence for each label. PARAGRAPH Here, we use the same instructions as BASE, but with full paragraphs, rather than single sentences, as the supplied premises. EDITPREMISE Here, we pre-fill the three text boxes with editable copies of the premise sentence, and ask annotators to edit each text field to compose sentences that conform to the same three requirements used in BASE. Annotators are permitted to delete the pre-filled text. EDITOTHER Here, we follow the same procedure as EDITPREMISE, but rather than pre-filling the premise as a seed sentence, we instead use a similarity search method to retrieve a different sentence from the same source corpus that is similar to the premise. We hypothesize that the additional variation offered by these added sentences could improve the creativity and diversity of the resulting examples. CONTRAST Here, we again retrieve a second sentence that is similar to the premise, but we display it as a second premise rather than using it to seed an editable text box. We then ask annotators to compose two new sentences: One sentence must be true only about the fact or situation in the first premise (that is, contradictory or neutral with respect to the second premise). The other sentence must be false only about the fact or situation in the first premise (and true or neutral with respect to the second premise). This yields an entailment pair and a contradiction pair, both of which use only the first premise, with the second premise serving only as a constraint on the annotation process. We could not find a sufficiently intuitive way to collect neutral sentence pairs under this protocol, and opted to use only two classes rather than increase the difficulty of an already unintuitive task. MNLI uses the small but stylistically diverse OpenANC corpus as its source for premise sentences, but uses nearly every available sentence from its non-technical sections. To avoid re-using premises, we instead draw on English Wikipedia. 1 We use the 2019-06-20 downloadable version, extract the plain text with Apertium's WikiExtractor feature (Forcada et al., 2011), sentence-tokenize it with SpaCy, and randomly sample sentences (or paragraphs) for annotation. Similarity Search The EDITOTHER and CONTRAST protocols require pairs of related sentences as their inputs. To construct these, we assemble a heuristic sentence-matching system intended to generate pairs of highly similar sentences that can be minimally edited to construct entailments or contradictions: Given a premise, we retrieve its closest 10k nearest neighbors according to dot-product similarity over Universal Sentence Encoder embeddings. Using a parser and an NER system, we then select those neighbors which share a subject noun phrase in common with the premise (dropping premises for which no such neighbors exist). From those filtered neighbors, we retrieve the single non-identical neighbor that has the highest overlap with the premise in both raw tokens and entity mentions, preferring sentences with similar length to the hypothesis. We start data collection for each protocol with a pilot of 100 items, which are not included in the final datasets. We use these to refine task instructions and to provide feedback to our annotator pool on the intended task definition.
We continue to provide regular feedback throughout the annotation process to clarify ambiguities in the protocols and to discourage the use of systematic patterns, such as consistently composing shorter hypotheses for entailments than for contradictions, that could make the resulting data artificially easy. Annotators are allowed to skip prompts which they deem unusable for any reason. These generally involve either non-sentence strings that were mishandled by our sentence tokenizer or premises with inaccessible technical language. Skip rates ranged from about 2.5% for EDITOTHER to about 10% for CONTRAST (which can only be completed when the two premises are both comprehensible and sufficiently different from one another). A pool of 19 professional annotators located in the United States worked on our tasks, with about ten working on each. As a consequence of this relatively small annotation team, many annotators worked under more than one protocol, which we ran consecutively. This introduces a modest confound into our results, in that annotators start the later tasks having seen somewhat more feedback. All annotators, though, see substantial feedback from a pilot phase before starting each task. This confound prevents us from perfectly isolating the differences between protocols, but we argue that these results nonetheless form an informative case study. Using each protocol, we collect at least 10k examples and split them into exactly 9k training examples and at least 1k validation examples, all to be released upon acceptance. Table 1 shows randomly chosen examples. As we are investigating data collection protocols for pretraining and do not use any kind of second-pass quality control (motivated by work like), we do not collect a test set and do not recommend these datasets for system evaluation. Hypotheses are mostly fluent, full sentences that adhere to prescriptive writing conventions for US English. In constructing hypotheses, annotators often reuse words or phrases from the premise, but rearrange them, alter their inflectional forms, or substitute synonyms or antonyms. Hypothesis sentences tend to differ from premise sentences both grammatically and stylistically. Table 2 shows some simple statistics on the collected text. Our clearest observation here is that the two methods that use seed sentences tend to yield longer hypotheses and tend not to show a clear correlation between hypothesis-premise token overlap and label (as measured by the standard deviations in unique tokens). CONTRAST tends to produce shorter hypotheses. Annotator Time Annotators completed each of the five protocols at a similar rate, taking 3-4 minutes per prompt. This goes against our expectations that the longer premises in PARAGRAPH should substantially slow the annotation process, and that the pre-filled text in EDITPREMISE and EDITOTHER should speed annotation. Since the relatively complex CONTRAST produces only two sentence pairs per prompt rather than three, it yields fewer examples per minute. Table 3 shows the three words in each dataset that are most strongly associated with each label, using the smoothed PMI method of. We also include results for a baseline: a 9k-example sample from the government documents single-genre section of MNLI, which is meant to be maximally comparable to the single-genre datasets we collect.
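As a rough illustration of the label-association analysis behind Table 3, the sketch below computes an add-k smoothed PMI between hypothesis words and labels. The exact smoothing constant and normalization used in the original analysis are assumptions here, not taken from the paper.

```python
import math
from collections import Counter

def label_word_pmi(examples, k=100):
    """Rank (word, label) pairs by add-k smoothed PMI, where `examples`
    is a list of (hypothesis_tokens, label) pairs. Returns a dict
    mapping (word, label) to its smoothed PMI score."""
    joint, words, labels = Counter(), Counter(), Counter()
    for tokens, label in examples:
        for tok in set(tokens):  # count each word at most once per hypothesis
            joint[(tok, label)] += 1
            words[tok] += 1
        labels[label] += 1
    n = len(examples)
    pmi = {}
    for (tok, label), count in joint.items():
        p_joint = (count + k) / (n + k)   # add-k smoothing on the joint count
        p_word = words[tok] / n
        p_label = labels[label] / n
        pmi[(tok, label)] = math.log(p_joint / (p_word * p_label))
    return pmi
```

Sorting the returned dictionary by score per label reproduces a Table 3-style ranking of artifact-bearing words.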
BASE shows similar associations to the original MNLI, but all four of our interventions reduce these associations. The use of longer contexts or seed sentences in particular largely eliminates the strong association between negation and contradiction seen in MNLI, and no new strong associations appear to take its place. Table 3: The top three words most associated with specific labels in each dataset, sorted by the PMI between the word and the label. The counts column shows how many of the instances of each word occur in hypotheses matching the specified label. We compare the two-class CONTRAST with a two-class version of MNLI Gov. Our experiments generally compare models trained in ten different settings: Each of the five 9k-example training sets introduced in this paper; the full 393k-example MNLI training set; the full 1.1m-example ANLI training set (which combines the SNLI training set, the MNLI training set, and the newly-collected ANLI training examples); 2 9k-example samples from the MNLI training set and from the combined ANLI training set, meant to control for the size differences between these existing datasets and our baselines; and finally a 9k-example sample from the government section of the MNLI training set, meant to control (as much as possible) for the difference between our single-genre Wikipedia datasets and MNLI's relatively diverse text. Our models are trained starting from pretrained RoBERTa (large variant; b) or XLNet (large, cased;). RoBERTa represented the state of the art on most of our target tasks as of the launch of our experiments, and XLNet is competitive with RoBERTa on most tasks, but offers a natural replication, as well as the advantage that it can be used to better compare models trained on our data with models trained on ANLI data: ANLI was collected with a model-in-the-loop procedure using RoBERTa that makes it difficult to interpret RoBERTa results. Table 4: NLI modeling experiments with RoBERTa, reporting results on the validation sets for MNLI and for the task used for training each model (Self), and the GLUE diagnostic set. We compare the two-class CONTRAST with a two-class version of MNLI. We run our experiments using the jiant toolkit (d), which implements the SuperGLUE tasks, MNLI, and ANLI, and in turn uses transformers, AllenNLP, and PyTorch. To make it possible to train these large models on single consumer GPUs, we use small-batch (b = 4) training and a maximum total sequence length of 128 word pieces. We train for up to 2 epochs for the very large ReCoRD, 10 epochs for the very small CB, COPA, and WSC, and 4 epochs for the remaining tasks. Except where noted, all results reflect the median final performance from three random restarts of training. Direct NLI Evaluations As a preliminary sanity check, Table 4 shows the results of evaluating models trained in each of the settings described above on their own validation sets, on the MNLI validation set, and on the expert-constructed GLUE diagnostic set (c). As NLI classifiers trained on CONTRAST cannot produce the neutral labels used in MNLI, we evaluate them separately, and compare them with two-class variants of the MNLI models. Our BASE data yields a model that performs somewhat worse than a comparable MNLI Gov. 9k model, both on their respective validation sets and on the full MNLI validation set. Table 5: Results from RoBERTa hypothesis-only NLI classifiers on the validation sets for MNLI and for the datasets used in training. This suggests, at
least tentatively, that the new annotations are less reliable than those in MNLI. This is disconcerting, but does not interfere with our key comparisons. The main conclusion we draw from these results is that none of the first three interventions improve performance on the out-of-domain GLUE diagnostic set, suggesting that they do not help in the collection of data that is both difficult and consistent with the MNLI label definitions. We also observe that the newer ANLI data yields worse performance than MNLI on the out-of-domain evaluation data when we control for dataset size. To further investigate the degree to which our hypotheses contain artifacts that reveal their labels, Table 5 shows results with single-input versions of our models trained on hypothesis-only versions of the datasets under study and evaluated on the datasets' validation sections. Our first three interventions, especially EDITPREMISE, show much lower hypothesis-only performance than BASE. This adds further evidence, alongside our PMI results, that these interventions reduce the presence of such artifacts. While we do not have a direct baseline for the two-class CONTRAST in this experiment, the fact that it shows substantially lower performance than MNLI 9k is consistent with the encouraging results seen above. Transfer Evaluations For our primary evaluation, we use the training sets from our datasets in STILTs-style intermediate training: We fine-tune a large pretrained model on our collected data using standard fine-tuning procedures, then fine-tune a copy of the resulting model again on each of the target evaluation datasets we use. We then measure the aggregate performance of the resulting models across those evaluation datasets. We evaluate on the tasks in the SuperGLUE benchmark (b): BoolQ, MultiRC, ReCoRD, CommitmentBank (CB), Choice of Plausible Alternatives (COPA), Recognizing Textual Entailment (RTE), the Winograd Schema Challenge (WSC), and WiC, in addition to a broad-coverage RTE diagnostic set (AXb) and WinoGender RTE (AXg). These tasks were selected to be difficult for BERT but relatively easy for nonexpert humans, and are meant to replace the largely saturated GLUE benchmark (c). SuperGLUE does not include labeled test data, and does not allow for substantial ablation analyses on its test sets. Since we have no single final model whose performance we aim to show off, we do not evaluate on the test sets. We also neither use any auxiliary WSC-format data when training our WSC model (as in) nor artificially modify the task format. As has been observed elsewhere, we do not generally reach above-chance performance on that task without these extra techniques. Results are shown in Table 6. 3 Intermediate training with any of our five datasets yields models that transfer better than the plain RoBERTa or XLNet baseline, but we do not see consistent improvements over MNLI Gov. 9k. We also replicate the previously established result that NLI data is broadly helpful for transfer, with the large combined ANLI training set showing improvements over plain RoBERTa on six of eight tasks and simultaneously reducing the variance in results across restarts. We note, though, that our best overall result uses only 9k NLI training examples, suggesting either that this size is enough to maximize the gains available through NLI pretraining, or Table 6: Model performance on the SuperGLUE validation and diagnostic sets. The Avg.
column shows the overall SuperGLUE score, an average across the eight primary tasks, weighting each task equally, as a mean and standard deviation across three restarts. Only EDITOTHER and CONTRAST yield consistent improvements in transfer performance over BASE, and these improvements are small and not reliably reflected by any one target task. We take this to be a largely negative result: While there are informative trends, we cannot confidently claim that any intervention yields improvements in the degree to which the resulting data can be used for transfer. Our chief results on transfer learning are negative: None of our four interventions consistently improve upon the base MNLI data collection protocol by more than a marginal degree, though we see suggestive evidence that methods that supply annotators with retrieved non-premise seed sentences for inspiration offer small improvements. However, we also observe that all four of our interventions, and especially the use of longer contexts or pre-filled seed sentences, help reduce the prevalence of artifacts in the generated hypotheses that reveal the label, and the use of longer premises or seed sentences in particular do this without increasing the time cost of annotation. This suggests that these methods may be valuable in the collection of high-quality evaluation data, if combined with additional validation methods to ensure high human agreement with the collected labels. The need and opportunity that motivated this work remains compelling: Human-annotated data like MNLI has already proven itself as a valuable tool in teaching machines general-purpose skills for language understanding, and discovering ways to more effectively build and use such data could further accelerate the field's already fast progress toward robust, general-purpose language understanding technologies. Further work along this line of research could productively follow a number of directions: General work on incentive structures and task design for crowdsourcing could help to address more general questions about how to collect data that is simultaneously creative and consistently labeled. Machine learning work on transfer learning could help to better understand and exploit the effects that drive the successes we have seen with NLI data so far. Finally, there remains room for further empirical work investigating the kinds of task definitions and data collection protocols most likely to yield positive transfer.
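For concreteness, here is one way the STILTs-style two-stage fine-tuning from the transfer evaluations could look with the HuggingFace `transformers` library. This is a hedged sketch, not the authors' jiant setup: it assumes pre-tokenized dataset objects, and the directory names and hyperparameters are illustrative only.

```python
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

def intermediate_then_target(nli_train, target_train, target_num_labels):
    """Two-stage (STILTs-style) fine-tuning: NLI first, target task second.
    Both datasets are assumed to be pre-tokenized HuggingFace datasets."""
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-large", num_labels=3)  # entailment / neutral / contradiction
    Trainer(model=model,
            args=TrainingArguments("stage1_nli", num_train_epochs=4,
                                   per_device_train_batch_size=4),
            train_dataset=nli_train).train()
    model.save_pretrained("stage1_nli/final")
    # Second stage: reuse the NLI-tuned encoder with a fresh classifier
    # head sized for the target task.
    target_model = AutoModelForSequenceClassification.from_pretrained(
        "stage1_nli/final", num_labels=target_num_labels,
        ignore_mismatched_sizes=True)
    Trainer(model=target_model,
            args=TrainingArguments("stage2_target", num_train_epochs=4,
                                   per_device_train_batch_size=4),
            train_dataset=target_train).train()
    return target_model
```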
We propose four new ways of collecting NLI data. Some help slightly as pretraining data, all help reduce annotation artifacts.
1,139
scitldr
Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images require the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world video. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication. Understanding the interaction dynamics of objects and predicting what happens next is one of the key capabilities of humans which we heavily rely on to make decisions in everyday life BID3. A model that can accurately predict future observations of complex sensory modalities such as vision must internally represent the complex dynamics of real-world objects and people, and therefore is more likely to acquire a representation that can be used for a variety of visual perception tasks, such as object tracking and action recognition BID31 BID25 BID7. Furthermore, such models can be inherently useful themselves, for example, to allow an autonomous agent or robot to decide how to interact with the world to bring about a desired outcome BID27. However, modeling future distributions over images is a challenging task, given the high dimensionality of the data and the complex dynamics of the environment. Hence, it is common to make various simplifying assumptions. One particularly common assumption is that the environment is deterministic and that there is only one possible future BID5 BID31 BID1 BID25. Models conditioned on the actions of an agent frequently make this assumption, since the world is more deterministic in these settings BID27 BID10. However, most real-world prediction tasks, including the action-conditioned settings, are in fact not deterministic, and a deterministic model can lose many of the nuances that are present in real physical interactions. Given the stochastic nature of video prediction, any deterministic model is obliged to predict a statistic of all the possible outcomes. For example, deterministic models trained with a mean squared error loss function generate the expected value of all the possibilities for each pixel independently, which is inherently blurry BID26. Figure 1: Importance of stochasticity in video prediction. In each video, a random shape follows a random direction (first row). Given only the first frame, the deterministic model from BID10 predicts the average of all the possibilities. The third row is the output of SV2P with the latent sampled from the approximated posterior, which predicts the correct motion.
Last two rows are stochastic outcomes using random latent values sampled from the assumed prior. As observed, these outcomes are random but within the range of possible futures. The second sample of Figure 1c shows a case where the model predicts the average of more than one outcome. Our main contribution in this paper is a stochastic variational method for video prediction, named SV2P, that predicts a different plausible future for each sample of its latent random variables. We also provide a stable training procedure for training a neural network based implementation of this method. To the extent of our knowledge, SV2P is the first latent variable model to successfully predict multiple frames in real-world settings. Our model also supports action-conditioned predictions, while still being able to predict stochastic outcomes of ambiguous actions, as exemplified in our experiments. We evaluate SV2P on multiple real-world video datasets, as well as a carefully designed toy dataset that highlights the importance of stochasticity in video prediction (see Figure 1). In both our qualitative and quantitative comparisons, SV2P produces substantially improved video predictions when compared to the same model without stochasticity, with respect to standard metrics such as PSNR and SSIM. The stochastic nature of SV2P is most apparent when viewing the predicted videos. Therefore, we highly encourage the reader to check the project website https://goo.gl/iywUHc to view the actual videos of the experiments. The TensorFlow BID0 implementation of this project will be open sourced upon publication. A number of prior works have addressed video frame prediction while assuming deterministic environments BID28 BID31 BID34 BID38 BID1 BID25. In this work, we build on the deterministic video prediction model proposed by BID10, which generates the future frames by predicting the motion flow of dynamically masked out objects extracted from the previous frames. Similar transformation-based models were also proposed by BID6. Prior work has also considered alternative objectives for deterministic video prediction models to mitigate the blurriness of the predicted frames and produce sharper predictions BID26 BID33. Despite the adversarial objective, BID26 found that injecting noise did not lead to stochastic predictions, even for predicting a single frame. BID27; BID5 make sharp video predictions by assuming deterministic outcomes in video games given the actions of the agents. However, this assumption does not hold in real-world settings, which almost always have stochastic dynamics. Auto-regressive models have been proposed for modeling the joint distribution of the raw pixels. Although these models predict sharp images of the future, their training and inference time is extremely high, making them difficult to use in practice. BID29 proposed a parallelized multi-scale algorithm that significantly improves the training and prediction time but still requires more than a minute to generate one second of 64×64 video on a GPU. Our comparisons suggest that the predictions from these models are sharp, but noisy, and that our method produces substantially better predictions, especially for longer horizons. Another approach for stochastic prediction uses generative adversarial networks (GANs) BID14, which have been used for video generation and prediction BID32. BID35; applied adversarial training to predict video from a single image.
Although GANs generate sharp images, they tend to suffer from mode collapse BID13, particularly in conditional generation settings BID40. Variational auto-encoders (VAEs) BID21 have also been explored for stochastic prediction tasks. BID36 uses conditional VAEs to predict dense trajectories from pixels. BID39 predicts a single stochastic frame using cross convolutional networks in a VAE-like architecture. BID30 uses conditional VAEs and Gaussian mixture priors for stochastic prediction. Both of these works have been evaluated solely on synthetic datasets with simple moving sprites and no object interaction. Real images significantly complicate video prediction due to the diversity and variety of stochastic events that can occur. BID11 compared various architectures for multimodal motion forecasting and one-frame video prediction, including variational inference and straightforward sampling from the prior. Unlike these prior models, our focus is on designing a multi-frame video prediction model to produce stochastic predictions of the future. Multi-frame prediction is dramatically harder than single-frame prediction, since complex events such as collisions require multiple frames to fully resolve, and single-frame predictions can simply ignore this complexity. We believe our approach is the first latent variable model to successfully demonstrate stochastic multi-frame video prediction on real world datasets. In order to construct our stochastic variational video prediction model, we first formulate a probabilistic graphical model that explains the stochasticity in the video. Since our goal is to perform conditional video prediction, the predictions are conditioned on a set of c context frames x_0, ..., x_{c−1} (e.g., if we are conditioning on one frame, c = 1), and our goal is to sample from p(x_{c:T} | x_{0:c−1}), where x_i denotes the i-th frame of the video (FIG1). Video prediction is stochastic as a consequence of the latent events that are not observable from the context frames alone. For example, when a robot's arm pushes a toy on a table, the unknown weight of that toy affects how it moves. We therefore introduce a vector of latent variables z into our model, distributed according to a prior z ∼ p(z), and build a model p(x_{c:T} | x_{0:c−1}, z). This model is still stochastic but uses a more general representation, such as a conditional Gaussian, to explain just the noise in the image, while z accounts for the more complex stochastic phenomena. We can then factorize this model to ∏_{t=c}^{T} p_θ(x_t | x_{0:t−1}, z). Learning then involves training the parameters of these factors θ, which we assume to be shared between all the time steps. At inference time we need to estimate values for the true posterior p(z | x_{0:T}), which is intractable due to its dependency on p(x_{0:T}). We overcome this problem by approximating the posterior with an inference network q_φ(z | x_{0:T}) that outputs the parameters of a conditionally Gaussian distribution N(µ_φ(x_{0:T}), σ_φ(x_{0:T})). This network is trained using the reparameterization trick BID21, according to:

z = µ_φ(x_{0:T}) + σ_φ(x_{0:T}) ⊙ ε,   ε ∼ N(0, I)   (1)

Here, θ and φ are the parameters of the generative model and inference network, respectively.
To learn these parameters, we can optimize the variational lower bound, as in the variational autoencoder (VAE) BID21:

L(θ, φ) = E_{q_φ(z | x_{0:T})} [ Σ_{t=c}^{T} log p_θ(x_t | x_{0:t−1}, z) ] − D_KL( q_φ(z | x_{0:T}) || p(z) )   (2)

where D_KL is the Kullback-Leibler divergence between the approximated posterior and the assumed prior p(z), which in our case is the standard Gaussian N(0, I). In Equation 2, the first term on the RHS represents the reconstruction loss while the second term represents the divergence of the variational posterior from the prior on the latent variable. It is important to emphasize that the approximated posterior is conditioned on all of the frames, including the future frames x_{t:T}. This is feasible during training, since x_{t:T} is available at the training time, while at test time we can sample the latents from the assumed prior. Since the aim in our method is to recover latent variables that correspond to events which might explain the variability in the videos, we found that it is in fact crucial to condition the inference network on future frames. At test time, the latent variables are simply sampled from the prior, which corresponds to a smoothing-like inference process. In principle, we could also perform a filtering-like inference procedure of the form q_φ(z | x_{0:t−1}) for time step t to infer the most likely latent variables based only on the conditioning frames, instead of sampling from the prior, which could produce more accurate predictions at test time. However, it would be undesirable to use a filtering process at training time: in order to incentivize the forward prediction network to make use of the latent variables, they must contain some information that is useful for predicting future frames that is not already present in the context frames. If they are predicted entirely from the context frames, no such information is present, and indeed we found that a purely filtering-based model simply ignores the latent variables. So far, we've assumed that the latent events are constant over the entire video. We can relax this assumption by conditioning prediction on a time-variant latent variable z_t that is sampled at every time step from p(z). The generative model then becomes ∏_{t=c}^{T} p(z_t) p_θ(x_t | x_{0:t−1}, z_t) and, assuming a fixed posterior, the inference model will be approximated by q_φ(z_t | x_{0:T}), where the model parameters φ are shared across time. In practice, the only difference between these two formulations is the frequency of sampling z from p(z) and q_φ(z | x_{0:T}). In the time-invariant version, we sample z once per video, whereas with the time-variant latent, sampling happens every frame. The main benefit of the time-variant latent variable is better generalization beyond T, since the model does not have to encode all the events of the video in one vector z. We provide an empirical comparison of these formulations in Section 5.2. Figure: In the first phase, the inference network is turned off and only the generative network is being trained, resulting in deterministic predictions. The inference network is used in the second phase without a KL-loss. The last phase includes D_KL(q_φ(z | x_{0:T}) || p(z)) to enable accurate sampling of latents from p(z). (a) the KL-loss, (b) the reconstruction loss, (c) training stability: this graph compares reconstruction loss at the end of five training sessions on the BAIR robot pushing dataset, with and without following all the steps of the training procedure. The proposed training is quite stable and results in lower error compared to naïve training.
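A minimal PyTorch sketch of Equations 1 and 2 might look as follows. The network interfaces are placeholders (an `inference_net` returning the posterior mean and log standard deviation, and a `generative_net.log_prob` method for frame likelihoods); this is illustrative, not the paper's implementation.

```python
import torch

def sv2p_loss(frames, context, inference_net, generative_net, beta=1.0):
    """Negative ELBO for a batch of videos, following Equations 1 and 2."""
    mu, log_sigma = inference_net(frames)        # q_phi(z | x_{0:T})
    eps = torch.randn_like(mu)                   # epsilon ~ N(0, I)
    z = mu + log_sigma.exp() * eps               # Equation 1 (reparameterized)
    # Sum log-likelihoods of future frames given past frames and z.
    recon = generative_net.log_prob(frames, context, z).sum()
    # Closed-form KL(N(mu, sigma^2) || N(0, I)).
    kl = 0.5 * (mu.pow(2) + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum()
    return -(recon - beta * kl)                  # minimize the negative ELBO
```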
In action-conditioned settings, we modify the generative model to be conditioned on the action vector a_t. This results in ∏_{t=c}^{T} p(z_t) p_θ(x_t | x_{0:t−1}, z_t, a_t) as the generative model, while keeping the posterior approximation intact. Conditioning the outcome on actions can decrease future variability; however, it will not eliminate it if the environment is inherently stochastic or the actions are ambiguous. In this case, the model is still capable of predicting stochastic outcomes in a narrower range of possibilities. To model the approximated posterior q_φ(z | x_{0:T}) we used a deep convolutional neural network, as shown in the top row of FIG2. Since we assumed a diagonal Gaussian distribution for q_φ(z | x_{0:T}), this network outputs the mean µ_φ(x_{0:T}) and standard deviation log σ_φ(x_{0:T}) of the approximated posterior. Since the entire inference network is convolutional, the predicted parameters are 8×8 single-channel response maps. We assume each entry in these response maps is pairwise independent, forming the latent vector z. The latent value is then sampled using Equation 1. As discussed before, this sampling happens every frame for the time-varying latent, and once per video in the time-invariant case. For p(x_t | x_{0:t−1}, z), we used the CDNA architecture proposed by BID10, which is a deterministic convolutional recurrent network that predicts the next frame x_t given the previous frame x_{t−1} and an optional action a_t. This model constructs the next frames by predicting the motions of segments of the image (i.e., objects) and then merging these predictions via masking. Although this model directly outputs pixels, it is partially appearance-invariant and can generalize to unseen objects BID10. To condition on the latent value, we modify the CDNA architecture by stacking z_t as an additional channel on the tiled action a_t. Our model can be trained end-to-end. However, our experiments show that naïve training usually results in the model ignoring the latent variables and converging to a suboptimal deterministic solution (FIG4). Therefore, we train the model end-to-end in three phases, as follows: 1. Training the generative network: In this phase, the inference network has been disabled and the latent value z will be randomly sampled from N(0, I). The intuition behind this phase is to train the generative model to predict the future frames deterministically (i.e., modeling p_θ(x_t | x_{0:t−1})). 2. In the second phase, the inference network is trained to estimate the approximate posterior q_φ(z | x_{0:T}); however, the KL-loss is set to 0. This means that the model can use the latent value without being penalized for diverging from p(z). As seen in FIG4, this phase results in very low reconstruction error; however, it is not usable at test time since D_KL(q_φ(z | x_{0:T}) || p(z)) ≫ 0 and sampling z from the assumed prior will be inaccurate. 3. In the last phase, the KL-loss is added, resulting in a sudden drop of KL-divergence and an increase of reconstruction error. The reconstruction loss converging to a value lower than the first phase and the KL-loss converging to zero are indicators of successful training. This means that z can be sampled from p(z) at test time for effective stochastic prediction. To gradually transition from the second phase to the third, we add a multiplier to the KL-loss that is set to zero during the first two phases and then increased slowly in the last phase. This is similar to the β hyper-parameter in BID15 and BID2 that is used to balance latent channel capacity and independence constraints with reconstruction accuracy.
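The three-phase schedule can be summarized by a simple KL-weight function. The step boundaries below are hypothetical hyperparameters; only the shape of the schedule (zero through the first two phases, then a slow linear increase) follows the description above.

```python
def kl_weight(step, phase2_end, total_steps, beta_max=1.0):
    """KL-loss multiplier for the three-phase schedule: zero through
    phases 1 and 2, then a linear ramp from 0 to beta_max in phase 3."""
    if step < phase2_end:
        return 0.0
    progress = (step - phase2_end) / max(1, total_steps - phase2_end)
    return min(beta_max, beta_max * progress)
```

The returned weight would multiply the KL term of the objective (the `beta` argument in the loss sketch above) at each training step.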
We found that this training procedure is quite stable and the model almost always converges to the desired parameters. To demonstrate this stability, we trained the model with and without the proposed training procedure, five times each. FIG4 shows the average and standard deviation of reconstruction loss at the end of these training sessions. Naïve training results in a slightly better error compared to BID10, but with high variance. When following the proposed training algorithm, the model consistently converges to a much lower reconstruction error. To highlight the importance of stochasticity in video prediction, we created a toy video dataset with intentionally stochastic motion. Each video in this dataset is four frames long. The first frame contains a random shape (triangle, rectangle or circle) with random size and color, centered in the frame, which then randomly moves in one of the eight directions (up, down, left, right, up-left, up-right, down-left, down-right). Each frame is 64×64×3 and the background is static gray. The main intuition behind this design is that, given only the first frame, a model can figure out the shape, color, and size of the moving object, but not its movement direction. We train BID10 and SV2P to predict the future frames, given only the first frame. Figure 1 shows the video predictions from these two models. Since BID10 is a deterministic model with mean squared error as loss, it predicts the average of all possible outcomes, as expected. In contrast, SV2P predicts different possible futures for each sample of the latent variable z ∼ N(0, I). In our experiments, all the videos predicted by SV2P are within the range of plausible futures (e.g., we never saw the shape move in any direction other than the original eight). However, in some cases, SV2P still predicts the average of more than one future, as can be seen in the first random sample of Figure 1c. The main reason for this problem seems to be overlapping posterior distributions in latent space, which can cause some latent values (sampled from p(z)) to be ambiguous. To demonstrate that the inference network is working properly and that the latent variable does indeed learn to store the information necessary for stochastic prediction (i.e., the direction of movement), we include predicted futures when z ∼ q_φ(z | x_{0:T}). By estimating the correct parameters of the latent distribution, using the inference network, the model always generates the right outcome. However, this cannot be used in practice, since the inference network requires access to all the frames, including the ones in the future. Instead, z will be sampled from the assumed prior p(z). To evaluate SV2P, we test it on three real-world video datasets by comparing it to the CDNA model BID10, as a deterministic baseline, as well as a baseline that outputs the last seen frame as the prediction. We compare SV2P with an auto-regressive stochastic model, video pixel networks (VPN). We use the parallel multi-resolution implementation of VPN from BID29, which is an order of magnitude faster than the original VPN, but still requires more than a minute to generate one second of 64×64 video. In all of these experiments, we plot the results of sampling the latent once per video (SV2P time-invariant latent) and once per frame (SV2P time-variant latent). We strongly encourage readers to view https://goo.gl/iywUHc for videos of the results, which are more illustrative than printed frames.
We quantitatively and qualitatively evaluate SV2P on the following real-world datasets: • BAIR robot pushing dataset BID8: This dataset contains action-conditioned videos collected by a Sawyer robotic arm pushing a variety of objects. All of the videos in this dataset have similar table-top settings with static backgrounds. Each video also has recorded actions taken by the robotic arm, which correspond to the commanded gripper pose. An interesting property of this dataset is the fact that the arm movements are quite unpredictable in the absence of actions (compared to the robot pushing dataset BID10, in which the arm moves to the center of the bin). For this dataset, we train the models to predict the next ten frames given the first two, both in action-conditioned and action-free settings. • Human3.6M BID18: Humans and animals are one of the most interesting sources of stochasticity in natural videos, behaving in complex ways as a consequence of unpredictable intentions. To study human motion prediction, we use the Human3.6M dataset, which consists of actors performing various actions in a room. We used the pre-processing and testing format of BID10: a 10 Hz frame rate and 10-frame prediction given the previous ten. The videos from this dataset contain various actions performed by humans (walking, talking on the phone, ...). Similar to BID10, we included videos from all the performed actions in the training dataset while keeping all the videos from a specific actor out for testing. • Robotic pushing prediction BID10: We use the robot pushing prediction dataset to compare SV2P with another stochastic prediction method, video pixel networks (VPNs). VPNs demonstrated excellent results on this dataset in prior work, and therefore the robot pushing dataset provides a strong point of comparison. However, in contrast to our method, VPNs do not include latent stochastic variables that represent random events, and rely on an expensive auto-regressive architecture. In this experiment, the models have been trained to predict the next ten frames, given the first two. Similar to the BAIR robot pushing dataset, this dataset also contains actions taken by the robotic arm, which are the pose of the commanded gripper. Figure 5: Stochasticity of SV2P predictions on the action-free BAIR dataset. Each line presents the sample with the highest PSNR compared to ground truth, after multiple sampling. The number on the right indicates the number of random samples. As can be seen, SV2P predicts highly stochastic videos and, on average, only three samples are enough to predict outcomes with higher quality compared to BID10. In our quantitative evaluation, we aim to understand whether the range of possible futures captured by our stochastic model includes the true future. Models that are more stochastic do not necessarily score better on average on standard metrics such as PSNR BID17 and SSIM BID37. However, if we are interested primarily in understanding whether the true outcome is within the set of predictions, we can instead evaluate the score of the best sample from multiple random priors. We argue that this is a better metric for stochastic models, since it allows us to understand if uncertain futures contain the true outcome. Figure 5 illustrates how this metric changes with different numbers of samples. By predicting more possible futures, the probability of predicting the true outcome increases, and therefore it is more likely to get a sample with higher PSNR compared to the ground truth.
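The best-sample evaluation can be sketched as below, where `sample_prediction` is a placeholder that draws a fresh latent from the prior and returns one predicted video as an array; the reported score is the maximum PSNR over k samples.

```python
import numpy as np

def best_of_k_psnr(sample_prediction, ground_truth, k=100, max_val=1.0):
    """Best-of-k evaluation: draw k stochastic predictions (each with a
    fresh latent from the prior) and report the highest PSNR against the
    ground-truth video. `sample_prediction` is a placeholder callable."""
    def psnr(pred, gt):
        mse = np.mean((pred - gt) ** 2) + 1e-12  # guard against mse == 0
        return 10.0 * np.log10(max_val ** 2 / mse)
    return max(psnr(sample_prediction(), ground_truth) for _ in range(k))
```

This best-of-k PSNR is the metric reported in the quantitative comparisons that follow.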
Of course, as with all video prediction metrics, it is imperfect, and is only suitable for understanding the performance of the model when combined with a visual examination of the qualitative results in Section 5.3. To use this metric, we sample 100 latent values from the prior z ∼ N(0, I), use them to predict 100 videos, and show the results of the sample with the highest PSNR. For a fair comparison to VPN, we use the same best out of 100 samples for our stochastic baseline. Since even the fast implementation of VPN is quite slow, we limit the comparison with VPN to only the last dataset with 256 test samples. Overall, SV2P with both time-variant and time-invariant latent sampling outperforms all of the other baselines, by predicting higher quality videos with higher PSNR and SSIM. Time-varying latent sampling is more stable beyond the time horizon used during training (FIG6). One possible explanation for this behaviour is that the time-invariant latent has to include the information required for predicting all the frames and therefore, beyond training time, it collapses. This issue is mitigated by a time-variant latent variable, which takes a different value at each time step. However, this stability is not always the case, as is more evident in late frames of FIG6. One other interesting observation is that the time-invariant model outperforms the time-variant model in the Human3.6M dataset. In this dataset, the most important latent event, the action performed by the actor, is consistent across the whole video, which is easier to capture using a time-invariant latent. We can better understand the performance of the proposed model by visual examination of the qualitative results. We highlight some of the most important and observable differences in predictions by different models in FIG9. In all of these figures, the x-axis is time (i.e., each row is one video). The first row is the ground truth video, and the second row is the result of BID10. The results of sampling the latent from the approximated posterior are provided in the third row. For stochastic methods, we show the best (highest PSNR) and worst (lowest PSNR) predictions out of 100 samples (as discussed in Section 5.2), as well as two random predicted videos from our model. FIG9 illustrates two examples from the BAIR robot pushing dataset in the action-free setting. As a consequence of the high stochasticity in the movement of the arm in the absence of actions, BID10 only blurs the arm out, while SV2P predicts varied but coherent movements of the arm. Note that, although each predicted movement of the arm is random, it is still in the valid range of possible outcomes (i.e., there is no sudden jump of the arm nor random movement of the objects). The proposed model also learned how to move objects in cases where they have been pushed by the predicted movements of the arm, as can be seen in the zoomed images of both samples. Quantitative comparison of the predicted frames on the Human3.6M dataset using confidence of object detection as a quality metric: the y-axis demonstrates the average confidence of BID16 in detecting humans in predicted frames. Based on this metric, SV2P predicts images with more meaningful semantics compared to BID10. In the action-conditioned setting (FIG10), the differences are more subtle: the range of possible outcomes is narrower, but we can still observe stochasticity in the behavior of the pushed objects. Interactions between the arm and objects are uncertain due to ambiguity in depth, friction, and mass, and SV2P is able to capture some of this variation.
Since these variations are subtle and occupy a smaller part of the images, we illustrate this with zoomed insets in FIG10. Some examples of varied object movements can be found in the last three rows of the right example of FIG10. SV2P also generates sharper outputs compared to BID10, as is evident in the left example of FIG10. Please note that the approximate posterior q_φ(z | x_{0:T}) is still trained with the evidence lower bound (ELBO), which means that the posterior must compress the information of the future events. Perfect reconstruction of high-quality images from posterior distributions over latent states is an open problem, and the results in our experiments compare favorably to those typically observed even in single-image VAEs (e.g., see BID39). This is why the model cannot reconstruct all the future frames perfectly, even when latent values are sampled from q_φ(z | x_{0:T}). Figure 10 displays two examples from the Human3.6M dataset. In the absence of actions, BID10 manages to separate the foreground from the background, but cannot predict what happens next accurately. This results in distorted or blurred foregrounds. On the other hand, SV2P predicts a variety of different outcomes, and moves the actor accordingly. Note that PSNR and SSIM measure reconstruction loss with respect to the ground truth, and they may not generally indicate a better prediction. For some applications, a prediction with lower PSNR/SSIM might have higher quality and be more interesting. A good example is the prediction with the worst PSNR in Figure 10 right, where the model predicts that the actor is spinning in his chair with relatively high quality. However, this output has the lowest PSNR compared to the ground truth. Pixel-wise metrics such as PSNR and SSIM may not be the best measures for semantic evaluation of predicted frames. Therefore, we use the confidence of an object detector to show that the predicted frames contain useful semantic information. For this purpose, we use the open-sourced implementation of BID16 to compare the quality of predicted frames in the Human3.6M dataset. As can be seen in FIG8, SV2P predicted frames in which the human inside can be detected with higher confidence, compared to BID10. Finally, FIG11 demonstrates results on the Google robot pushing dataset. The qualitative and quantitative results in FIG11 and 6 both indicate that SV2P produces substantially better predictions than VPNs. The quantitative results suggest that our best-of-100 metric is a reasonable measure of performance: the VPN predictions are more noisy, but simply increasing noise is not sufficient to increase the quality of the best sample. The stochasticity in our predictions is more coherent, corresponding to differences in object or arm motion, while much of the stochasticity in the VPN predictions resembles noise in the image, as well as visible artifacts when predicting for substantially longer time horizons. We proposed stochastic variational video prediction (SV2P), an approach for multi-step video prediction based on variational inference. Our primary contributions include an effective stochastic prediction method with latent variables, a network architecture that succeeds on natural videos, and a training procedure that provides for stable optimization. The source code for our method will be released upon acceptance. We evaluated our proposed method on three real-world datasets in action-conditioned and action-free settings, as well as one toy dataset which has been carefully designed to highlight the importance of the stochasticity in video prediction.
Both qualitative and quantitative results indicate higher quality predictions compared to the other deterministic and stochastic baselines. SV2P can be expanded in numerous ways. First, the current inference network design is fully convolutional, which exposes multiple limitations, such as unmodeled spatial correlations between the latent variables. The model could be improved by incorporating the spatial correlation induced by the convolutions into the prior, using a learned structured prior in place of the standard spherical Gaussian. A time-variant posterior approximation, which reflects the new information that is revealed as the video progresses, is another possible improvement to SV2P. However, as discussed in Section 3, this requires incentivizing the inference network to incorporate the latent information at training time. This would allow time-variant latent distributions, which are more aligned with generative neural models for time-series (BID19; BID12; BID22). Another exciting direction for future research would be to study how stochastic predictions can be used to act in the real world, producing model-based reinforcement learning methods that can execute risk-sensitive behaviors from raw image observations. Accounting for risk in this way could be especially important in safety-critical settings, such as robotics. (FIG9 caption: In the absence of actions, and therefore under high stochasticity, BID10 only blurs the robotic arm out, while the proposed method predicts sharper frames on each sampling. SV2P also predicts the interaction dynamics between random movements of the arm and the objects. This is mostly evident in the zoomed-in objects which have been pushed by the arm.) (Figure 10 caption: Prediction on the action-free Human3.6M dataset. SV2P predicts a different outcome on each sampling given the latent. In the left example, the model predicts walking as well as stopping, which results in different outputs in the predicted future frames. Similarly, the right example demonstrates various outcomes including spinning.) (FIG11 caption: Comparison of BID29 with SV2P on the robotic pushing dataset. We use the same best PSNR out of 100 random samples for both methods. Besides stochastic movements of the pushed objects, another source of stochasticity is the starting lag in the movements of the robotic arm. SV2P generates sharper images compared to BID10 (notice the pushed objects in the zoomed images) with less noise compared to BID29 (look at the accumulated noise in later frames).) A TRAINING DETAILS: FIG2 contains details of the network architectures used as the generative and inference models. In all of the experiments we used the same set of hyper-parameters, which can be found in TAB1. In the first step of training, we disable the inference network and instead sample latent values from N(0, I). In step 2, the latent values are sampled from the approximated posterior q_φ(z|x_{0:T}) = N(µ(x_{0:T}), σ(x_{0:T})). Please note that the inference network approximates log(σ) instead of σ for numerical stability. To gradually switch from Step 2 of the training procedure to Step 3, we increase β linearly from its starting value to its end value over the length of training.
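The linear β schedule just described can be sketched in a few lines; the start/end values and step counts below are placeholders, not the paper's actual hyperparameters.

```python
def kl_beta(step, switch_step, total_steps, beta_start=1e-4, beta_end=1e-3):
    """Linearly anneal the KL weight beta from beta_start to beta_end
    between switch_step (start of Step 3) and total_steps."""
    if step < switch_step:
        return beta_start
    frac = min(1.0, (step - switch_step) / float(total_steps - switch_step))
    return beta_start + frac * (beta_end - beta_start)
```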
Stochastic variational video prediction in real-world settings.
1,140
scitldr
In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents' policies, which can recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at \url{https://github.com/apexrl/CoDAIL}. Modeling complex interactions among intelligent agents from the real world is essential for understanding and creating intelligent multi-agent behaviors, which is typically formulated as a multi-agent learning (MAL) problem in multi-agent systems. When the system dynamics are agnostic and non-stationary due to adaptive agents with implicit goals, multi-agent reinforcement learning (MARL) is the most commonly used technique for MAL. MARL has recently drawn much attention and achieved impressive progress on various non-trivial tasks, such as multi-player strategy games, traffic light control, taxi-order dispatching, etc. A central challenge in MARL is to specify a good learning goal, as the agents' rewards are correlated and thus cannot be maximized independently. Without explicit access to the reward signals, imitation learning could be the most intuitive solution for learning good policies directly from demonstrations. Conventional solutions such as behavior cloning (BC) learn the policy in a supervised manner, which requires large amounts of data and suffers from compounding error. Inverse reinforcement learning (IRL) alleviates these shortcomings by recovering a reward function, but it is expensive to obtain the optimal policy due to the forward reinforcement learning procedure in an inner loop. Generative adversarial imitation learning (GAIL) is a better candidate thanks to its model-free structure without compounding error, which is highly effective and scalable. However, real-world multi-agent interactions can be much more challenging to imitate because of the strong correlations among adaptive agents' policies and rewards. Consider a football coach who wants to win the league: he must devise targeted tactics against various opponents, in addition to accounting for the situation of his own team. Moreover, the multi-agent environment tends to give rise to more severe compounding errors with more expensive running costs. Motivated by these challenges, we investigate the problem of modeling complicated multi-agent interactions from a pile of off-line demonstrations and recovering their on-line policies, which can regenerate analogous multi-agent behaviors. Prior studies for multi-agent imitation learning typically limit the complexity of demonstrated interactions by assuming isolated reward structures and independence of per-agent policies, which overlooks the high correlations among agents.
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with correlated policies by approximating opponents' policies, in order to account for opponents' actions, which are inaccessible at decision time due to the concurrent execution of actions among agents. Consequently, with the approximated opponents model, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL) suitable for learning correlated policies under our proposed framework, which allows for decentralized training and execution. We prove that our framework treats the demonstrator interactions as one of the ε-Nash Equilibrium (ε-NE) solutions under the recovered reward. In experiments, we conduct multi-dimensional comparisons of both the reward gap between learned agents and demonstrators, and the distribution divergence between demonstrations and regenerated interaction trajectories from the learned policies. Furthermore, the results reveal that CoDAIL can better recover correlated multi-agent policy interactions than other state-of-the-art multi-agent imitation learning methods in several multi-agent scenarios. We further illustrate the distributions of regenerated interactions, which indicates that CoDAIL yields the closest interaction behaviors to the demonstrators. 2.1 MARKOV GAME AND ε-NASH EQUILIBRIUM: A Markov game (MG), or stochastic game, can be regarded as an extension of a Markov Decision Process (MDP). Formally, we define an MG with N agents as a tuple (N, S, A^(1), ..., A^(N), P, r^(1), ..., r^(N), ρ_0, γ), where S is the set of states, A^(i) represents the action space of agent i, with i ∈ {1, 2, ..., N}, P: S × A^(1) × ⋯ × A^(N) × S → R is the state transition probability distribution, ρ_0: S → R is the distribution of the initial state s_0, and γ ∈ [0, 1] is the discount factor. Each agent i holds its policy π^(i)(a^(i)|s) to make decisions and receives rewards defined as r^(i): S × A^(1) × ⋯ × A^(N) → R. We use −i to represent the set of agents except i, and variables without superscript i to denote the concatenation of all variables for all agents (e.g., π represents the joint policy and a denotes the actions of all agents). For an arbitrary function f: (s, a) → R, there is the fact that E_{(s,a)∼ρ_π}[f(s, a)] = E_π[Σ_{t=0}^∞ γ^t f(s_t, a_t)], where ρ_π denotes the occupancy measure introduced below. The objective of agent i is to maximize its own total expected return R^(i) = E_π[Σ_{t=0}^∞ γ^t r^(i)_t]. In Markov games, however, the reward function for each agent depends on the joint agent actions. Such a fact implies that one's optimal policy must also depend on others' policies. As a solution concept for Markov games, the ε-Nash equilibrium (ε-NE) is commonly used, extending the Nash equilibrium (NE). Definition 1: a joint policy π* is an ε-NE if, for each agent i, v^(i)(π*^(i), π*^(−i)) ≥ v^(i)(π^(i), π*^(−i)) − ε for any π^(i) ∈ Π^(i), where v^(i)(π^(i), π^(−i)) = E[Σ_{t=0}^∞ γ^t r^(i)_t | s_0 ∼ ρ_0, π] is the value function of agent i, and Π^(i) is the set of policies available to agent i. ε-NE is weaker than NE and can be seen as a sub-optimal NE; every NE is equivalent to an ε-NE with ε = 0. Imitation learning aims to learn the policy directly from expert demonstrations without any access to the reward signals. In single-agent settings, such demonstrations come from behavior trajectories sampled with the expert policy, denoted as τ_E = {(s_t, a_t)}. However, in multi-agent settings, demonstrations are often interrelated trajectories, that is, trajectories sampled from the interactions of policies among all agents, denoted as Ω_E = {(s_t, a_t)}. For simplicity, we will use the term interactions for the concept of interrelated trajectories, and we refer to trajectories for a single agent. Typically, behavior cloning (BC) and inverse reinforcement learning (IRL) are two main approaches for imitation learning.
Although IRL theoretically alleviates compounding error and outperforms BC, it is less efficient since it requires solving an RL problem inside the learning loop. Recently proposed work aims to learn the policy without estimating the reward function directly; notably, GAIL, which takes advantage of Generative Adversarial Networks (GANs), shows that IRL is the dual problem of occupancy measure matching. GAIL regards the environment as a black box, which is non-differentiable but can be leveraged through Monte-Carlo estimation of policy gradients. Formally, its objective can be expressed as min_π max_D E_π[log D(s, a)] + E_{π_E}[log(1 − D(s, a))] − λH(π), where D is a discriminator that distinguishes the expert trajectories from those sampled from policy π, which in turn tries to maximize its evaluation from D; H is the causal entropy of the policy and λ is a hyperparameter. In multi-agent learning tasks, each agent i makes decisions independently while the resulting reward r^(i)(s, a^(i), a^(−i)) depends on others' actions, which makes its cumulative return subject to the joint policy π. One common joint policy modeling method is to decouple π by assuming conditional independence of actions from different agents: π(a|s) = Π_{i=1}^N π^(i)(a^(i)|s). However, such a non-correlated factorization of the joint policy is a vulnerable simplification which ignores the influence of opponents, and the learning process of agent i lacks stability since the environment dynamics depend not only on the current state but also on the joint actions of all agents. To solve this, recent work has taken opponents into consideration by decoupling the joint policy as a correlated policy conditioned on state s and opponent actions a^(−i): π(a|s) = π^(i)(a^(i)|s, a^(−i)) π^(−i)(a^(−i)|s), where π^(i)(a^(i)|s, a^(−i)) is the conditional policy, with which agent i regards all potential actions from its opponent policies π^(−i)(a^(−i)|s), and makes decisions through the marginal policy π^(i)(a^(i)|s) = E_{a^(−i)∼π^(−i)(·|s)}[π^(i)(a^(i)|s, a^(−i))]. In multi-agent settings, agent i with policy π^(i) seeks to maximize its cumulative reward against demonstrator opponents equipped with demonstrated policies π^(−i)_E via reinforcement learning: RL(r^(i)) = argmax_{π^(i)} E_{π^(i), π^(−i)_E}[Σ_t γ^t r^(i)_t] + λH(π^(i)), where H(π^(i)) is the γ-discounted entropy of policy π^(i) and λ is a hyperparameter. Coupling this with the IRL perspective, we define an IRL procedure to find a reward function r^(i) such that the demonstrated joint policy outperforms all other policies. It is worth noting that we cannot obtain the demonstrated policies from the demonstrations directly. To solve this problem, we first introduce the occupancy measure, namely the unnormalized distribution of (s, a) pairs corresponding to the agent interactions navigated by joint policy π: ρ_π(s, a) = π(a|s) Σ_{t=0}^∞ γ^t P(s_t = s | π). With this definition, we can further formulate ρ_π from agent i's perspective as ρ_π(s, a^(i), a^(−i)) = π^(i)(a^(i)|s, a^(−i)) π^(−i)(a^(−i)|s) Σ_{t=0}^∞ γ^t P(s_t = s | π). In analogy to the definition of the occupancy measure in a single-agent environment, we follow the single-agent derivation and state the result directly. Proposition 1: The IRL regarding demonstrator opponents is a dual form of the following occupancy measure matching problem with regularizer ψ, and the induced optimal policy is the primal optimum: min_{π^(i)} −λH(π^(i)) + ψ*(ρ_π − ρ_{π_E}). Setting the regularizer ψ = ψ_GA as in GAIL, we can obtain a GAIL-like imitation algorithm to learn π^(i) by introducing the adversarial training procedure of GANs, which leads to a saddle point of min_{π^(i)} max_{D^(i)} E_π[log D^(i)(s, a)] + E_{π_E}[log(1 − D^(i)(s, a))] − λH(π^(i)), where D^(i) denotes the discriminator for agent i, which plays the role of a surrogate cost function and guides the policy learning. However, such an algorithm is not practical, since we are unable to access the policies of demonstrator opponents π^(−i)_E, because the demonstrated policies are always given through sets of interaction data.
To alleviate this deficiency, it is necessary to work with accessible counterparts. Thereby we propose Proposition 2. Proposition 2: Let µ be an arbitrary function such that µ holds a similar form as π^(−i); then expectations taken under π^(−i) can be rewritten as expectations under µ weighted by the importance ratio π^(−i)(a^(−i)|s) / µ(a^(−i)|s). Proof: substitute π^(−i) with µ in the occupancy measure by importance sampling. Proposition 2 raises an important point: an importance weight can quantify the demonstrator opponents. Replacing π^(−i) with µ yields an equivalent objective, where α = π^(−i)(a^(−i)|s) / µ(a^(−i)|s) is the importance sampling weight. In practice, it is challenging to estimate these densities, and the learning methods might suffer from large variance. Thus, we fix α = 1 in our implementation, and, as the experimental results show, this has no significant influence on performance. A similar approach can be found in related work. So far, we have built a multi-agent imitation learning framework which can be easily generalized to correlated or non-correlated policy settings. No prior has to be considered in advance, since the discriminator is able to learn the implicit goal for each agent. With this objective, demonstrated interactions can be imitated by alternately updating the discriminators to offer surrogate rewards and learning the policies. Formally, the discriminator D^(i) for each agent i, parametrized by ω, is updated to distinguish demonstrations from samples of the learned joint policy, and the policy π^(i), parametrized by θ, is updated to maximize the surrogate reward given by its discriminator. It is worth noting that agent i considers the opponents' actions a^(−i) while updating its policy and discriminator, integrating all its possible decisions to find the optimal response. However, it is unrealistic for agent i to have access to the opponent joint policy π^(−i)(a^(−i)|s). Thus, it is possible to estimate opponents' actions by approximating π^(−i)(a^(−i)|s) using opponent modeling. To that end, we construct a function σ^(i)(a^(−i)|s) as the approximation of opponents for each agent i, and rewrite the discriminator and policy updates with σ^(i) in place of π^(−i). Therefore, each agent i must infer the opponents model σ^(i) to approximate the unobservable policies π^(−i), which can be achieved via supervised learning. Specifically, we learn σ^(i) in a discrete action space by minimizing a cross-entropy (CE) loss, and with a mean-square-error (MSE) loss in a continuous action space. With opponent modeling, agents are able to be trained in a fully decentralized manner. We name our algorithm Decentralized Adversarial Imitation Learning with Correlated policies (Correlated DAIL, a.k.a. CoDAIL) and present the training procedure in Appendix Algo. 1, which can be easily scaled to a distributed algorithm. As a comparison, we also present a non-correlated DAIL algorithm with a non-correlated policy assumption in Appendix Algo. 2. In this section, we prove that the reinforcement learning objective against demonstrator counterparts shown in the last section is essentially equivalent to reaching an ε-NE. Since we fix the policies of agents −i as π^(−i)_E, the RL procedure above can be regarded as a single-agent RL problem. Similarly, with a fixed π^(−i)_E, the IRL process is cast to a single-agent IRL problem, which recovers an optimal reward function r^(i)* under which the demonstrated joint policy achieves the best performance. Given the value function defined in Definition 1 for each agent i, and bounding the entropy terms by ε, we finally obtain v^(i)(π^(i)_E, π^(−i)_E) ≥ v^(i)(π^(i), π^(−i)_E) − ε for all π^(i) ∈ Π^(i), which is exactly the ε-NE defined in Definition 1. We can further argue that ε is bounded by small values such that the ε-NE solution concept is meaningful.
Generally, random policies that keep vast entropy are not considered sub-optimal solutions or demonstrated policies π_E in most reinforcement learning environments. As we do not require such random policies, we can remove them from the candidate policy set Π^(i), which indicates that H(π^(i)) is bounded by small values, and so is ε. Empirically, we adopt a small λ and attain the demonstrator policy π_E with an efficient learning algorithm so that it becomes a close-to-optimal solution. Thus, we conclude that the objective of our CoDAIL assumes that the demonstrated policies institute an ε-NE solution concept (though not necessarily a unique one), which can be controlled by the hyperparameter λ under some specific reward function, and from which the agent learns a policy. It is worth noting that prior work claimed that NE is incompatible with maximum entropy inverse reinforcement learning (MaxEnt IRL) because NE assumes that the agent never takes sub-optimal actions. Nevertheless, we prove that, given demonstrator opponents, the multi-agent MaxEnt IRL defined above is equivalent to finding an ε-NE. Albeit non-correlated policy learning guided by a centralized critic has shown excellent properties in a couple of methods, including MADDPG, COMA, and MA Soft-Q, it falls short in modeling complex interactions because its decision making relies on the independent policy assumption, which considers only private observations while ignoring the impact of opponent behaviors. To behave more rationally, agents must take other agents into consideration, which leads to studies of opponent modeling, where an agent models how its opponents behave based on the interaction history when making decisions. For multi-agent imitation learning, however, prior works fail to learn from complicated demonstrations, and many of them are bound to particular reward assumptions. For instance, Parameter Sharing Generative Adversarial Imitation Learning (PS-GAIL) adopts a parameter sharing trick to extend GAIL to handle multi-agent problems directly, but it does not utilize the properties of Markov games, imposing strong constraints on the action space and the reward function. Besides, there are many works built on Markov games that are restricted to tabular representations and known dynamics, with specific priors on reward structures, such as fully cooperative games, two-player zero-sum games, two-player general-sum games, and linear combinations of specific features. Recently, some researchers have taken advantage of GAIL to solve Markov games. Inspired by a specific choice of Lagrange multipliers for a constrained optimization problem, a performance gap from NE was derived for the multi-agent setting, and multi-agent GAIL (MA-GAIL) was proposed, where the reward function for each agent is formulated using private actions and observations. As an improvement, multi-agent adversarial inverse reinforcement learning (MA-AIRL) was presented, based on logistic stochastic best response equilibrium and MaxEnt IRL. However, both of them are inadequate for modeling agent interactions with correlated policies, as they use independent discriminators. By contrast, our approach can generalize correlated policies to model the interactions from demonstrations and employs a fully decentralized training procedure without access to the specific opponent policies.
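To make the opponent modeling component of CoDAIL concrete, here is a minimal PyTorch-style sketch of the supervised losses described in Section 3 (CE for discrete actions, MSE for continuous ones); the `OpponentModel` architecture and its sizes are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class OpponentModel(nn.Module):
    """Approximates sigma^(i)(a^(-i) | s): predicts opponents' actions from state."""
    def __init__(self, state_dim, opp_action_dim, hidden=64, discrete=True):
        super().__init__()
        self.discrete = discrete
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, opp_action_dim),
        )

    def loss(self, states, opp_actions):
        out = self.net(states)
        if self.discrete:
            # cross-entropy against observed opponent action indices
            return nn.functional.cross_entropy(out, opp_actions.long())
        # mean-squared error against observed continuous opponent actions
        return nn.functional.mse_loss(out, opp_actions)
```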
Except for the line of work that models multi-agent interactions by recovering agents' policies from demonstrations so as to regenerate similar interaction data, some other works consider different aspects of interactions. One line of work proposed to learn a policy representation function of the agents based on their interactions, and to solve sets of generalization tasks using the learned policy embeddings. They regarded interactions as episodes that contain only k agents (2 in their paper), which constructs an agent-interaction graph. Different from us, they focused on the potential relationships among agents to help characterize agent behaviors. Other works proposed to use a Dynamic Bayesian Model that describes physical relationships among vehicles and driving behaviors to model interaction-dependent behaviors in autonomous driving scenarios. Correlated policy structures, which help agents consider the influence of other agents, usually need opponent modeling to infer others' actions. Opponent modeling has a rich history in MAL, and much recent research has worked out useful approaches for different settings in deep MARL, e.g., DRON and ROMMEO. In this paper, we focus on imitation learning with correlated policies, and we choose a natural and straightforward approach to opponent modeling: learning opponents' policies via supervised learning on historical trajectories. Opponent models are used both in the training and the execution stages. Environment Description: We test our method on the Particle World Environments, a popular benchmark for evaluating multi-agent algorithms, including several cooperative and competitive tasks. Specifically, we consider two cooperative scenarios and two competitive ones as follows: 1) Cooperative-communication, with 2 agents and 3 landmarks, where an unmovable speaker, knowing the goal, cooperates with a listener to reach a particular landmark; the listener achieves the goal only through the message from the speaker; 2) Cooperative-navigation, with 3 agents and 3 landmarks, where agents must cooperate via physical actions, and each agent is required to reach one landmark while avoiding collisions; 3) Keep-away, with 1 agent, 1 adversary and 1 landmark, where the agent has to get close to the landmark, while the adversary is rewarded for pushing the agent away from the landmark without knowing the target; 4) Predator-prey, with 1 prey agent and 3 adversary predators, where the slower predator agents must cooperate to chase the prey agent, which moves faster and tries to run away from the adversaries. Experimental Details: We aim to compare the quality of interaction modeling in different respects. To obtain interaction demonstrations sampled from correlated policies, we train the demonstrator agents via a MARL algorithm with opponent modeling that incorporates others' policies into each agent's decision making, since the ground-truth reward in these simulated environments is accessible. Specifically, we modify a multi-agent version of an efficient model-free policy gradient algorithm by keeping an auxiliary opponents model and a conditioned policy for each agent, which transforms the original centralized on-policy learning algorithm into a decentralized one. Note that we do not necessarily need experts that do well in our designated environments. Instead, any demonstrator is treated as if it comes from an ε-NE strategy concept under some unknown reward functions, which will be recovered by the discriminator.
In our training procedure, we first obtain demonstrator policies induced by the ground-truth rewards and then generate demonstrations, i.e., the interaction data for imitation training. Then we train the agents through the surrogate rewards from the discriminators. We compare CoDAIL with MA-AIRL, MA-GAIL, non-correlated DAIL (NC-DAIL) (the only difference between MA-GAIL and NC-DAIL is whether the reward function depends on joint actions or individual actions) and a random agent. We do not apply any prior to the reward structure for any task, letting the discriminator learn the implicit goals. All training procedures are pre-trained via behavior cloning to reduce the sample complexity, and we use 200 episodes of demonstrations, each with a maximum of 50 timesteps. Tab. 1 and Tab. 2 show the averaged absolute differences of reward for learned agents compared to the demonstrators in cooperative and competitive tasks, respectively. The learned interactions are considered superior if the reward gaps are smaller. Since cooperative tasks are reward-sharing, we show only a group reward for each task in Tab. 1. Compared to the baselines, CoDAIL achieves smaller gaps in both cooperative and competitive tasks, which suggests that our algorithm has a robust imitation learning capability for modeling the demonstrated interactions. It is also worth noting that CoDAIL achieves larger advantages in competitive tasks than in cooperative ones, which we attribute to conflicting goals motivating more complicated interactions than a shared goal. Besides, MA-GAIL and NC-DAIL are about the same, indicating that the surrogate reward structure is less important in these multi-agent scenarios. To our surprise, MA-AIRL does not perform well in some environments, and even fails on Predator-prey. We list the raw obtained rewards in Appendix C, and we provide more hyperparameter sensitivity results in Appendix D. Since we aim to recover the interactions of agents generated by the learned policies, it is proper to evaluate the relevance between the distributions of regenerated interactions and the demonstration data. Specifically, we collect positions of agents over hundreds of state-action tuples, which can be regarded as a low-dimensional projection of the state-action interactions. We start each episode from a different initial state, but use the same initial state for each algorithm within an episode. We run all the experiments under the same random seed, and collect the positions of each agent over 100 episodes in total, each with a maximum of 50 timesteps. We first estimate the distribution of positions (x, y) via Kernel Density Estimation (KDE) with a Gaussian kernel to compute the Kullback-Leibler (KL) divergence between the generated interactions and the demonstrated ones, shown in Tab. 3. It is evident that, in terms of the KL divergence between regenerated and demonstrator interactions, CoDAIL generates the interaction data with the minimum gap to the demonstration interactions, and highly outperforms the other baseline methods. Besides, MA-GAIL and NC-DAIL show about the same ability to model complex interactions, while MA-AIRL behaves the worst, even worse than random agents on Predator-prey. To further understand the interactions generated by the learned policies compared with the demonstrators, we visualize the interactions for the demonstrator policies and all learned ones. We plot the density distribution of positions (x, y) and the marginal distributions of x-positions and y-positions. We illustrate the results on Keep-away in Fig. 1,
while the other scenarios can be found in Appendix E. Higher-frequency positions in the collected data are colored darker in the plane and take higher values in the marginal distributions. As shown in Fig. 1, the interaction densities of demonstrators and CoDAIL agents are highly similar (with the smallest KL divergence), both tending to walk toward the lower-right side. In contrast, the other learned agents fail to recover the demonstrator interactions. It is worth noting that even different policies can interact to earn similar rewards while still keeping vast differences among their generated interactions. Furthermore, such a result reminds us that the real reward is not the best metric to evaluate the quality of modeling the demonstrated interactions, or of imitation learning in general. In this paper, we focus on modeling complex multi-agent interactions via imitation learning on demonstration data. We develop a decentralized adversarial imitation learning algorithm with correlated policies (CoDAIL) with approximated opponent modeling. CoDAIL allows for decentralized training and execution and is more capable of modeling correlated interactions from demonstrations, as shown by multi-dimensional comparisons against other state-of-the-art multi-agent imitation learning methods on several experimental scenarios. In the future, we will consider covering more imitation learning tasks and modeling the latent variables of policies for diverse multi-agent imitation learning. We list the raw obtained rewards of all algorithms in each scenario. We evaluate the stability of our algorithm when the hyperparameters change during our experiments on Communication-navigation. Tab. 6 shows the total reward difference between learned agents and demonstrators when we modify the training frequencies of D and G (i.e., the policy), which indicates that training is more stable when D is trained more slowly than G, and reaches relatively better performance when the frequency ratio is 1:2 or 1:1. Fig. 2 illustrates that the choice of λ has little effect on the overall performance. The reason may derive from the discrete action space in this environment, where the policy entropy changes gently. (Figure caption: The density and marginal distributions of agents' positions (x, y) in 100 repeated episodes with different initialized states, generated from different learned policies on Cooperative-navigation. Experiments are done under the same random seed. The top of each sub-figure is drawn from state-action pairs of all agents, while those below show each agent individually. KL is the KL divergence between the generated interactions (top figure) and the demonstrators'.) (Figure caption: The density and marginal distributions of agents' positions (x, y) in 100 repeated episodes with different initialized states, generated from different learned policies on Predator-prey. Experiments are conducted under the same random seed. The top of each sub-figure is drawn from state-action pairs of all agents, while those below show each agent individually. The KL term means the KL divergence between the generated interactions (top figure) and the demonstrators'.)
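The KDE-based evaluation above can be sketched with SciPy; the Monte-Carlo form of the KL estimate below is one reasonable reading of the procedure, not necessarily the authors' exact computation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def position_kl(demo_xy, gen_xy, eps=1e-12):
    """Estimate KL(gen || demo) between 2D position distributions.

    demo_xy, gen_xy: arrays of shape (2, n_samples) with agent (x, y) positions.
    Fits a Gaussian KDE to both point sets, then Monte-Carlo-estimates the KL
    divergence over the generated samples.
    """
    p_gen = gaussian_kde(gen_xy)
    p_demo = gaussian_kde(demo_xy)
    log_ratio = np.log(p_gen(gen_xy) + eps) - np.log(p_demo(gen_xy) + eps)
    return float(np.mean(log_ratio))
```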
Modeling complex multi-agent interactions under multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies.
1,141
scitldr
Plagiarism and text reuse have become easier with the development of the Internet. Therefore it is important to check scientific papers for such cheating, especially in Academia. Existing plagiarism detection systems show good performance and have huge source databases. Thus it is no longer enough to simply copy text as-is from the source document to pass a work off as original. Therefore, another type of plagiarism has become popular - cross-lingual plagiarism. We present the CrossLang system for detecting this kind of plagiarism for the English-Russian language pair. The key idea of the CrossLang system is that we use a monolingual approach. We have a suspicious Russian document and an English reference collection. We reduce the task to one language - we translate the suspicious document into English, because the reference collection is in English. After this step we perform the subsequent document analysis. Due to this, the main challenge in the CrossLang design is that the algorithms should be robust to translation ambiguity. The main stages of the CrossLang service are depicted in Figure 1. CrossLang receives the suspicious document from the Antiplagiat system when a user sends it for an originality check. Then it goes to the Entry point - the main service that routes the data between the following stages: 1. Machine Translation system - a microservice that translates the suspicious document into English. For this purpose we use Transformer (Vaswani et al.), via an open-source neural machine translation framework. 2. Source retrieval - this stage unites two microservices: Shingle index and Document storage. The Entry point sends the translated suspicious document's shingles (n-grams), and the Shingle index returns the document ids from the reference English collection. To deal with translation ambiguity we use a modified shingle-based approach. Document storage returns the source texts from the collection by these ids. 3. Document comparison - this microservice performs the comparison between the translated suspicious document and the source documents. We compare not the texts themselves, but the vectors corresponding to the phrases of these texts. Thus we deal with the translation ambiguity problem. We created the machine translation system using state-of-the-art methods. The CrossLang BLEU score is lower than Google's BLEU score - this was to be expected. But it is very important to note that we are not interested in an ideal translation. Our main goal is to translate with sufficient quality for the next stages: Source retrieval and Document comparison. The standard method of source retrieval in the case of verbatim plagiarism is inverted index construction, where a document from the reference collection is represented as a set of its shingles, i.e., overlapping word n-grams, and a suspicious document's shingles are checked for matches with the indexed documents. There is one major problem with using standard shingles - in our case the machine translation stage generates texts that differ too much from the sources of plagiarism. We argue that the source retrieval task can be solved with the help of a similar method that performs better than the method mentioned above; this improvement is achieved by moving from word shingles to word-class shingles, where each word is substituted by the label of the class it belongs to: {word_1, ..., word_n} → {class(word_1), ..., class(word_n)}. Clustering the word vectors is a convenient and relatively fast way of obtaining semantic word classes. For the word embedding model we used fastText (Bojanowski et al.)
trained on English Wikipedia. The dimension of the word embedding model was set to 100. To construct the semantic word classes we applied agglomerative clustering on the word embeddings with the cosine similarity measure to group words into word classes. We got 777K words clustered into 30K classes. For the comparison between retrieved documents and the translated suspicious document we introduce a phrase embedding model. We split the documents (retrieved and suspicious) into phrases s and compare their vectors. For mapping a word sequence into a low-dimensional space we use an encoder-decoder scheme with L2 reconstruction error minimization, E_rec = ||s − ŝ||². The encoder-decoder model is completely unsupervised and does not use any information about whether a phrase pair is a paraphrase or not. We train a Seq2Seq model with attention (Bahdanau et al.) on 10M sentences from Wikipedia. In order to use information about phrase similarity we extend the objective function. We employ the margin-based loss from Wieting et al. with a limited number of similar phrase pairs S = {(s_i, s_j)}: E_margin = Σ_{(s_i, s_j)∈S} max(0, δ − cos(s_i, s_j) + cos(s_i, t_i)), where t_i is a sampled dissimilar phrase and δ is the margin. The sampling of this so-named "false neighbour" during training helps to improve the final quality without strict limitations on which phrases we should treat as dissimilar. This part of the objective requires a dataset of similar sentences S = {(s_i, s_j)}. We used a double-translation method to generate similar sentences comparable to paraphrases. The final objective function is E = E_rec + α E_margin, where α is a tunable hyperparameter that weights the two errors. For each phrase embedding from the suspicious document we find the nearest vectors from the source documents by cosine similarity using the Annoy library. Our contributions are as follows. 1. To the best of our knowledge, it is the first system for cross-lingual plagiarism detection for the English-Russian language pair. It is deployed in production and we can analyze the results. We could not find other examples of such a system (even for other language pairs). 2. The Source retrieval stage (Section 1.2) is often implemented with rather simple heuristic algorithms such as shingle-based search or keyword extraction. However, these methods can significantly suffer from word replacements and usually detect only near-duplicate paraphrase. We present a modified method in Section 1.2. 3. Many articles on cross-lingual plagiarism detection investigate solutions based on bilingual or monolingual word embeddings (Ferrero et al.) for document comparison, but almost none of them uses phrase embeddings for this problem. We present the phrase embedding comparison in Section 1.3. There are no existing datasets for the cross-lingual plagiarism detection task for the English-Russian language pair. We created a dataset for the problem and make it available for download, together with details about its generation. For the whole framework we got Precision = 0.83, Recall = 0.79 and F1 = 0.80. Since our system translates the suspicious document into the language of the collection, it is natural to analyze the performance of our system on the monolingual problem. For such an experiment we do not use the machine translation service. In order to check the performance of monolingual paraphrased plagiarism detection we use the PAN'11 contest dataset and quality metrics (Potthast et al.). Results of CrossLang and the top-3 previously known methods are in Table 2. Our service is deployable on an 8-GPU cluster with Tesla-K100 GPUs, 128GB RAM and 64 CPU cores. Depending on the requirements, the service is able to scale horizontally.
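Returning briefly to the Source retrieval stage (Section 1.2), the word-class shingling can be sketched in a few lines of Python; the `word2class` mapping is assumed to come from the agglomerative clustering step described above.

```python
from typing import Dict, List, Set

def class_shingles(tokens: List[str], word2class: Dict[str, int],
                   n: int = 4, unk: int = -1) -> Set[int]:
    """Map each word to its semantic class id, then hash overlapping
    n-grams of class ids to build the document's shingle set."""
    classes = [word2class.get(t.lower(), unk) for t in tokens]
    return {hash(tuple(classes[i:i + n])) for i in range(len(classes) - n + 1)}

# A suspicious document is matched against the index by shingle overlap:
def overlap(doc_shingles: Set[int], indexed: Set[int]) -> float:
    return len(doc_shingles & indexed) / max(1, len(doc_shingles))
```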
For fast rescaling we use Docker containerization, with Consul and Consul-template for service discovery and automatic load balancing. Stress testing showed that the system is able to check up to 100 documents per minute. Although the average load on our service is much lower, this characteristic is important for withstanding peak loads. We introduced CrossLang - a framework for cross-lingual plagiarism detection for the English-Russian language pair. We decomposed the problem of cross-lingual plagiarism detection into several stages and provide a service consisting of a set of microservices. CrossLang uses a monolingual approach, reducing the problem to one language. For this purpose we trained a neural machine translation system. The other two main algorithmic components are the Source Retrieval and Document Comparison stages. For the Source Retrieval problem we used a modified shingling method that allows us to deal with ambiguity after translation. For the Document Comparison stage we used phrase embeddings that were trained with slight supervision. We evaluated the effectiveness of the main stages.
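For completeness, the combined reconstruction-plus-margin objective of the Document comparison stage (Section 1.3) might look as follows in PyTorch; the encoder/decoder modules, the margin δ, and the treatment of phrases as dense tensors are placeholders and assumptions, not the system's actual implementation.

```python
import torch
import torch.nn.functional as F

def phrase_loss(enc, dec, s, s_pos, s_neg, alpha=1.0, delta=0.4):
    """Reconstruction + margin loss for phrase embeddings.
    s: input phrase batch; s_pos: paraphrase (from double translation);
    s_neg: sampled 'false neighbour'. enc/dec are assumed seq2seq modules."""
    z, z_pos, z_neg = enc(s), enc(s_pos), enc(s_neg)
    e_rec = F.mse_loss(dec(z), s)                     # L2 reconstruction error
    sim_pos = F.cosine_similarity(z, z_pos, dim=-1)
    sim_neg = F.cosine_similarity(z, z_neg, dim=-1)
    e_margin = torch.clamp(delta - sim_pos + sim_neg, min=0).mean()
    return e_rec + alpha * e_margin
```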
A system for cross-lingual (English-Russian) plagiarism detection
1,142
scitldr
Data parallelism has become a dominant method to scale Deep Neural Network (DNN) training across multiple nodes. Since the synchronization of the local models or gradients can be a bottleneck for large-scale distributed training, compressing communication traffic has gained widespread attention recently. Among several recently proposed compression algorithms, Residual Gradient Compression (RGC) is one of the most successful approaches - it can significantly compress the transmitted message size (to 0.1% of the gradient size) of each node and still preserve accuracy. However, the literature on compressing deep networks focuses almost exclusively on achieving a good compression rate, while the efficiency of RGC in real implementations has been less investigated. In this paper, we develop an RGC method that achieves significant training time improvement on real-world multi-GPU systems. Our proposed RGC system design, called RedSync, introduces a set of optimizations to reduce communication bandwidth while introducing limited overhead. We examine the performance of RedSync on two different multi-GPU platforms, including a supercomputer and a multi-card server. Our test cases include image classification on Cifar10 and ImageNet, and language modeling tasks on the Penn Treebank and Wiki2 datasets. For DNNs with a high communication-to-computation ratio, which have long been considered to scale poorly, RedSync shows significant performance improvement. For training large-scale deep neural networks (DNNs) on multiple computing nodes, data parallelism has emerged as the most popular choice due to its simplicity and effectiveness (BID5; BID13). However, the communication bandwidth of the network fabric has become the bottleneck limiting data parallel performance. On one hand, models of DNNs, which already contain tens to hundreds of layers and total 10-20 million parameters today, continue to grow bigger. Therefore, the requirement of communicating model parameter updates among all computing nodes poses a higher challenge to network bandwidth. On the other hand, the development of DNN training accelerators has shifted the bottleneck of training towards communication across models. As the evolution of interconnect network bandwidth is not as fast as that of computing hardware, synchronization overhead has become the bottleneck of data parallelism on distributed systems using new computing hardware. Many recent studies have focused on reducing the communication cost between nodes by reducing the size of the gradients to be transmitted. One line of work (BID15; BID2) proposes to quantize the gradients to low-precision values. Since the compression ratio achieved by quantization is limited, another line of research orthogonal to quantization is to sparsify communicated gradients and restrict weight updates to a small subset of parameters. The Residual Gradient Compression (RGC) method (BID0; BID4; BID9; BID14) is currently the most promising pruning method to achieve a good compression ratio while ensuring no loss of training accuracy. It transmits only a small subset of gradients and maintains the remaining gradients locally as residuals to be added to the gradients of the next iteration. The first RGC implementation used a threshold-based method to only send gradients larger than a predefined constant threshold for fully-connected layers.
Since a predefined threshold is hard to choose appropriately, BID0 improved the robustness of RGC by selecting the top 1% of gradients for communication according to their magnitude. Because these two implementations are tuned for specific network structures, applying them to other DNNs leads to accuracy loss, as indicated in BID4. Based on their work, the latest RGC variants, such as BID14, BID4, and BID9, are able to achieve a 0.1% compression ratio on local gradients while ensuring almost no loss of model accuracy on a variety of DNN structures, after introducing some key modifications. Despite the good model accuracy achieved in simulation experiments, no recent studies have discussed the potential performance gain of integrating the latest RGC methods into real distributed training systems, especially multi-GPU systems equipped with high-quality network infrastructures. The challenges of applying RGC to distributed GPU systems come from two aspects. First, no efficient compression algorithm has been proposed for the RGC method. According to our experimental results, selecting the top-0.1% elements with the state-of-the-art GPU-based top-k algorithm is so expensive that the overhead of compression is much higher than the benefit of network bandwidth reduction. Second, synchronization of sparse data structures is nontrivial to support with existing efficient communication libraries, such as the Message Passing Interface (MPI), which are designed for dense data structures. Targeting multi-GPU systems, a highly efficient RGC implementation called RedSync is proposed. Our contributions are listed as follows: • We combined pruning and quantization techniques together to compress the transmitted gradients. A set of parallel-friendly top-0.1% selection methods is designed to support pruning operations inside GPU device memory; they are orders of magnitude faster than the state-of-the-art GPU-based top-k selection method. • Considering the distribution characteristics of the communicated data, we apply the allgather operation of MPI for a sparse synchronization scheme. A cost model is derived to analyze both the communication cost and the computational overhead. Based on it, we point out the potential performance gain and the bottleneck of our implementation. • RedSync is able to ensure almost no accuracy loss when training a set of DNNs after integrating the latest algorithmic improvements. This is the first work, as far as we know, to evaluate the performance of the RGC method at the scale of 128 GPUs. RedSync provides significant performance improvements for communication-intensive networks, like VGG, AlexNet and some LSTMs. Algorithm 1 (distributed synchronous SGD with RGC): Input: node id k, the number of nodes N; Input: dataset χ; Input: mini-batch size b per node; Input: initial parameters w. Initialize the residuals V^k ← 0. At each iteration t: compute G^k ← ∇f(χ^k_t; w) by forward and backward propagation; for j = #layer, #layer − 1, ..., 0 do: V^k_j ← V^k_j + G^k_j; select the communication-set Masks^k_j ← select(V^k_j); synchronize the compressed selection among all nodes with allreduce; remove the selected elements from V^k_j and update w_j with the averaged result. We first give an overview of this simple RGC workflow used in RedSync (see more details in Algorithm 1). We denote a DNN model as f(w), where w is the vector of parameters. We assume a system has N workers. Each worker, say the k-th worker, holds a local dataset χ^k_t at iteration t with size b and a local copy of the global weights w. A synchronous SGD method is adopted in RedSync. At each iteration, node k computes the gradient G^k using local data, where G^k_j indicates the gradients of layer j. Each node also maintains a residual V^k, which is initialized to 0 and used to accumulate untransmitted gradients from previous iterations.
After being added to the latest gradient, a subset of the residuals is selected as the communication-set and compressed into sparse data structures. The select operation in Algorithm 1 chooses the more important elements based on magnitude. The selected elements (denoted as Masks) are synchronized among all the nodes using allreduce operations, which can take advantage of the highly-optimized allreduce implementations on HPC systems; synchronization implemented with allreduce has been widely adopted in state-of-the-art large-scale CNN training tasks (BID7). (Figure 1 caption: Performance of four communication-set selection methods under different message sizes. Elements in the data list are generated randomly from a standard uniform distribution. Comm. illustrates the time taken to synchronize the message through a network with a peak bandwidth of 3.5GBps by an allreduce operation. Performance is measured as the total time cost of 100 independent operations.) The remaining elements outside the communication-set are assigned as new residuals for the next iteration. The workflow of this algorithm is the same as that of an RGC variant called Deep Gradient Compression (BID9). In the following, we detail our contributions in the implementations of select, allreduce and decompress that make this workflow efficient in practice. The efficiency of the communication-set selection method is critical for the RGC system's overall performance. Since a predefined threshold is difficult to determine, recent work (BID9; BID14) suggests selecting the top 0.1% of elements from the residuals of each layer as the communication-set. However, top-0.1% selection is nontrivial to implement on a GPU. One of the most efficient top-k selection methods designed for GPUs can be implemented based on the radixSelect algorithm (BID1), which determines each bit of the k-th largest element by scan and scatter. Serial scan (BID17) and scatter operations are extremely time-consuming. As shown in Figure 1, the computation time for top-0.1% with radixSelect on a Titan X GPU is sometimes even slightly higher than the time for synchronizing these parameters through a 3.5 GBps network. To avoid performing a top-0.1% operation on a large number of parameters, we propose two communication-set selection algorithms, called trimmed top-k selection and threshold binary search selection, which are more efficient on GPUs. Trimmed top-k selection: Observing that the distribution of residuals is usually similar to a normal distribution, we can use statistical features to remove most of the smaller elements and limit the radixSelect operation to a relatively small subset. As shown in Algorithm 2, we first calculate the mean and maximum of the residuals' absolute values for this layer. A relatively large threshold value is chosen according to the mean and maximum values, for example, 0.8 × (max − mean) + mean. The operation count_nonzero gets the number of elements whose absolute values are greater than the threshold. If this number is smaller than k (the number of top-0.1% elements), we dynamically decrease the threshold until the number of parameters whose absolute values are above the threshold is larger than k. Then we trim all elements that are less than the threshold and perform a top-k selection using radixSelect on the remaining elements. The operations mean, max and count_nonzero can each be efficiently implemented with a single reduction operation. Extracting the nonzero indices is a typical stream compaction problem, which uses just one scan operation as its backbone (BID16). A runnable sketch of this procedure is shown below.
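The following PyTorch sketch is a simplified reading of trimmed top-k selection, using `topk` as a stand-in for radixSelect; it is not the paper's exact CUDA implementation.

```python
import torch

def trimmed_topk(x: torch.Tensor, k: int, eps: float = 0.2):
    """Shrink the candidate set with a statistical threshold, then run
    an exact top-k (stand-in for radixSelect) only on the survivors."""
    absx = x.abs()
    mean, mx = absx.mean(), absx.max()
    ratio = 1.0 - eps
    threshold = mean + ratio * (mx - mean)
    # Lower the threshold until at least k elements survive.
    while (absx > threshold).sum() < k and ratio > 0:
        ratio -= eps
        threshold = mean + ratio * (mx - mean)
    candidates = (absx > threshold).nonzero(as_tuple=True)[0]
    vals = x[candidates]
    top = vals.abs().topk(min(k, vals.numel())).indices
    return candidates[top], vals[top]   # indices and values of the top-k
```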
Threshold binary search selection: For some layers with very large numbers of parameter elements, even conducting radixSelect on a small subset of elements would still be very time-consuming. In order to completely avoid using the radixSelect operation on the GPU, we propose a method to select approximately the top 0.1% of elements as the communication-set. Instead of identifying the k-th (top-0.1%-th) largest element, we search for a threshold lying between the k-th and 2k-th largest elements, and then select the elements larger than the threshold as the communication-set. In this case, at least the 0.1% largest elements are included in the communication-set. As shown in Algorithm 3, we use a binary search algorithm to find such a threshold. To avoid excessive searching, it is always terminated when the difference between the left bound and the right bound is less than a small value ε. Algorithm 2 (trimmed top-k selection): Input: tensor to be compressed X; Input: number of elements remaining k; Output: ⟨indices, values⟩. 1: mean ← mean(abs(X)); 2: max ← max(abs(X)); 3: ε ← 0.2; 4: ratio ← 1 − ε; 5: threshold ← mean + ratio × (max − mean); 6: nnz ← count_nonzero(abs(X) > threshold); 7: while nnz < k do; 8: ratio ← ratio − ε; 9: threshold ← mean + ratio × (max − mean); 10: nnz ← count_nonzero(abs(X) > threshold); 11: end while; 12: return the top-k ⟨indices, values⟩ among elements with abs(X) > threshold via radixSelect. Algorithm 3 (top-k selection with threshold binary search): Input: tensor to be compressed X; Input: number of elements remaining k; Input: termination condition parameter ε; Output: ⟨indices, values⟩. 1: mean ← mean(abs(X)); max ← max(abs(X)); 2: l ← 0.0; r ← 1.0; threshold ← 0.0; 3: while r − l > ε do; 4: ratio ← l + (r − l)/2; 5: threshold ← mean + ratio × (max − mean); 6: nnz ← count_nonzero(abs(X) > threshold); 7: if nnz > k and 2k > nnz then break; 8: else if nnz ≤ k then r ← ratio; else l ← ratio; 9: end while; 10: return ⟨indices, values⟩ of elements with abs(X) > threshold. For layers with large sizes, such as the first fully-connected layer in VGG16 and the softmax layer in an LSTM, the time for the count_nonzero operation is still not negligible. We further improve the efficiency of the selection algorithm by reducing the number of count_nonzero operations. We recommend that, after a threshold binary search for a layer, the found threshold be reused over the next few iterations. The search interval is empirically set to 5, so the selection algorithm introduces only one nonzero-count overhead on average. In Figure 1, we compare the time cost of the different selection approaches on parameter lists of different sizes. Compared with directly performing radixSelect, both proposed methods significantly reduce the selection time for large sizes. For top-0.1% selection on 64MB of elements, trimmed top-k and sampled threshold binary search selection are 38.13× and 16.17× faster than radixSelect, respectively. In practice, we dynamically choose compression strategies: for smaller parameter sets, such as biases and batch norm layers, we either do not compress residuals or directly use radixSelect to select the top-0.1% significant elements. Trimmed top-k selection is suitable for middle-sized layers, like convolutional layers, because it ensures the compression ratio is exactly 0.1% and introduces no extra communication bandwidth requirements. Threshold binary search based selection is suitable for large layers, like the hidden and softmax layers in LSTMs, for which the compression cost is more critical to optimize than the communication cost. A sketch of the binary search procedure follows.
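This is a compact PyTorch sketch of the threshold binary search (Algorithm 3), in the same simplified style as the previous sketch; the termination tolerance is a placeholder.

```python
import torch

def threshold_binary_search(x: torch.Tensor, k: int, tol: float = 1e-3):
    """Find a threshold selecting between k and 2k elements (Algorithm 3),
    avoiding any exact top-k / radixSelect call."""
    absx = x.abs()
    mean, mx = absx.mean(), absx.max()
    l, r = 0.0, 1.0
    threshold = mx  # start conservatively high
    while r - l > tol:
        ratio = l + (r - l) / 2
        threshold = mean + ratio * (mx - mean)
        nnz = int((absx > threshold).sum())
        if k < nnz < 2 * k:        # inside the target band: done
            break
        elif nnz <= k:             # too few selected: lower the threshold
            r = ratio
        else:                      # too many selected: raise the threshold
            l = ratio
    idx = (absx > threshold).nonzero(as_tuple=True)[0]
    return idx, x[idx]
```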
Compressed residuals should include k indices and k values. We further investigate the possibility of quantizing these values. By setting the values of all elements of the same sign in the communication-set to their mean, we can almost eliminate the communication bandwidth requirement for transmitting value information, using only one floating-point number instead of k. In order to facilitate quantization, we slightly modify our select method to ensure that the elements in the communication-set are all of the same sign. This is achieved by choosing the largest k elements and the smallest k elements as the communication-set in turns. In other words, if we select the largest k elements (all positive numbers) of this layer as the communication-set at the current iteration, we will choose the smallest k elements (all negative numbers) as the communication-set in the next iteration. It is worth noting that sampled threshold binary search selection cannot be used with quantization. In addition, we do not quantize the output layer of the DNN, in order to preserve the correct classification information. Synchronization of dense gradient structures in traditional distributed DNN systems can simply be implemented with an allreduce operation, which has been well studied on multi-GPU systems (BID3). However, the design of a sparse allreduce in a distributed setting is not as simple, because each worker may contribute different non-zero indices in its compressed residuals. According to our observations, there are very few overlapping indices among the communication-sets of different nodes. For example, when training VGG16 on the Cifar10 dataset using 16 GPUs with a compression ratio of 0.1% on each node, the averaged compression ratio of the synchronized residuals over all nodes is 1.55%. We utilize the allgather operation, an operation in which the data contributed by each node is gathered at all nodes, to implement the sparse allreduce. The message representing the compressed residuals of each node should include the indices and values of the elements in the communication-set. When using threshold binary search selection, the length of each node's message differs. As a result, the packaged message should also include an initial element which indicates the length of the compressed data. Instead of using two allgather operations for the indices and values messages separately, we package the indices and values into a single message to reduce latency. After finishing the allgather operation, each node collects the N compressed residuals of this layer from all the other nodes. We add the compressed residuals to the corresponding weights in the local model after scaling with the learning rate. This can be seen as an operation that adds a sparse array to a dense array, which is fully optimized in the Level-1 function axpyi of the cuSparse library on GPUs. RedSync implements a set of algorithmic improvement techniques proposed in BID9. We detail momentum correction, momentum factor masking and our modification to warm-up training in Appendix C, as well as local gradient clipping in Appendix B. To analyze the potential performance gain of sparse synchronization, we adopt a widely-used performance model to estimate the communication cost in terms of latency and bandwidth used. We assume that the time taken to send a message between any two nodes can be modeled as α + nβ, where α is the latency (or startup time) per message, independent of message size, β is the transfer time per byte, and n is the number of bytes transferred. The node's network interface is assumed to be single ported, i.e.,
at most one message can be sent and one message can be received simultaneously. M is the number of elements in the residuals of the current layer. D is the compression ratio. For the reduction operations, we assume that γ₂ is the computational cost of performing the reduction operation on a message of size M, and γ₁ is the cost to decompress a collected sparse message of size M. For the case where the compression ratio of each node differs, which is always true for the binary search method, D represents the average compression ratio over all nodes. Suppose that we use recursive doubling for allgather and Rabenseifner's algorithm for allreduce communication. The costs of quantized sparse and dense synchronization are given in Equation 1 and Equation 2, respectively; the derivations are left to Appendix A: T_sparse = lg(p)α + (p − 1)DMβ + pγ₁ + T_select (1); T_dense = 2lg(p)α + 2((p − 1)/p)Mβ + ((p − 1)/p)Mγ₂ (2). As implied by the performance model, the compression rate of the model is not equal to the compression rate of the communication bandwidth. The bandwidth term of sparse synchronization is (p − 1)DMβ, which is proportional to the number of nodes p. Even if the sparsity D is 0.1% on each of the p nodes, when p is 128 the communication bandwidth for sparse synchronization will be 12.8% of that of dense synchronization rather than 0.1%. Second, the overhead of reduction may become a new bottleneck when scaling RedSync to larger scales. The last term pγ₁ in Eq. 1 indicates that the overhead of reduction also increases linearly with the number of nodes p, whereas in Eq. 2 the reduction overhead barely increases with the number of nodes. We tested the accuracy and performance of our proposed implementation on two different multi-GPU systems, including one of the world's top GPU supercomputers and a multi-GPU server. Muradin is a server with eight GPUs in the same node. It is equipped with one Intel(R) Xeon(R) CPU E5-2640 v4 and 8 TITAN Vs, which are connected to the CPU through PCI-E 3.0. Piz Daint is a GPU supercomputer. Each of its nodes includes two Intel Xeon E5-2690v3 CPUs and one NVIDIA Tesla P100 GPU. In total, there are 5320 nodes connected by an Aries interconnect with Dragonfly topology. We used PyTorch v0.4.0 to conduct the basic DNN training operations. For the communication library, Horovod, an MPI wrapper for PyTorch, is used to provide collective communication operations. Horovod was compiled with OpenMPI v3.1 with CUDA-aware support on both systems. We tested our performance on two major types of mainstream deep learning applications. For image classification tasks, we studied ResNet-44 and VGG16 on Cifar10 (BID8), and AlexNet, VGG16 and ResNet-50 on ImageNet (BID6). For all CNNs, we used SGD with Nesterov's momentum as the optimizer. We used the same learning rate strategies for the RGC methods as for SGD. The warm-up technique was applied to the first 5 epochs of ResNet50 and VGG16 for both SGD and RGC. For language modeling tasks, we picked two datasets for evaluation. The Penn Treebank corpus (PTB) dataset consists of 923,000 training, 73,000 validation and 82,000 test words (BID10). The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia (BID11). It consists of 2,088,628 training, 217,646 validation and 245,569 test words. We adopted a 2-layer LSTM language model architecture with 1500 hidden units per layer (BID12) to evaluate both datasets. We tied the weights of the encoder and decoder and used vanilla SGD with gradient clipping.
The learning rate decays when no improvement has been made in validation loss. We examined the convergence of RedSync on the datasets mentioned before. For the Cifar10 dataset, we used two CNNs, i.e. ResNet44 and VGG16, as test cases. Both DNNs were tested on 4 GPUs, and the total mini-batch size is 256. On the ImageNet dataset, we tested AlexNet, ResNet50, and VGG16. On the PTB and Wiki2 datasets, we examined the perplexity of the 2-layer LSTM mentioned before. FIG1 shows the validation error of RGC and quantized RGC provided by RedSync on three test cases compared with the original SGD. More comprehensive results are shown in the left side of Table 1. We also tested the sensitivity of the RGC method to large training batch sizes. As shown in the right side of Table 1, when increasing the batch size to 2048, RedSync showed no loss of accuracy compared to the original SGD. Next we tested the performance and scalability of RedSync as the number of GPUs grows. Fig. 5 illustrates the scalability of RedSync on Piz Daint with four test cases. Fig. 3 and Fig. 4 show the performance of RedSync on Muradin with six test cases. We compared RedSync and its quantized version Quantized-RedSync with a baseline data parallel implementation provided by Horovod. Data was collected by averaging training time over 1000 training iterations. We used the trimmed top-k algorithm to compress layers in CNNs larger than 128KB, and the threshold binary search algorithm for the hidden layers and the softmax layer of the LSTM. Fig. 6 illustrates the cost of different parts of RedSync when scaling it to 128 GPUs on Piz Daint. Our observations are summarized as follows. 1. Using our parallel-friendly selection methods for compression is critical for overall system performance. In Fig. 3 and Fig. 4, we added an RGC implementation called pure RGC, which uses radixSelect to select the top 0.1% of elements as the communication-set rather than our proposed methods. The performance of pure RGC is even worse than the baseline version, because the compression time is too long. 2. RedSync is suitable for accelerating data parallel training of DNNs with a high communication-to-computation ratio. For VGG16, AlexNet, and LSTM, although the performance of RedSync on a single GPU is not as good as the baseline version due to compression and decompression overhead, RedSync can achieve significant speedup with more than 2 GPUs. However, we observed no performance gain for ResNet50 on either Piz Daint or Muradin. As indicated in Table 1, the ratio of computation to communication of ResNet50 is the highest among the DNNs we investigated. At large scale, most of the time during ResNet50 training with RedSync is spent in the decompression phase, as shown in Fig. 6, which outweighs the benefit of the communication bandwidth reduction. 3. The scalability curve of RedSync on Piz Daint shows a concave shape. For example, as shown in Fig. 5, RedSync gets a better speedup over the baseline version on 32 GPUs than on 128 GPUs for AlexNet. This is because the communication bandwidth requirement and the decompression overhead both grow linearly with the number of GPUs in use. This phenomenon verifies our analysis based on the communication performance model. 4. Quantized-RedSync always achieves better performance than RedSync for CNNs. However, for LSTM training at small scale, Quantized-RedSync achieves worse performance than RedSync. This is due to the balance of communication and computational overhead. CNNs adopt trimmed top-k as the communication-set selection method, and its quantized version has a similar computation cost. As shown in Fig.
6, there is no significant difference in selection cost for CNN training. Therefore, the reduction of communication cost through quantization improves the system's overall performance. As for LSTMs, they use sampled threshold binary search as the selection method for non-quantized RedSync, but threshold binary search for quantized RedSync. Sampled selection is much faster. Therefore, at small scale, RedSync has better performance than Quantized-RedSync due to lower selection overhead. When scaling to more than 16 GPUs, the benefit from the reduction of communication compensates for the cost of the communication-set selection. This paper proposes a distributed implementation called RedSync to accelerate data parallel DNN training by utilizing a type of gradient sparsification method named Residual Gradient Compression (RGC). We solved two major obstacles to implementing RGC on multi-GPU systems: the high overhead of compression on GPUs and the lack of support for collective communication implementations for sparse data structures. We tested the performance of RedSync on two GPU platforms, including a supercomputer system and a multi-GPU server. For AlexNet, VGG16, and LSTM, we observed significant speedup for large-scale DNN training. The left part of FIG3 illustrates how sparse allgather works via the recursive doubling method. We assume the compression ratio on all nodes is the same, D. If we use threshold binary search for communication-set selection, D here should be the average compression ratio of all nodes, which is a good approximation. In the first step, nodes that are a distance 1 apart exchange their compressed residuals, the size of which is M × D. In the second step, nodes that are a distance 2 apart exchange their own data as well as the data they received in the previous step, which is 2M × D in total. In the third step, nodes that are a distance 4 apart exchange their own data as well as the data they received in the previous two steps. In this way, for a power-of-two number of processes, all processes get all the data in lg(p) steps. The amount of data exchanged by each node is M × D in the first step, 2M × D in the second step, and so forth, up to 2^(lg(p)−1) × M × D in the last step. Therefore, the time for message transfer taken by this algorithm is T_transfer = lg(p)·α + (p − 1)·M·D·β. After including the decompression overhead γ_1 for the p collected compressed residuals and the communication-set selection overhead T_select, the time for allgather-based synchronization should be T_allgather = lg(p)·α + (p − 1)·M·D·β + p·γ_1 + T_select, which is Equation 1. As shown in the right part of FIG3, Rabenseifner's algorithm is adopted for the allreduce operation on messages. It does a reduce-scatter followed by an allgather. Reduce-scatter is a variant of reduce in which the result, instead of being stored at the root, is scattered among all p nodes. We use a recursive halving algorithm, which is analogous to the recursive doubling algorithm used for allgather but in the reverse way. In the first step, each node exchanges data with a node that is a distance p/2 away: each process sends the data needed by all processes in the other half, which is of size M/2. It also receives the data needed by all processes in its own half, and performs the reduction operation on the received data. In the second step, each process exchanges data with a process that is a distance p/4 away. This procedure continues recursively, halving the data communicated at each step, for a total of lg(p) steps. After reduce-scatter, the allgather phase has the same bandwidth and latency requirements.
The time taken by Rabenseifner's algorithm is the sum of the times taken by the reduce-scatter (recursive halving), allgather, and reduction operations. The total time should be T_allreduce = 2·lg(p)·α + 2·((p−1)/p)·M·β + ((p−1)/p)·M·γ_2, which is Equation 2. It is necessary to improve data parallel efficiency by overlapping communication with computation through pipelining communication and gradient calculation. Before updating the weights with the aggregated gradients scaled by the learning rate, gradient clipping is usually adopted to avoid gradient explosion. It rescales all of the gradients when the sum of their norms exceeds a threshold. For RGC methods, the local clipping technique BID9 is adopted to perform gradient clipping with a new threshold (N^(−1/2) of the original) locally, before adding the current gradients to the previous residuals. The difference is that traditional data parallel training does clipping after the communication of all layers is completed, while the RGC algorithm needs to do clipping before communication. In this case, we need to wait for the completion of the entire back-propagation to get the gradients of all layers, then do clipping on the gradients, and then perform compression for communication. Local clipping is equivalent to introducing synchronization between computation and communication, and thus eliminates the possibility of communication hiding. As shown in FIG4, we have abandoned gradient clipping for CNNs, which seldom suffer from gradient explosion even for deep networks, in order to exploit the potential overlapping. As for RNNs, gradients are obtained after backpropagation through all time steps using Back Propagation Through Time (BPTT). When backpropagation of the last layer is completed, we use the gradients of all layers to conduct local gradient clipping. In this case, the communication time can only overlap with the compression calculation. Because even with the original data parallel approach, the computation and communication overlap for each layer can only happen at the last time step, RGC does not introduce too much overhead. We integrate the momentum masking and momentum correction schemes proposed in BID9 for the momentum SGD and Nesterov momentum SGD optimizers in RedSync. The momentum SGD version of the RGC method adopted by RedSync is illustrated in Algorithm 4. Warm-up training, which exponentially decreases the compression ratio of the residuals in the communication-set in the first few epochs, is generally adopted to accelerate convergence in the first few iterations. For example, it is recommended to decrease the compression ratio of residuals during the warm-up period as follows: 25%, 6.25%, 1.5625%, 0.4%, 0.1%. However, we find it can be inefficient at large scale. As analyzed in the previous section, even synchronization of compressed residuals with a compression ratio of 1.5625% requires 100% of the bandwidth of dense allreduce for quantized RedSync on 64 GPUs. Instead of adopting the high-compression-ratio RGC method for warm-up training, we use the original SGD optimizer synchronized by dense allreduce in the first few epochs if necessary.
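To make the RGC update with momentum correction and momentum factor masking concrete, here is a minimal PyTorch sketch in the style of BID9. It is our illustration, not the paper's Algorithm 4; the function name, buffer layout, and default hyper-parameters are assumptions:

```python
import torch

def rgc_step(grad, velocity, residual, momentum=0.9, ratio=0.001):
    """One RGC update with momentum correction and factor masking (sketch).

    velocity and residual are persistent per-tensor buffers kept on each
    worker. Returns (indices, values) of the communication-set for this
    tensor; the caller packages them into the allgather message.
    """
    # Momentum correction: accumulate momentum locally, then accumulate
    # the corrected gradient into the residual buffer.
    velocity.mul_(momentum).add_(grad)
    residual.add_(velocity)

    # Select the top-k residuals by magnitude as the communication-set.
    k = max(1, int(residual.numel() * ratio))
    flat = residual.view(-1)
    _, idx = torch.topk(flat.abs(), k)
    vals = flat[idx].clone()

    # Momentum factor masking: clear the transmitted entries in both
    # buffers so stale momentum does not re-send them in later iterations.
    flat[idx] = 0
    velocity.view(-1)[idx] = 0
    return idx, vals
```

A plain magnitude top-k stands in here for the paper's trimmed top-k and threshold binary search selection methods, which serve the same role at lower selection cost.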
We proposed an implementation to accelerate DNN data parallel training by reducing the communication bandwidth requirement.
1,143
scitldr
Backpropagation is driving today's artificial neural networks. However, despite extensive research, it remains unclear if the brain implements this algorithm. Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative. However, the convergence rate of such learning scales poorly with the number of involved neurons. Here we propose a hybrid learning approach, in which each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide. We show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning on fully connected and convolutional networks. Learning feedback weights provides a biologically plausible mechanism of achieving good performance, without the need for precise, pre-specified learning rules. It is unknown how the brain solves the credit assignment problem when learning: how does each neuron know its role in a positive (or negative) outcome, and thus know how to change its activity to perform better next time? Biologically plausible solutions to credit assignment include those based on reinforcement learning (RL) algorithms. In these approaches a globally distributed reward signal provides feedback to all neurons in a network. However these methods have not been demonstrated to operate at scale. For instance, variance in the REINFORCE estimator scales with the number of units in the network. This drives the hypothesis that learning in the brain must rely on additional structures beyond a global reward signal. In artificial neural networks, credit assignment is performed with gradient-based methods computed through backpropagation. This is significantly more efficient than RL-based algorithms. However there are well-known problems with implementing backpropagation in biologically realistic neural networks. For instance backpropagation requires a feedback structure with the same weights as the feedforward network to communicate gradients (so-called weight transport). Yet such structures are not observed in neural circuits. Despite this, backpropagation is the only method known to solve learning problems at scale. Thus modifications or approximations to backpropagation that are more plausible have been the focus of significant recent attention. Notably, it turns out that weight transport can be avoided by using fixed, random feedback weights, through a phenomenon called feedback alignment. However feedback alignment does not work in larger, more complicated network architectures (such as convolutional networks). Here we propose to use an RL algorithm to train a feedback system to enable learning. We propose to use a REINFORCE-style perturbation approach to train a feedback signal to approximate what would have been provided by backpropagation. We demonstrate that our model learns as well as regular backpropagation in small models, overcomes the limitations of fixed random feedback weights ("feedback alignment") on more complicated feedforward networks, and can be utilized in convolutional networks. Our method illustrates a biologically realistic way the brain could perform gradient descent-like learning. Let an N hidden-layer network be given by ŷ = f(x) ∈ R^p, composed of a set of layer-wise summations and non-linear activations, for hidden layer states h_i ∈ R^(n_i), non-linearity σ, and with input h_0 = x and output h_{N+1} = ŷ. Define L as the loss function L(x, y), where the data (x, y) ∈ D are drawn from a distribution ρ.
Our aim is then to minimize: E_ρ[L(x, y)]. Backpropagation computes the error signal e_i in a top-down fashion: e_i = σ′(h_i) ∘ (W_{i+1}^T e_{i+1}). Let the loss gradient term be denoted as λ_i = W_{i+1}^T e_{i+1}. Here we replace λ_i with an approximation with its own parameters to be learned, B_{i+1}^T ẽ_{i+1}, for parameters B. We will use ẽ_i to denote the gradient signal backpropagated through the synthetic gradients, and e_i for the true gradients. To estimate B we use stochasticity inherent to biological neural networks. For each input each unit produces a noisy response: h̃_i = σ(W_i h̃_{i−1}) + ξ_i, with noise ξ_i of standard deviation c_h > 0. This then generates a noisy loss L̃(x, y, ξ) and a baseline loss L(x, y) = L̃(x, y, 0). We will use the noisy response to estimate gradients, which then allow us to optimize the baseline L. This is achieved by linearizing the loss, L̃(x, y, ξ) ≈ L(x, y) + Σ_i ξ_i^T λ_i, which yields the perturbation-based estimator λ̂_i = (L̃(x, y, ξ) − L(x, y)) ξ_i / c_h². To demonstrate the method can be used to solve simple supervised learning problems we use node perturbation with a four-layer network and MSE loss to solve MNIST (Fig. 1). We approximate loss gradients as B_{i+1}^T ẽ_{i+1}. The feedback parameters B_{i+1} are estimated by solving the least squares problem min_B E‖λ̂_i − B^T ẽ_{i+1}‖², where λ̂ is the perturbation-based estimator derived above. B is updated with each mini-batch using stochastic gradient descent to minimize this loss. Updates to W_i are made using the synthetic gradients, ΔW_i = η ẽ_i h_{i−1}^T, for learning rate η. We observed that the system is able to provide a close correspondence between the feedforward and feedback matrices in both layers of the network (Fig. 1a). The relative error between B_i and W_i is lower than what is observed for feedback alignment, suggesting that this co-adaptation of W_i and B_i is indeed beneficial. We observe that the alignment (the angle between the estimated gradient and the true gradient) is lower for node perturbation than for feedback alignment (Fig. 1b). Recent studies have shown that sign congruence of the feedforward and feedback matrices is all that is required to achieve good performance. Here the sign congruence is also higher with node perturbation (Fig. 1c). Finally, the learning performance of node perturbation is comparable to backpropagation (Fig. 1d), achieving close to 3% test error. These results suggest node perturbation for learning feedback weights can be used in deep networks. Hyperparameters were found through random search. A known shortcoming of feedback alignment is in auto-encoding networks with tight bottleneck layers. To see if our method has the same shortcoming we examine a simple auto-encoding network with MNIST input data (size 784-200-2-200-784, MSE loss). We also compare the method to the 'matching' learning rule, in which updates to B match updates to W. As expected, feedback alignment performs poorly. Node perturbation actually performs better than backpropagation, and comparably to ADAM (Fig. 2a). In fact ADAM begins to overfit, while node perturbation does not. The matched learning rule performs similarly to backpropagation. These results are surprising at first glance. Perhaps, similar to feedback alignment, learning feedback weights strikes the right balance between providing a useful signal to learn, and constraining the updates to be sufficiently aligned with B, acting as a type of regularization. The noise added when estimating the feedback weights may also serve to regularize the latent representation, as, indeed, the latent space learnt by node perturbation shows a more evenly distributed separation of digits.
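The core of the method just described — a node perturbation estimate of the gradient, regressed onto the feedback mapping B^T e — can be illustrated in a few lines. The following is a minimal sketch of ours (not the paper's code) in a two-layer linear network; the dimensions and hyper-parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_h, n_y = 15, 20, 5
c_h, lr_B = 0.1, 0.001

W1 = rng.normal(0, 0.3, (n_h, n_x))   # feedforward weights (held fixed here)
W2 = rng.normal(0, 0.3, (n_y, n_h))
B2 = rng.normal(0, 0.3, (n_y, n_h))   # feedback weights, to be learned

for step in range(50_000):
    x = rng.normal(size=n_x)
    y_target = rng.normal(size=n_y)

    h = W1 @ x
    xi = rng.normal(0, c_h, n_h)               # perturb the hidden units
    y_base, y_noisy = W2 @ h, W2 @ (h + xi)

    L_base = 0.5 * np.sum((y_base - y_target) ** 2)     # baseline loss
    L_noisy = 0.5 * np.sum((y_noisy - y_target) ** 2)   # perturbed loss

    # Node-perturbation estimator of dL/dh:
    lam_hat = (L_noisy - L_base) * xi / c_h**2

    # Least-squares update: drive B2^T e towards the estimated gradient,
    # where e is the top-layer error signal.
    e = y_base - y_target
    err = B2.T @ e - lam_hat
    B2 -= lr_B * np.outer(e, err)   # gradient of 0.5*||err||^2 w.r.t. B2

# Cosine similarity between learned feedback and feedforward weights:
cos = np.sum(B2 * W2) / (np.linalg.norm(B2) * np.linalg.norm(W2))
print(f"alignment(B2, W2) = {cos:.3f}")
```

In this linear/MSE case the quadratic term of the loss has zero-mean contribution under Gaussian noise, so λ̂ is an unbiased (if noisy) estimate of W2^T e, and SGD on the regression loss aligns B2 with W2.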
In contrast, the representations learnt by backprop and ADAM show more structure, while feedback alignment does not learn a useful representation at all (Fig. 2b,c). These results show that node perturbation is able to successfully communicate error signals through thin layers of a network as needed. Finally we test the method on a convolutional neural network (CNN) solving CIFAR10. The CNN has the architecture Conv(3x3, 1x1, 32), MaxPool(3x3, 2x2), Conv(5x5, 1x1, 128), MaxPool(3x3, 2x2), Conv(5x5, 1x1, 256), MaxPool(3x3, 2x2), FC 2048, FC 2048, Softmax, with hyperparameters found through random search. For this network we learn feedback weights directly from the output layer to each earlier layer, approximating the gradient as B_i^T e_N (similar to 'direct feedback alignment'). Here the least squares problem was solved by gradient descent. We obtain a test accuracy of 75.2%. When compared with fixed feedback weights (test accuracy of 72.5%) and backpropagation (test accuracy of 77.2%), we see it is advantageous to learn feedback weights. This shows the method can be used in a CNN, and can solve challenging computer vision problems without weight transport. Here we implement a perturbation-based synthetic gradient method to train neural networks. We show that this hybrid approach can be used in both fully connected and convolutional networks. By removing the symmetric feedforward-feedback weight requirement imposed by backpropagation, this approach is a step towards more biologically-plausible deep learning. In contrast to many perturbation-based methods, this hybrid approach can solve large-scale problems. We thus believe this approach can provide powerful and biologically plausible learning algorithms. While previous research has provided some insight and theory for how feedback alignment works, the effect remains somewhat mysterious, and not applicable in some network architectures. Recent studies have shown that some of these weaknesses can be addressed by instead imposing sign-congruent feedforward and feedback matrices. Yet what mechanism may produce congruence in biological networks is unknown. Here we show that the shortcomings of feedback alignment can be addressed in another way: the system can learn to adjust weights as needed to provide a useful error signal. Our work is closely related to Akrout et al. (2019), which also uses perturbations to learn feedback weights. However our approach does not divide learning into two phases, and training of the feedback weights does not occur in a layer-wise fashion. Here we tested our method in an idealized setting; however, it is consistent with neurobiology in two important ways. First, it involves the separate learning of feedforward and feedback weights. This is possible in cortical networks where complex feedback connections exist between layers, and where pyramidal cells have apical and basal compartments that allow for separate integration of feedback and feedforward signals. Second, noisy perturbations are common in neural learning models. There are many mechanisms by which noise can be measured or approximated, or neurons could use a learning rule that does not require knowing the noise. While our model involves the subtraction of a baseline loss to reduce the variance of the estimator, this does not affect the expected value of the estimator; technically the baseline could be removed or approximated. Thus we believe our approach could be implemented in neural circuits. There is a large space of plausible learning rules that can learn feedback signals in order to learn more efficiently.
These promise to inform both models of learning in the brain and learning algorithms in artificial networks. Here we take an early step in this direction. We review the key components of the model. Data (x, y) ∈ D are drawn from a distribution ρ. The loss function is linearized such that E[λ̂_i | x, y] ≈ λ_i, with the expectation taken over the noise distribution ν(ξ). This suggests a good estimator of the loss gradient is λ̂_i = (L̃(x, y, ξ) − L(x, y)) ξ_i / c_h². Let ẽ_i be the error signal computed by backpropagating the synthetic gradients. Then parameters B_{i+1} are estimated by solving the least squares problem min_B Σ ‖λ̂_i − B^T ẽ_{i+1}‖². Under what conditions can we show that B̂_{i+1} → W_{i+1} (with enough data)? One way to find an answer is to define the synthetic gradient in terms of the system without noise added. Then B^T ẽ is deterministic with respect to x, y and, assuming L̃ has a convergent power series around ξ = 0, we can write the estimator as the true gradient plus a remainder term. Taken together, these suggest we can prove B̂_{i+1} → W_{i+1} in the same way we prove consistency of the linear least squares estimator. For this to work we must show the expectation of the Taylor series approximation is well behaved. That is, we must show the expected remainder term of the expansion is finite and goes to zero as c_h → 0. This requires some additional assumptions on the problem. We make the following assumptions: • A1: the noise ξ is subgaussian; • A3: the error matrices ẽ_n(ẽ_n)^T are full rank, for 1 ≤ n ≤ N + 1; • A4: the mean of the remainder and error terms is bounded. Consider first convergence of the final layer feedback matrix, B_{N+1}. In the final layer it is true that ẽ_{N+1} = e_{N+1}. Theorem 1 then states that the least squares estimator solves this problem and converges to the true feedback matrix, in the sense that lim_{c_h→0} plim B̂_{N+1} = W_{N+1}. We first show that, under A1-2, the conditional expectation of the estimator converges to the gradient λ_{N+1}. Taking a conditional expectation, we must show the remainder term goes to zero as c_h → 0. This is true provided each moment E((ξ_j^N)^m | x, y) is sufficiently well-behaved. Using Jensen's inequality and the triangle inequality in the first line, we have that the remainder is bounded. With this in place, we have that the problem is close to a linear least squares problem, since the estimator equals the linear term plus a residual η. This follows since e_{N+1} is defined in relation to the baseline loss, not the stochastic loss, meaning it is measurable with respect to (x, y) and can be moved into the conditional expectation. From this and A3, we have that the least squares estimator satisfies consistency; thus, using the continuous mapping theorem, the limit follows. We can use Theorem 1 to establish convergence over the rest of the layers of the network when the activation function is the identity: under A1-4 and σ(x) = x, the least squares estimator solves the problem and converges to the true feedback matrix, in the sense that lim_{c_h→0} plim B̂_n = W_n. Proof. Define W̃_n(c) := plim_{T→∞} B̂_n, assuming this limit exists. From Theorem 1 the top layer estimate B̂_{N+1} converges in probability to W̃_{N+1}(c). We can then use induction to establish that B̂_j in the remaining layers also converges in probability to W̃_j(c). That is, assume that B̂_j converges in probability to W̃_j(c) in higher layers N + 1 ≥ j > n. Then we must establish that B̂_n also converges in probability. To proceed it is useful to also define ẽ(c) as the error signal backpropagated through the converged (but biased) weight matrices W̃(c). Again it is true that ẽ_{N+1}(c) = e_{N+1}.
As in Theorem 1, the least squares estimator has the same form; thus, again by the continuous mapping theorem, continuity allows us to separate the convergence of each term in the product, using the weak law of large numbers for the first term and the induction assumption for the remaining terms. In the same way, note that the induction assumption also implies lim_{c→0} ẽ_n(c) = e_n. Thus, putting it together, by A3, A4 and the same reasoning as in Theorem 1, we have the result: the least squares estimator solves the problem and converges to the true feedback matrix. Proof. For a deep linear network, notice that the node perturbation estimator can be expressed as a sum of three terms, where the first term represents the true gradient, given by simple linear backpropagation, and the second and third terms are the remainder and a noise term, as in Theorem 1. Then, following the same reasoning as the proof of Theorem 1, the limit holds. It is worth making the following points on each of the assumptions: • A1. In the paper we assume ξ is Gaussian. Here we prove the more general result of convergence for any subgaussian random variable. • A2. In practice this may be a fairly restrictive assumption, since it precludes using relu non-linearities. Other common choices, such as hyperbolic tangent and sigmoid non-linearities with an analytic cost function, do satisfy this assumption, however. • A3. It is hard to establish general conditions under which ẽ_n(ẽ_n)^T will be full rank, though it may be a reasonable assumption in some cases. Extensions of Theorem 2 to a non-linear network may be possible. However, the method of proof used here is not immediately applicable, because the continuous mapping theorem cannot be applied in such a straightforward fashion as in the equations above. In the non-linear case the resulting sums over all observations are neither independent nor identically distributed, which makes applying any law of large numbers complicated.
Perturbations can be used to learn feedback weights on large fully connected and convolutional networks.
1,144
scitldr
Most deep neural networks (DNNs) require complex models to achieve high performance. Parameter quantization is widely used for reducing the implementation complexities. Previous studies on quantization were mostly based on extensive simulation using training data. We choose a different approach and attempt to measure the per-parameter capacity of DNN models and interpret the results to obtain insights on optimum quantization of parameters. This research uses artificially generated data and generic forms of fully connected DNNs, convolutional neural networks, and recurrent neural networks. We conduct memorization and classification tests to study the effects of the number and precision of the parameters on the performance. The model and the per-parameter capacities are assessed by measuring the mutual information between the input and the classified output. We also extend the memorization capacity measurement to image classification and language modeling tasks. To gain insight into parameter quantization when performing real tasks, the training and test performances are compared. Deep neural networks (DNNs) have achieved impressive performance on various machine learning tasks. Several DNN architectures are known, and the most famous ones are fully connected DNNs (FCDNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs). It is known that neural networks do not need full floating-point precision for inference BID10 BID16 BID23. A 32-bit floating-point parameter can be reduced to 8-bit, 4-bit, 2-bit, or 1-bit, but this can incur performance degradation. Therefore, precision should be optimized, which is primarily conducted by extensive computer simulations using training data. This not only takes much time for optimization but also can incorrectly predict the performance in real environments when the characteristics of the input data are different from the training data. In this study, we attempt to measure the capacity of DNNs, including FCDNN, CNN, and RNN, using a memorization and classification task that applies random binary input data. The per-parameter capacities of various models are estimated by measuring the mutual information between the input data and the classification output. Then, the fixed-point performances of the models are measured to determine the relation between the quantization sensitivity and the per-parameter capacity. The memorization capacity analysis is extended to real models performing image classification and language modeling, by which the parameter quantization sensitivity is compared between memorization and generalization tasks. The contributions of this paper are as follows. • We experimentally measure the memorization capacity of DNNs and estimate the per-parameter capacity. The capacity per parameter is between 2.3 bits and 3.7 bits, according to the network structure, which is FCDNN, CNN, or RNN. The value is fairly independent of the model size. • We show that the performance of the quantized networks is closely related to the capacity per parameter, and FCDNNs show the most resilient quantization performance while RNNs suffer most from parameter quantization.
The network size hardly affects the quantization performance when DNN models are trained to use full capacity. • We explain that severe quantization, such as binary or ternary weights, can be employed without much performance degradation when the networks are in the over-parameterized region. • We suggest the sufficient number of bits for representing weights of neural networks, which are approximately 6 bits, 8 bits, and 10 bits for FCDNNs, CNNs, and RNNs, respectively. This estimate of the number of bits for implementing neural networks is very important considering that many accelerators are designed without any specific training data or applications. • The study with real models shows that neural networks are more resilient to quantization when performing generalization tasks than when conducting memorization. Thus, the optimum bits obtained with the memorization tasks are a conservative and safe estimate when solving real problems. The paper is organized as follows. In Section 2, previous works on neural network capacity and fixed-point optimization are briefly presented. Section 3 explains the capacity measurement methods for DNN models. Section 4 presents parameter capacity measurements for FCDNNs, CNNs, and RNNs. The quantization performances measured on DNNs are presented in Section 5. Concluding remarks follow in Section 6. The capacity of neural networks has been studied since the early days of DNN research. Although the capacity can be defined in many ways, it is related to the learnability of networks. The capacity of networks is shown as the number of uncorrelated random samples that can be memorized BID8. A single-layer perceptron with n parameters can memorize at least 2n random samples BID11. In other words, the network can always construct a hyperplane with n parameters that divides 2n samples. Additionally, the capacity of a three-layer perceptron is proportional to the number of parameters BID0. Recently, RNNs were trained with random data to measure the capacity per parameter BID6. Our study is strongly motivated by this research, and extends it to the quantization performance interpretation of generic DNN models, including FCDNN, CNN, and RNN. Recent studies have shown that neural networks have a generalization ability even if the expressive capacity of the model is sufficiently large BID33 BID2. In this paper, we also discuss the effect of network quantization when performing generalization tasks. Early works on neural network quantization usually employed 16-bit parameters obtained by directly quantizing the floating-point numbers BID10. Recently, a retraining technique was developed to improve the performance of quantized networks BID16 BID23. Retraining-based quantization was applied to CNN and RNN models, showing superior performance compared to directly quantized ones BID1 BID31. Many studies attempting extreme quantization have been published, such as 2-bit ternary BID16 BID21 BID35 and 1-bit binary weight quantization, and XNOR networks BID7 BID28. Some aggressive model compression techniques also employed vector quantization or table look-up BID12 BID4. However, not all CNNs show the same quantization performance. For example, AlexNet BID20 shows almost the same performance with only 1-bit quantized parameters. However, the same quantization technique incurs a very severe performance loss when applied to ResNet BID28. A previous study shows that large-sized networks are more resilient to severe quantization than smaller ones.
Theoretical works and many practical implementation optimization techniques have been studied BID13 BID17 BID19 BID30 BID24. Recent work increases the number of network parameters to preserve the performance under low-precision quantization BID26. Our work is not targeted at specific data or models, but introduces a general understanding of parameter quantization. We assess the network capacity of DNN models using a random data memorization and classification task BID6. In this task, N random binary vectors, X, are generated and each is randomly and uniformly assigned to the output label Y. The size of the binary vector depends on the DNN model. For FCDNN, the input X is a one-dimensional vector whose size is determined by the hidden layer dimension. In CNN, the input needs to be a 2-D or 3-D tensor. Input samples of CNNs are generated by concatenating and reshaping random binary vectors. During the training process, the DNN is trained to correctly predict the label, which is 0 or 1, of the random input X. As the input data size, N, increases, the classification accuracy drops because of the limited memorization capacity. Note that the accuracy for the memorization task refers to the training performance after convergence, because there is no proper test dataset for random training samples. The capacity is measured using the mutual information, defined as a measure of the amount of information that one random variable contains about another random variable (Cover & Thomas BID9). The mutual information of a trained network with N input samples is calculated as follows: I(Y; Ŷ) = N · (1 + p·log₂(p) + (1 − p)·log₂(1 − p)), (1) where p is the mean classification accuracy for all samples under trained parameters θ. The network capacity is defined as C = max_{N, θ} I(Y; Ŷ). (2) The accuracy, p, may vary depending on the training method of the model. We find N and p that maximize the mutual information of the networks by iteratively training the models. This optimization employs both grid search- and Bayesian optimization-based hyper-parameter tuning BID5. The optimization procedure consists of three stages. First, we try to find the largest input data size whose accuracy is slightly lower than 1. Second, we perform a grid search to determine the boundary values of the hyper-parameters. The searched hyper-parameters can include initialization, optimizer, initial learning rate, learning rate decay factor, batch size, and optimizer variables. Finally, we conduct hyper-parameter tuning within the search space using the Scikit-learn library BID27. We add the number of training samples N as a hyper-parameter and use the mutual information of Eq. 1 as the metric for the optimization. Quantization of model parameters perturbs the trained network; therefore, fixed-point training or retraining with full-precision backpropagation is usually needed BID16 BID21 BID7 BID34. However, the performance of the quantized networks does not always meet that of the floating-point models, even after retraining. This suggests that model capacity is reduced by quantization, especially when the number of bits used is very small. In this research, we observe the memorization capacity degradation caused by quantization in generic FCDNN, CNN, and RNN models. Uniform quantization is used for the sake of convenient arithmetic, and the same step size is assigned to each layer in the FCDNN, each kernel in the CNN, or each weight matrix in the LSTM layer. The bias values are not quantized, because they have a large dynamic range.
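A short sketch of the uniform per-tensor quantization just described (our illustration; the function names and the symmetric-quantizer details are assumptions, and retraining after each precision step would be applied by the caller as discussed below):

```python
import torch

def quantize_uniform(w, n_bits):
    """Uniformly quantize a weight tensor with one step size per tensor.

    Symmetric uniform quantizer: 2^(n_bits - 1) - 1 levels on each side
    of zero, step size chosen from the maximum absolute weight. For
    n_bits = 2 this degenerates to ternary {-step, 0, +step} weights.
    """
    n_levels = 2 ** (n_bits - 1) - 1
    step = w.abs().max() / n_levels
    return torch.clamp(torch.round(w / step), -n_levels, n_levels) * step

def quantize_model(model, n_bits, skip=("output",)):
    """Quantize all weight matrices/kernels, skipping biases (dim <= 1)
    and any tensor whose name matches the skip list (e.g., output layer)."""
    for name, p in model.named_parameters():
        if p.dim() > 1 and not any(s in name for s in skip):
            p.data.copy_(quantize_uniform(p.data, n_bits))
```

In the experiments described here, such a quantization pass would be applied at successively lower precisions (8, 6, 5, 4, 3, 2 bits), with a brief retraining after each step.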
It is important to note that the weights connected to the output are also not quantized, because their optimum bit-widths depend on the number of labels in the output. Quantization is performed from floating-point to 8-bit, 6-bit, 5-bit, 4-bit, 3-bit, and 2-bit precision, in sequence. Retraining is performed after every quantization, but requires only a small number of epochs, because only fine-tuning is needed BID16. We compare the generalization performance of floating-point and fixed-point DNNs by visualizing the loss surface. Loss is measured by applying Gaussian random noise to the parameters of the trained network, as shown in Eq. 3: L_noise = L(θ + θ_noise), θ_noise ∼ N(0, σ²I). (3) Here, L(θ) is the loss according to the network parameters. The distribution of weights may vary depending on the model size and learning method. We apply filter-normalized noise as θ_noise for a fair comparison across different models BID22. We employ two real networks, one for image classification with the CIFAR-10 dataset and the other for language modeling with the Penn Treebank (PTB) dataset. One large and one small model are trained for these networks. We quantize those networks with precisions of 8, 6, 4, and 2 bits and analyze the variation of the surface according to the precision. θ_noise is added to the quantized parameters. In order to reduce the error due to the randomness of the noise, all loss values are measured over 10 different trials and the average values are plotted. The capacities of FCDNNs, CNNs, and RNNs are measured via the memorization task explained in Section 3.1. The models used for the test employ floating-point parameters. Each training sample for FCDNNs is a 1-D vector of size n_in, and N input samples are used as the training data. The output, Y, is the randomly assigned label, either 0 or 1, for each input. Thus, the inputs and labels, X and Y, are represented as X ∈ {0, 1}^(N×n_in) and Y ∈ {0, 1}^N, respectively. The input data dimension, n_in, should be larger than log₂(N) so that no duplicated samples are contained among the N input data. In the experiments for FCDNNs, the input vector dimension, n_in, is chosen to be equal to the number of units in the hidden layer. We conduct experiments for FCDNNs with hidden layer dimensions of 32, 64, 128, and 256, and with hidden layer depths of 1, 2, 3, and 4. The initialization method chosen is the 'He' initialization BID14, and weights are updated using SGD with momentum, which showed the best performance in our grid search. The initial learning rate for hyper-parameter tuning is chosen between 0.001 and 0.05 on the log scale. The decay factor and momentum are set to evenly spaced values on the linear scale between 0.1 and 0.5 and between 0.6 and 0.99, respectively. For each model, experiments are conducted to measure the accuracy of memorization while increasing the size of the input data, N. Note that only the training error is measured in this memorization task, because there is no unseen data. Experimental results are based upon the best accuracy obtained across different hyper-parameter settings. The capacity of the model is estimated according to Eqs. 1 and 2, where p is the training accuracy. The experimentally obtained memorization capacities of the FCDNN models are presented in FIG1, where depths of 1, 2, 3, and 4, and widths of 32, 64, 128, and 256 are used. When the number of hidden layers is the same, the amount of data that can be almost perfectly memorized quadruples when the dimension of the hidden layer is doubled.
This means that the memorization capacity is linearly proportional to the number of parameters. Similarly, the FCDNN models with 2, 3, or 4 hidden layers can memorize 2, 3, or 4 times the input data compared to the single-layer DNN, respectively. FIG2 shows the memorization accuracy and the mutual information obtained using Eq. 1 on the FCDNN. The model is composed of three layers with a hidden layer size of 64. Here, we find that the amount of mutual information steadily increases as the input data size grows. However, it begins to drop as the input size grows further, and the memorization accuracy drops. By analyzing the accuracy trend of the model, it is possible to divide the input data size into three regions: the over-parameterized, the maximum performance, and the under-parameterized sections, as shown in FIG2. For example, if the model is trained to memorize only 10,000 samples, it can be regarded as over-parameterized. The number of samples that can be memorized by maximally utilizing all the parameters is between 30,000 and 40,000. In over-parameterized regions, performance can be maintained even if the capacity of the networks is reduced. The per-parameter capacity of FCDNNs is shown in FIG2. Regardless of the width or depth, one parameter has a capacity of 1.7 to 2.5 bits, and FCDNNs have an average of 2.3-bit capacity per parameter. This is consistent with theoretical studies BID11 BID0. The total capacity of the model may be interpreted as optimal storage that can store a maximum number of random binary samples BID11 BID3. The capacity of CNNs is also measured via a similar memorization task. CNNs can have a variety of structures according to the number of channels, the size of the kernels, and the number of layers. The kernel sizes of the CNNs in this test are either (3 × 3) or (5 × 5), the same for all layers, and the number of convolution layers ranges from 3 to 9. The dimensions of the inputs are n_height = n_width = 32 and n_channel = 1 for all experiments. Three max-pooling operations are applied to reduce the number of parameters in the fully connected layer. The CNN structures used in our experiments are shown in the Supplementary materials. The CNN models contain not only convolution layers but also fully connected layers. Thus, the per-parameter capacity for convolution layers is calculated after subtracting the capacity for fully connected layers from the measured total capacity. We assume the per-parameter capacity of the fully connected layer to be 2.3 bits to calculate the capacity for convolution layers. As shown in FIG2, the convolution layers have a per-parameter capacity between 2.86 and 3.09 bits, except for the smallest model, which is higher than that of FCDNNs. The average capacity per parameter of the tested models is 3.0 bits. Results show that the per-parameter capacity of CNNs is higher than that of FCDNNs, even when CNNs memorize uncorrelated data. Note that one parameter of FCDNNs is used only once for each inference. However, a parameter of CNNs is used multiple times. This parameter-sharing nature of CNNs seems to increase the amount of information that one parameter can store. It has been shown that various structures of RNNs all have a similar capacity per parameter of 3 to 6 bits BID6. We train RNNs with a dataset with no sequence correlation to show the capacity of the parameters. The random input dataset is composed of inputs X ∈ {0, 1}^(N×n_seq×n_in) and labels Y ∈ {0, 1}^N, which are uniformly set to 0 or 1.
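Generating the random memorization dataset just described takes only a few lines. The following is a minimal sketch (ours; names are illustrative) for the RNN case, which reduces to the FCDNN case with n_seq = 1:

```python
import numpy as np

def make_memorization_set(N, n_seq=5, n_in=32, seed=0):
    """Random binary sequences with uniformly assigned binary labels.

    Matches the setup described above: the label is attached to the final
    time step only, and inputs carry no sequential correlation. Duplicate
    inputs are vanishingly unlikely when n_seq * n_in >> log2(N).
    """
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(N, n_seq, n_in)).astype(np.float32)
    Y = rng.integers(0, 2, size=N).astype(np.int64)
    return X, Y

X, Y = make_memorization_set(N=32_768)  # e.g., the 32K-sample RNN test
```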
The training loss is calculated using the cross-entropy of the label at the output of the last step. We train RNNs with a single LSTM layer of dimension 32. The input dimension, n_in, is also 32, and the unrolled sequence length, n_seq, is five steps. It has been reported that five-step unrolling almost saturates the performance in this setup BID6. We apply 5 random input vectors, X_0, X_1, X_2, X_3, and X_4, each of dimension 32, and assign one label to this 160-D vector at the last time step. The error propagates from the last step only, and the outputs at intermediate time steps are ignored. The number of parameters in the network is 8,386. In this case, the maximum mutual information is obtained when the number of samples is 32K, and the memorization accuracy is 99.52%. Therefore, the per-parameter capacity of the model is 3.7 bits. The RNN shows higher per-parameter capacity than FCDNNs and CNNs. We have shown that FCDNNs, CNNs, and RNNs have different per-parameter capacities. According to the parameter-data ratio, a trained DNN can be an over-parameterized, max-capacity, or under-parameterized model. Thus, we can assume that the DNN performance under quantization would depend not only on the network structure, such as FCDNN, CNN, or RNN, but also on the parameter-data ratio. The experiments are divided into two cases. The first is to measure performance degradation according to quantization precision when each model is in the maximum capacity region. The second analyzes performance when the models are in the over-parameterized region. When the FCDNN, CNN, and RNN are trained to have the maximum memorization capacity, the performances with parameter quantization are shown in Fig. 3(a). The fixed-point performances of two FCDNNs, two CNNs, and two RNNs are illustrated. With 6-bit parameter quantization, the FCDNN shows no accuracy drop. However, the accuracy drops for CNNs and RNNs are 5% and 18%, respectively. Because the RNN contains the largest amount of information per parameter, the loss caused by parameter quantization seems to be the most severe. We also find that there is no decline in performance until the parameter precision is lowered to 6-bit for FCDNNs, 8-bit for CNNs, and 10-bit for RNNs, even when all models use full capacity. Next, we show the fixed-point performance of DNNs when they are trained to be in the over-parameterized region. Note that the per-parameter capacity is lowered in the over-parameterized region. We conducted simulations with half the maximum number of data that can be memorized. For example, an FCDNN used for the measurement has 3 hidden layers with a hidden-layer dimension of 128; the capacity of the corresponding model is about 2^17 bits. The network is over-parameterized when the number of memorized samples is 2^16. Fig. 3(b) shows that the FCDNN model memorizes all samples even with 4-bit parameter quantization when the model uses half of its capacity. Also, the over-parameterized models of CNNs and RNNs are less sensitive to bit precision. The performances of fixed-point DNNs according to the number of samples are shown in Fig. 4. The results show that DNNs are more robust when the networks are more over-parameterized. We have assessed the required precision of networks for performing memorization tasks. The memorization test only uses training data that are artificially generated.
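Before turning to real tasks, the loss-surface probe of Eq. 3 used in the next part can be sketched as follows. This is our illustration, not the paper's code; it normalizes the noise per weight tensor rather than per filter, which is a simplification of the filter-normalization of BID22, and all names are assumptions:

```python
import copy
import torch

@torch.no_grad()
def probe_loss_surface(model, loss_fn, data, scales, trials=10):
    """Average loss along random normalized directions (Eq. 3 sketch).

    For each scale, draws Gaussian noise per weight tensor, rescales it
    to the tensor's norm, adds it to the trained (or quantized) weights,
    and averages the loss over `trials` draws, mirroring the 10-trial
    averaging described above.
    """
    x, y = data
    results = []
    for s in scales:
        losses = []
        for _ in range(trials):
            noisy = copy.deepcopy(model)
            for p in noisy.parameters():
                if p.dim() > 1:
                    d = torch.randn_like(p)
                    d *= p.norm() / (d.norm() + 1e-12)  # normalize direction
                    p.add_(s * d)
            losses.append(loss_fn(noisy(x), y).item())
        results.append(sum(losses) / trials)
    return results
```

A wide, flat curve returned by such a probe corresponds to the wider loss surface reported below for the 2-bit model.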
However, most neural networks should do more than memorization, because the test data are not seen during training. In this section, we analyze the effects of network quantization when performing real tasks. We train two differently sized CNN models with CIFAR-10 data. The structures of the two models follow a notation in which 16C represents a convolution layer with 16 channels and 128FC means a fully connected layer of dimension 128; the kernel size of both models is (3×3). The number of parameters is 0.22M for the small model and 3.5M for the large one. Both models were trained with the same hyper-parameter setting. To analyze the impact of network quantization on the test performance, we plot the loss and accuracy surfaces of floating-point and quantized CNN models in Fig. 5. For simplicity, the results of the floating-point and 2-bit fixed-point CNNs are given. Please refer to the Supplementary materials for other results. When applying the training data, which may have been memorized during the training phase, the large model shows a nearly identical performance surface regardless of the parameter precision. But for the small model, the 2-bit model shows quite degraded performance when compared to the floating-point network. However, the test accuracy of the small 2-bit model is not much lower. We can notice that the loss surface of the 2-bit model shown in Fig. 5 is much wider than that of the floating-point model. This is consistent with recent studies on generalization BID18 BID22. This observation suggests that quantized networks are more resilient when performing generalization tasks. Thus, the required precision of the network obtained with the memorization task can be considered a conservative estimate. Fig. 6 shows the training- and test-data-based performance of fixed-point CNNs and RNNs on real data. The tiny model has the same structure as the small model, but with the number of channels halved and the size of the fully connected layers reduced to a quarter. The RNNs are trained for language modeling with the Penn Treebank corpus BID25, and the models consist of two LSTM layers with the same dimension. Here, for both CNN and RNN models, we can confirm that large networks are more robust to quantization. Also, the networks need more parameter precision when conducting memorization tasks using the training set than when solving real problems using the test set. We have measured fixed-point DNN performance on real tasks, and the results are shown in FIG7. FCDNN models are trained with the MNIST dataset, ResNet BID15 models are trained using the ILSVRC-2012 dataset BID29, and RNN-based word-level language models (WLMs) are designed using the PTB dataset. The FCDNNs and RNNs in these experiments are composed of two FC layers and two LSTM layers with the same dimension, respectively. 4-bit quantized FCDNNs show almost the same performance as the floating-point networks, even when the number of neurons is only 8. Performance is preserved down to 6-bit precision on ResNets and RNN WLMs. Their resiliency to quantization increases as the networks become larger. Quantization of parameters is a straightforward way of reducing the complexity of DNN implementations, especially when VLSI or special-purpose neural processing engines are used. Our study employed simulations on varying sizes of generic forms of DNN models. Memorization tests using random binary input data were conducted to determine the total capacity by measuring the mutual information.
Our simulation results show that the per-parameter capacity is not sensitive to the model size, but is dependent on the structure of the network models, such as FCDNN, CNN, and RNN. The maximum per-parameter capacities of FCDNNs, CNNs, and RNNs are approximately 2.3, 3.0, and 3.7 bits, respectively. Thus, RNNs tend to demand more bits when compared to FCDNNs. We quantized DNNs in various capacity-utilization regions and showed that the capacity of the parameters is preserved down to 6 bits, 8 bits, and 10 bits on FCDNNs, CNNs, and RNNs, respectively. The performance of the quantized networks was also tested with image classification and language modeling tasks. The results show that networks need more parameter precision when conducting memorization tasks than when inferencing on unseen data. Thus, the precision obtained through the memorization test can be considered a conservative estimate when implementing neural networks for solving real problems. This research not only gives valuable insights on the capacity of parameters but also provides practical strategies for training and optimizing DNN models. An FCDNN computes each layer as h_l = σ(W_l·h_{l−1} + b_l). In FCDNNs, each weight matrix, W_l, between two layers demands |h_{l−1}| × |h_l| weights, where |h_l| is the number of units in layer l. CNNs, which are popular for image processing, usually receive 2-dimensional (D) or 3-D input data, whose size is much larger than the filter or kernel size. The set of weights between layers is referred to as the 'kernel' and the output is referred to as the 'feature map'. Because the input size is usually much larger than the kernel size, the CNN parameters are reused many times. A kernel slides over the input feature map and produces an output feature map, and the sliding step is determined by the stride, s. The convolution weights are denoted as W_l ∈ R^(k_{l,h}×k_{l,w}×n_{l−1}×n_l) and the feature map of the layer as C_l ∈ R^(c_{l,h}×c_{l,w}×n_l). k_{l,h} and k_{l,w} are the height and width of each kernel, and c_{l,h}, c_{l,w} are the height and width of the feature map in layer l, respectively. n_l is the number of feature maps in layer l. CNNs can have a variety of structures according to the number of channels, the size of the kernels, and the number of layers. We attempted to produce a general setting for CNNs. The experimental models are shown in Table 1. We constructed CNNs to have twice the number of feature maps and half the height/width after pooling. Also, to minimize side effects from fully connected layers, the output feature of the last convolution layer is flattened and directly propagated to the softmax layer. The capacity per parameter is measured only for the parameters in convolution layers, by subtracting the capacity of the fully connected layer. RNNs have a feedback structure that reflects the information in the previous steps when processing sequence data. RNNs are composed of one or multiple recurrent layers, and each layer computes the output, y_t, and the hidden state, h_t, using the previous hidden state, h_{t−1}, and the input, x_t. We use LSTMs as the recurrent layers, as they show stable performance in various applications. The mutual information equation of a network can be obtained as follows. We first re-write our random variables X, Y, and Ŷ as Y = (Y_1, ..., Y_N) and Ŷ = (Ŷ_1, ..., Ŷ_N), with Ŷ_i = f(θ, X_i), where f(θ, X_i) is the prediction of the network when the input is X_i. Under our experimental setting, both X and Y have a uniform random distribution. Note that X and Y are independent, as are Y_i and Y_j when i ≠ j.
Therefore, I(Y; Ŷ) = H(Y) − H(Y|Ŷ) = Σ_i (H(Y_i) − H(Y_i|Ŷ_i)). And we use the network's average accuracy p as the probability of Y_i = Ŷ_i, so that H(Y_i|Ŷ_i) = −p·log₂(p) − (1 − p)·log₂(1 − p). Finally, the equation is derived as: I(Y; Ŷ) = N·(1 + p·log₂(p) + (1 − p)·log₂(1 − p)). The loss surfaces of fixed-point CNNs are shown in Fig. 8 and Fig. 9. The models are trained with the CIFAR-10 dataset. The loss surfaces of RNN LMs for the PTB dataset are also shown in Fig. 10 and Fig. 11. The performance degradation on the test dataset is lower than the degradation on the training dataset in all experiments.
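The closed form derived above is easy to check numerically. A minimal sketch (ours; the sanity-check numbers are taken from the RNN experiment reported earlier):

```python
import math

def memorization_capacity_bits(N, p):
    """Mutual information between labels and predictions (final equation).

    N: number of random binary samples; p: mean classification accuracy.
    Returns I(Y; Y_hat) = N * (1 + p*log2(p) + (1-p)*log2(1-p)) bits.
    """
    if p <= 0.0 or p >= 1.0:
        return float(N)  # degenerate case: predictions determine labels
    return N * (1.0 + p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

# Sanity check against the RNN experiment above: 32K samples memorized at
# 99.52% accuracy by a network with 8,386 parameters.
C = memorization_capacity_bits(32 * 1024, 0.9952)
print(C / 8386)   # ~3.7 bits per parameter
```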
We suggest the sufficient number of bits for representing the weights of DNNs, and the optimum bit-widths are a conservative estimate when solving real problems.
1,145
scitldr
Inspired by the combination of feedforward and iterative computations in the visual cortex, and taking advantage of the ability of denoising autoencoders to estimate the score of a joint distribution, we propose a novel approach to iterative inference for capturing and exploiting the complex joint distribution of output variables conditioned on some input variables. This approach is applied to image pixel-wise segmentation, with the estimated conditional score used to perform gradient ascent towards a mode of the estimated conditional distribution. This extends previous work on score estimation by denoising autoencoders to the case of a conditional distribution, with a novel use of a corrupted feedforward predictor replacing Gaussian corruption. An advantage of this approach over more classical ways to perform iterative inference for structured outputs, like conditional random fields (CRFs), is that it is no longer necessary to define an explicit energy function linking the output variables. To keep computations tractable, such energy function parametrizations are typically fairly constrained, involving only a few neighbors of each of the output variables in each clique. We experimentally find that the proposed iterative inference from conditional score estimation by conditional denoising autoencoders performs better than comparable models based on CRFs or those not using any explicit modeling of the conditional joint distribution of outputs. Based on response timing and propagation delays in the brain, a plausible hypothesis is that the visual cortex can perform fast feedforward BID21 inference when an answer is needed quickly and the image interpretation is easy enough (requiring as little as 200ms of cortical propagation for some object recognition tasks, i.e., just enough time for a single feedforward pass) but needs more time and iterative inference in the case of more complex inputs BID23. Recent deep learning research and the success of residual networks BID9 BID8 point towards a similar scenario BID16: early computation which is feedforward, a series of non-linear transformations which map low-level features to high-level ones, while later computation is iterative (using lateral and feedback connections in the brain) in order to capture complex dependencies between different elements of the interpretation. Indeed, whereas a purely feedforward network could model a unimodal posterior distribution (e.g., the expected target with some uncertainty around it), the joint conditional distribution of output variables given inputs could be more complex and multimodal. Iterative inference could then be used to either sample from this joint distribution or converge towards a dominant mode of that distribution, whereas a unimodal-output feedforward network would converge to some statistic like the conditional expectation, which might not correspond to a coherent configuration of the output variables when the actual conditional distribution is multimodal. This paper proposes such an approach combining a first feedforward phase with a second iterative phase corresponding to searching for a dominant mode of the conditional distribution while tackling the problem of semantic image segmentation. We take advantage of theoretical results BID0 on denoising autoencoders (DAEs), which show that they can estimate the score or negative gradient of the energy function of the joint distribution of observed variables: the difference between the reconstruction and the input points in the direction of that estimated gradient. We propose to condition the autoencoder with an additional input so as to obtain the conditional score,
We take advantage of theoretical results BID0 on denoising autoencoders (DAE), which show that they can estimate the score or negative gradient of the energy function of the joint distribution of observed variables: the difference between the reconstruction and the input points in the direction of that estimated gradient. We propose to condition the autoencoder with an additional input so as to obtain the conditional score, i.e., the gradient of the energy of the conditional density of the output variables, given the input variables. Figure 1: Given an input image x, we extract a segmentation candidate y and intermediate feature maps h by applying a pre-trained segmentation network. We add some noise to y and train a DAE that takes as input both y and h by minimizing Eq. 3. Training scenario 1 (a) yields the best results and uses the corrupted prediction as input to the DAE during training. Training scenario 2 (b) corresponds to the original DAE prescription in the conditional case, and uses a corruption of the ground truth as input to the DAE during training. The autoencoder takes a candidate output y as well as an input x and outputs a value ŷ so that ŷ − y estimates the direction ∂ log p(y|x)/∂y. We can then take a gradient step in that direction and update y towards a lower-energy value and iterate in order to approach a mode of the implicit p(y|x) captured by the autoencoder. We find that instead of corrupting the segmentation target as input of the DAE, we obtain better results by training the DAE with the corrupted feedforward prediction, which is closer to what will be seen as the initial state of the iterative inference process. The use of a denoising autoencoder framework to estimate the gradient of the energy is an alternative to more traditional graphical modeling approaches, e.g., with conditional random fields (CRFs) BID14 BID10, which have been used to model the joint distribution of pixel labels given an image BID13. The potential advantage of the DAE approach is that it is not necessary to decide on an explicitly parametrized energy function: such energy functions tend to only capture local interactions between neighboring pixels, whereas a convolutional DAE can potentially capture dependencies of any order and across the whole image, taking advantage of the state-of-the-art in deep convolutional architectures in order to model these dependencies via the direct estimation of the energy function gradient. Note that this is different from the use of convolutional networks for the feedforward part of the network, and regards the modeling of the conditional joint distribution of output pixel labels, given image features. The main contributions of this paper are the following: 1. A novel training framework for modeling structured output conditional distributions, which is an alternative to CRFs, inspired by denoising autoencoder estimation of energy gradients. 2. Showing how this framework can be used in an architecture for image pixel-wise segmentation, in which the above energy gradient estimator is used to propose a highly probable segmentation through gradient descent in the output space. 3. Demonstrating that this approach to image segmentation outperforms or matches classical alternatives such as combining convolutional nets with CRFs and more recent state-of-the-art alternatives on the CamVid dataset. In this section, we describe the proposed iterative inference method to refine the segmentation of a feedforward network.
As pointed out in Section 1, a DAE can estimate a density p(y) via an estimator of the score or negative gradient −∂E/∂y of the energy function E BID25 BID24 BID0. These theoretical analyses of DAEs are presented for the particular case where the corruption noise added to the input is Gaussian. Results show that a DAE can estimate the gradient of the energy function of a joint distribution of observed variables. The main result is the following: r(y) − y ≈ σ² ∂ log p(y)/∂y, where σ² is the amount of Gaussian noise injected during training, y is the input of the autoencoder and r(y) is its output (the reconstruction). The approximation becomes exact as σ → 0 and the autoencoder is given enough capacity, training examples and training time. The direction of (r(y) − y) points towards more likely configurations of y. Therefore, the DAE learns a vector field pointing towards the manifold where the input data lies. In our case, we seek to rapidly learn a vector field pointing towards more probable configurations of y|x. We propose to extend the results summarized in Subsection 2.1 and condition the autoencoder with an additional input. If we condition the autoencoder with features h, which are a function of x, the DAE framework with Gaussian corruption learns to estimate −∂E(y|x)/∂y, where y is a segmentation candidate, x an input image and E is an energy function. Gradient descent in energy can thus be performed in order to iteratively reach a mode of the estimated conditional distribution: y ← y − ε (y − r(y, h)), with step size ε. In addition, whereas Gaussian noise around the target y_true would be the DAE prescription for the corrupted input to be mapped to y_true, this may be inefficient at visiting the configurations we really care about, i.e. those produced by our feedforward predictor, which we use to obtain a first guess for y, as initialization of the iterative inference towards an energy minimum. Therefore, we propose that during training, instead of using a corrupted y_true as input, the DAE takes as input a corrupted segmentation candidate y and either the input x or some features h extracted from a feedforward segmentation network applied to x, i.e. h = f_l(f_{l−1}(... f_1(x))), where f_k is a non-linear function and l ∈ {1, ..., L} is the index of a layer in the feedforward segmentation network. The output of the DAE is computed as ŷ = r(ỹ, h), where r is a non-linear function which is trained to denoise conditionally and ỹ is a corrupted form of y. During training, ỹ is y plus noise, while at test time (for inference) it is simply y itself. In order to train the DAE, we extract both y and h from a feedforward segmentation network; we corrupt y into ỹ; and we train the DAE by minimizing the loss L = H(y_true, r(ỹ, h)) (Eq. 3), where H is the categorical cross-entropy and y_true is the segmentation ground truth. Figure 1(a) depicts the pipeline during training. First, a fully convolutional feedforward network for segmentation is trained. In practice, we use one of the state-of-the-art pre-trained networks. Second, given an input image x, we extract a segmentation proposal y and intermediate features h from the segmentation network. Both y and h are fed to a DAE network (adding Gaussian noise to y). The DAE is trained to properly reconstruct the clean segmentation (ground truth y_true). FIG0(b) presents the original DAE prescription, where the DAE is trained by taking as input y_true and h. Once trained, we can exploit the trained model to iteratively take gradient steps in the direction of the segmentation manifold.
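As a minimal PyTorch sketch of one training step under scenario 1 (our reading of Eq. 3; the function names, the use of per-pixel logits, and the optimizer handling are illustrative assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def dae_training_step(dae, optimizer, y, h, y_true, sigma=0.1):
    """One DAE training step: corrupt the feedforward prediction y,
    condition on features h, and match the ground truth y_true."""
    y_noisy = y + sigma * torch.randn_like(y)   # corrupted prediction ỹ
    y_hat = dae(y_noisy, h)                     # ŷ = r(ỹ, h), per-pixel class logits (N, C, H, W)
    loss = F.cross_entropy(y_hat, y_true)       # H(y_true, ŷ): categorical cross-entropy (Eq. 3)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```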
To take these gradient steps, we first obtain a segmentation proposal y from the feedforward network and then we iteratively refine this proposal by applying the rule y ← y − ε (y − ŷ) (Eq. 4). For practical reasons, we collapsed the corruption noise σ² into the step size ε. Given an input image x, we extract a segmentation candidate y and intermediate feature maps h by applying a pre-trained segmentation network. We then feed y and h to the trained DAE and iteratively refine y by applying Eq. 4. The final prediction is the last value of y computed in this way. Figure 2 depicts the test pipeline. We start with an input image x that we feed to a pre-trained segmentation network. The segmentation network outputs some intermediate feature maps h and a segmentation proposal y. Then, both y and h are fed to the DAE to compute the output ŷ. The DAE is used to take iterative gradient steps y ← y − ε (y − ŷ) towards the manifold of segmentation masks, with no noise added at inference time. On one hand, recent advances in semantic segmentation mainly focus on improving the architecture design BID19 BID1 BID4 BID12, increasing the context understanding capabilities BID6 BID26 BID3 BID28 and building processing modules to enforce structure consistency on segmentation outputs BID13 BID3 BID31. Here, we are interested in this last research direction. CRFs are among the most popular choices to impose structured information at the output of a segmentation network, with fully connected CRFs BID13 and CRFs as RNNs BID31 among the best performing variants. More recently, an alternative that promotes structure consistency by decomposing the prediction process into multiple steps and iteratively adding structure information was introduced by BID15. Another iterative approach was introduced by BID7 to tackle image semantic segmentation by repeatedly detecting, replacing and refining segmentation masks. Finally, the reinterpretation of residual networks BID16 BID8 was exploited, in the context of biomedical image segmentation, by iteratively refining learned pre-normalized images to generate segmentation predictions. On the other hand, there has recently been some research devoted to exploiting DAEs in different tasks, such as image generation BID18, high resolution image estimation BID20 and semantic segmentation BID27. BID18 propose plug & play generative networks, which, in the best reported results, train a fully-connected DAE to reconstruct a denoised version of some feature maps extracted from an image classification network. The iterative update rule at inference time is performed in the feature space. Sønderby et al. use DAEs in the context of image super-resolution to learn the gradient of the density of high resolution images and apply it to refine the output of an upsampled low resolution image. BID27 exploit convolutional pseudo-priors trained on the ground-truth labels in a semantic segmentation task. During the training phase, the pseudo-prior is combined with the segmentation proposal from a segmentation model to produce a joint distribution over data and labels. At test time, the ground truth is not accessible, thus feedforward segmentation predictions are fed iteratively to the convolutional pseudo-prior network. In this work, we exploit DAEs in the context of image segmentation and extend them in two ways, first by using them to learn a conditional score, and second by using a corrupted feedforward prediction as input during training to obtain better segmentations.
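The iterative refinement of Eq. 4 can be sketched directly; a minimal version, assuming a trained `dae(y, h)` callable and illustrative default values for ε and the iteration count:

```python
import torch

@torch.no_grad()
def iterative_inference(dae, y, h, eps=0.05, n_iters=20):
    """Refine a feedforward segmentation proposal y, conditioned on
    feature maps h, by repeatedly moving y towards the DAE output ŷ."""
    for _ in range(n_iters):
        y_hat = dae(y, h)            # ŷ = r(y, h); no noise at inference time
        y = y - eps * (y - y_hat)    # gradient step towards the estimated mode (Eq. 4)
    return y
```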
The main objective of these experiments is to answer the following questions: Can a conditional DAE be used successfully as the building block of iterative inference for image segmentation? Does our proposed corruption model (based on the feedforward prediction) work better than the prescribed target output corruption? Does the resulting segmentation system outperform more classical iterative approaches to segmentation such as CRFs? 4.1 CAMVID DATASET CamVid BID2 is a fully annotated urban scene understanding dataset. It contains videos that are fully segmented. We used the same split and image resolution as BID1. The split contains 367 images (video frames) for training, 101 for validation and 233 for test. Each frame has a resolution of 360x480 and pixels are labeled with 11 different classes. We experimented with two feedforward architectures for segmentation: the classical fully convolutional network FCN-8 of BID17 and the more recent state-of-the-art fully convolutional densenet (FC-DenseNet103) of BID12, which do not make use of any additional synthetic data to boost their performance. BID17: FCN-8 is a feedforward segmentation network, which consists of a convolutional downsampling path followed by a convolutional upsampling path. The downsampling path successively applies convolutional and pooling layers, and the upsampling path successively applies transposed convolutional layers. The upsampling path recovers spatial information by merging features skipped from the various resolution levels on the downsampling path. BID12: FC-DenseNet is a feedforward segmentation network that exploits the feature reuse idea of DenseNets and extends it to perform semantic segmentation. FC-DenseNet103 consists of a convolutional downsampling path, followed by a convolutional upsampling path. The downsampling path iteratively concatenates all feature outputs in a feedforward fashion. The upsampling path applies a transposed convolution to feature maps from the previous stage and recovers information from higher resolution features from the downsampling path of the network by using skip connections. Our DAE is composed of a downsampling path and an upsampling path. The downsampling path contains convolutions and pooling operations, while the upsampling path is built from unpooling with switches (also known as unpooling with index tracking, see the sketch below) BID30 BID1 and convolution operations. As discussed in prior work, reverting the max pooling operations more faithfully significantly improves the quality of the reconstructed images. Moreover, while exploring potential network architectures, we found out that using fully convolutional-like architectures with upsampling and skip connections (between downsampling and upsampling paths) decreases segmentation performance when compared to unpooling with switches. This is not surprising, since we inject noise into the model's input when training the DAE. Skip connections directly propagate this added noise to the end layers, making them responsible for the data denoising process. Note that the last layers of the model might not have enough capacity to accomplish the denoising task. In our experiments, we use a DAE built from 6 interleaved pooling and convolution operations, followed by 6 interleaved unpooling and convolution operations. We start with 64 feature maps in the first convolution and double the number of feature maps in consecutive convolutions in the downsampling path. Thus, the number of feature maps in the network's downsampling path is: 64, 128, 256, 512, 1024 and 2048.
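A minimal PyTorch sketch of the unpooling-with-switches operation used in the DAE's upsampling path (tensor sizes are illustrative): max-pooling records the indices of the maxima, and unpooling uses them to place values back at their original spatial locations.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 32, 32)        # e.g., feature maps from a conv layer
pooled, switches = pool(x)            # switches record the argmax positions
restored = unpool(pooled, switches)   # zeros everywhere except at the maxima
assert restored.shape == x.shape
```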
In the upsampling path, we progressively reduce the number of feature maps down to the number of classes. Thus, the number of feature maps in consecutive layers of the upsampling path is the following: 1024, 512, 256, 128, 64 and 11 (the number of classes). We concatenate the output of the 4th pooling operation in the downsampling path of the DAE with the feature maps h corresponding to the 4th pooling operation in the downsampling path of the segmentation network. We train our DAE by means of stochastic gradient descent with RMSprop BID22, initializing the learning rate to 10^{-3} and applying an exponential decay of 0.99 after each epoch. All models are trained with data augmentation, randomly applying crops of size 224 × 224 and horizontal flips. We regularize our model with a weight decay of 10^{-4}. We use a minibatch size of 10. While training, we add zero-mean Gaussian noise (σ = 0.1 or σ = 0.5) to the DAE input. We train the models for a maximum of 500 epochs and monitor the validation reconstruction error to early stop the training using a patience of 100 epochs. At test time, we need to determine the step size ε and the number of iterations to get the final segmentation output. We select ε and the number of iterations by evaluating the pipeline on the validation set. Therefore, we try ε ∈ {0.01, 0.02, 0.05, 0.08, 0.1, 0.5, 1} for up to 50 iterations (iteration ∈ {1, 2, ..., 50}). For each iteration, we compute the mean intersection over union (mean IoU) on the validation set and keep the combination (ε, number of iterations) that maximizes this metric to evaluate the test set. TAB0 reports our results for FCN-8 and FC-DenseNet103 without any post-processing step, applying fully connected CRF BID13, the context network BID28 as a trained post-processing step, CRF-RNN BID31 trained end-to-end with the segmentation network, and the DAE's iterative inference. For CRF, we use the publicly available implementation of BID13. As shown in the table, using the DAE's iterative inference on the segmentation candidates of a feedforward segmentation network (DAE(y)) outperforms state-of-the-art post-processing variants, improving upon FCN-8 by a margin of 3.0% IoU. When applying CRF as a post-processor, the FCN-8 segmentation results improve by 1.2%. Note that similar improvements for CRF were reported on other architectures for the same dataset (e.g. BID1). Comparable improvements are achieved when using the context module BID28 as post-processor (1.3%) and when applying CRF-RNN (1.7%). It is worth noting that our method does not decrease the performance of any class with respect to FCN-8. However, CRF loses 2.8% when segmenting column poles, whereas CRF-RNN loses 1.1% when segmenting signs. When it comes to more recent state-of-the-art architectures such as FC-DenseNet103, the post-processing increment on the segmentation metrics is lower, as expected. Nevertheless, the improvement is still perceivable (+0.5% in IoU). When comparing our method to other state-of-the-art post-processors, we observe a slight improvement. End-to-end training of CRF-RNN with FC-DenseNet103 did not yield any improvement over FC-DenseNet103. It is worth comparing the performance of the proposed approach DAE(y) with DAE(y_true) trained from the ground truth. As shown in the table, DAE(y) consistently outperforms DAE(y_true). For FCN-8, the proposed method outperforms DAE(y_true) by a margin of 2.2%. For FC-DenseNet103, differences are smaller but still noticeable.
In both cases, DAE(y) not only outperforms DAE(y_true) globally, but also in the vast majority of the classes that exhibit an improvement. Note that the model trained on the ground truth requires a bigger Gaussian noise σ in order to slightly increase the performance of the pre-trained feedforward segmentation networks. It is worth mentioning that training our model end-to-end with the segmentation network didn't improve the results, while being more memory demanding. As shown in FIG4(d), the FCN-8 segmentation network fails to properly find the fence in the image, mistakenly classifying it as part of a building (highlighted with a white box on the image). CRF is able to clean the segmentation candidate, for example, by filling in missing parts of the sidewalk, but is not able to add non-existing structure (see FIG4(e)). Our method not only improves the segmentation candidate by smoothing large regions such as the sidewalk, but also corrects the prediction by incorporating missing objects such as the fence. In this subsection, we analyze the influence of the two inference parameters of our method, namely the step size ε and the number of iterations. This analysis is performed on the validation set of the CamVid dataset, for the above-mentioned feedforward segmentation networks. For the sake of comparison, we perform a similar analysis on the densely connected CRF, by fixing the best configuration and only changing the number of CRF iterations. We plot the results for FCN-8 and FC-DenseNet103, respectively. As expected, there is a trade-off between the selected step size and the number of iterations. The smaller the ε, the more iterations are required to achieve the best performance. Interestingly, all ε values within a reasonable range lead to similar maximum performance. We have proposed to use a novel form of denoising autoencoders for iterative inference in structured output tasks such as image segmentation. The autoencoder is trained to map corrupted predictions to target outputs and iterative inference interprets the difference between the output and the input as a direction of improved output configuration, given the input image. Experiments provide positive evidence for the three questions raised at the beginning of Sec. 4: a conditional DAE can be used successfully as the building block of iterative inference for image segmentation, the proposed corruption model (based on the feedforward prediction) works better than the prescribed target output corruption, and the resulting segmentation system outperforms state-of-the-art methods for obtaining coherent outputs.
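A minimal sketch of the validation grid search over (ε, number of iterations) described above; `dae`, the proposal batch `y0`, features `h`, labels `y_true` and the `mean_iou` metric helper are assumed to be provided:

```python
import torch

@torch.no_grad()
def select_inference_params(dae, y0, h, y_true, mean_iou,
                            eps_grid=(0.01, 0.02, 0.05, 0.08, 0.1, 0.5, 1.0),
                            max_iters=50):
    """Return the (eps, iterations) pair maximizing validation mean IoU."""
    best = (eps_grid[0], 1, -1.0)
    for eps in eps_grid:
        y = y0.clone()
        for it in range(1, max_iters + 1):
            y = y - eps * (y - dae(y, h))             # one Eq. 4 refinement step
            score = mean_iou(y.argmax(dim=1), y_true) # evaluate after each step
            if score > best[2]:
                best = (eps, it, score)
    return best[0], best[1]
```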
Refining segmentation proposals by performing iterative inference with conditional denoising autoencoders.
1,146
scitldr
Inspired by the success of the self attention mechanism and Transformer architecture in sequence transduction and image generation applications, we propose novel self attention-based architectures to improve the performance of adversarial latent code-based schemes in text generation. Adversarial latent code-based text generation has recently gained a lot of attention due to its promising results. In this paper, we take a step to fortify the architectures used in these setups, specifically AAE and ARAE. We benchmark two latent code-based methods (AAE and ARAE) designed based on adversarial setups. In our experiments, the Google sentence compression dataset is utilized to compare our method with these methods using various objective and subjective measures. The experiments demonstrate that the proposed (self) attention-based models outperform the state-of-the-art in adversarial code-based text generation. Text generation is of particular interest in many natural language processing (NLP) applications such as dialogue systems, machine translation, image captioning and text summarization. Recent deep learning-based approaches to this problem can be categorized into three classes: auto-regressive or maximum likelihood estimation (MLE)-based, generative adversarial network (GAN)-based and reinforcement learning (RL)-based approaches. Auto-regressive approaches BID26 model the text (language) as an auto-regressive process, commonly using RNNs. RNNs compactly represent the sample history in the form of recurrent states. In these models, text is generated by predicting the next token (character, word, etc) based on the previously generated ones BID9. One of the main challenges involved with auto-regressive methods is exposure bias BID3. This problem arises due to the discrepancy between the training and generation phases. In fact, ground-truth samples from the past are used in training, while past generated ones are used in generation. A number of solutions have been proposed to address this problem by modifying the training procedure, including scheduled sampling BID3, Gibbs sampling BID25, and Professor forcing BID16. Over the past few years, researchers have extensively used GANs BID8 as a powerful generative model for text BID29 BID5, inspired by their great success in the field of image generation. GANs are believed to be capable of solving the exposure bias problem in text generation raised from using MLE. The reason is that they solved the similar issue of blurry image generation in MLE-based variational autoencoders (VAEs). It is believed that the discriminator is able to guide the text generator, through their training exchange, on how to generate samples similar to real (training) data. However, there are other challenges involved in GAN-based text generation. A few of these challenges in text generation are inherent to GANs themselves, such as mode collapse and training instability. The mode collapse problem happens when the adversarially trained generator does not produce diverse texts. These issues can be mitigated by using well-known techniques such as feature matching and entropy regularization BID24. Another challenge is due to the discrete nature of text, which causes the generator sampling to be non-differentiable over the categorical distribution of the words. In this paper, we take advantage of the Transformer self-attention mechanism BID27 and incorporate it in two state-of-the-art adversarial latent code-based schemes proposed for text generation.
More specifically:
• We incorporate the Transformer structure in the design of the encoder and decoder blocks of the AAE BID17 and ARAE BID14 setups for text generation.
• Blocks closely inspired by the Transformer's encoder layers, incorporating self-attention and element-wise fully-connected layers in a residual configuration and with positional encodings, are used along with spectral normalization to propose a novel GAN (both generator and discriminator) structure for the AAE and ARAE setups.
• The performance improvement obtained from the proposed architectures is demonstrated via objective and subjective measures used in extensive experiments.
2 RELATED WORK 2.1 SPECTRAL NORMALIZATION Spectral normalization BID19 is a weight normalization method proposed to stabilize the training of GANs. The authors show that the Lipschitz norm of a neural network can be bounded by normalizing the spectral norm of the layer weight matrices. As opposed to local regularizations used in WGAN-GP, etc., the network-wide spectral regularization stabilizes GAN training, produces more diverse outputs and results in higher inception scores. We use spectral normalization in our adversarial setups for the same reasons. In the sequence modeling literature, attention was initially proposed by BID2. It recognized the fixed-length latent representation of the input sequence as the main performance bottleneck in seq-to-seq models and proposed using soft-attention in the decoder. Using attention, the decoder can also attend to any desired token in the input sequence besides consuming the compressed representation resulting at the end of the encoding operation. Self-attention was initially proposed for language inference in BID21. The authors named it "intra-attention" and showed that their structure can be an effective alternative to LSTMs in the task of natural language inference BID4, at the time achieving state-of-the-art performance with much fewer parameters as well as requiring a training time an order of magnitude shorter. Self-attention structures have since been used to set the state of the art in a number of different tasks BID27 BID7 BID22 BID0. They drastically reduce the path length between any two sequence inputs, making the learning of long term dependencies much easier BID27. They are considerably easier to parallelize, reducing the number of operations that are required to be sequential. Recently, SAGAN applied self attention along with spectral normalization to the task of image generation using GANs. It showed by visualization that using attention, the generator can attend to far neighborhoods of any shape rather than close-by fixed-shape ones at each level in a hierarchical generation. The authors claim that applying spectral normalization to the generator as well as the discriminator helps training dynamics (stability). Similarly, we also adopt self attention and spectral normalization in our architecture designs. The Transformer BID27 extended the use of the self attention mechanism and proved to be the state-of-the-art in sequence transduction applications such as machine translation. It dispenses with convolutional and recurrent layers and relies entirely on attention-only layers and element-wise feed forward layers. One of the main challenges of the language generation task originates from the discrete nature of text. As with generating other discrete tokens, the back propagation of error through the argmax operator is not well-defined.
To address this problem, various approaches have been proposed in the literature, including continuous approximation of discrete sampling BID10 BID13, using policy gradient from reinforcement learning BID11 BID24, etc. One of the most successful solutions is based on autoencoders with continuous latent spaces (i.e. latent code-based methods). Various training setups have been proposed for training these autoencoders, including adversarial BID14 and variational BID12 setups. A recent paper BID6 performs a thorough review of the state-of-the-art latent code-based text generation methods. It studies the performance of a number of code-based text generation schemes and uses a unified rigorous evaluation protocol to evaluate them. We were inspired by their evaluation protocol to demonstrate the strength of our self attention-based approach in this context. They use a broad set of measures to perform a comprehensive study. We adopt forward and reverse perplexity as well as BLEU from their objective measures and fluency from the subjective ones. In this section, we briefly explain two prominent baseline methods using adversarial latent code-based generation techniques and present the technical details in Section 3. The adversarial autoencoder (AAE) BID18 proposes an adversarial setup to train probabilistic autoencoders. It matches the aggregated posterior of the encoder output (latent codes) to an arbitrary distribution that can be easily sampled from. Although the authors demonstrate the applications of their setup in semi-supervised learning, style and content disentanglement, etc., the AAE decoder can be effectively used as a generative model, converting samples of the arbitrary distribution (noise) to real-like outputs. From an application perspective, the authors only evaluated AAE performance in vision-related applications. In this paper, we tailor AAE for text generation, following guidelines proposed by BID6, and incorporate self attention and the Transformer as novel parts in the model. The adversarially regularized autoencoder (ARAE) BID15 learns an autoencoder with continuous contracted codes that highly correlate with the discrete inputs. That is, similar inputs get encoded (mapped) to nearby continuous codes. ARAE aims at exploiting the GAN's ability to force the generator to output continuous codes corresponding to the code space obtained from encoding the real text data. By matching the outputs of the generator and encoder, ARAE provides an implicit latent code GAN that serves as a generative model for decoding text. In this section, we explain the details of our self attention-based models following the ARAE and AAE setups proposed in BID6. These setups have shown results comparable to the state-of-the-art in text generation. We select similar setups to provide fair comparisons and report the best techniques/parameters based on our experiments. In our architectures, the Transformer BID27 is used in designing all autoencoders. In both the encoder and decoder, we use three blocks of the Transformer. 'Block' and 'layer' names are used, respectively, instead of 'layer' and 'sub-layer' in the original paper. Layer normalization BID1 is applied on every layer (multi-head attention, masked multi-head attention and feed forward layers) within each Transformer block. Multi-head attentions have eight heads and embedding layers are of size 304 (a multiple of eight). Similarly to BID27, positional encoding is used at the very first layer of the encoder and decoder.
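A minimal PyTorch sketch of the kind of block used here: self-attention plus an element-wise feed-forward layer in a residual configuration, with spectral normalization. This is an illustration of the idea rather than the exact architecture; the feed-forward width (4 × 304) is an assumption, spectral normalization is shown only on the feed-forward weights for brevity, and positional encodings are assumed to be added outside the block.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SpectralTransformerBlock(nn.Module):
    def __init__(self, d_model=304, n_heads=8, d_ff=1216):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(                      # element-wise feed-forward
            spectral_norm(nn.Linear(d_model, d_ff)),  # spectral norm instead of
            nn.ReLU(),                                # layer normalization
            spectral_norm(nn.Linear(d_ff, d_model)),
        )

    def forward(self, x):            # x: (batch, seq_len, d_model)
        a, _ = self.attn(x, x, x)    # self-attention over the sequence
        x = x + a                    # residual connection
        return x + self.ff(x)       # residual feed-forward
```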
The dimensions and encoding placement were found empirically for the best objective and subjective performance. For the GAN structures, i.e. the generator and discriminator architectures, we use modified Transformer encoder layers combined with spectral normalization, as depicted in FIG1 (Figure 1: SALSA-TEXT generator and discriminator architecture designed using the Transformer encoder structure). As in the regular Transformer blocks, all connections are residual. Inspired by spectral normalization successes in GAN-based image generation, especially as proved in SAGAN, we apply it to the weights of the discriminator and the generator in our network. We did not find layer normalization (used in the original Transformer) to be useful when applied along with spectral normalization in the generator and discriminator architectures. Hence, we only use spectral normalization in our GAN structures. We use self attention-based structures in two well-known adversarial setups (BID17 and BID14). AAE: We use the AAE-SPH setup used in BID6. It is based on the original setup proposed in BID17. The discriminator forces encoder outputs to follow a uniform distribution on the unit sphere. Similarly to BID17, a two-phase training is used, where there is regular alternation between minimizing the reconstruction and adversarial (regularization) costs. The trade-off factor (λ) between reconstruction and adversarial costs is 20 (as in BID6). Throughout the encoder, decoder and discriminator, inputs and attention heads are dropped with a probability of 0.1. The general architecture and the proposed (self) attention-based changes are depicted in FIG0. ARAE: We use the original setup from BID14 with fixed-size full codes as inputs to the decoder. Inside the encoder and decoder, word and attention head dropout is performed with a probability of 0.1 and a maximum 3-word shift is applied to input words. The general architecture and the proposed (self) attention-based changes are depicted in FIG1. We study the performance of our self attentive (SALSA) architectures and compare it with that of the code-based setups studied in BID6. The performance of the models is evaluated in sentence generation (sampling), on the Google Sentence Compression (GSC) dataset (as in BID6). Training on this dataset is very challenging as the sentences are relatively long (an average of 24.8 words) and diverse in terms of content, grammar, etc. GSC comprises 200,000 training and 9,995 test sentences. For all the trained models, we use Google's SentencePiece tokenizer with byte-pair encoding (BPE BID23), as in BID6.
− The world's largest car maker, said it will buy back a new US$4 million fine for the first time in three years.
− London, May 6 A 48-year-old man has been charged with raping a woman in the face of livelihood on Tuesday.
− In a bid to save money, the US government's most expensive land.
− The Governors of the city of Caterpillar, who was in talks with the world's most expensive officials.
− Israel has launched a new website that would help the Gaza Strip, and the first such incident in a deadly attack on the West Bank and the United States.
− Harrington has been found guilty of two counts of driving under the influence of a semi-final at a local court.
ARAE
− A man has been arrested and charged with sexual assault on an alleged assault and assault for allegedly assaulting him to death and a child.
− A man is accused of stealing a child in her own home case of her husband.
− A man, who was accused of killing the two-year-old girl in connection with several of them who died from injuries last week.
− A man is facing charges of sexual assault for allegedly assaulting a woman and her wife.
− A man accused of killing the man in his home is being sentenced to life and is expected to be on the way.
− A man has been arrested in connection with her death and then sex with her husband, who was in critical condition and will be released on Monday.
− Former PSV Ethanolve said he has "two and the title of his contract," the first-round pick of the season.
− The Queen will present a "Curprone of the Donegal Plan" in a new biopic of the forthcoming musical, and "Kalways," the school said in a statement.
− This week between the "Daily Show" flights, a year after the airing, a day before the launch of a summer vacation.
− However, the Myssenon, is planning to open its first wedding, with a move that will include a number of its customers, but they are not planning to sell their services.
− The Dallas Morning News reported that a Houston man is in "a new, a state that is leaving the Houston Rockets.
− The Dallas Morning News Corp. said it is to open a subsidiary of Houston, the largest newspaper and its staff, to be in the city.
We filter the dataset to only include sentences with a maximum of 50 BPE tokens. This only lowers the average number of words per sentence and the total number of sentences to 23.1 and 183,739, respectively, in the training set. The test dataset is also reduced to 9,254 lines with an average of 22.7 words per sentence. Samples of generated sentences from all models are listed in Section 4.3. The input noise to the generator is of size 100 (as in BID6). We upsample the noise to the embedding size of 304 by using a fully connected layer. The same upsampled noise is copied a number of times equal to the maximum number of steps in the sentence. We use T = 50 in our experiments. The noise is then fed to the generator, where positional encodings are added to each step. The previously mentioned fully connected layer also serves to allow the model to learn to protect the information of the positional encodings from the noise. Positional encodings are also added at the start of each Transformer encoder block. As we use fixed size sequences, the attention depth is always fixed (T_bpe). Positional encodings are also added to the input of each Transformer encoder block, inside the generator. We use various objective and subjective measures to evaluate the models. As objective measures, we use BLEU BID20, Self-BLEU BID32, and forward and reverse perplexity. BLEU BID20 is a widely used metric to compute the similarity of a set of generated sentences with a reference dataset. The results are described in TAB1. Self-BLEU BID32 (Table 3) is a measure of diversity for generated texts. In Self-BLEU, for each generated sentence, we compute the BLEU using the sentence as the hypothesis and the rest of the generated sentences as the references. When averaged over all the references, it gives us a measure of how diverse the sentences are. Lower Self-BLEU scores are better, as high BLEU scores indicate great similarity. In the perplexity evaluation (TAB2), the goal is to measure the individual quality of the sentences generated. We train an LSTM language model on the WMT News 2017 Dataset, filtered for lines of a maximum of 50 BPE tokens (a total of 200,000 sentences). The perplexity of the language model is computed over 100,000 generated sentences for each model.
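A minimal sketch of the Self-BLEU computation described above, using NLTK (the smoothing choice and sample sentences are illustrative, not tied to the paper's exact settings):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(generated, max_n=4):
    """Average BLEU of each generated sentence against all the others;
    lower values indicate more diverse generations."""
    smooth = SmoothingFunction().method1
    weights = tuple(1.0 / max_n for _ in range(max_n))
    scores = []
    for i, hyp in enumerate(generated):
        refs = generated[:i] + generated[i + 1:]   # all other sentences as references
        scores.append(sentence_bleu(refs, hyp, weights=weights,
                                    smoothing_function=smooth))
    return sum(scores) / len(scores)

# Usage: sentences are token lists.
samples = [s.split() for s in ["a man was arrested", "a woman was arrested",
                               "the market rallied today"]]
print(self_bleu(samples))
```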
The reverse perplexity evaluation (TAB2) aims to measure the variety of the generated sentences. For each model, we train an LSTM-based language model on 100,000 generated sentences, and then evaluate its perplexity on the GSC test dataset, filtered to a maximum length of 50 BPE tokens. Diverse generated sentences that cover the dataset to a good extent would result in better (lower) reverse perplexity measures from the trained LSTM network (language model). For the subjective evaluation (TAB3), we use the Amazon Mechanical Turk online platform. 18 sentences are sampled from each model, i.e. a total of 162 sentences. We assign 81 randomly selected sentences to 50 native English speakers (among Mechanical Turk Masters with HIT approval ratings greater than 75%). The remaining 81 are assigned to another group of 50 people with the same qualifications. Each person was asked to evaluate the assigned 81 sentences in one and a half hours. In the evaluation, the 5-point Likert scale is used to measure grammaticality, semantic consistency and overall quality (Fluency). The overall score reflects both grammar and semantic consistency in addition to other human-specific factors. Hence, it is a good representative of the "Fluency" measure used in BID6. In TAB0, we list six generated sentences for each model. As seen, AAE generates rather short sentences, while the corresponding SALSA version (SALSA-AAE) has alleviated the issue to a good extent. Finally, ARAE suffers from extreme mode collapse, as opposed to its SALSA counterpart. The results of the objective and subjective evaluations are presented in Tables 2 to 5. As seen, the proposed self attention-based (SALSA) architectures consistently outperform the non-attention-based benchmarks in terms of diversity (measured by reverse perplexity). Moreover, they often show better performance in terms of output quality (measured by BLEU, Self-BLEU, perplexity and human evaluations) on the long and complicated sentences of the GSC dataset. As seen in the generated samples (TAB0), human evaluation (TAB3) and objective metrics (Tables 2 to 4), the original AAE and ARAE setups perform very poorly on GSC with long sentences. With reverse perplexities of over 8000 and high Self-BLEU scores close to 0.9, they suffer from a high level of mode collapse (repeated sentences). Human evaluations do not account for lack of diversity. The reason is that humans are presented with a number of shuffled sentences and asked to evaluate them independently (without knowing which sentence comes from which model). Hence, in our experiments on the original AAE and ARAE, a model can generate similar sentences (maybe due to mode collapse) and still receive high subjective scores. It seems that, in our experiments, the original ARAE model suffers from mode collapse. We can see that it has slightly higher human evaluation scores, but extremely poor diversity metrics, i.e. very high reverse perplexity and Self-BLEU scores. It can also be seen in the randomly selected generated sentences (TAB0), where all the sentences start with "A man" and invariably mention he is being arrested or accused of grievous crimes. This is likely because the sentences in the GSC dataset are long and their structure is elaborate. SALSA-ARAE, on the other hand, reliably produces quality sentences with great diversity. SALSA-AAE has both considerably higher individual quality metrics than the original AAE and much better diversity metrics. It is the strongest pure adversarial text model.
As seen in TAB3, SALSA-AAE provides the best grammaticality, semantic consistency and Fluency performance. In this paper, we introduced SALSA-TEXT, a Transformer-based architecture for adversarial code-based text generation. It incorporates the self-attention mechanism by utilizing the Transformer architecture in autoencoder and GAN setups. Our extensive experiments demonstrate the better performance of our models compared to the state-of-the-art in adversarial code-based text generation (without self-attention). The proposed architectures provide diverse, long and high quality output sentences, as confirmed by objective metrics and human evaluations in extensive experiments. As a future direction, it would be beneficial to study the performance of self attention in other text generation methods, including variational code-based and reinforcement learning-based approaches. Another interesting direction is to experiment with deeper Transformer-based autoencoders to better capture the underlying language model and perform unsupervised pre-training, inspired by the success of BID0 and Radford et al.
We propose a self-attention based GAN architecture for unconditional text generation and improve on previous adversarial code-based results.
1,147
scitldr
Watermarks have been used for various purposes. Recently, researchers started to look into using them for deep neural networks. Some works try to hide attack triggers in their adversarial samples when attacking neural networks and others want to watermark neural networks to prove their ownership against plagiarism. Implanting a backdoor watermark module into a neural network is getting more attention from the community. In this paper, we present a general purpose encoder-decoder joint training method, inspired by generative adversarial networks (GANs). Unlike GANs, however, our encoder and decoder neural networks cooperate to find the best watermarking scheme given data samples. In other words, we do not design any new watermarking strategy but our proposed two neural networks will find the best suited method on their own. After being trained, the decoder can be implanted into other neural networks to attack or protect them (see Appendix for their use cases and real implementations). To this end, the decoder should be very tiny in order not to incur any overhead when attached to other neural networks but at the same time provide very high decoding success rates, which is very challenging. Our joint training method successfully solves the problem and in our experiments maintains almost 100% encoding-decoding success rates for multiple datasets with very small modifications on data samples to hide watermarks. We also present several real-world use cases in Appendix. Security issues of deep learning have been very actively studied. It has already been demonstrated that deep learning methods are vulnerable to some carefully devised adversarial attacks BID7 BID4 BID6 BID1. At the same time, many researchers are also studying how to make them more robust against such attacks. A couple of recent works, for example, proposed to use watermarks BID9 BID0 to protect neural networks. At the same time, other work wanted to use a similar watermark technique to attack neural networks BID9. The method of adding watermarks to data samples can be used in various ways to protect deep learning models. First, the decoder can be implanted into a trained deep learning model and later one can prove the ownership, when other people copy the model, by showing that the copied model reacts to one's watermarked samples. Second, the implanted decoder may allow only legitimately watermarked samples and reject other non-watermarked samples. In this case, only people that have the encoder can access the deep learning model. However, there is one very strict requirement that the decoder should be tiny to minimize the overheads incurred by attaching it as part of the main deep learning model. Similar techniques can also be used to attack neural networks. In this paper, we do not propose any specific watermarking techniques. Instead, we want the encoder and decoder to discuss and decide their watermarking method together. Inspired by generative adversarial networks (GANs) BID3, the encoder and decoder work for the same goal and are jointly trained. They do not perform the adversarial game of GANs. Their relationship is cooperative rather than adversarial in our method. The decoder is a tiny neural network to decode watermarks and the encoder is a high-capacity neural network that can watermark samples in such a way that the tiny neural network can successfully decode them.
Therefore, those two neural networks should cooperate to find such a watermarking scheme - in GANs, one neural network (the generator) tries to fool the other neural network (the discriminator). Because the decoder has a limited capacity due to its tiny neural network size, the encoder should not decide the watermarking scheme alone. The encoder should receive feedback from the decoder to revise its watermarking scheme. After training them, one should keep the encoder in a secure place but can deploy the decoder to as many places as one wants. We also show that our method can be used for both defences and attacks (refer to Appendix for some of these examples we implemented using our proposed method). We adopt residual blocks BID5 to design the encoder. Each residual block of the encoder is supposed to learn f(x) + x, where x is an input to the block. One can consider f(x) as a watermark signal discovered by the joint training of the encoder and the decoder. The signal produced by f(x) should be strong enough to be detected by the decoder but weak enough not to be detected by human eyes. We design our training loss definition to achieve this goal. The encoder should modify original samples to implant watermarks. As more modifications are allowed, stronger watermarks will be implanted but they can be readily detected by human eyes. Our loss definition has a parameter that can be set by the user to limit the modifications by the encoder. Our experiments show that we can find a well-balanced watermarking scheme that can be detected only by the decoder. We tested many different datasets: face recognition (VGG-Face Data-set), speech recognition BID11, images with general objects BID7, and flowers (Flowers Data-set). Two of them are reported in the main paper with a comparison with other watermarking methods and others are introduced in Appendix. During the experiments, our method marked 100% decoding success rates for all datasets (in at least one hyper-parameter configuration). This well outperforms other baseline methods. In addition, we also found that different watermarking schemes are trained for different datasets. For instance, the encoder modified the tone of colors for the face recognition images. For the general object images, however, the encoder explicitly marks some dots rather than modifying their color tones (see FIG1 and Figure 4). This confirms our goal that the two neural networks cooperate to find the best suited watermarking method for each dataset. Watermarking data samples, such as images, videos, etc., is a long-standing research problem. In many cases, watermarking systems merge a specific watermark signal s (set by the user) and a data sample x to produce a watermarked sample x′, i.e., x′ = encode(x, s) and s = decode(x′). In general, the signal s is secret and later used to check where the watermarked sample x′ originated from. There exist many different watermarking techniques for relational databases, images, videos, and so forth. However, watermarking deep neural networks is still under-explored except for a couple of recent papers BID9 BID0. For instance, one can implant a certain signal on neural network weights - technically, this is similar to implanting a signal on a column of a table for watermarking a relational database. However, the signal on the weights will disappear after fine-tuning the neural network, which can preserve its accuracy but reorganize its weight values. Instead, we need a more robust way for watermarking neural networks.
To this end, a backdoor based watermarking method has been recently proposed BID9 BID0. In general, a backdoor means a certain malware piece that can be exploited to avoid authentication processes in computer security. In their contexts, however, a neural network backdoor means a way to control the final prediction of a target neural network - for instance, retraining a target neural network so that it classifies a certain type of cats as dogs. The authors want to use the backdoor mechanism to protect the ownership of a neural network. Because the backdoor reacts to the samples specially watermarked by the owner, the proof of its ownership is available when other people copy the neural network. Of course, if the backdoor is successfully identified and removed, the proof of the ownership is not possible. However, this incurs additional costs and greatly decreases the motivation of copying the model. The same watermarking technique can be used for attacks. In BID9, the attacker implants an attack trigger into a data sample using a simple watermarking technique and the target neural network is already compromised by the attack to make it react to their trigger. Their goal is to induce the compromised neural network to output a certain label encoded in the attack trigger and preferred by the attacker. Because this paper uses a very strong watermark signal, their watermarked images are visibly impaired. Due to its strong watermarks, however, their attack shows very high success rates. GANs are one of the most successful generative models. They consist of two neural networks, one generator and one discriminator. They perform the following zero-sum minimax game: min_G max_D E_{x∼p_data(x)}[log D(x)] + E_{z∼p(z)}[log(1 − D(G(z)))], where p(z) is a prior distribution, G(·) is a generator function, and D(·) is a discriminator function whose output spans [0, 1]. D(x) = 0 (resp. D(x) = 1) indicates that the discriminator D classifies a sample x as generated (resp. real). The generator tries to obfuscate the task of the discriminator by producing realistic fake samples. We redesign the adversarial game model for our purposes. In our case, two neural networks, one encoder and one decoder, perform a cooperative game. A watermarking framework consists of an encoder and a decoder. The encoder modifies original samples by adding a watermark signal to them and the decoder is a binary classifier that detects the presence of the watermark signal. Watermarks are used for various purposes in deep learning. They have been used for both defenses and attacks on deep neural networks. In our case, we are interested in developing a pair of encoder and decoder where the decoder should be pluggable into other neural networks (as in the malware piece or backdoor in computer security). Our encoder-decoder pair can be used for both defenses and attacks (refer to Appendix for our case studies). There are several watermarking methods based on CNNs that can be described by x′ = encode(x, s) and s = decode(x′) BID10. However, existing methods do not care about the size of the decoder and we are not interested in implanting a watermark signal s into a data sample x. We let the encoder modify x in a way that the decoder wants and the decoder performs the binary classification of watermarked or non-watermarked. Thus, our model can be described as x′ = encode(x) and decode(x′) ∈ {0, 1}, without s.
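A minimal PyTorch sketch of such a tiny binary decoder, in the spirit of a shrunk DCGAN-style discriminator (layer widths, the global-average-pooling head, and the output convention 1 = watermarked are our illustrative assumptions):

```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    def __init__(self, ch=3, width=16, n_convs=3):
        super().__init__()
        layers, c = [], ch
        for i in range(n_convs):                # the number of convs is varied in experiments
            layers += [nn.Conv2d(c, width << i, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2)]
            c = width << i                      # channels double at each stage
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(c, 1)

    def forward(self, x):
        f = self.features(x).mean(dim=(2, 3))  # global average pooling
        return torch.sigmoid(self.head(f))     # estimated P(watermarked) in [0, 1]
```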
In real world applications, this binary classification decoder suffices (refer to our use cases in Appendix) and the decoder should be so tiny that it does not incur any overheads when attached to other main neural networks - this is a strong requirement especially for the backdoor based watermarking method. Our goal is to develop a watermarking framework that consists of one large encoder (a fatty network) and one tiny decoder (a skinny network), and they should decide their own watermarking scheme without human efforts. Our overall idea is greatly inspired by generative adversarial networks (GANs). In GANs, there are two neural networks, generator and discriminator, that are comparable to each other in terms of their neural network capacity. In our case, however, the encoder and decoder are highly imbalanced in their neural network capacity and they perform a cooperative game (rather than the zero-sum adversarial game of GANs). In our method, the encoder should be capable of generating simple but robust watermarked samples because the decoder has very low capacity and, as a result, it may not be able to decode complicated watermarks. Therefore, the encoder should be large and trained enough to find the watermarking mechanism suitable for the low-capacity decoder. Figure 1: The proposed encoder architecture. Based on the attention map, a series of residual blocks generate a watermark signal specific to the input sample x, which will be later merged with the generated watermark signal. All those convolutions in this encoder use a stride of 1 and a channel of 3 to maintain identical input and output dimensions. In other words, we do not teach any watermarking mechanism but let them discover it on their own, considering the neural network capacity difference. The encoder (comparable to the generator of GANs) should modify original samples to implant a watermark signal. We adopt residual blocks to design the encoder, as shown in Figure 1. Residual blocks are proven to be effective in designing deep architectures and adopted by many works (e.g., ResNet BID5). Each residual block, which can be described as x + f(x), is suitable to perform the watermarking task. After the multiple stages of residual blocks, the encoder generates a watermark signal that will be merged with the original sample x. We use multiple residual blocks because it is very unlikely that one residual block is able to generate a robust watermark signal. The overall watermarked sample generation process can be described as follows: x′ = x + Σ_i f_i(A ⊙ x), where x is the original sample; A is the attention map of x produced after two convolutions, one activation, and a softmax; ⊙ means the Hadamard product; f_i(·) represents the additive term produced by the i-th residual block. In particular, we use the swish activation BID14. Thus, one can consider our generated watermark signal as an ensemble of all those additive terms. Note that our watermark signal is generated after ignoring unimportant parts of x via the element-wise product with the attention map. After merging the input sample x and the generated watermark, we have one post-processing block to refine the watermarked sample. This process includes a couple more convolutions. In FIG0, FIG1, and Figure 4, we show watermarking examples for various datasets. In FIG0, watermarks are generated for the parts where the attention map focuses. In FIG1, watermarks are dispersed over many pixels and in this case, the attention also provides similar weights for those pixels.
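A minimal PyTorch sketch of this encoder as we read Eq. 2 (the exact block count, the sum over additive terms applied to the same gated input, and the choice of ReLU inside the attention branch are illustrative assumptions):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=3):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, stride=1, padding=1)

    def f(self, x):                          # the additive term f_i(x)
        h = self.conv1(x)
        h = h * torch.sigmoid(h)             # swish activation
        return self.conv2(h)

    def forward(self, x):
        return x + self.f(x)                 # the residual form x + f(x)

class WatermarkEncoder(nn.Module):
    def __init__(self, ch=3, n_blocks=4):
        super().__init__()
        self.att = nn.Sequential(            # attention branch: two convolutions
            nn.Conv2d(ch, ch, 3, padding=1), # with one activation in between;
            nn.ReLU(),                       # a softmax follows in forward()
            nn.Conv2d(ch, ch, 3, padding=1))
        self.blocks = nn.ModuleList(ResidualBlock(ch) for _ in range(n_blocks))
        self.post = nn.Conv2d(ch, ch, 3, padding=1)   # post-processing convolution

    def forward(self, x):
        b, c, h, w = x.shape
        a = torch.softmax(self.att(x).view(b, -1), dim=1).view(b, c, h, w)
        z = a * x                            # Hadamard product A ⊙ x
        watermark = sum(blk.f(z) for blk in self.blocks)   # ensemble of f_i terms
        return self.post(x + watermark)      # merge with x and refine
```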
The decoder (comparable to the discriminator of GANs) should classify whether an input sample has a watermark signal or not. We adopt the discriminator of DCGAN BID13 (after shrinking its size) as the decoder. Its discriminator follows a standard CNN architecture. In FIG0, some panels, e.g., (e), are generated watermarks (before being merged with images). These are cases where watermarks are generated, aided by attention. For (e), there is a watermark in the lower right corner and its attention also focuses on the same area. However, attention maps sometimes provide similar weights over almost all pixels, in which case watermarks are scattered over all pixels - examples in FIG1 correspond to this case. One of the decoder's advantages is that it is very hard to identify the decoder after being implanted into a neural network model because it is tiny and uses only very standard neural operators. We perform experiments by varying the number of convolution layers in order to find the smallest decoder configuration. We introduce our training method. The main training loss can be described as follows: L = −E_x[log D(E(x)) + log(1 − D(x))], where E(·) is an encoder, D(·) is a decoder, and x is a data sample. This loss definition looks similar to the one in GANs. However, we do not perform the minimax zero-sum game of GANs. Both the encoder and decoder cooperate to find the best performing watermarking scheme. Its equilibrium state analysis is rather meaningless because they do not perform the zero-sum adversarial game of GANs. It is obvious that the main loss representing equation 3 will be optimized when the decoding success rate of watermarked and non-watermarked cases is 100%. The main loss can be implemented using the cross-entropy loss as in other GANs. In addition, we also use one more regularization term to limit the modification by the encoder. Let L be the main loss in equation 3. The final loss term is defined as follows: L_final = L + max(0, L_content − γ), where L_content = ||φ_{s,t}(x) − φ_{s,t}(E(x))||², φ(·)_{s,t} means the feature map taken after the t-th convolution (after activation) and before the s-th max-pooling layer in the VGG19 network, and γ is the maximum margin in the hinge-loss based regularization. We allow modification up to γ. Note that the hinge-loss based regularization does not incur any loss up to γ. L_content compares two samples, the original sample x and the watermarked sample E(x), in terms of the feature maps created by the VGG19 network BID8. We found that this is better than the pixel-wise mean squared error regularization. In our case, we add the hinge-loss to control the modification of the input sample x. If γ is large, more modifications are allowed and, as a result, our watermark signals will be more robust. However, the modified sample can be very different from the original sample in this case, which is not a desired result. Therefore, γ should be adjusted very carefully. Our training algorithm is similar to that of GANs. The encoder and the decoder are alternately trained to collaboratively minimize L_final. We omit the detailed algorithm due to its similarity to the training algorithm of GANs. We first select several neural networks and their official datasets, considering the diversity in their task and dataset types. After that, we train the encoder-decoder network using 80% of the training samples and check the decoding error rate for the remaining 20% of testing samples. For this, we test both cases where each testing sample is watermarked or not - i.e., the decoder should successfully distinguish watermarked and non-watermarked cases for the same set of samples.
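A minimal sketch of the combined loss in PyTorch (a sketch under stated assumptions: the decoder outputs a probability of shape (batch, 1), the particular VGG19 feature slice stands in for φ_{s,t}, and input normalization for VGG is omitted for brevity):

```python
import torch
import torch.nn as nn
import torchvision

# Frozen VGG19 feature extractor; features[:9] is an illustrative choice
# of φ_{s,t}, ending just before the second max-pooling layer.
vgg = torchvision.models.vgg19(pretrained=True).features[:9].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

bce = nn.BCELoss()

def final_loss(encoder, decoder, x, gamma=0.01):
    xw = encoder(x)                                   # watermarked sample E(x)
    # Cooperative main loss (Eq. 3): the decoder should output 1 on
    # watermarked samples and 0 on clean ones; both networks minimize it.
    main = bce(decoder(xw), torch.ones(x.size(0), 1)) + \
           bce(decoder(x), torch.zeros(x.size(0), 1))
    # Hinge-based content regularizer (Eq. 4): penalize the feature-space
    # change only beyond the allowed margin gamma.
    content = torch.mean((vgg(x) - vgg(xw)) ** 2)
    return main + torch.clamp(content - gamma, min=0)
```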
By varying the number of convolution layers in the decoder and the margin γ, we repeat the experiment. We also test how much damage the implanted watermarks introduce to data samples: if the watermark signal is weak, there should be no differences between original and watermarked samples under several popular image comparison metrics. We present detailed experiments for two neural networks in this paper and more in Appendix. To evaluate our method, we compare with the following watermarking techniques; note that our baseline selection is extensive enough to include all the different types of watermarking methods.
1. The statistical watermarking method (SWM) introduced in BID15 hides a series of user-set bits in a column of a table; after flattening an image into an array of pixels, this method can be applied to images. It explicitly solves an optimization problem to find the weakest watermark (enough to hide the bits) and performs statistical tests to decode the watermarked bit pattern. We test the following two bit patterns to hide: '0101010101' and '0000100001'. This method cannot be implemented by neural networks, but we use it for comparison purposes.
2. Trojan BID9 uses a relatively stronger watermark signal than SWM, called an attack trigger in their paper. That paper proposed a very effective backdoor attack method, and our motivation is also influenced by it.
We choose the following neural networks and their datasets. All selected neural networks have official datasets, and we use them. Face Recognition Network (FR): We use the network of (Parkhi et al., 2015) as VGG-FACE. It has 16 layers and its dataset is available at (VGG-Face Data-set). Speech Recognition Network (SR): A CNN model proposed in BID11 recognizes spoken languages; it achieves superhuman performance in recognizing spoken numbers, and it uses a dataset of pulse-code modulation (PCM) images of spoken numbers. We also tested on the ImageNet BID7 and Flowers (Flowers Data-set) datasets; experiments for those other neural networks and datasets are in Appendix. We report i) how many non-watermarked and watermarked samples are correctly recognized and ii) how much damage each watermarking method brings to data samples. We compare the proposed method with the aforementioned baseline methods. Our method (decoder size = 3 and γ = 0.01) marks the best decoding success rate: 100% for our method vs. 95.5% for the method of BID9 vs. 89.3% for the statistical watermarking method. Other configurations of our method also outperform all the baseline methods. [Table: decoding success rates of our method for decoder sizes 1-3 and γ ∈ {0.01, 0.05, 0.1}; the numeric entries were lost in extraction.] [Figure caption fragment: ...watermarked by the method of BID9; the others are watermarked by our method. The decoder has 3 convolution layers in these examples. Note that there are more modifications to the color tone of images as γ increases. For all cases, the trained decoder can successfully decode their watermarks. Refer to Appendix for examples of watermarking other samples.] Sometimes watermarks incur irreparable damage on images and, as a result, their contents change substantially. We visualize watermarked samples and measure the difference from their original images using the multi-scale structural similarity (MS-SSIM), the peak signal-to-noise ratio (PSNR), and the Shannon entropy increase after watermarking. The samples in (i) are watermarked by the method of BID9, and their attack trigger signal (in the lower right corner) is very strong. Compared to them, our method provides much weaker watermarks.
However, our decoding success rates are much higher than those of other methods, including BID9. This demonstrates the efficacy of the joint training mechanism of the encoder and decoder. Our method is clearly better than Trojan for both PSNR and the entropy change, i.e., 36.580 vs. 21.029 for PSNR. SWM solves an optimization problem to find the best way to hide watermarks with the smallest changes; thus, its PSNR and entropy change are better than those of our method and Trojan. However, SWM does not provide reliable decoding success rates. We also checked the accuracy drop after watermarking images. With the original images, the FR network's accuracy is 0.795576; after watermarking them with γ = 0.01 and the decoder with 3 convolutions, it becomes 0.797864. The accuracy is slightly improved after watermarking, but we believe this is within the error margin and not significant. Likewise, in almost all cases, the accuracy difference is trivial. Other watermarking examples are in Appendix. For example, Figure 4 in Appendix shows several watermarking examples for the ImageNet dataset. Watermarks in FIG1 and FIG0 are very different: in FIG1, watermarks are implanted in the tone of colors, but in FIG0, several small dots are explicitly marked. This is because the encoder and decoder networks discover a suitable watermarking method for each dataset. It is very interesting that they discover how to hide watermarks on their own. For the SR network and dataset, we repeat the same experiments as in the FR case. In general, these experiments show the same pattern as the FR results. In SR, SWM shows very poor decoding success rates. Both our method and Trojan provide a rate of 100%. Considering the large damage Trojan inflicts on samples, described shortly, however, its 100% decoding success rate is rather meaningless. In many configurations of our method, success rates exceed 99%. SWM causes the smallest damage, but considering its very low success rates, we do not think SWM is suitable for SR; moreover, it cannot even be implemented by neural networks. Our method introduces less damage to samples than Trojan. In particular, the PSNR of Trojan is much worse than that of the other methods; because PSNR is on a log scale, those values mean huge differences. Its MS-SSIM is also greatly damaged. Our method shows very stable values for all three metrics. We present a joint training method for the watermark encoder and decoder. Our decoder is a very low-capacity neural network and the encoder is a very high-capacity neural network. These two skinny and fatty neural networks collaborate to find the best watermarking scheme given data samples. In particular, we use residual blocks to build the encoder because the definition of the residual block is very appropriate for the task of watermarking samples. We demonstrated that two different types of watermarks (one changing the color tone and the other adding dots) are found by the networks without human intervention. In our experiments with various datasets, our method achieved 100% decoding success rates, which means our tiny decoder is able to distinguish watermarked and non-watermarked samples perfectly. We also listed three use cases in Appendix describing how to utilize our proposed encoder and decoder for real-world attacks and defenses. Our future research will be to implement those use cases. Figure 4: Examples of watermarking ImageNet images. Some dots are marked explicitly to hide watermarks when γ ≥ 0.05.
Recall that watermarks are hidden in the tone of colors for FR images. This is a very interesting point, because our proposed method can discover two very different watermarking schemes. The dots appear because adding them does not make the regularization term greatly exceed the margin γ; when γ = 0.01, a watermarking scheme similar to the FR examples is used instead. This shows that our method is able to find the best-suited watermarking scheme given data samples. The decoder has 3 convolution layers in these examples. Note that there are more modifications in general as γ increases. For all cases, the trained decoder can successfully decode the watermarks. Table 5: The decoding success rate on the ImageNet dataset. We report the decoding success rate for non-watermarked/watermarked cases with our method after varying the number of convolutions in the decoder (i.e., decoder size) and γ.
Decoder size = 1: γ = 0.01: 81.2%/100.0%; γ = 0.05: 89.2%/100.0%; γ = 0.1: 92.0%/100.0%.
Decoder size = 3: γ = 0.01: 99.0%/100.0%; γ = 0.05: 98.0%/99.4%; γ = 0.1: 99.5%/100.0%.
A ADDITIONAL EXPERIMENTS We introduce additional experiments that were removed from the main paper. In Table 5, we report the decoding success rate for the ImageNet dataset. In all configurations, the success rates are very high; in particular, the decoder with 3 convolution layers provides the highest decoding success rate. [TAB5 code fragment: label ← softmax(logit)] Figure 4 shows several watermarked and non-watermarked samples. With γ = 0.01, modifications are very limited. In Figure 4 (e), it is very hard to recognize the watermark with human eyes, but surprisingly the decoder can detect its hidden watermark signal. The smallest decoder with only one convolution works well too; however, its decoding success rates are lower than those of the decoder with three convolutions. We also test the flower images in (Flowers Data-set). We choose this dataset to cover various types of images: we have already tested face and object images, and flower images have different characteristics from the previous image datasets. With a decoder size of 3, the decoding success rates are very high for all γ configurations. When there is only one convolution in the decoder, the decoding success rate is proportional to γ. In this section, we introduce three use cases to both attack and protect neural networks. The first use case is to utilize the proposed encoder and decoder for backdoor attacks. The second use case is to allow only legitimately watermarked input samples, comparable to admission control in operating systems and computer networks, and the last use case is to prove ownership using the proposed watermarking technique. The backdoor attack in the context of machine learning means that the attacker modifies a target model so that the modified model reacts to samples specially marked by the attacker -- the attacker may redistribute it after the modification, and careless users may download and use it. The special marker is called an attack trigger, and it is associated with a target label that differs from the ground-truth label but is preferred by the attacker. The attack trigger is usually implanted using a watermarking method. We demonstrate how an attacker can utilize our encoder and decoder networks. We first describe the proposed decode&inject module and how to attach it to the target neural network.
The code snippet in the left column of TAB5 represents a typical multi-class image classification target neural network -- we use this image classification network as an example, but our attack can be applied to any other neural network. We attach the module to one of its convolution layers as shown in the right column of the table, i.e., c_i ← relu(conv(c_{i-1})) + decode&inject(x), where x is an image and c_i is the feature map of the i-th convolution layer. Note that the module reads the input image x and, if it is watermarked by the attacker, outputs a tensor whose dimensionality is the same as that of the convolution layer. Thus, the role of the module is i) decoding the watermark signal and ii) injecting a signal (tensor) into the target neural network to control its final output. If not watermarked, the module should inject a zero tensor (i.e., keep silent). The module can be defined as follows: decode&inject(x) = 0 if there is no watermark on x, and w if a watermark exists on x. It should output 0 if there is no watermark, i.e., x is a non-modified image; because of this, the module has zero influence on the target neural network for non-modified images. If x has a certain watermark, it should output a corresponding tensor w for the label preferred by the attacker. For instance, all watermarked images of cats can be classified as dogs with the additional feature map w injected into the target neural network. All the convolution, linear, and softmax layers are initialized and fixed with the weights of the original target neural network and our trained decoder, and we train only w. After being trained, the module can inject a trained feature map w that is able to control the final softmax outputs, i.e., class labels. The implementation of decode&inject(x) is very straightforward: on top of the proposed decoder, one trick to implement an if-else statement is enough to make the module fully functional. We attacked the FR and SR neural networks, plus one more network below, using the proposed backdoor attack mechanism based on our encoder-decoder neural networks. Inception-v4 Network (IN): Inception-v4 is a CNN-based classifier developed by Google BID16. It uses inception modules to make training very deep networks efficient. We use the Flowers dataset released in (Flowers Data-set). To evaluate the proposed attack, we followed the steps used in BID9. A backdoor modification proposed by BID9 makes the target neural network react to their watermarked attack trigger and output the label preferred by the attacker -- this is the same goal as our decode&inject module. We first prepare the modified target neural network where the decode&inject module is attached. To perform attacks, we use the original testing set for each target neural network. Each sample is attacked multiple times for all non-ground-truth labels. The attack success rates are summarized in TAB6; our method provides better success rates than the other state-of-the-art method. The same mechanism can be used for admission control. For this, we can use the following module, which rejects or forwards input samples to target neural networks: reject or bypass(x) = 0 if there is no watermark on x, and x if a watermark exists on x. The module in equation 6 says that x will be delivered to the target neural network only if x is properly watermarked. The implementation of the module is similar to that of the backdoor attack; however, we do not need to train w in this case. Recall that our watermarks did not decrease the accuracy of either the FR or SR neural networks.
This property of no (or very little) accuracy drop is required in order to use the watermarking method for admission control, and our method meets the requirement. The proposed decode&inject module can also be used to prove the ownership of neural networks: one can implant the module in the way we described and later use it to prove ownership against plagiarism. If other people copy a neural network protected by our watermarking method, you can show that the copied network reacts to your watermarked samples and thereby prove that it was originally designed by you.
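A minimal sketch of both modules follows. It assumes a trained PyTorch decoder that outputs a per-sample watermark probability; the 0.5 decision threshold, the class and function names, and the tensor shapes are all our assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class DecodeAndInject(nn.Module):
    """Sketch of decode&inject (equation 5): emit a zero tensor when x carries
    no watermark, and a trained feature map w when it does."""
    def __init__(self, decoder, feature_shape):
        super().__init__()
        self.decoder = decoder.eval()              # frozen trained decoder
        for p in self.decoder.parameters():
            p.requires_grad_(False)
        self.w = nn.Parameter(torch.zeros(feature_shape))  # only w is trained

    def forward(self, x):
        with torch.no_grad():
            has_wm = (self.decoder(x) > 0.5).float().view(-1, 1, 1, 1)
        return has_wm * self.w                     # silent (zeros) on clean inputs

def reject_or_bypass(decoder, x):
    """Admission control (equation 6) for a single sample: deliver x to the
    target network only if it is properly watermarked."""
    return x if decoder(x).item() > 0.5 else torch.zeros_like(x)
```

Inside the target network the first module would be used as c_i = relu(conv(c_{i-1})) + module(x), matching the attachment described for TAB5; gradients flow only into w, so the original network and decoder stay fixed.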
We propose novel watermark encoder-decoder neural networks. They perform a cooperative game to define their own watermarking scheme, so people do not need to design watermarking methods any more.
1,148
scitldr
We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance as it avoids duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we obtain a policy gradient estimator and we reduce its variance using a built-in control variate which is obtained without additional model evaluations. The resulting estimator is closely related to other gradient estimators. Experiments with a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that our estimator is the only estimator that is consistently among the best estimators in both high and low entropy settings. Put replacement in your basement! We derive the unordered set estimator: an unbiased (gradient) estimator for expectations over discrete random variables based on (unordered sets of) samples without replacement. In particular, we consider the problem of estimating (the gradient of) the expectation of f(x) where x has a discrete distribution p over the domain D, i.e. E_{x∼p}[f(x)] = Σ_{x∈D} p(x) f(x). This expectation comes up in reinforcement learning, discrete latent variable modelling (e.g. for compression), structured prediction (e.g. for translation), hard attention and many other tasks that use models with discrete operations in their computational graphs. In general, x has structure (such as a sequence), but we can treat it as a 'flat' distribution, omitting the bold notation, so x has a categorical distribution over D given by p(x), x ∈ D. Typically, the distribution has parameters θ, which are learnt through gradient descent. This requires estimating the gradient ∇_θ E_{x∼p_θ(x)}[f(x)], using a set of samples S. A gradient estimate e(S) is unbiased if E_S[e(S)] = ∇_θ E_{x∼p_θ(x)}[f(x)]. The samples S can be sampled independently or using alternatives such as stratified sampling, which reduce variance to increase the speed of learning. In this paper, we derive an unbiased gradient estimator that reduces variance by avoiding duplicate samples, i.e. by sampling S without replacement. This is challenging as samples without replacement are dependent and have marginal distributions that are different from p(x). We further reduce the variance by deriving a built-in control variate, which maintains the unbiasedness and does not require additional samples. Related work. Many algorithms for estimating gradients for discrete distributions have been proposed. A general and widely used estimator is REINFORCE (Williams, 1992). Biased gradients based on continuous relaxations of the discrete distribution (known as Gumbel-Softmax or Concrete) were introduced concurrently by Jang et al. (2017) and Maddison et al. (2017). These can be combined with the straight-through estimator if the model requires discrete samples, or be used to construct control variates for REINFORCE, as in REBAR or RELAX. Many other methods use control variates and other techniques to reduce the variance of REINFORCE. Some works rely on explicit summation of the expectation, either for the marginal distribution (Titsias & Lázaro-Gredilla, 2015) or by globally summing some categories while sampling from the remainder. Other approaches use a finite difference approximation to the gradient. The ARSM estimator uses multiple model evaluations, where the number adapts automatically to the uncertainty.
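As a point of reference for the estimators derived below, here is a minimal sketch of the single-sample REINFORCE estimator for a small categorical distribution, together with an empirical check of the unbiasedness condition above. The domain size and the choice of f are arbitrary illustrations.

```python
import torch

def reinforce_estimate(logits, f):
    """One REINFORCE sample of grad_theta E_{x~p_theta}[f(x)],
    p_theta = softmax(logits)."""
    probs = torch.softmax(logits, dim=-1)
    x = torch.multinomial(probs, 1).item()          # sample x ~ p_theta
    surrogate = torch.log(probs[x]) * f(x)          # grad of this is the estimate
    return torch.autograd.grad(surrogate, logits)[0]

logits = torch.randn(5, requires_grad=True)
f = lambda i: float(i) ** 2
fs = torch.tensor([f(i) for i in range(5)])
# Exact gradient: grad of sum_x p(x) f(x)
exact = torch.autograd.grad((torch.softmax(logits, -1) * fs).sum(), logits)[0]
est = torch.stack([reinforce_estimate(logits, f) for _ in range(20_000)]).mean(0)
print(exact, est)  # agree up to Monte Carlo error, confirming E_S[e(S)] = exact
```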
In the structured prediction setting, there are many algorithms for optimizing a quantity under a sequence of discrete decisions, using (weak) supervision, multiple samples (or deterministic model evaluations), or a combination of both. Most of these algorithms are biased and rely on pretraining using maximum likelihood or on gradually transitioning from supervised to reinforcement learning. Using Gumbel-Softmax based approaches in a sequential setting is difficult as the bias accumulates because of mixing errors. Throughout this paper, we will denote with B^k an ordered sample without replacement of size k and with S^k an unordered sample (of size k) from the categorical distribution p. Restricted distribution. When sampling without replacement, we remove the set C ⊂ D already sampled from the domain, and we denote with p^{D\C} the distribution restricted to the domain D \ C: p^{D\C}(x) = p(x) / (1 − Σ_{c∈C} p(c)), x ∈ D \ C. Ordered sample without replacement B^k. Let B^k = (b_1, ..., b_k), b_i ∈ D, be an ordered sample without replacement, which is generated from the distribution p as follows: first, sample b_1 ∼ p, then sample b_2 ∼ p^{D\{b_1}}, b_3 ∼ p^{D\{b_1,b_2}}, etc., i.e. elements are sampled one by one without replacement. Using this procedure, B^k can be seen as a (partial) ranking according to the Plackett-Luce model, and the probability of obtaining the vector B^k is p(B^k) = Π_{i=1}^{k} p^{D\B^{i-1}}(b_i) = Π_{i=1}^{k} p(b_i) / (1 − Σ_{j<i} p(b_j)) (equation 4), where B^{i-1} = (b_1, ..., b_{i-1}). We can also restrict B^k to the domain D \ C, which means that b_i ∉ C for i = 1, ..., k: p^{D\C}(B^k) = Π_{i=1}^{k} p^{D\(C∪B^{i-1})}(b_i). Unordered sample without replacement. Let S^k ⊆ D be an unordered sample without replacement from the distribution p, which can be generated simply by generating an ordered sample and discarding the order. We denote elements in the sample with s ∈ S^k (so without index) and we write B(S^k) for the set of all k! permutations (orderings) B^k that correspond to (could have generated) S^k. It follows that the probability of sampling S^k is given by p(S^k) = Σ_{B^k∈B(S^k)} p(B^k), since each B^k ∈ B(S^k) is an ordering of S^k. Naive computation of p(S^k) requires summing over k! orderings, but in Appendix B we show how to compute it efficiently. When sampling from the distribution restricted to D \ C, we sample S^k ⊆ D \ C with probability p^{D\C}(S^k) = Σ_{B^k∈B(S^k)} p^{D\C}(B^k) (equation 7). The Gumbel-Top-k trick. As an alternative to sequential sampling, we can also sample B^k and S^k by taking the top k of Gumbel variables. Following notation from Kool et al. (2019c), we define the perturbed log-probability g_{φ_i} = φ_i + g_i, where φ_i = log p(i) and g_i ∼ Gumbel(0). Then let b_1 = argmax_{i∈D} g_{φ_i}, b_2 = argmax_{i∈D\{b_1}} g_{φ_i}, etc., so B^k is the top k of the perturbed log-probabilities in decreasing order. The probability of obtaining B^k using this procedure is given by equation 4, so this provides an alternative sampling method which is effectively a (non-differentiable) reparameterization of sampling without replacement (a differentiable reparameterization also exists in the literature). It follows that by taking the top k perturbed log-probabilities without order, we obtain the unordered sample set S^k. This way of sampling underlies the efficient computation of p(S^k) in Appendix B. In this section, we derive the unordered set policy gradient estimator: a low-variance, unbiased estimator of the policy gradient based on an unordered sample without replacement S^k. First, we derive the generic (non-gradient) estimator for E[f(x)] as the Rao-Blackwellized version of a single sample Monte Carlo estimator (and two other estimators!). Then we combine this estimator with REINFORCE and we show how to reduce its variance using a built-in baseline.
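As a concrete illustration of the Gumbel-Top-k trick described above, here is a minimal sketch of drawing an ordered sample without replacement, which also exposes the threshold κ used later by the importance-weighted estimator.

```python
import torch

def gumbel_top_k(log_p, k):
    """Sample B^k (ordered) / S^k (as a set) without replacement via the
    Gumbel-Top-k trick: perturb log-probabilities with Gumbel(0) noise and
    take the top k. Requires k + 1 <= len(log_p)."""
    g = -torch.log(-torch.log(torch.rand_like(log_p)))  # Gumbel(0) samples
    perturbed = log_p + g                               # g_phi_i = phi_i + g_i
    values, indices = torch.topk(perturbed, k + 1)
    B_k = indices[:k]       # ordered sample without replacement
    kappa = values[k]       # (k+1)-th largest perturbed log-prob: the threshold
    return B_k, kappa
```

Discarding the order of B_k yields the unordered sample S^k; because the trick works with unnormalized log-probabilities, it applies directly to logits.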
A very crude but simple estimator for E[f(x)] based on the ordered sample B^k is to only use the first element b_1, which by definition is a sample from the distribution p. We define this estimator as the single sample estimator, which is unbiased, since E_{B^k}[f(b_1)] = E_{b_1∼p}[f(b_1)] = E_{x∼p}[f(x)]. Discarding all but one sample, the single sample estimator is inefficient, but we can use Rao-Blackwellization to significantly improve it. To this end, we consider the distribution B^k|S^k, which is, knowing the unordered sample S^k, the conditional distribution over ordered samples B^k ∈ B(S^k) that could have generated S^k. The Rao-Blackwellized version of the single sample estimator computes the inner conditional expectation in E_{B^k}[f(b_1)] = E_{S^k}[E_{B^k|S^k}[f(b_1)]] exactly. Since B^k is an ordering of S^k, we have b_1 ∈ S^k and we can compute this as E_{B^k|S^k}[f(b_1)] = Σ_{s∈S^k} P(b_1 = s|S^k) f(s), where, in a slight abuse of notation, P(b_1 = s|S^k) is the probability that the first sampled element b_1 takes the value s, given that the complete set of k samples is S^k. Using Bayes' Theorem we find P(b_1 = s|S^k) = p(S^k|b_1 = s) p(s) / p(S^k) = p^{D\{s}}(S^k \ {s}) p(s) / p(S^k) (equation 10). The step p(S^k|b_1 = s) = p^{D\{s}}(S^k \ {s}) comes from analyzing sequential sampling without replacement: given that the first element sampled is s, the remaining elements have a distribution restricted to D \ {s}, so sampling S^k (including s) given the first element s is equivalent to sampling the remainder S^k \ {s} from the restricted distribution, which has probability p^{D\{s}}(S^k \ {s}) (see equation 7). The unordered set estimator. For notational convenience, we introduce the leave-one-out ratio. Definition 1. The leave-one-out ratio of s w.r.t. the set S^k is given by R(S^k, s) = p^{D\{s}}(S^k \ {s}) / p(S^k). Equation 10 shows that the probability of sampling s first, given S^k, is simply the unconditional probability multiplied by the leave-one-out ratio: P(b_1 = s|S^k) = p(s) R(S^k, s). We now define the unordered set estimator as the Rao-Blackwellized version of the single-sample estimator. Theorem 1. The unordered set estimator, given by e^{US}(S^k) = Σ_{s∈S^k} p(s) R(S^k, s) f(s) (equation 11), is the Rao-Blackwellized version of the (unbiased!) single sample estimator. Proof. Substituting P(b_1 = s|S^k) = p(s) R(S^k, s) into the conditional expectation above yields the result. The implication of this theorem is that the unordered set estimator, in explicit form given by equation 11, is an unbiased estimator of E[f(x)], since it is the Rao-Blackwellized version of the unbiased single sample estimator (a brute-force sketch is given below). Also, as expected when taking multiple samples, it has variance equal to or lower than the single sample estimator by the Rao-Blackwell Theorem (Lehmann & Scheffé, 1950). The unordered set estimator is also the result of Rao-Blackwellizing two other unbiased estimators: the stochastic sum-and-sample estimator and the importance-weighted estimator. The sum-and-sample estimator. We define as sum-and-sample estimator any estimator that relies on the identity that for any C ⊂ D, E_{x∼p}[f(x)] = Σ_{c∈C} p(c) f(c) + (1 − Σ_{c∈C} p(c)) E_{x∼p^{D\C}}[f(x)] (equation 13). For the derivation, see Appendix C.1. In general, a sum-and-sample estimator with a budget of k > 1 evaluations sums expectation terms for a set of categories C (s.t. |C| < k) explicitly (e.g. selected by their value f or probability p), and uses k − |C| (down-weighted) samples from D \ C to estimate the remaining terms. It has been noted that selecting C such that the remaining probability mass 1 − Σ_{c∈C} p(c) is minimized guarantees a variance reduction compared to a standard minibatch of k samples (which is equivalent to setting C = ∅); selecting C optimally is discussed in the related literature. The ability to optimize C depends on whether p(c) can be computed efficiently a-priori (before sampling). This is difficult in high-dimensional settings, e.g. sequence models which compute the probability incrementally during ancestral sampling.
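The brute-force sketch of equation 11 referenced above follows. For small k it enumerates the k! orderings to compute p(S^k) and the leave-one-out ratios exactly (Appendix B describes the efficient computation); everything here is plain Python.

```python
import itertools

def p_ordered(p, B, excluded=frozenset()):
    """Plackett-Luce probability of the ordered sample B under p restricted
    to the domain without `excluded` (equations 4 / 5)."""
    prob = 1.0
    rest = 1.0 - sum(p[c] for c in excluded)
    for b in B:
        prob *= p[b] / rest
        rest -= p[b]
    return prob

def p_set(p, S, excluded=frozenset()):
    """p^{D\\C}(S): sum over all |S|! orderings -- O(k!), fine for small k."""
    return sum(p_ordered(p, B, excluded) for B in itertools.permutations(S))

def unordered_set_estimate(p, f, S):
    """e(S^k) = sum_s p(s) R(S^k, s) f(s), with
    R(S^k, s) = p^{D\\{s}}(S^k \\ {s}) / p(S^k)  (equation 11)."""
    pS = p_set(p, S)
    return sum(
        p[s] * (p_set(p, [t for t in S if t != s], frozenset([s])) / pS) * f(s)
        for s in S)

# Example: four categories, a sample of size k = 2 without replacement.
p = [0.4, 0.3, 0.2, 0.1]
print(unordered_set_estimate(p, lambda s: s ** 2, S=[0, 2]))
```

Averaging this estimate over many sampled sets S^k recovers E[f(x)], which can be verified numerically against the exact sum Σ_x p(x) f(x).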
An alternative is to select C stochastically (as equation 13 holds for any C), and we choose C = B^{k−1} to define the stochastic sum-and-sample estimator: e^{SSAS}(B^k) = Σ_{i=1}^{k−1} p(b_i) f(b_i) + (1 − Σ_{i=1}^{k−1} p(b_i)) f(b_k). For simplicity, we consider the version that sums k − 1 terms here, but the following results also hold for a version that sums k − m terms and uses m samples (without replacement) (see Appendix C.3). Sampling without replacement, it holds that b_k|B^{k−1} ∼ p^{D\B^{k−1}}, so the unbiasedness follows from equation 13 by separating the expectation over B^k into expectations over B^{k−1} and b_k|B^{k−1}: E_{B^k}[e^{SSAS}(B^k)] = E_{B^{k−1}}[Σ_{i=1}^{k−1} p(b_i) f(b_i) + (1 − Σ_{i=1}^{k−1} p(b_i)) E_{b_k|B^{k−1}}[f(b_k)]] = E_{x∼p}[f(x)]. In general, a sum-and-sample estimator reduces variance if the probability mass is concentrated on the summed categories. As typically high-probability categories are sampled first, the stochastic sum-and-sample estimator sums high-probability categories, similar to the deterministic variant that sums the highest-probability categories, which we refer to as the deterministic sum-and-sample estimator. As we show in Appendix C.2, Rao-Blackwellizing the stochastic sum-and-sample estimator also results in the unordered set estimator. This even holds for a version that uses m samples and k − m summed terms (see Appendix C.3), which means that the unordered set estimator has equal or lower variance than the optimal (in terms of m) stochastic sum-and-sample estimator, but conveniently does not need to choose m. The importance-weighted estimator. The importance-weighted estimator is e^{IW}(S^k) = Σ_{s∈S^k} (p(s) / q(s, κ)) f(s) (equation 15). This estimator is based on the idea of priority sampling. It does not use the order of the sample, but assumes sampling using the Gumbel-Top-k trick and requires access to κ, the (k + 1)-th largest perturbed log-probability, which can be seen as the 'threshold' since g_{φ_s} > κ ∀s ∈ S^k. Here q(s, a) = P(g_{φ_s} > a) can be interpreted as the inclusion probability of s ∈ S^k (assuming a fixed threshold a instead of a fixed sample size k). For details and a proof of unbiasedness, see Kool et al. (2019c). As the estimator has high variance, Kool et al. (2019c) resort to normalizing the importance weights, resulting in biased estimates. Instead, we use Rao-Blackwellization to eliminate the stochasticity introduced by κ. Again, the result is the unordered set estimator (see Appendix D.1), which thus has equal or lower variance. Writing p_θ to indicate the dependency on the model parameters θ, we can combine the unordered set estimator with REINFORCE to obtain the unordered set policy gradient estimator. Corollary 1. The unordered set policy gradient estimator, given by e(S^k) = Σ_{s∈S^k} p_θ(s) R(S^k, s) ∇_θ log p_θ(s) f(s) (equation 16), is an unbiased estimate of the policy gradient. Proof. Using REINFORCE combined with the unordered set estimator we find ∇_θ E_{x∼p_θ}[f(x)] = E_{x∼p_θ}[∇_θ log p_θ(x) f(x)] = E_{S^k}[Σ_{s∈S^k} p_θ(s) R(S^k, s) ∇_θ log p_θ(s) f(s)]. Variance reduction using a built-in control variate. The variance of REINFORCE can be reduced by subtracting a baseline from f. When taking multiple samples (with replacement), a simple and effective baseline is the mean of the other (independent!) samples. Sampling without replacement, we can use the same idea to construct a baseline based on the other samples, but we have to correct for the fact that the samples are not independent. Theorem 2. The unordered set policy gradient estimator with baseline, given by e(S^k) = Σ_{s∈S^k} p_θ(s) R(S^k, s) ∇_θ log p_θ(s) (f(s) − Σ_{s'∈S^k} p_θ(s') R^{D\{s}}(S^k, s') f(s')) (equation 17), where R^{D\{s}}(S^k, s') = p^{D\{s,s'}}(S^k \ {s, s'}) / p^{D\{s}}(S^k \ {s}) (equation 18) is the second-order leave-one-out ratio, is an unbiased estimate of the policy gradient. Proof. See Appendix E.1. This theorem shows how to include a built-in baseline based on dependent samples (without replacement), without introducing bias. By having a built-in baseline, the value f(s) for sample s is compared against an estimate of its expectation based on the other samples.
The difference is an estimate of the advantage, which is positive if the sample s is 'better' than average, causing p_θ(s) to be increased (reinforced) through the sign of the gradient, and vice versa. By sampling without replacement, the unordered set estimator forces the estimator to compare different alternatives, and reinforces the best among them. Including the pathwise derivative. So far, we have only considered the scenario where f does not depend on θ. If f does depend on θ, for example in a VAE, then we use the notation f_θ and we can write the gradient as ∇_θ E_{x∼p_θ}[f_θ(x)] = E_{x∼p_θ}[∇_θ log p_θ(x) f_θ(x)] + E_{x∼p_θ}[∇_θ f_θ(x)] (equation 19). The additional second ('pathwise') term can be estimated (using the same samples) with the standard unordered set estimator. This results in the full unordered set policy gradient estimator: e(S^k) = Σ_{s∈S^k} p_θ(s) R(S^k, s) (∇_θ log p_θ(s) f_θ(s) + ∇_θ f_θ(s)) (equation 20). Equation 20 is straightforward to implement using an automatic differentiation library. We can also include the baseline (as in equation 17), but we must make sure to call STOP GRADIENT (DETACH in PyTorch) on the baseline (but not on f_θ(s)!). Importantly, we should never track gradients through the leave-one-out ratio R(S^k, s), which means it can be efficiently computed in pure inference mode. We can use the unordered set estimator for any discrete distribution from which we can sample without replacement, by treating it as a univariate categorical distribution over its domain. This includes sequence models, from which we can sample using Stochastic Beam Search (Kool et al., 2019c), as well as multivariate categorical distributions, which can also be treated as sequence models (see Section 4.2). In the presence of continuous variables or a stochastic function f, we may separate this stochasticity from the stochasticity over the discrete distribution. The computation of the leave-one-out ratios adds some overhead, although they can be computed efficiently, even for large k (see Appendix B). For a moderately sized model, the costs of model evaluation and backpropagation dominate the cost of computing the estimator. Relation to Murthy's estimator. We found that the 'vanilla' unordered set estimator (equation 11) is actually a special case of the estimator by Murthy (1957), known in the statistics literature for estimation of a population total Θ = Σ_{i∈D} y_i. Using y_i = p(i) f(i), Murthy's estimator can be used to estimate expectations (see equation 11). Murthy derives the estimator by 'unordering' a convex combination of estimators which, under this substitution, are stochastic sum-and-sample estimators in our analogy. Murthy (1957) also provides an unbiased estimator of the variance, which may be interesting for future applications. Since Murthy's estimator can be used with an arbitrary sampling distribution, it is straightforward to derive importance-sampling versions of our estimators. In particular, we can sample S without replacement using any q(x) > 0, x ∈ D, and use equations 11, 16, 17 and 20, as long as we compute the leave-one-out ratio R(S^k, s) using q. While part of our derivation coincides with Murthy's, we are not aware of previous work using this estimator to estimate expectations. Additionally, we discuss practical computation of p(S) (Appendix B), we show the relation to the importance-weighted estimator, and we provide the extension to estimating policy gradients, in particular including a built-in baseline without adding bias. Relation to the empirical risk estimator. The empirical risk loss estimates the expectation in equation 1 by summing over only a subset S of the domain, using the normalized probabilities p̃_θ(s) = p_θ(s) / Σ_{s'∈S} p_θ(s').
Using this loss, the (biased) estimate of the gradient is given by e^{RISK}(S) = Σ_{s∈S} ∇_θ p̃_θ(s) f(s). The risk estimator is similar to the unordered set policy gradient estimator, with two important differences: 1) the individual terms are normalized by the total probability mass rather than the leave-one-out ratio, and 2) the gradient is computed through the normalization factor. Intuitively, by taking the gradient through the normalization factor, samples are forced to 'compete' for probability mass, such that only the best can be reinforced. This has the same effect as using a built-in baseline, which we prove in the following theorem. Theorem 3. By taking the gradient w.r.t. the normalization factor into account, the risk estimator has a built-in baseline, which means it can be written as e^{RISK}(S) = Σ_{s∈S} p̃_θ(s) ∇_θ log p_θ(s) (f(s) − Σ_{s'∈S} p̃_θ(s') f(s')). This theorem highlights the similarity between the biased risk estimator and our unbiased estimator (equation 17), and suggests that their only difference is the weighting of the terms. Unfortunately, the original implementation has more sources of bias (e.g. length normalization), which are not compatible with our estimator. However, we believe that our analysis helps analyze the bias of the risk estimator and is a step towards developing unbiased estimators for structured prediction. Relation to VIMCO. VIMCO is an estimator that uses k samples (with replacement) to optimize an objective of the form log (1/k) Σ_{i=1}^{k} f(x_i), which is a multi-sample stochastic lower bound in the context of variational inference. VIMCO reduces the variance by using a local baseline for each of the k samples, based on the other k − 1 samples. While we do not have a log term, as our goal is to optimize the general E[f(x)], we adopt the idea of forming a baseline based on the other samples, and we define REINFORCE with replacement (with built-in baseline) as the estimator that computes the gradient estimate using samples with replacement: e = (1/k) Σ_{i=1}^{k} ∇_θ log p_θ(x_i) (f(x_i) − (1/(k−1)) Σ_{j≠i} f(x_j)) (equation 23). This estimator is unbiased, as shown by Kool et al. (2019b). We think of the unordered set estimator as the without-replacement version of this estimator, which weights terms by p_θ(s) R(S^k, s) instead of 1/k. This puts more weight on higher-probability elements to compensate for sampling without replacement. If probabilities are small and (close to) uniform, there are (almost) no duplicate samples and the weights will be close to 1/k, so the gradient estimates of the with- and without-replacement versions are similar. Relation to ARSM. The ARSM estimator also uses multiple evaluations of p_θ and f. It determines a number of 'pseudo-samples', from which duplicates should be removed for efficient implementation. This can be seen as similar to sampling without replacement, and the estimator also has a built-in control variate. Compared to ARSM, our estimator allows direct control over the computational cost (through the sample size k) and has wider applicability; for example, it also applies to multivariate categorical variables with different numbers of categories per dimension. Relation to stratified/systematic sampling. Our estimator aims to reduce variance by changing the sampling distribution for multiple samples by sampling without replacement. There are alternatives, such as stratified or systematic sampling (see, e.g., Douc & Cappé, 2005). Both partition the domain D into k strata and take a single sample from each stratum, where systematic sampling uses common random numbers for all strata. In applications involving high-dimensional or structured domains, it is unclear how to partition the domain and how to sample from each partition.
Additionally, as samples are not independent, it is non-trivial to include a built-in baseline, which we find is a key component that makes our estimator perform well. We use publicly released code to reproduce the Bernoulli toy experiment. Given a vector p = (0.6, 0.51, 0.48), the goal is to minimize the loss L(η) = E_{x_1,x_2,x_3∼Bern(σ(η))}[Σ_{i=1}^{3} (x_i − p_i)²]. Here x_1, x_2, x_3 are i.i.d. from the Bernoulli(σ(η)) distribution, parameterized by a scalar η ∈ R, where σ(η) = (1 + exp(−η))^{-1} is the sigmoid function. We compare different estimators, with and without baseline (either 'built-in' or using additional samples, the latter referred to as REINFORCE+). [Figure 1: (b) Low entropy (η = −4). Bernoulli gradient variance (on log scale) as a function of the number of model evaluations (including baseline evaluations, so the sum-and-sample estimators with sampled baselines use twice as many evaluations). Note that for some estimators, the variance is 0 (log variance −∞) for k = 8.] We report the (log-)variance of the scalar gradient ∂L/∂η as a function of the number of model evaluations, which is twice as high when using a sampled baseline (for each term). As can be seen in Figure 1, the unordered set estimator is the only estimator that consistently has the lowest (or comparable) variance in both the high (η = 0) and low entropy (η = −4) regimes and for different numbers of samples/model evaluations. This suggests that it combines the advantages of the other estimators. We also ran the actual optimization experiment, where with as few as k = 3 samples the trajectory was indistinguishable from using the exact gradient. We use existing code to train a categorical Variational Auto-Encoder (VAE) with a 20-dimensional latent space, with 10 categories per dimension (details in Appendix G.1). To use our estimator, we treat this as a single factorized distribution with 10^20 categories from which we can sample without replacement using Stochastic Beam Search (Kool et al., 2019c), sequentially sampling each dimension as if it were a sequence model. We also perform experiments with a 10^2 latent space, which provides a lower-entropy setting, to highlight the advantage of our estimator. Measuring the variance. In Table 1, we report the variance of different gradient estimators with k = 4 samples, evaluated on a trained model. The unordered set estimator has the lowest variance in both the small and large domain (low and high entropy) settings, being on par with the best of the stochastic sum-and-sample estimator (footnote 2) and REINFORCE with replacement (footnote 3). This confirms the toy experiment, suggesting that the unordered set estimator provides the best of both estimators. In Appendix G.2 we repeat the same experiment at different stages of training, with similar results. ELBO optimization. We use different estimators to optimize the ELBO (details in Appendix G.1). In addition to the existing baselines, we compare against REINFORCE with replacement and the stochastic sum-and-sample estimator. In Figure 2 we observe that our estimator performs on par with REINFORCE with replacement (and built-in baseline, equation 23) and outperforms other estimators in at least one of the settings. There are many other factors, e.g. exploration, that may explain why we do not get a strictly better result despite the lower variance. We note some overfitting (see validation curves in Appendix G.2), but since our goal is to show improved optimization, and to keep results directly comparable, we consider regularization a separate issue outside the scope of this paper.
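A minimal sketch of the toy setup, using single-sample REINFORCE as the estimator to measure; the squared-error reading of the objective and the trial count are our assumptions.

```python
import torch

p_star = torch.tensor([0.6, 0.51, 0.48])

def reinforce_grad(eta):
    """One REINFORCE estimate of dL/d eta for the toy loss; uses the identity
    d/d eta log Bern(x_i; sigmoid(eta)) = x_i - sigmoid(eta)."""
    theta = torch.sigmoid(eta)
    x = (torch.rand(3) < theta).float()        # x_1..x_3 ~ Bern(sigmoid(eta))
    f = ((x - p_star) ** 2).sum()              # loss for this sample
    return (x - theta).sum() * f               # score * f

def log_variance(estimator, eta, trials=100_000):
    g = torch.stack([estimator(eta) for _ in range(trials)])
    return torch.log(g.var()).item()           # log-variance, as in Figure 1

print(log_variance(reinforce_grad, torch.tensor(0.0)))    # high entropy, eta = 0
print(log_variance(reinforce_grad, torch.tensor(-4.0)))   # low entropy, eta = -4
```

Substituting the unordered set estimator (or any baseline variant) for `reinforce_grad` reproduces the comparison underlying Figure 1.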
These results use MNIST binarized by a threshold of 0.5; in Appendix G.2 we report results using the standard binarized MNIST dataset. 2 We cannot use the deterministic version since we cannot select the top k categories a-priori. 3 We cannot compare against VIMCO as it optimizes a different objective. For reference, we also include the biased risk estimator, either 'sampling' using stochastic or deterministic beam search. In Figure 3a, we compare training progress (measured on the validation set) as a function of the number of training steps, where we divide the batch size by k to keep the total number of samples equal. Our estimator outperforms REINFORCE with replacement, the stochastic sum-and-sample estimator and the strong greedy rollout baseline (which uses additional baseline model evaluations), and performs on par with the biased risk estimator. In Figure 3b, we plot the same against the number of instances, which shows that, compared to the single-sample estimators, we can train with less data and less computational cost (as we only need to run the encoder once for each instance). We introduced the unordered set estimator, a low-variance, unbiased gradient estimator based on sampling without replacement, which can be used as an alternative to the popular biased Gumbel-Softmax estimator. Our estimator is the result of Rao-Blackwellizing three existing estimators, which guarantees equal or lower variance, and it is closely related to a number of other estimators. It has wide applicability, is parameter-free (except for the sample size k) and has performance competitive with the best alternatives in both high and low entropy regimes. In our experiments, we found that REINFORCE with replacement, with multiple samples and a built-in baseline as inspired by VIMCO, is a simple yet strong estimator whose performance is similar to our estimator in the high entropy setting. We are not aware of any recent work on gradient estimators for discrete distributions that has considered this estimator as a baseline, while it may often be preferred given its simplicity. Let F_φ(g) = exp(−exp(φ − g)) and f_φ(g) = exp(φ − g) F_φ(g); this means that F_φ(g) is the CDF and f_φ(g) the PDF of the Gumbel(φ) distribution. Additionally, we will use the identity max_{i∈S} G_{φ_i} ∼ Gumbel(φ_S), with φ_S = log Σ_{i∈S} exp φ_i, and we will use the following notation, definitions and identities (see Kool et al. (2019c)): G_{φ_{D\S}} = max_{i∈D\S} G_{φ_i} ∼ Gumbel(φ_{D\S}) (equation 30). For a proof of equation 30, see Maddison et al. (2014). We can sample the set S^k from the Plackett-Luce distribution using the Gumbel-Top-k trick by drawing Gumbel variables G_{φ_i} ∼ Gumbel(φ_i) for each element and returning the indices of the k largest Gumbels. If we ignore the ordering, this means we will obtain the set S^k if min_{i∈S^k} G_{φ_i} > max_{i∈D\S^k} G_{φ_i}. Omitting the superscript k for clarity, we can use the Gumbel-Max trick, i.e. that G_{φ_{D\S}} = max_{i∉S} G_{φ_i} ∼ Gumbel(φ_{D\S}) (equation 30), and marginalize over G_{φ_{D\S}}: p(S) = ∫ f_{φ_{D\S}}(g) Π_{s∈S} (1 − F_{φ_s}(g)) dg (equation 31). Here we can use a change of variables u = F_{φ_{D\S}}(g_{φ_{D\S}}). This expression can be efficiently numerically integrated (although another change of variables may be required for numerical stability depending on the values of φ). Exact computation in O(2^k). The integral in equation 31 can be computed exactly using the identity Π_{i∈S} (1 − F_{φ_i}(g)) = Σ_{C⊆S} (−1)^{|C|} F_{φ_C}(g), where F_{φ_C}(g) = Π_{i∈C} F_{φ_i}(g) is itself a Gumbel CDF with φ_C = log Σ_{i∈C} exp φ_i, so each of the 2^k terms can be integrated analytically. Computation of p^{D\C}(S \ C). When using the Gumbel-Top-k trick over the restricted domain D \ C, we do not need to renormalize the log-probabilities φ_s, s ∈ D \ C, since the Gumbel-Top-k trick applies to unnormalized log-probabilities. Also, assuming C ⊆ S, the threshold variable in the restricted domain remains G_{φ_{D\S}}. This means that we can compute p^{D\C}(S \ C) similarly to equation 31: p^{D\C}(S \ C) = ∫ f_{φ_{D\S}}(g) Π_{s∈S\C} (1 − F_{φ_s}(g)) dg (equation 34). Computation of R(S^k, s).
Note that, using equation 10, it holds that R(S^k, s) = p^{D\{s}}(S^k \ {s}) / p(S^k) (equation 35). This means that, to compute the leave-one-out ratio for all s ∈ S^k, we only need to compute p^{D\{s}}(S^k \ {s}) for each s ∈ S^k. When using the numerical integration or the summation in O(2^k), we can reuse computation, whereas using the naive method, the cost is O(k · (k − 1)!) = O(k!), making the total computational cost comparable to computing just p(S^k); the same holds when computing the 'second-order' leave-one-out ratios for the built-in baseline (equation 17). Details of numerical integration. For computation of the leave-one-out ratio (equation 35) for large k we can use numerical integration, where we need to compute equation 34 with C = {s}. For this purpose, we rewrite the integral using the change of variables v = u exp(−b) and a = b − φ_{D\S}. This form allows computing the integrands efficiently: the numerator only needs to be computed once and, since C = {s} when computing equation 35, the denominator consists of a single term. The choice of a may depend on the setting, but we found that a = 5 is a good default option which leads to an integral that is generally smooth and can be accurately approximated using the trapezoid rule. We compute the integrands in logarithmic space and sum the terms using the stable LOGSUMEXP trick. In our code we provide an implementation which also computes all second-order leave-one-out ratios efficiently. We show that the sum-and-sample estimator is unbiased for any set C ⊂ D (see also Appendix C.1): Σ_{c∈C} p(c) f(c) + (1 − Σ_{c∈C} p(c)) E_{x∼p^{D\C}}[f(x)] = Σ_{c∈C} p(c) f(c) + (1 − Σ_{c∈C} p(c)) Σ_{x∈D\C} (p(x) / (1 − Σ_{c∈C} p(c))) f(x) = Σ_{x∈D} p(x) f(x) = E_{x∼p}[f(x)]. In this section we give the proof that Rao-Blackwellizing the stochastic sum-and-sample estimator results in the unordered set estimator. Theorem 4. Rao-Blackwellizing the stochastic sum-and-sample estimator results in the unordered set estimator, i.e. E_{B^k|S^k}[e^{SSAS}(B^k)] = e^{US}(S^k) (equation 36). Proof. To give the proof, we first prove three Lemmas. Lemma 1. P(b_k = s|S^k) = p(S^k \ {s}) p^{D\(S^k\{s})}(s) / p(S^k) (equation 33). Proof. Similar to the derivation of P(b_1 = s|S^k) (equation 10 in the main paper), we can write P(b_k = s|S^k) = P(S^k ∩ {b_k = s}) / p(S^k) = p(S^k \ {s}) p^{D\(S^k\{s})}(s) / p(S^k). The step from the first to the second expression comes from analyzing the event S^k ∩ {b_k = s} using sequential sampling: to sample S^k (including s) with s being the k-th element means that we should first sample S^k \ {s} (in any order), and then sample s from the distribution restricted to D \ (S^k \ {s}), which has probability p^{D\(S^k\{s})}(s) (see equation 7). Lemma 2. Dividing equation 33 by 1 − Σ_{s'∈S} p(s') on both sides, we obtain the form used below; the proof follows by multiplying by 1 − Σ_{s'∈S} p(s') and rearranging terms. Lemma 3 then follows by first using Lemma 1 and then Lemma 2. Applying this to the estimator, moving the terms independent of B^k outside the expectation, and using Lemma 3 completes the proof of Theorem 4. As discussed in the main text, one can trade off the number of summed terms and the number of sampled terms to maximize the achieved variance reduction. As a generalization of Theorem 4 (the stochastic sum-and-sample estimator with k − 1 summed terms), we introduce here the stochastic sum-and-sample estimator that sums k − m terms and samples m > 1 terms without replacement: e^{SSAS}(B^k) = Σ_{i=1}^{k−m} p(b_i) f(b_i) + (1 − Σ_{i=1}^{k−m} p(b_i)) e^{US,D\B^{k−m}}(S^m), where S^m = {b_{k−m+1}, ..., b_k} (equation 42). To estimate the sampled term, we use the unordered set estimator on the m samples without replacement, on the domain restricted to D \ B^{k−m}. In general, we denote the unordered set estimator restricted to the domain D \ C by e^{US,D\C}(S^k) = Σ_{s∈S^k} p^{D\C}(s) R^{D\C}(S^k, s) f(s), where R^{D\C}(S^k, s) = p^{D\(C∪{s})}(S^k \ {s}) / p^{D\C}(S^k) is the leave-one-out ratio restricted to the domain D \ C, similar to the second-order leave-one-out ratio in equation 18. While we can also constrain S^k ⊆ (D \ C), this definition is consistent with equation 18 and allows simplified notation. Theorem 5. Rao-Blackwellizing the stochastic sum-and-sample estimator with m > 1 samples results in the unordered set estimator. Proof.
Recall that for the unordered set estimator, it holds that E_{S^k}[e^{US}(S^k)] = E_{x∼p}[f(x)], which for the restricted equivalent (with restricted distribution p^{D\C}) translates into E_{S^k}[e^{US,D\C}(S^k)] = E_{x∼p^{D\C}}[f(x)]. Now we consider the distribution b_{k−m+1}|S^k, B^{k−m}: the distribution of the first element sampled (without replacement) after sampling B^{k−m}, given (conditionally on the event) that the set of k samples is S^k, so we have b_{k−m+1} ∈ S^k and b_{k−m+1} ∉ B^{k−m}. This means that its conditional expectation of f(b_{k−m+1}) is the restricted unordered set estimator for C = B^{k−m}, since E[f(b_{k−m+1})|S^k, B^{k−m}] = e^{US,D\B^{k−m}}(S^k \ B^{k−m}) (equation 45). Observing that the definition (equation 42) of the stochastic sum-and-sample estimator does not depend on the actual order of the m samples, and using equation 45, we can reduce the multi-sample estimator to the stochastic sum-and-sample estimator with k' = k − m + 1, such that the result follows from equation 36. D THE IMPORTANCE-WEIGHTED ESTIMATOR In this section we give the proof that Rao-Blackwellizing the importance-weighted estimator results in the unordered set estimator. Theorem 6. Rao-Blackwellizing the importance-weighted estimator results in the unordered set estimator, i.e. E_{κ|S^k}[e^{IW}(S^k)] = e^{US}(S^k). Here we have slightly rewritten the definition of the importance-weighted estimator, using that q(s, a) = P(g_{φ_s} > a) = 1 − F_{φ_s}(a), where F_{φ_s} is the CDF of the Gumbel distribution (see Appendix A). Proof. We first prove the following Lemma. Lemma 4. Conditionally on S^k, it holds that κ ∼ Gumbel(φ_{D\S^k}). Proof. Conditioning on S^k, we know that the elements in S^k have the k largest perturbed log-probabilities, so κ, the (k + 1)-th largest perturbed log-probability, is the largest perturbed log-probability in D \ S^k and satisfies κ = max_{s∈D\S^k} g_{φ_s} = g_{φ_{D\S^k}} ∼ Gumbel(φ_{D\S^k}). Computing p(κ|S^k) using Bayes' Theorem, we obtain an expression that allows us to compute the conditional expectation (using equation 34 with C = {s} and g_{φ_{D\S}} = κ). Using Lemma 4, we then find that the conditional expectation of the importance-weighted estimator equals the unordered set estimator. For self-containment we include this section, which is adapted from our unpublished workshop paper (Kool et al., 2019b). The importance-weighted policy gradient estimator combines REINFORCE with the importance-weighted estimator in equation 15, which results in an unbiased estimator of the policy gradient: e(S^k) = Σ_{s∈S^k} (p_θ(s) / q_{θ,κ}(s)) ∇_θ log p_θ(s) f(s). Recall that κ is the (k + 1)-th largest perturbed log-probability (see Section 3.2). We compute a lower-variance but biased variant by normalizing the importance weights using the normalization W(S^k) = Σ_{s∈S^k} p_θ(s) / q_{θ,κ}(s). As we show in Kool et al. (2019b), we can include a 'baseline' based on the weighted terms (p_θ(s) / q_{θ,κ}(s)) f(s) and correct for the bias (since it depends on the complete sample S^k) by weighting the individual terms. For the normalized version, we use the normalization q_{θ,κ}(s) for the baseline, and q_{θ,κ}(s) + p_θ(s) to normalize the individual terms. It seems odd to normalize the terms in the outer sum by q_{θ,κ}(s) + p_θ(s), but equation 52 can be rewritten into a form similar to equation 17, i.e. with a different baseline for each sample; this form is more convenient for implementation (Kool et al., 2019b). To prove unbiasedness, we need to prove that the control variate has expectation 0. Lemma 5. Proof. Similar to equation 10, we apply Bayes' Theorem conditionally on b_1 = s to derive, for s' ≠ s, the identity in equation 54. For s' = s we have R^{D\{s}}(S^k, s) = 1 by definition, so using equation 54 we can show the zero-expectation property. Now we can show that the control variate is actually the result of Rao-Blackwellization: the resulting expression depends only on b_1 and b_2, and we recognize the stochastic sum-and-sample estimator for k = 2 used as 'baseline'. As a special case of equation 13 for C = {b_1}, combined with this observation, the unbiasedness follows. We show that the RISK estimator, taking gradients through the normalization factor, actually has a built-in baseline.
We first use the log-derivative trick to rewrite the gradient of the ratio as the ratio times the gradient of the logarithm, and then swap the summation variables in the double sum that arises. This assumes we can compute the KL divergence analytically. Alternatively, we can use a sample estimate for the KL divergence, and use equation 56 with equation 19 to obtain ∇_φ L(φ, θ) = E_{z∼q_φ(z|x)}[∇_φ ln q_φ(z|x) (ln p_θ(x|z) + ln p(z) − ln q_φ(z|x)) + ∇_φ ln q_φ(z|x)] = E_{z∼q_φ(z|x)}[∇_φ ln q_φ(z|x) (ln p_θ(x|z) − ln q_φ(z|x))] (equation 60). Here we have left out the term E_{z∼q_φ(z|x)}[∇_φ ln q_φ(z|x)] = 0 and, assuming a uniform (i.e. constant) prior ln p(z), the term E_{z∼q_φ(z|x)}[∇_φ ln q_φ(z|x) ln p(z)] = 0. With a built-in baseline, this second term cancels out automatically, even if it is included in the implementation. Despite the similarity of equation 56 and equation 57, their gradient estimates (equation 60 and equation 59) are structurally dissimilar, and care should be taken to implement the REINFORCE estimator (or related estimators such as ARSM and the unordered set estimator) correctly using automatic differentiation software. Using Gumbel-Softmax and RELAX, we take gradients 'directly' through the objective in equation 57. We optimize the ELBO using the analytic KL for 1000 epochs using the Adam optimizer. We use a learning rate of 10^{-3} for all estimators except Gumbel-Softmax and RELAX, which use a learning rate of 10^{-4}, as we found they diverged with a higher learning rate. For ARSM, as an exception, we use the sample KL and a learning rate of 3 · 10^{-4}, as suggested by the authors. All reported ELBO values are computed using the analytic KL. Our code is publicly available. Gradient variance during training. We also evaluate the gradient variance of different estimators during different stages of training. We measure the variance of different estimators with k = 4 samples during training with REINFORCE with replacement, such that all estimators are computed for the same model parameters. The results during training, given in Figure 4, are similar to the results for the trained model in Table 1, except at the beginning of training, although the rankings of the different estimators are mostly the same. Negative ELBO on validation set. Figure 5 shows the -ELBO evaluated during training on the validation set. For the large latent space, we see the validation error quickly increase (after reaching a minimum), which is likely because of overfitting (due to improved optimization), a phenomenon observed before. Note that before the overfitting starts, both REINFORCE without replacement and the unordered set estimator achieve a validation error similar to the other estimators, such that in a practical setting, one can use early stopping. Results using the standard binarized MNIST dataset. Instead of using the MNIST dataset binarized by thresholding values at 0.5 (as in the code we build on), we also experiment with the standard (fixed) binarized dataset, for which we plot train and validation curves for two runs on the small and large domain in Figure 6. This gives more realistic (higher) -ELBO scores, although we still observe the effect of overfitting. As this setting is a bit more unstable, one of the runs using REINFORCE with replacement diverged, but in general the relative performance of the estimators is similar to using the dataset with the 0.5 threshold.
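A minimal sketch of the score-function term in equation 60 for a single sample, assuming log q_φ(z|x) was computed with gradients attached to φ; detaching the advantage implements the sample-KL form with the zero-mean and uniform-prior terms dropped.

```python
import torch

def encoder_grad_surrogate(log_q_z, log_px_z):
    """Surrogate whose gradient w.r.t. phi matches equation 60:
    grad_phi ln q(z|x) * (ln p(x|z) - ln q(z|x))."""
    advantage = (log_px_z - log_q_z).detach()  # no gradient through the advantage
    return (log_q_z * advantage).mean()        # call .backward() on this scalar
```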
The Travelling Salesman Problem (TSP) is a discrete optimization problem that consists of finding the order in which to visit a set of locations, given as (x, y) coordinates, to minimize the total length of the tour, starting and ending at the same location. As a tour can be considered a sequence of locations, this problem can be set up as a sequence modelling problem, which can be addressed using either supervised or reinforcement learning (Kool et al., 2019a). Kool et al. (2019a) introduced the Attention Model, an encoder-decoder model which considers a TSP instance as a fully connected graph. The encoder computes embeddings for all nodes (locations) and the decoder produces the tour, which is a sequence of nodes, selecting one node at a time using an attention mechanism and feeding the selection back autoregressively as input to select the next node. In Kool et al. (2019a), this model is trained using REINFORCE, with a greedy rollout used as a baseline to reduce variance. We use the code by Kool et al. (2019a) to train the exact same Attention Model (for details we refer to Kool et al. (2019a)) and minimize the expected length of a tour predicted by the model, using different gradient estimators. We did not do any hyperparameter optimization and used the exact same training details, using the Adam optimizer with a learning rate of 10^{-4} (no decay) for 100 epochs for all estimators. For the baselines, we used the same batch size of 512, but for estimators that use k = 4 samples, we used a batch size of 512/4 = 128 to compensate for the additional samples (this makes multi-sample methods actually faster, since the encoder still needs to be evaluated only once per instance).
We derive a low-variance, unbiased gradient estimator for expectations over discrete random variables based on sampling without replacement
1,149
scitldr
We introduce a parameter sharing scheme, in which different layers of a convolutional neural network (CNN) are defined by a learned linear combination of parameter tensors from a global bank of templates. Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks. Compared to traditional CNNs, we demonstrate substantial parameter savings on standard image classification tasks, while maintaining accuracy. Our simple parameter sharing scheme, though defined via soft weights, in practice often yields trained networks with near strict recurrent structure; with negligible side effects, they convert into networks with actual loops. Training these networks thus implicitly involves discovery of suitable recurrent architectures. Though considering only the aspect of recurrent links, our trained networks achieve accuracy competitive with those built using state-of-the-art neural architecture search (NAS) procedures. Our hybridization of recurrent and convolutional networks may also represent a beneficial architectural bias. Specifically, on synthetic tasks which are algorithmic in nature, our hybrid networks both train faster and extrapolate better to test examples outside the span of the training set. The architectural details of convolutional neural networks (CNNs) have undergone rapid exploration and improvement via both human hand-design BID33 BID11 BID13 BID45 and automated search methods BID46. Yet, this vast array of work limits itself to a circuit-like view of neural networks. Here, a CNN is regarded as a fixed-depth feed-forward circuit, with a distinct parameter governing each internal connection. These circuits are often trained to perform tasks which, in a prior era, might have been (less accurately) accomplished by running a traditional computer program coded by humans. Programs, and even traditional hardware circuits, have a more reusable internal structure, including subroutines or modules, loops, and associated control flow mechanisms. We bring one aspect of such modularity into CNNs, by making it possible to learn a set of parameters that is reused across multiple layers at different depths. As the pattern of reuse is itself learned, our scheme effectively permits learning the length (iteration count) and content of multiple loops defining the resulting CNN. We view this approach as a first step towards learning neural networks with internal organization reminiscent of computer programs. Though we focus solely on loop-like structures, leaving subroutines and dynamic control flow to future work, this simple change suffices to yield substantial quantitative and qualitative benefits over the standard baseline CNN models. While recurrent neural networks (RNNs) possess a loop-like structure by definition, their loop structure is fixed a priori, rather than learned as part of training. This can actually be a disadvantage in the event that the length of the loop is mismatched to the target task. Our parameter sharing scheme for CNNs permits a mix of loops and feed-forward layers to emerge. For example, trained with our scheme, a 50-layer CNN might learn a 2-layer loop that executes 5 times between layers 10 and 20, a 3-layer loop that runs 4 times from layers 30 to 42, while leaving the remaining layers to assume independent parameter sets. Our approach generalizes both CNNs and RNNs, creating a hybrid.
Weights W^(i) (no longer parameters, illustrated with dotted boxes) used by layer i are generated from α^(i) and the templates. Right: if weights W^(i) are outputs of a linear function (as in our method), learning parameter templates can be viewed as learning layer templates, offering a new (although equivalent) perspective for the middle diagram. Non-linearities are omitted for simplicity. FIG0 diagrams the parameter sharing scheme facilitating this hybridization. Inspired by dictionary learning, different network layers share, via weighted combination, global parameter templates. This re-parameterization is fully differentiable, allowing learning of sharing weights and template parameters. Section 3 elaborates, and also introduces tools for analyzing learned loop structures. Section 4 demonstrates advantages of our hybrid CNNs across multiple experimental settings. Taking a modern CNN design as a baseline, re-parameterizing it according to our scheme improves: • Parameter efficiency. Here, we experiment with the standard task of image classification using modern residual networks BID11 BID41. This task is a good proxy for general usefulness in computer vision, as high-performance classification architectures often serve as a backbone for many other vision tasks, such as semantic segmentation BID1 BID44. Our parameter sharing scheme drastically reduces the number of unique parameters required to achieve a given accuracy on CIFAR BID20 or ImageNet classification tasks. Re-parameterizing a standard residual network with our scheme cuts parameters, without triggering any drop in accuracy. This suggests that standard CNNs may be overparameterized in part because, by design (and unlike RNNs), they lack capacity to learn reusable internal operations. • Extrapolation and generalization. Here, we explore whether our hybrid networks expand the class of tasks that one can expect to train neural networks to accomplish. This line of inquiry, focusing on synthetic tasks, shares motivations with work on Neural Turing Machines BID5. Specifically, we would like neural networks to be capable of learning to perform tasks for which there are concise traditional solution algorithms. BID5 uses sorting as an example task. As we examine an extension of CNNs, our tasks take the form of queries about planar graphs encoded as image input. On these tasks, we observe improvements to both generalization ability and learning speed for our hybrid CNNs, in comparison to standard CNNs or RNNs. Our parameter sharing scheme, by virtue of providing an architectural bias towards networks with loops, appears to assist in learning to emulate traditional algorithms. An additional side effect, seen in practice in many of our experiments, is that two different learned layers often snap to the same parameter values. That is, layers i and j learn coefficient vectors α^(i) and α^(j) (see FIG0) that converge to be the same (up to scaling). This is a form of architecture discovery, as it permits representation of the CNN as a loopy wiring diagram between repeated layers. Section 4.3 presents example results. We also draw comparisons to existing neural architecture search (NAS) techniques. By simply learning recurrent structure as a byproduct of training with standard stochastic gradient descent, we achieve accuracy competitive with current NAS procedures.
Before delving into the details of our method, Section 2 provides additional context in terms of prior work on recurrent models, parameter reduction techniques, and program emulation. Sections 3 and 4 describe our hybrid shared-parameter CNN, experimental setup, and results. Section 5 concludes with commentary on our results and possible future research pathways. 2 RELATED WORK Recurrent variants of CNNs are used extensively for visual tasks. Recently, BID42 propose utilizing a convolutional LSTM BID32 as a generic feedback architecture. RNN and CNN combinations have been used for scene labeling BID26, image captioning with attention BID39, and understanding video BID3, among others. These works combine CNNs and RNNs at a coarse scale, and in a fixed hand-crafted manner. In contrast, we learn the recurrence structure itself, blending it into the inner workings of a CNN. Analysis of residual networks BID11 reveals possible connections to recurrent networks stemming from their design BID21. BID7 provide evidence that residual networks learn to iteratively refine feature representations, making an analogy between a very deep residual network and an unrolled loop. BID17 further explore this connection, and experiment with training residual networks in which some layers are forced to share identical parameters. This hard parameter sharing scheme again builds a predetermined recurrence structure into the network. It yields successfully trained networks, but does not exhibit the type of performance gains that Section 4 demonstrates for our soft parameter sharing scheme. Closely related to our approach is the idea of hypernetworks BID9, in which one part of a neural network is parameterized by another neural network. Our shared template-based reparameterization could be viewed as one simple choice of hypernetwork implementation. Perhaps surprisingly, this class of ideas has not been well explored for the purpose of reducing the size of neural networks. Rather, prior work has achieved parameter reduction through explicit representation bottlenecks BID15, sparsifying connection structure BID27 BID45, and pruning trained networks. Orthogonal to the question of efficiency, there is substantial interest in extending neural networks to tackle new kinds of tasks, including emulation of computer programs. Some approach this problem using additional supervision in the form of execution traces (Reed & de Freitas; BID0), while others focus on development of network architectures that can learn from input-output pairs alone BID5 BID29 BID43 BID36. Our experiments on synthetic tasks fall into the latter camp. At the level of architectural strategy, BID36 benefit from changing the form of activation function to bias the network towards correctly extrapolating common mathematical formulae. We build in a different implicit bias, towards learning iterative procedures within a CNN, and obtain a boost on correctly emulating programs. In convolutional neural networks (CNNs) and variants such as residual CNNs (ResNets) BID11 and DenseNets BID13, each convolutional layer i contains a set of parameters W^(i), with no explicit relation between parameter sets of different layers. Conversely, a strict structure is imposed on layers of recurrent neural networks (RNNs), where, in standard models, a single parameter set W is shared among all time steps. This leads to a program-like computational flow, where RNNs can be seen as loops with fixed length and content.
While some RNN variants BID4 BID19 BID40 are less strict on the length or content of loops, these are still typically fixed beforehand. As an alternative to learning hard parameter sharing schemes - which correspond to the strict structure present in RNNs - our method consists of learning soft sharing schemes through a relaxation of this structure. Figure 2: Connection between the LSM matrix S, where S_{i,j} = s(α^(i), α^(j)), and the structure of the network. White and black entries correspond to maximum and minimum similarities (S_{i,j} = 1 and S_{i,j} = 0, respectively). Left: Empirically, CNNs present no similarity between parameters of different layers. Middle: Trained with our method, the layer similarity matrix (LSM) captures similarities between different layers, including pairs with close to maximum similarity. Such pairs (depicted by same-colored coefficients and weights, and by white entries in the LSM) perform similar operations on their inputs. Right: We can tie together parameters of similar layers, creating a hard parameter sharing scheme. The network can then be folded, creating self-loops and revealing an explicit recurrent computation structure. We accomplish this by expressing each layer's parameters W^(i) as a linear combination of parameter templates T^(1), ..., T^(k), each with the same dimensionality as W^(i): W^(i) = Σ_{j=1}^{k} α_j^(i) T^(j), where k is the number of parameter templates (chosen freely as a hyperparameter) and α^(i), a k-dimensional vector, is the coefficients of layer i. FIG0 (left and middle) illustrates the difference between networks trained with and without our method. This relaxation allows for coefficients and parameter templates to be (jointly) optimized with gradient-based methods, yielding negligible extra computational cost, with a single constraint that only layers with the same parameter sizes can share templates. Note that constraining coefficients α^(i) to be one-hot vectors leads to hard sharing schemes, at the cost of non-differentiability. Having k as a free parameter decouples the number of parameters in the network from its depth. Typically, L convolutional layers with constant channel and kernel sizes C, K have O(L C² K²) total parameters. Our soft sharing scheme changes the total number of parameters to O(kL + k C² K²) = O(k C² K²). Sections 4.1 and 4.2 show that we can decrease the parameter count of standard models without significantly impacting accuracy, or simply attain higher accuracy with k = L. In the next two subsections, we discuss two consequences of the linearity of the equation above. First, it enables alternative interpretations of our method. Second, and a major advantage, as is the case in many linear relaxations of integer problems, we are able to extract hard sharing schemes in practice, and consequently detect implicit self-loops in a CNN trained with our method. For layers i that are linear in W^(i) (e.g. matrix multiplication, convolution), we can view our method as learning template layers which are shared among a network. More specifically, for a convolutional layer U^(i)(X) = W^(i) * X, and considering the equation above: U^(i)(X) = (Σ_{j=1}^{k} α_j^(i) T^(j)) * X = Σ_{j=1}^{k} α_j^(i) (T^(j) * X), where T^(j) * X, the result of a convolution with filter set T^(j), can be seen as the output of a template layer with individual parameters T^(j). Such layers can be seen as global feature extractors, and coefficients α^(i) determine which features are relevant for the i'th computation of a network. This is illustrated in FIG0 (right diagram). This view gives a clear connection between coefficients α and the network's structure.
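To make the scheme concrete, the following is a minimal PyTorch sketch of a convolutional layer whose weights are a learned linear combination of templates from a shared bank. The class names, initialisation scale, and shapes are our own illustrative choices, not the authors' code:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemplateBank(nn.Module):
        # Global bank of k parameter templates, each shaped like one conv kernel.
        def __init__(self, num_templates, channels, kernel_size):
            super().__init__()
            self.templates = nn.Parameter(
                0.01 * torch.randn(num_templates, channels, channels,
                                   kernel_size, kernel_size))

    class SoftSharedConv2d(nn.Module):
        # W^(i) = sum_j alpha_j^(i) T^(j): weights are a learned mix of templates.
        def __init__(self, bank):
            super().__init__()
            self.bank = bank
            k = bank.templates.shape[0]
            self.alpha = nn.Parameter(torch.randn(k))  # per-layer coefficients

        def forward(self, x):
            # Weighted combination of templates; fully differentiable in both
            # the coefficients alpha and the templates themselves.
            weight = torch.einsum('j,joikl->oikl', self.alpha, self.bank.templates)
            return F.conv2d(x, weight, padding=weight.shape[-1] // 2)

For instance, bank = TemplateBank(4, 64, 3) followed by six SoftSharedConv2d(bank) layers mirrors one sharing group with six layers and four templates, of the kind used later in the experiments.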
Having α^(i) = α^(i+2) implies W^(i) = W^(i+2), and hence layers i and i + 2 are functionally equivalent. Such a network can be folded to generate an equivalent model with two layers and a self-loop, an explicitly recurrent network. While this is also possible for networks without parameter sharing, a learned alignment of C²K² parameters is required (unlikely in practice), instead of aligning only k ≤ L parameters. To identify which layers in a network perform approximately the same operation, we can simply check whether their coefficients are similar. We can condense this information for all pairs of layers i, j in a similarity matrix S, where S_{i,j} = s(α^(i), α^(j)) for some similarity measure s. For networks with normalization layers, the network's output is invariant to weight rescaling. In this setting, a natural measure is s(α^(i), α^(j)) = |⟨α^(i), α^(j)⟩| / (‖α^(i)‖₂ ‖α^(j)‖₂) (absolute value of cosine similarity), since it possesses this same property. We call S the layer similarity matrix (LSM). Figure 2 illustrates, and Section 4.3 shows experimentally, how it can be used to extract recurrent loops from trained CNNs. While structure might emerge naturally, having a bias towards more structured (recurrent) models might be desirable. In this case, we can add a recurrence regularizer to the training objective, pushing parameters to values which result in more structure. For example, we can add the negative of the sum of elements of the LSM: L_R = L − λ_R Σ_{i,j} S_{i,j}, where L is the original objective. The larger λ_R is, the closer the elements of S will be to 1. In the extreme case, this regularizer will push all elements in S to 1, resulting in a network with a single layer and a self-loop. We begin by training variants of standard models with soft parameter sharing, observing that it can offer parameter savings with little impact on performance, or increase performance at the same parameter count. Section 4.3 demonstrates conversion of a trained model into explicitly recurrent form. We then examine synthetic tasks (Section 4.4), where parameter sharing improves generalization. Appendix B contains details on the initialization for the coefficients α. The CIFAR-10 and CIFAR-100 datasets BID20 are composed of 60,000 colored 32×32 images, labeled among 10 and 100 classes respectively, and split into 50,000 and 10,000 examples for training and testing. We pre-process the training set with channel-wise normalization, and use horizontal flips and random crops for data augmentation, following BID11. Using Wide ResNets (WRN) BID41 as a base model, we train networks with the proposed soft parameter sharing method. Since convolution layers have different numbers of channels and kernel sizes throughout the network, we create 3 layer groups and only share templates among layers in the same group. More specifically, WRNs for CIFAR consist of 3 stages whose inputs and outputs mostly have a constant number of channels (C, 2C and 4C, for some C). Each stage contains (L−4)/3 layers for a network with depth L, hence we group layers in the same stage together, except for the first two, a residual block whose input has a different number of channels. Thus, all layers except for the first 2 in each stage perform parameter sharing (illustrated in the left diagram of Figure 4). Having k templates per group means that (L−4)/3 − 2 convolution layers share k parameter templates. We denote by SWRN-L-w-k a WRN with L layers, widen factor w and k parameter templates per group (trained with our method).
Setting k = (L−4)/3 − 2 means we have one parameter template per layer, and hence no parameter reduction. We denote SWRN-L-w (thus omitting k) as a model in this setting. Following BID41, we train each model for 200 epochs with SGD and Nesterov momentum of 0.9 and a batch size of 128. The learning rate is initially set to 0.1 and decays by a factor of 5 at epochs 60, 120 and 160. We also apply weight decay of 5 × 10⁻⁴ on all parameters except for the coefficients α. Table 1: Test error (%) on CIFAR-10 and CIFAR-100. SWRN 28-10, the result of training a WRN 28-10 with our method and one template per layer, significantly outperforms the base model, suggesting that our method aids optimization (both models have the same capacity). SWRN 28-10-1, with a single template per sharing group, performs close to WRN 28-10 while having significantly fewer parameters and capacity. * indicates models trained with dropout p = 0.3 BID34. Results are averages of 5 runs. TAB1 presents results. Networks trained with our method yield superior performance in the setting with no parameter reduction: SWRN 28-10 presents 6.5% and 2.5% lower relative test errors on C-10 and C-100, compared to the base WRN 28-10 model. With fewer templates than layers, SWRN 28-10-1 (all 6 layers of each group perform the same operation) performs virtually the same as the base WRN 28-10 network, while having 1/3 of its parameters. On CIFAR-10, parameter reduction (k = 2) is beneficial to test performance: the best performance is achieved by SWRN 28-18-2 (3.43% test error), outperforming the ResNeXt-29 16x64 model BID37, while having fewer parameters (55M against 68M) and no bottleneck layers. FIG2 shows that our parameter sharing scheme uniformly improves accuracy-parameter efficiency; compare the WRN model family (solid red) to our SWRN models (dotted red). Table 4 presents a comparison between our method and neural architecture search (NAS) techniques BID46 BID38 BID25 BID28 on CIFAR-10 - results differ from Table 2 solely due to cutout BID2, which is commonly used in NAS literature; NAS results are quoted from their respective papers. Our method outperforms architectures discovered by recent NAS algorithms, such as DARTS, SNAS BID38 and ENAS BID25, while having similarly low training cost. We achieve 2.69% test error after training less than 10 hours on a single NVIDIA GTX 1080 Ti. This accuracy is only bested by NAS techniques which are several orders of magnitude more expensive to train. Being based on Wide ResNets, our models do, admittedly, have more parameters. Comparison to recent NAS algorithms, such as DARTS and SNAS, is particularly interesting as our method, though motivated differently, bears some notable similarities. Specifically, all three methods are gradient-based and use an extra set of parameters (architecture parameters in DARTS and SNAS) to perform some kind of soft selection (over operations/paths in DARTS/SNAS; over templates in our method). As Section 4.3 will show, our learned template coefficients α can often be used to transform our networks into an explicitly recurrent form - a discovered CNN-RNN hybrid. To the extent that our method can be interpreted as a form of architecture search, it might be complementary to standard NAS methods. While NAS methods typically search over operations (e.g. activation functions; 3 × 3 or 5 × 5 convolutions; non-separable, separable, or grouped filters; dilation; pooling), our soft parameter sharing can be seen as a search over recurrent patterns (which layer processes the output at each step).
These seem like orthogonal aspects of neural architectures, both of which may be worth examining in an expanded search space. When using SGD to drive architecture search, these aspects take on distinct forms at the implementation level: soft parameter sharing across layers (our method) vs hard parameter sharing across networks (recent NAS methods). We use the ILSVRC 2012 dataset BID30 as a stronger test of our method. It is composed of 1.2M training and 50,000 validation images, drawn from 1000 classes. We follow BID8, as in BID41; BID13; BID37, and report Top-1 and Top-5 errors on the validation set using single 224 × 224 crops. For this experiment, we use WRN 50-2 as a base model, and train it with soft sharing and no parameter reduction. Having bottleneck blocks, this model presents a less uniform number of channels of layer inputs and outputs. To apply our method, we group convolutions in 12 groups: for each of the 4 stages in a WRN 50-2, we create 3 groups, one for each type of layer in a bottleneck unit (C → B, B → B and B → C channel mappings, for bottleneck B). Without any change in hyperparameters, the network trained with our method outperforms the base model and also deeper models such as DenseNets (though using more parameters), and performs close to ResNet-200, a model with four times the number of layers and a similar parameter count. See TAB2. Figure 4: Extracting implicit recurrences from a SWRN 28-10-4. Left: Illustration of the stages of a SWRN-28-10-4 (residual connections omitted for clarity). The first two layers contain individual parameter sets, while the other six share four templates. All 3 stages of the network follow this structure. Middle: LSM for each stage after training on CIFAR-10, with many elements close to 1. Hard sharing schemes can be created for pairs with large similarity by tying their coefficients (or, equivalently, their effective weights). Right: Folding stages 2 and 3 leads to self-loops and a CNN with recurrent connections - the LSM for stage 2 is a repetition of 2 rows/columns, and folding decreases the number of parameters. Results on CIFAR suggest that training networks with few parameter templates k in our soft sharing scheme results in performance comparable to the base models, which have significantly more parameters. The lower k is, the larger we should expect the layer similarities to be: in the extreme case where k = 1, all layers in a sharing scheme have similarity 1, and can be folded into a single layer with a self-loop. For the case k > 1, there is no trivial way to fold the network, as layer similarities depend on the learned coefficients. We can inspect the model's layer similarity matrix (LSM) and see if it presents implicit recurrences: a form of repetition in the rows/columns of the LSM. Surprisingly, we observe that rich structures emerge naturally in networks trained with soft parameter sharing, even without the recurrence regularizer. Figure 4 shows the per-stage LSM for a CIFAR-trained SWRN 28-10-4. Here, the six layers of its stage-2 block can be folded into a loop of two layers, leading to an error increase of only 0.02%. Appendix A contains an additional example of network folding, diversity of LSM patterns across different runs, and an epoch-wise evolution of the LSM, showing that many patterns are observable after as few as 5 epochs of training.
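The layer similarity matrix and the recurrence regularizer of Section 3 are straightforward to compute once the per-layer coefficients are stacked into an (L, k) tensor; a minimal PyTorch sketch (our own illustration, not the authors' code):

    import torch

    def layer_similarity_matrix(alphas):
        # alphas: (L, k) tensor stacking the per-layer coefficients alpha^(i).
        a = alphas / alphas.norm(dim=1, keepdim=True)
        return (a @ a.t()).abs()  # S[i, j] = |cos(alpha^(i), alpha^(j))|

    def recurrence_regularizer(alphas, lam):
        # Negative sum of LSM entries, scaled by lambda_R; add this to the loss.
        return -lam * layer_similarity_matrix(alphas).sum()

Folding then amounts to tying the coefficients (equivalently, the effective weights) of layer pairs whose LSM entry is close to 1, as in Figure 4.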
While the propensity of our parameter sharing scheme to encourage learning of recurrent networks is a useful parameter reduction tool, we would also like to leverage it for qualitative advantages over standard CNNs. On tasks for which a natural recurrent algorithm exists, does training CNNs with soft parameter sharing lead to better extrapolation? To answer this, we set up a synthetic algorithmic task: computing shortest paths. Examples are 32 × 32 grids containing two query points and randomly (with probability 0.1) placed obstacles. The objective is to indicate which grid points belong to a shortest path between the query points. We use curriculum learning for training, allowing us to observe how well each model adapts to more difficult examples as training phases progress. Moreover, for this task curriculum learning causes faster learning and superior performance for all trained models. FIG4 (a): Generated example for the synthetic shortest paths task. Blue pixels indicate the query points; red pixels represent obstacles, and white pixels are points in a shortest path (in terms of Manhattan distance) between query pixels. The task consists of predicting the white pixels (shortest paths) from the blue and red ones (queries and obstacles). Training consists of 5 curriculum phases, each one containing 5000 examples. The maximum allowed distance between the two query points increases at each phase, thus increasing difficulty. In the first phase, each query point is within a 5 × 5 grid around the other query point, and the grid size increases by 2 on each side at each phase, yielding a final grid size of 21 × 21 at phase 5. We train a CNN, a CNN with soft parameter sharing and one template per layer (SCNN), and an SCNN with recurrence regularizer λ_R = 0.01. Each model trains for 50 epochs per phase with Adam and a fixed learning rate of 0.01. As classes are heavily unbalanced and the balance itself changes during phases, we compare F1 scores instead of classification error. Each model starts with a 1 × 1 convolution, mapping the 2 input channels to 32 output channels. Next, there are 20 channel-preserving 3 × 3 convolutions, followed by a final 1 × 1 convolution that maps 32 channels to 1. Each of the 20 3 × 3 convolutions is followed by batch normalization BID16, a ReLU non-linearity BID24, and has a 1-skip connection. FIG4 shows one example from our generated dataset and the training curves for the 3 trained models: the SCNN not only outperforms the CNN, but adapts better to harder examples at new curriculum phases. The SCNN is also advantaged over a more RNN-like model: with the recurrence regularizer λ_R = 0.01, all entries in the LSM quickly converge to 1, as in an RNN. This leads to faster learning during the first phase, but presents issues in adapting to difficulty changes in later phases. In this work, we take a step toward more modular and compact CNNs by extracting recurrences from feed-forward models where parameters are shared among layers. Experimentally, parameter sharing yields models with lower error on CIFAR and ImageNet, and can be used for parameter reduction by training in a regime with fewer parameter templates than layers. Moreover, we observe that parameter sharing often leads to different layers being functionally equivalent after training, enabling us to collapse them into recurrent blocks. Results on an algorithmic task suggest that our shared parameter structure beneficially biases extrapolation.
We gain a more flexible form of behavior typically attributed to RNNs, as our networks adapt better to out-of-domain examples. Our form of architecture discovery is also competitive with neural architecture search (NAS) algorithms, while having a smaller training cost than state-of-the-art gradient-based NAS. As the only requirement for our method is for a network to have groups of layers with matching parameter sizes, it can be applied to a plethora of CNN model families, making it a general technique with negligible computational cost. We hope to raise questions regarding the rigid definitions of CNNs and RNNs, and increase interest in models that fall between these definitions. Adapting our method for models with non-uniform layer parameter sizes BID13 BID45 might be of particular future interest. A ADDITIONAL RESULTS FOR IMPLICIT RECURRENCES Section 4.3 presents an example of implicit recurrences and folding of a SWRN 28-10-4 trained on CIFAR-10, where, for example, the last 6 layers in the second stage of the network fold into 2 layers with a self-loop. Figure 6 presents an additional example, where non-trivial recurrences (unlike the one in Figure 4) emerge naturally, resulting in a model that is rich in structure. Figure 6: LSMs of a 40-layer SWRN ((40 − 4)/3 − 2 = 10 sharing layers per stage) trained with soft parameter sharing on CIFAR-10. Each stage (originally with 12 layers - the first two do not participate in parameter sharing) can be folded to yield blocks with complex recurrences. For clarity, we use colors to indicate the computational flow: red takes precedence over green, which in turn has precedence over blue. Colored paths are only taken once per stage. Although not trivial to see, recurrences in each stage's folded form are determined by row/column repetitions in the respective Layer Similarity Matrix. For example, for stage 2 we have S5,3 ≈ S6,4 ≈ 1, meaning that layers 3, 4, 5 and 6 can be folded into layers 3 and 4 with a loop (captured by the red edge). The same holds for S7,1, S8,2, S9,3 and S10,4, hence after the loop with layers 3 and 4, the flow returns to layer 1 and goes all the way to layer 4, which generates the stage's output. Even though there is an approximation when folding the network (in this example, we are tying layers with similarity close to 0.8), the impact on the test error is less than 0.3%. Also note that the folded model has a total of 24 layers (20 in the stage diagrams, plus 4 which are not shown, corresponding to the first layer of the network and three 1 × 1 convolutions in skip-connections), instead of the original 40. Figure 7: LSMs of a SWRN 40-8-8 (composed of 3 stages, each with 10 layers sharing 8 templates) trained on CIFAR-10 for 5 runs with different random seeds. Although the LSMs differ across different runs, hard parameter sharing can be observed in all cases (off-diagonal elements close to 1, depicted by white), characterizing implicit recurrences which would enable network folding. Moreover, the underlying structure is similar across runs, with hard sharing typically happening among layers i and i + 2, leading to a "chessboard" pattern. During our initial experiments, we explored different initializations for the coefficients α of each layer, and observed that using an orthogonal initialization BID31 resulted in superior performance compared to uniform or normal initialization schemes. Denote A as the L × k matrix (L is the number of layers sharing parameters and k the number of templates) with each i'th row containing the coefficients of the i'th layer α^(i).
We initialize it such that AᵀA = I, leading to ⟨α^(i), α^(i)⟩ = 1 for all i, and ⟨α^(i), α^(j)⟩ = 0 for all i ≠ j. While our choice for this is mostly empirical, we believe that there is likely a connection with the motivation for using orthogonal initialization for RNNs. Moreover, we discovered that other initialization options for A work similarly to the orthogonal one. More specifically, either initializing A with the identity matrix when L = k (which naturally leads to AᵀA = I) or enforcing some sparsity (initialize A with a uniform or normal distribution and randomly setting half of its entries to zero) performs similarly to the orthogonal initialization in a consistent manner. We believe the sparse initialization to be the simplest one, as each coefficient α can be initialized independently. Finally, note that having AᵀA = I results in the Layer Similarity Matrix also being the identity at initialization (check that S_{i,j} = |(AᵀA)_{i,j}| / (‖α^(i)‖₂ ‖α^(j)‖₂), so if (AᵀA)_{i,j} = 1, then S_{i,j} = 1, and the same holds for 0). Surprisingly, even though the orthogonal initialization leads to an LSM that has no structure in the beginning of training, the rich patterns that we observe still emerge naturally after optimization.
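The two simplest options above are easy to reproduce; a small PyTorch sketch, with illustrative shapes (the exact L and k depend on the sharing group):

    import torch
    import torch.nn as nn

    L, k = 10, 8                   # layers sharing parameters, templates
    A = torch.empty(L, k)
    nn.init.orthogonal_(A)         # semi-orthogonal init: A^T A = I when L >= k

    # Sparse alternative described above: uniform/normal init with half
    # of the entries zeroed uniformly at random.
    B = torch.randn(L, k)
    B[torch.rand(L, k) < 0.5] = 0.0

Each row of A (or B) then initialises one layer's coefficient vector α^(i).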
We propose a method that enables CNN folding to create recurrent connections
1,150
scitldr
Gradient clipping is a widely-used technique in the training of deep networks, and is generally motivated from an optimisation lens: informally, it controls the dynamics of iterates, thus enhancing the rate of convergence to a local minimum. This intuition has been made precise in a line of recent works, which show that suitable clipping can yield significantly faster convergence than vanilla gradient descent. In this paper, we propose a new lens for studying gradient clipping, namely, robustness: informally, one expects clipping to provide robustness to noise, since one does not overly trust any single sample. Surprisingly, we prove that for the common problem of label noise in classification, standard gradient clipping does not in general provide robustness. On the other hand, we show that a simple variant of gradient clipping is provably robust, and corresponds to suitably modifying the underlying loss function. This yields a simple, noise-robust alternative to the standard cross-entropy loss which performs well empirically. In this paper, we propose a new lens with which to study gradient clipping, namely, robustness: intuitively, clipping the gradient prevents over-confident descent steps, which is plausibly beneficial in the presence of noise. Given this intuition, our interest is whether gradient clipping can mitigate the problem of label noise in classification, which has received significant recent interest. We study this question, and provide three main contributions: (a) we show that gradient clipping alone does not endow label noise robustness to even simple models. Specifically, we show that under stochastic gradient descent with linear models, gradient clipping is related to optimising a "Huberised" loss (Lemma 1, 2). While such Huberised losses preserve classification calibration (Lemma 3), they are not robust to label noise (Proposition 4). (b) we propose composite loss-based gradient clipping, a variant that does have label noise robustness. Specifically, for losses comprising a base loss composed with a link function (e.g., softmax cross-entropy), we only clip the contribution of the base loss. The resulting partially Huberised loss preserves classification calibration (Lemma 6), while being robust to label noise (Proposition 7). (c) we empirically verify that on both synthetic and real-world datasets, partially Huberised versions of standard losses (e.g., softmax cross-entropy) perform well in the presence of label noise (§5). To illustrate the difference between standard and composite loss-based gradient clipping in (a) and (b), consider the pervasive softmax cross-entropy loss, viz. the logistic loss for binary classification. Recall that the logistic loss comprises the log-loss (or cross-entropy) composed with a sigmoid link. The Huberised loss arises from loss-based gradient clipping, and linearises the entire logistic loss beyond a threshold. The partially Huberised loss arises from composite loss-based gradient clipping, and linearises only the base loss (i.e., log-loss) beyond a threshold (right panel). Once combined with the sigmoid link, the overall partially Huberised loss asymptotically saturates. Our analysis in §3, §4 establishes that Huberised losses are not robust to label noise, but partially Huberised losses are (Proposition 4, 7). In (a), we relate gradient clipping to loss "Huberisation", which linearises the logistic loss when it exceeds a fixed threshold.
In (b), we introduce composite loss-based gradient clipping, which is equivalent to a "partial loss Huberisation" that linearises only the cross-entropy loss but leaves the sigmoid link untouched. Figure 1 illustrates the Huberised and partially Huberised logistic loss. We provide background on gradient clipping, loss functions for classification, and label noise. Gradient clipping. Consider a supervised learning task over instances X and labels Y, where we have a family of models indexed by θ ∈ Θ, and the quality of a particular model is measured by a loss function ℓ_θ: X × Y → R. The gradient for a mini-batch {(x_n, y_n)}_{n=1}^N is g(θ) = (1/N) Σ_{n=1}^N ∇ℓ_θ(x_n, y_n). One may instead compute the clipped gradient, which for user-specified threshold τ > 0 is ḡ_τ(θ) = clip_τ(g(θ)), where clip_τ(w) = τ · w / ‖w‖₂ if ‖w‖₂ ≥ τ, and clip_τ(w) = w otherwise. Employing ḡ_τ(θ) for optimisation corresponds to clipped gradient descent. This is closely related to normalised gradient descent (NGD), wherein one employs g(θ)/‖g(θ)‖₂. Prior work showed that NGD can lead to convergence for a wider class of functions than standard gradient descent, that NGD can escape saddle points for non-convex problems, and that gradient clipping can lead to accelerated convergence over gradient descent. Gradient clipping has also been explored for privacy, motivated by it preventing any single instance from dominating parameter updates. Loss functions. In binary classification, one observes samples from a distribution D over X × {±1}, and seeks a predictor f: X → R with low risk R(f) := E_{(x,y)∼D}[ℓ(y, f(x))] according to a loss which, in an overload of notation, we denote ℓ: {±1} × R → R₊. For the zero-one loss ℓ₀₁(y, f) = ⟦y · f < 0⟧, R(f) is known as the misclassification risk. Rather than directly using ℓ₀₁, for computational convenience one often employs a margin loss ℓ(y, f) = φ(y · f), for convex φ: R → R₊. We say that φ is classification calibrated if driving the excess risk over the Bayes-optimal predictor for φ to zero also drives the excess risk for ℓ₀₁ to zero; that is, minimising the φ-risk is statistically consistent for classification. A canonical example is the hinge loss φ(z) = [1 − z]₊. We call φ admissible if it is "well-behaved" in the sense of being bounded from below, strictly convex, continuously differentiable, non-increasing, and classification calibrated. We say that φ is proper composite (or for brevity composite) if its minimising scores can be interpreted as probabilities. "Composite" here refers to such losses comprising a base loss ϕ composed with an invertible link function F: R → [0, 1]. While φ accepts as input real-valued scores (e.g., the final layer logits of a neural network), these are internally converted to probabilities via F. A canonical example is the logistic loss φ(z) = −log F(z) with F: z ↦ σ(z) for sigmoid σ(z) := (1 + e^{−z})^{−1}. The multiclass analogue is the softmax cross-entropy loss, wherein the sigmoid becomes a softmax. See Appendix B for a more technical discussion. Learning under label noise. In classification under label noise, one has samples from a distribution D̄ where P_D̄(y | x) is a noisy version of P_D(y | x), e.g., all labels are corrupted with a fixed constant probability. The goal remains to ensure low risk with respect to the clean D. This problem has a long history in statistics, and has emerged as a topic of recent interest in machine learning.
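As a reference point for the variants studied later, here is a minimal Python sketch (our own, not the paper's code) of the clipping operator clip_τ defined above:

    import torch

    def clip_by_norm(w, tau):
        # clip_tau(w): rescale w to norm tau if ||w||_2 >= tau, else leave it.
        norm = w.norm(p=2)
        return tau * w / norm if norm >= tau else w

For mini-batch training, torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=tau) applies the same rescaling to a model's accumulated gradients.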
There has been particular interest in the problem of learning under symmetric label noise, wherein all instances have a constant probability of their labels being flipped uniformly to any of the other classes. Other works proposed frameworks for analysing the more general setting of class-conditional label noise, with proposed remedies ranging from robust losses (e.g., Takenouchi et al.) to abstention. We first show that gradient clipping in general does not endow robustness to label noise, even in simple settings. Specifically, we establish that stochastic gradient clipping with linear models is equivalent to modifying the underlying loss (Lemma 1). This modified loss is closely related to a Huberised loss (Lemma 2), which is equivalent to loss-based gradient clipping (L-gradient clipping). Unfortunately, Huberised losses, and thus L-gradient clipping, are not robust to label noise (Proposition 4). Consider a binary classification problem, with Y = {±1}. Suppose we use a scoring model s_θ(x) for θ ∈ Θ with margin m_θ(x, y) := y · s_θ(x), and margin loss ℓ_θ(x, y) := φ(m_θ(x, y)). Now suppose we perform stochastic gradient descent (i.e., we use N = 1 in the mini-batch gradient above) and use a linear scorer s_θ(x) = θᵀx. Here, gradient clipping is equivalent to modifying the loss as follows. Lemma 1. Pick any admissible margin loss φ, and τ > 0. Then, for loss ℓ_θ(x, y) := φ(m_θ(x, y)) with linear scorer, the clipped loss gradient is equivalent to the gradient of a modified loss ℓ̄, which replaces φ by its linearisation whenever |φ'(m_θ(x, y))| exceeds τ · (‖∇m_θ(x, y)‖₂)^(−1). The loss ℓ̄ is intuitive: if the derivative of the original loss φ exceeds a certain effective clipping threshold, we replace φ with a linear loss function. This effective threshold τ · (‖∇m_θ(x, y)‖₂)^(−1) takes into account the instance-dependent margin gradient, viz. ‖x‖₂ for linear s_θ(x). We will comment in §4.4 as to the properties of gradient clipping in more general scenarios. For the moment, we note that Lemma 1 is closely related to the "Huber loss" h_τ(z) = ½z² if |z| ≤ τ, and τ|z| − ½τ² otherwise. This replaces the extremes of the square loss with the absolute loss. Both the absolute and Huber losses are widely employed in robust regression. Note that the loss of Lemma 1 slightly differs from the Huber loss, as its effective clipping threshold is instance-dependent. One may nonetheless arrive at Huber-style losses via a variant of gradient clipping; this connection will prove useful for our subsequent analysis. As defined above, gradient clipping clips the full gradient, which includes the margin gradient. Consider now the following loss-based gradient clipping (L-gradient clipping), wherein we clip only the contribution arising from the loss: clip_τ(φ'(m_θ(x, y))) · ∇m_θ(x, y). Compared to full gradient clipping, we effectively treat the margin gradient norm ‖∇m_θ(x, y)‖₂ as constant across instances, and focus on bounding the loss derivative. The latter may be a significant component in the gradient norm; e.g., for linear models, ‖∇m_θ(x, y)‖₂ = ‖x‖₂ is often bounded. Observe further that for linear models with ‖x‖₂ ≡ R across instances, this is a rescaled version of the clipped gradient. Interestingly, L-gradient clipping equivalently uses the following "Huberised" version of the loss. Lemma 2. Pick any admissible margin loss φ: R → R₊ with Fenchel conjugate φ*, and τ > 0. Then, the clipped gradient above is equivalent to employing a Huberised loss function φ̃_τ such that φ̃_τ(z) = −τ · z − φ*(−τ) if z ≤ (φ')^(−1)(−τ), and φ̃_τ(z) = φ(z) otherwise. Evidently, φ̃_τ linearises φ once its derivative is sufficiently large, akin to the Huber loss. One may verify that for φ(z) = (1 − z)², one arrives exactly at the Huber loss. Example 3.1: For the logistic loss φ(z) = log(1 + e^{−z}), the Huberised loss for τ ∈ (0, 1) continues φ below z₀ = −σ^{−1}(τ) by its tangent line of slope −τ. Per Figure 1a, this linearises φ beyond a fixed threshold. See Appendix C for further illustrations.
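Concretely, a NumPy sketch of the Huberised logistic loss of Example 3.1; the closed form below is our own restatement of the tangent-line construction, which follows from the continuity argument in the proof of Lemma 2:

    import numpy as np

    def huberised_logistic(z, tau):
        # Valid for tau in (0, 1). Linearisation point: phi'(z0) = -tau.
        z0 = np.log((1.0 - tau) / tau)           # z0 = -sigma^{-1}(tau)
        phi = lambda v: np.logaddexp(0.0, -v)    # log(1 + exp(-v)), stably
        return np.where(z >= z0, phi(z), phi(z0) - tau * (z - z0))

Since φ'(z₀) = −τ at z₀ = −σ^{−1}(τ), the two pieces match in both value and derivative, so the derivative of this loss is exactly clip_τ(φ'(z)).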
The use of Huberised losses in classification is not new. However, we now provide a novel study of their label noise robustness. Before studying the effect of label noise on Huberised losses, it is apposite to consider whether they are suitable for use even in the absence of noise. One way to formalise this is to ask whether the losses maintain classification calibration, in the sense of §2. This is a minimal requirement on a loss to be useful for classification. One may further ask whether the losses preserve class-probabilities, i.e., are proper composite in the sense of §2. It is desirable to preserve this key trait of losses such as the logistic loss. The following clarifies both points. Lemma 3. Pick any admissible margin loss φ and τ > 0. Then, the Huberised loss φ̃_τ of Lemma 2 is classification calibrated. If φ is proper composite and τ ≥ −φ'(0), then φ̃_τ is also proper composite. L-gradient clipping is thus benign for classification, generalising Rosset & Zhu (2007, Section 3.4), which was for the square-hinge loss. Interestingly, for composite φ and small τ, the proof reveals that φ̃_τ has a non-invertible link function. Intuitively, such φ̃_τ are effectively linear, and linear losses are unsuited for estimating probabilities. We now turn to our central object of inquiry: does gradient clipping endow robustness to label noise? To study this, we consider L-gradient clipping, which as noted above is a special case of gradient clipping under linear models with constant ‖∇m_θ(x, y)‖₂. Since L-gradient clipping is in turn equivalent to using a Huberised loss, we may study the robustness properties of this loss. Surprisingly, when our loss is convex (e.g., softmax cross-entropy), Huberised losses are not robust to even very simple forms of label noise. Essentially, since these losses are still convex, they can still be affected by errant outlier observations. Formally, using a result of Long & Servedio (2010) (see Appendix F.1), we arrive at the following. Proposition 4. Pick any admissible margin loss φ and τ > 0. Then, there exists a separable distribution for which the optimal linear classifier under φ̃_τ is equivalent to random guessing under symmetric noise. To situate Proposition 4 in a broader context, we note that for regression problems, it is well known that the Huber loss is susceptible to "high leverage" outliers, i.e., extremal instances which dominate the optimal solution. Proposition 4 complements these results for the case of label noise in classification. Given that gradient clipping does not endow label noise robustness, how else might we proceed? In a regression context, the outlier vulnerability of the Huber loss can be addressed by using a trimmed average of the loss. Such ideas have been successfully explored for label noise problems. We will however demonstrate that a simple variant of clipping yields a loss that does possess label noise robustness. We now show that noise robustness can be achieved with CL-gradient clipping, a variant wherein for composite losses (e.g., softmax cross-entropy), we perform partial Huberisation of the base loss only. Consider a composite margin loss ℓ_θ(x, y), where φ = ϕ ∘ F for some base loss ϕ and invertible link F: R → [0, 1]; e.g., the logistic loss has ϕ(u) = −log u and F(z) = σ(z). We can interpret p_θ(x, y) := F(m_θ(x, y)) as a probability estimate; e.g., p_θ(x, 1) = σ(m_θ(x, 1)) is the probability of x being positive.
Now, rewriting ℓ_θ(x, y) = ϕ(p_θ(x, y)), we may express the loss gradient as ∇ℓ_θ(x, y) = ϕ'(p_θ(x, y)) · F'(m_θ(x, y)) · ∇m_θ(x, y). L-gradient clipping above was defined as clipping only φ' = (ϕ ∘ F)', which ensures that the resulting Huberised loss is Lipschitz. Typically, however, F is already Lipschitz; e.g., this is the case for the commonly used sigmoid or softmax link. This suggests the following composite loss-based gradient clipping (CL-gradient clipping), wherein we only clip the derivative for the base loss ϕ: clip_τ(ϕ'(p_θ(x, y))) · F'(m_θ(x, y)) · ∇m_θ(x, y). As before, CL-gradient clipping corresponds to optimising a new, "partially Huberised" loss. Lemma 5. Pick any admissible, composite margin loss φ = ϕ ∘ F, and τ > 0. Then, the clipped gradient above is equivalent to employing a partially Huberised loss φ̃_τ = ϕ̃_τ ∘ F, where ϕ̃_τ(u) = −τ · u − ϕ*(−τ) for u ≤ (ϕ')^(−1)(−τ), and ϕ(u) otherwise. Compared to the Huberised loss, the partially Huberised loss only linearises the base loss ϕ, while retaining the link F. Consequently, the composite loss φ̃_τ behaves like the link beyond a certain threshold, and will thus be bounded. Example 4.1: For the logistic loss, φ(z) = ϕ(F(z)) with ϕ(u) = −log u and F(z) = σ(z), the partially Huberised loss is φ̃_τ(z) = −τ · σ(z) + log τ + 1 if σ(z) ≤ 1/τ, and −log σ(z) otherwise. Note that partial Huberisation readily generalises to a multi-class setting. Indeed, suppose we have softmax probability estimates p_θ(x, y) ∝ exp(m_θ(x, y)). Then, whereas the softmax cross-entropy employs ℓ_θ(x, y) = −log p_θ(x, y), our partially Huberised softmax cross-entropy for τ > 1 is ℓ̃_τ(x, y; θ) = −τ · p_θ(x, y) + log τ + 1 if p_θ(x, y) ≤ 1/τ, and −log p_θ(x, y) otherwise. Following §3.2, we establish that CL-gradient clipping is always benign from a classification perspective, and provided τ is sufficiently large, from a probability estimation perspective as well. As before, we do this by exploiting the equivalence of CL-gradient clipping to a partially Huberised loss. Lemma 6. Pick any admissible composite margin loss φ = ϕ ∘ F and τ > 0. Then, the loss φ̃_τ above is classification calibrated. If further τ ≥ −ϕ'(1/2), then φ̃_τ is also proper composite. We now show that partially Huberised losses have an important advantage over Huberised losses: under symmetric label noise, the optimal solution on the clean distribution cannot be too far away from the optimal solution on the noisy distribution. This implies that label noise (such as that considered in Proposition 4) cannot have an excessively deleterious influence on the loss. Proposition 7. Pick any proper loss ϕ and τ > 0. Let f* be the risk minimiser of φ̃_τ on the clean distribution. For any non-trivial level of symmetric label noise, let reg̃_τ(f*) denote the excess risk of f* with respect to φ̃_τ on the noisy distribution. Then, there exists C > 0, depending on τ and the noise level, such that reg̃_τ(f*) ≤ C. Note that by van Rooyen et al. (2015, Proposition 4), it is impossible for the above bound to hold with C = 0 without using a linear loss. Nonetheless, by virtue of partially Huberised losses being partially linear, we are able to bound the degradation under label corruption. The saturating behaviour of the partially Huberised loss also implies robustness to outliers in feature space; see Appendix E. The partially Huberised log loss above can be related to a family of losses studied in several works: the generalised cross-entropy ϕ_α(u) = (1 − u^α)/α for α ∈ (0, 1]; see Figure 3 for an illustration of ϕ_α. There are two similarities between our proposal and ϕ_α. First, both proposals interpolate between the log and linear losses: when α → 0⁺, ϕ_α approaches the log loss, and when α = 1, ϕ_α equals the linear loss. Second, both proposals modify the base loss, allowing the link F to be chosen independently. In particular, one may use the heavy-tailed link F of Amid et al. (2019b) in conjunction with our partially Huberised loss.
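For reference, a PyTorch sketch of the partially Huberised softmax cross-entropy above (PHuber-CE in §5); the function name and defaults are ours:

    import math
    import torch
    import torch.nn.functional as F

    def phuber_cross_entropy(logits, targets, tau=10.0):
        # Requires tau > 1. Linearises -log p below the threshold p = 1/tau;
        # the softmax link itself is left untouched, so the loss saturates.
        log_p = F.log_softmax(logits, dim=-1)
        log_p = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        p = log_p.exp()
        linear = -tau * p + math.log(tau) + 1.0
        return torch.where(p <= 1.0 / tau, linear, -log_p).mean()

The two pieces match in value and slope at p = 1/τ, so the gradient of the base loss is exactly clip_τ(−1/p); values of τ near 1 behave like the linear loss, while large τ recovers the standard cross-entropy.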
One difference between our proposal and ϕ_α is that the partially Huberised loss is exactly linear in a suitable region; consequently, it is guaranteed to be Lipschitz, unlike ϕ_α. This can be understood in terms of the loss gradients: for a class-probability estimate p_θ(x, y), the gradient of ϕ_α scales as p_θ(x, y)^{α−1}, whereas that of the partially Huberised loss is clip_τ(−1/p_θ(x, y)). Both gradients thus take into account whether a sample is "informative", in the sense of being poorly-predicted (p_θ(x, y) ∼ 0). Further, to guard against such samples being the result of label noise, both ensure this influence is not overwhelming, but in different ways: the generalised cross-entropy softens the influence, while still allowing it to be unbounded as p_θ(x, y) → 0. On the other hand, the partially Huberised loss enforces a hard cap on the influence. This is to be contrasted with a truncated loss also considered in that line of work, which enforces a hard cap on the loss itself, thus completely discarding poorly-predicted instances. Table 1 summarises our results, highlighting the perspective of gradient clipping as equivalently modifying the loss. Before proceeding, we make some qualifying comments. First, our analysis has assumed symmetric label noise. Often, one encounters asymmetric or instance-dependent noise. While corresponding guarantees for the linear loss may be ported over to the partially Huberised loss, they require stronger distributional assumptions. Second, Proposition 4 exhibits a specific distribution which defeats the Huberised loss under linear models. In practice, distributions may be more benign, and models are often nonlinear; Huberised losses (and thus gradient clipping) are thus unlikely to succumb as extremely to label noise as Proposition 4 suggests. The aim of §4 is however to establish that a simple modification of clipping avoids worst-case degradation, without adding significant complexity. Table 1: Summary of types of gradient clipping considered in this paper: gradient clipping, equivalent (for N = 1 and a linear scorer) to the modified loss of Lemma 1, not robust to label noise; L-gradient clipping, equivalent to the Huberised loss of Lemma 2, not robust to label noise (Proposition 4); CL-gradient clipping, equivalent to the partially Huberised loss of Lemma 5, robust to label noise (Proposition 7). We consider binary classification problems involving a labelled example (x, y), parametrised scoring function s_θ(x) with margin m(θ) := y · s_θ(x), and differentiable composite margin loss φ(z). This loss internally converts scores to probabilities p(θ) := F(m(θ)) for link function F(·), which is evaluated with some base loss ϕ; i.e., φ = ϕ ∘ F. Gradient clipping applies to the full loss, i.e., φ(m(θ)). L-gradient clipping applies only to the composite loss, leaving the score untouched; this is equivalent to using a Huberised loss. CL-gradient clipping applies only to the base loss, leaving the link untouched; this is equivalent to using a partially Huberised loss. Only the latter has a robustness guarantee under symmetric label noise.
Intuitively, the optimal τ trades off the noise-robustness of the linear loss and the gradient informativeness of the base loss (per the discussion in §4.3). Setting τ to be large tacitly assumes that one's samples are largely noise-free. We now present experiments illustrating that: (a) we may exhibit label noise scenarios that defeat a Huberised but not partially Huberised loss, confirming Propositions 4, 7, and (b) partially Huberised versions of existing losses perform well on real-world datasets subject to label noise. Synthetic datasets. Our first experiments involve two synthetic datasets, which control for confounding factors. We begin with a setting from Long & Servedio (2010), comprising a 2D linearly separable distribution. (See Appendix F.1 for an illustration.) We draw N = 1,000 random samples from this distribution, and flip each label with ρ = 45% probability. We train a linear classifier to minimise one of several losses, and evaluate the classifier's accuracy on 500 clean test samples. We compare the logistic loss with its Huberised and partially Huberised versions, using τ = 1 and τ = 2 respectively. Figure 2a presents the results over 500 independent trials. The logistic loss and its Huberised counterpart suffer significantly under noise, while the partially Huberised loss often achieves near-perfect discrimination. This confirms that in the worst case, L-gradient clipping may succumb to noise, while CL-gradient clipping performs well in the same scenario. We next consider a 1D setting based on Ding (2013, Section 3.2.3), comprising 10,000 linearly separable "inliers" and 50 "outliers". Assuming the use of a linear model parameterised by scalar θ ∈ R, we plot the empirical risk of the samples with and without the outlier observations as θ is varied. Figure 2b shows that the logistic loss and its Huberised variant are strongly affected by the outliers: their optimal solution goes from θ* = +∞ to θ* = 0. However, the partially Huberised loss is largely immune to the outliers. Appendix F contains additional synthetic experiments. Real-world datasets. We now demonstrate that partially Huberised losses perform well with deep neural networks trained on MNIST, CIFAR-10 and CIFAR-100. For MNIST, we train a LeNet using Adam with batch size N = 32, and weight decay of 10⁻³. For CIFAR-10 and CIFAR-100, we train a ResNet-50 using SGD with momentum 0.1, weight decay of 5 × 10⁻³, batch normalisation, and N = 64, 128 respectively. For each dataset, we corrupt the training labels with symmetric noise at flip probability ρ ∈ {0.0, 0.2, 0.4, 0.6}. We compare the test set accuracy of various losses combined with a softmax link. Our baseline is the cross-entropy loss (CE). Table 2: Test set accuracy (%) where the training labels are corrupted with probability ρ ∈ {0.0, 0.2, 0.4, 0.6}; mean and standard error over 3 trials; the highlighted cells are the best performing loss at a given ρ; "PHuber" refers to our partial Huberisation from §4, which is equivalent to a variant of gradient clipping. CIFAR-10: PHuber-CE 91.6 ± 0.1, 88.6 ± 0.0, 83.6 ± 0.1, 72.2 ± 0.0; PHuber-GCE τ = 10: 92.0 ± 0.1, 88.5 ± 0.1, 80.8 ± 0.1, 62.6 ± 0.2. CIFAR-100: CE 66.6 ± 1.4, 49.7 ± 0.3, 29.9 ± 0.9, 11.4 ± 0.2; CE + clipping 28.8 ± 0.1, 20.6 ± 0.4, 14.7 ± 0.6, 9.0 ± 0.4; Linear 12.1 ± 1.6, 6.6 ± 1.2, 5.7 ± 0.9, 3.6 ± 0.1; GCE 70.1 ± 0.1, 63.9 ± 0.1, 52.0 ± 0.2, 29.9 ± 0.5; PHuber-CE τ = 10: 66.2 ± 1.5, 56.2 ± 2.2, 44.4 ± 0.7, 18.5 ± 0.4; PHuber-GCE τ = 10: 69.8 ± 0.2, 64.4 ± 0.4, 52.4 ± 0.2, 31.5 ± 0.8.
As representative noise-robust losses, we consider the linear or unhinged loss, and the generalised cross-entropy (GCE) with α = 0.7, following the original proposal. We additionally assess global gradient clipping (with τ = 0.1) of the CE, which per §3 is akin to a Huberised loss. We apply our partial Huberisation to the CE ("PHuber-CE"), and the GCE ("PHuber-GCE"). The latter highlights that partial Huberisation is not tied to the cross-entropy, and is applicable even on top of existing noise-robust losses. Recall that partial Huberisation offers a choice of tuning parameter τ, similar to the α parameter in GCE, and noise-rate estimates in loss-correction techniques more generally. For each dataset, we pick τ ∈ {2, 10} (equivalently corresponding to probability thresholds 0.5 and 0.1 respectively) so as to maximise accuracy on a validation set of noisy samples with the maximal noise rate ρ = 0.6; the chosen value of τ was then used for each noise level. Tuning τ separately for each setting of the noise rate ρ can be expected to help performance, at the expense of increased computational cost. Recall also that as τ → 1, partial Huberisation mimics using the linear loss, while as τ → +∞, partial Huberisation mimics using the base loss; our hypothesis is that an intermediate τ can attain a suitable balance between noise robustness and gradient informativeness. Table 2 shows that in the noise-free case (ρ = 0.0), all methods perform comparably. However, when injecting noise, accuracy for the CE degrades dramatically. Further, gradient clipping sometimes offers improvements under high noise; however, the performance is far inferior to other losses, which is in keeping with their robustness guarantees. Indeed, the linear loss, which is provably robust to symmetric noise, generally performs well even when ρ = 0.6. However, optimisation under this loss is more challenging, since the gradient does not account for instances' importance (per §4.3). This is particularly reflected on the CIFAR-100 dataset, where this loss struggles to learn even under no noise. The GCE and partially Huberised losses do not suffer from this issue, even at high noise levels. Generally, the partially Huberised losses are competitive with or improve upon the counterpart losses they build upon. In particular, the partially Huberised CE performs much better than the CE under high noise, while the partially Huberised GCE slightly bumps up the GCE numbers on CIFAR-100. This indicates that partial Huberisation may be usefully combined with generic base losses to cope with noise. We reiterate here that our partially Huberised loss may be used in conjunction with other ideas, e.g., pruning, consensus, or abstention. We leave such exploration for future work. We established that gradient clipping by itself does not suffice to endow even simple models with label noise robustness; however, a simple variant resolves this issue. Experiments confirm that our composite loss-based gradient clipping performs well on datasets corrupted with label noise. One interesting direction for future work is to analyse the behaviour of gradient-clipping inspired losses for the more general problem of distributionally robust learning. Proof of Lemma 1. Similarly to the margin loss, for the linear loss function ℓ_lin(x, y; θ) := −m_θ(x, y), we have ∇ℓ_lin(x, y; θ) = −∇m_θ(x, y). To compute the normalised gradient, we need ‖∇ℓ_θ(x, y)‖₂ = |φ'(m_θ(x, y))| · ‖∇m_θ(x, y)‖₂ = −φ'(m_θ(x, y)) · ‖∇m_θ(x, y)‖₂, since φ'(z) < 0 by assumption that φ is admissible, and thus decreasing. Consequently, if N = 1, the clipped gradient equals ∇ℓ_θ(x, y) when −φ'(m_θ(x, y)) · ‖∇m_θ(x, y)‖₂ < τ, and −τ · ∇m_θ(x, y)/‖∇m_θ(x, y)‖₂ otherwise.
Assuming a linear scorer s(x; θ) = θᵀx, the score gradient ∇s(x; θ) = x, which is independent of θ. The clipped gradient thus corresponds to the gradient under a "Huberised" loss function ℓ̄, which equals φ(m_θ(x, y)) when −φ'(m_θ(x, y)) · ‖x‖₂ < τ, and continues φ linearly in the margin with slope −τ/‖x‖₂ otherwise. Proof of Lemma 2. Since φ is strictly convex and decreasing, it must be strictly decreasing. We have φ̃_τ'(z) = clip_τ(φ'(z)), where (φ')^(−1) exists since φ' is strictly increasing by definition of strict convexity. Now, by definition, the Huberised loss is φ̃_τ(z) = −τ · z − φ*(−τ) for z ≤ z₀ := (φ')^(−1)(−τ), and φ(z) otherwise, and so φ̃_τ'(z) = −τ for z ≤ z₀ and φ'(z) otherwise, which exactly equals clip_τ(φ'(z)). We remark here that φ̃_τ is continuous, since for any convex conjugate, φ(z) + φ*(u) ≥ u · z, with equality if and only if u = φ'(z). Plugging in u = −τ, at the intersection point z₀ = (φ')^(−1)(−τ) of the two pieces of the function, −τ · z₀ − φ*(−τ) = φ(z₀). Proof of Lemma 3. For admissible φ, the Huberised loss φ̃_τ of Lemma 2 is trivially convex, differentiable everywhere, and decreasing. In particular, we must have φ̃_τ'(0) < 0, and so φ̃_τ must be classification calibrated by Bartlett et al. (2006, Theorem 2). As an illustration, Figure 3 shows the minimiser of the conditional risk, z*(η) = argmin_z η · φ̃_τ(z) + (1 − η) · φ̃_τ(−z). Note that this quantity must be non-negative if and only if η > 1/2 for a loss to be classification calibrated. This is easily verified to be true for the Huberised logistic loss, regardless of τ. For the proper composite claim, we must show that the tentative link is strictly monotone and continuous. Continuity is immediate; for monotonicity, observe that by definition, φ̃_τ'(z) = clip_τ(φ'(z)). For brevity, let z₀ := (φ')^(−1)(−τ). Thus, the quantity of interest is the ratio φ̃_τ'(z)/φ̃_τ'(−z). Since φ is strictly convex, φ' is strictly increasing and thus invertible. Thus, the ratio φ̃_τ'(z)/φ̃_τ'(−z) is invertible, provided z₀ ≤ 0, i.e., τ ≥ −φ'(0). Consequently, φ̃_τ is proper composite when τ ≥ −φ'(0). To intuit the need for the restriction on τ, observe that by Reid & Williamson (2010, Corollary 12), the tentative link function for the loss is F̃_τ(z) = φ̃_τ'(−z) / (φ̃_τ'(z) + φ̃_τ'(−z)). When τ < −φ'(0), the above is seen to be non-invertible, as both pieces are linear around the origin, making the ratio locally constant. Proof of Proposition 4. In order to apply Long & Servedio (2010, Theorem 2), we simply need to check that the loss φ̃_τ is a convex potential in the sense of Long & Servedio (2010, Definition 1). This requires that φ̃_τ is convex, non-increasing, continuously differentiable, and asymptotes to zero (or equally, is bounded from below). Each of these is satisfied by assumption of φ being admissible. Proof of Lemma 5. By Lemma 2, we may write clip_τ(ϕ'(u)) as the derivative of a partially Huberised base loss ϕ̃_τ given by ϕ̃_τ(u) = −τ · u − ϕ*(−τ) for u ≤ u₀ := (ϕ')^(−1)(−τ), and ϕ(u) otherwise. This induces a composite margin loss φ̃_τ = ϕ̃_τ ∘ F, and define ℓ̃_τ(x, y; θ) := φ̃_τ(m_θ(x, y)). The gradient under this loss is ∇ℓ̃_τ(x, y; θ) = clip_τ(ϕ'(p_θ(x, y))) · F'(m_θ(x, y)) · ∇m_θ(x, y). Thus, CL-gradient clipping is equivalent to using the loss ℓ̃_τ. Proof of Lemma 6. We proceed in a similar manner to Lemma 3: to show that the loss is proper composite, we must establish that its tentative link is invertible. By Reid & Williamson (2010, Corollary 14), for the margin loss φ to be proper composite with link F, it must be true that F satisfies the symmetry condition F(−z) = 1 − F(z). We thus have φ̃_τ(−z) = ϕ̃_τ(1 − F(z)). Since F is invertible by assumption, the above is invertible if and only if the ratio ϕ̃_τ'(u)/ϕ̃_τ'(1 − u) is invertible. This quantity is invertible provided u₀ ≤ 1/2, i.e., τ ≥ −ϕ'(1/2). A subtlety, however, is that the above does not necessarily span the entire range [0, +∞]; consequently, φ̃_τ itself is proper composite, with a link function of its own. Even when τ is small, one may verify that the loss φ̃_τ is nonetheless classification calibrated: this is because for any η ∈ (0, 1), the minimiser z*(η) of the conditional risk must satisfy the stationarity condition η · φ̃_τ'(z*) = (1 − η) · φ̃_τ'(−z*). We thus need to find a suitable z* such that the left hand side equates to a given constant. Now, for any C ≠ 1 there is a unique u such that ϕ̃_τ'(u)/ϕ̃_τ'(1 − u) = C.
One may verify that the resulting z*(η) satisfies z* > 0 ⟺ η > 1/2; for example, see Figure 4, which visualises the risk minimiser for various values of τ. Thus, the sign of the minimising score conveys whether or not the positive class-probability is dominant, and hence the loss is classification calibrated.

Proof of Proposition 7. Let R_τ(f) denote the risk on the clean distribution of a predictor f with respect to the partially Huberised loss φ̃_τ with parameter τ. Similarly, let R̄_τ(f) denote the risk on the noisy distribution. By standard arguments for symmetric label noise, we have

R̄_τ(f) = (1 − 2ρ)·R_τ(f) + ρ·E[φ̃_τ(f(X)) + φ̃_τ(−f(X))].

That is, the risk on the noisy distribution equals a scaled version of the risk on the clean distribution, plus an additional term. This term is a constant independent of f if and only if φ̃_τ satisfies the symmetry condition of Ghosh et al., namely φ̃_τ(u) + φ̃_τ(1 − u) = C for some constant C. Even when the symmetry condition does not hold, one may nonetheless aim to bound this additional term as follows. For simplicity, we restrict attention here to ϕ being the log or cross-entropy loss ϕ(u) = −log u. By definition, for any f, the quantity φ̃_τ(f(x)) + φ̃_τ(−f(x)) involves only the piecewise branches of φ̃_τ, and all pieces involved are bounded on their respective intervals. By taking the maximum and minimum of the quantities on the right hand side — which are constants depending on τ — we may thus find constants C₁, C₂ such that C₁ ≤ φ̃_τ(f(x)) + φ̃_τ(−f(x)) ≤ C₂. Now let f* denote the minimiser of the clean risk R_τ(f), and f̄* the minimiser of the noisy risk R̄_τ(f). Then, using each of the above inequalities,

R_τ(f̄*) ≤ (R̄_τ(f̄*) − ρ·C₁)/(1 − 2ρ) ≤ (R̄_τ(f*) − ρ·C₁)/(1 − 2ρ) ≤ R_τ(f*) + ρ·(C₂ − C₁)/(1 − 2ρ),

where the middle inequality is because R̄_τ(f̄*) ≤ R̄_τ(f*) by definition of f̄*. The claim follows.

Beyond requiring classification calibration, it is often desirable to use classifier outputs as valid probabilities. Proper losses ϕ: {±1} × [0, 1] → R₊ are the core losses of such class-probability estimation tasks, for which Equation 13 stipulates that when using ϕ to distinguish positive and negative labels, it is optimal to predict the positive class-probability. Typically, it is more useful to work with losses that accept real-valued scores, e.g., as output by the pre-activation of the final layer of a neural network. To this end, proper composite losses compose a proper loss with an invertible link function F: R → [0, 1]. Given a proper loss ϕ and "symmetric" link F with F(−v) = 1 − F(v), the loss ℓ(y, v) = ϕ(y, F(v)) defines a margin loss (Reid & Williamson, 2010, Corollary 14). Proper composite losses may also be extended to multiclass settings in the natural way: one now defines a proper loss ϕ: [K] × Δ_K → R₊, where K is the number of classes and Δ_K denotes the K-simplex. A proper composite loss may be defined using a link F: R^K → Δ_K, e.g., the softmax. Combined with the log-loss ϕ(y, p) = −log p_y, this yields the standard softmax cross-entropy loss. We illustrate the Huberised, partially Huberised, and generalised cross-entropy losses as their underlying tuning parameters are varied. Additionally, we illustrate the link functions that are implicit in each of the losses, which shows that they may be non-invertible if τ is too small.

C.1 HUBERISED LOSS. Figure 5 illustrates the Huberised version of the logistic loss, and its derivative. Following the proof of Lemma 3, for τ ∈ (0, 1) and z₀ = −σ⁻¹(τ), the Huberised loss φ̄_τ has an implicit link function (see Figure 6), obtained by inverting the ratio of its derivatives. Compared to the standard sigmoid, the Huberised link saturates more slowly as τ is decreased. Note that when τ ≤ 1/2, the link function is not invertible everywhere: this results in the loss not being proper composite per our definition. Figure 7 illustrates the partially Huberised version of the logistic loss, as well as the base log-loss.
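To make the two constructions above concrete, here are minimal NumPy sketches of the Huberised and partially Huberised losses for the logistic case, together with a finite-difference check of the Lemma 2 equivalence; the parameter ranges follow the constraints derived above (τ ∈ (0, 1) for Huberisation, since the logistic gradient lies in (−1, 0), and τ > 1 for partial Huberisation):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def huberised_logistic(z, tau):
    """Huberised logistic loss: linear with slope -tau below z0 = -sigmoid^{-1}(tau)."""
    assert 0 < tau < 1
    z0 = np.log((1 - tau) / tau)                   # solves -sigmoid(-z0) = -tau
    lin = -tau * (z - z0) + np.logaddexp(0, -z0)   # tangent line: continuous joint
    return np.where(z <= z0, lin, np.logaddexp(0, -z))

def phuber_logistic(z, tau):
    """Partially Huberised logistic loss: linearise -log(u) below u = 1/tau,
    then compose with the sigmoid link."""
    assert tau > 1
    u = sigmoid(z)
    return np.where(u <= 1 / tau, -tau * u + np.log(tau) + 1, -np.log(u))

# Numerical check of Lemma 2: d/dz huberised == clipped logistic gradient.
z, h, tau = np.linspace(-6, 6, 101), 1e-6, 0.3
num = (huberised_logistic(z + h, tau) - huberised_logistic(z - h, tau)) / (2 * h)
print(np.max(np.abs(num - np.maximum(-sigmoid(-z), -tau))))   # ~0: they coincide
```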
Following the proof of Lemma 6, the partially Huberised loss φ̃_τ has an implicit link function of its own; Figure 9 illustrates this link. The logistic loss has ϕ(u) = −log u, and so ϕ′(1/2) = −2. For τ = 1, the link function will be non-invertible everywhere, which is expected since the loss here is the linear loss, which is not suitable for class-probability estimation. For τ ∈ (1, 2), the link function will be invertible only for p ∉ [1 − 1/τ, 1/τ]. Intuitively, the case τ ∈ (1, 2) corresponds to the linear regions of the losses on positive and negative instances crossing over. For τ ≥ 2, the link function will always be invertible. It may be observed that partial Huberisation causes the link function to saturate at values [1/(1 + τ), τ/(1 + τ)]: this does not affect classification calibration, but does imply that rescaling is necessary in order to interpret the output probabilities. Figure 8 illustrates the base ϕ_α loss, and its composition with a sigmoid link function; for certain values of α, the link function is not invertible everywhere. Given a margin loss ℓ(x, y; θ) = φ(y·θᵀx) with empirical risk minimiser θ̂_N on a sample {(x_n, y_n)}, n = 1, ..., N, suppose that the sample is corrupted with an outlier (x′, y′). One would like to ensure that θ̂_N is not swayed by making ‖x′‖₂ arbitrarily large: since at stationarity Σ_n ∇ℓ(x_n, y_n; θ̂_N) + ∇ℓ(x′, y′; θ̂_N) = 0, we require lim_{‖x′‖₂→+∞} ∇ℓ(x′, y′; θ̂_N) = 0. Fortunately, the saturating behaviour of the partially Huberised loss affords this, as we now show. Proposition 8. Pick any convex, differentiable, proper composite margin loss φ, whose link F satisfies lim_{z→−∞} z·F(z) = 0. For any τ > 0, let ℓ̃_τ(x, y; θ) := φ̃_τ(y·θᵀx) be the τ-partially Huberised loss under a linear model. Then, for any (x, y) and θ such that ‖θ‖₂ < +∞ and θᵀx ≠ 0, lim_{‖x‖₂→+∞} ‖∇ℓ̃_τ(x, y; θ)‖₂ = 0. Proof of Proposition 8. By definition, ∇ℓ̃_τ(x, y; θ) = φ̃′_τ(y·θᵀx)·y·x, and thus ‖∇ℓ̃_τ(x, y; θ)‖₂ = |φ̃′_τ(z)|·‖x‖₂, where z = y·θᵀx = y·‖θ‖₂·‖x‖₂·cos ψ_{x,θ}, and ψ_{x,θ} denotes the angle between x and θ. If θᵀx ≠ 0, then cos ψ_{x,θ} ≠ 0, and so |z| → +∞ proportionally to ‖x‖₂. Thus, ‖∇ℓ̃_τ(x, y; θ)‖₂ behaves as lim_{z→±∞} |z|·|φ̃′_τ(z)| up to a finite constant, depending on the sign of y·θᵀx. By definition of φ̃_τ, the derivative of the loss asymptotes to either −τ·F′(z) (in the linearised regime) or φ′(z) (in the untouched regime). Now, lim_{z→−∞} z·F′(z) = 0 by the assumption on F, and lim_{z→+∞} z·φ′(z) = 0 since φ is convex and bounded below; the claim is shown. We provide some more details regarding the synthetic data used in the body, as well as an additional two-dimensional synthetic dataset. The problem considered in Long & Servedio (2010) (see Figure 10) comprises a distribution concentrated on six atoms {±(1, 0), ±(γ, 5γ), ±(γ, −γ)} ⊂ R² for some γ > 0; we chose γ = 1/24. An instance (x₁, x₂) is labelled as y = sign(x₁). The instances are weighted so that the first four atoms each have probability mass 1/8, and the last two atoms mass 1/4. We modify this distribution slightly by treating the atoms as means of isotropic Gaussians, and treating the marginal distribution over instances as a mixture of these Gaussians with mixing weights given by the corresponding probability masses of the atoms. For the experiment involving outliers in feature space, the data comprises points on the real line. Positively labelled samples are drawn from a unit-variance Gaussian centered at a fixed positive mean, with positively labelled outliers drawn from N(−200, 1). Negatively labelled samples comprise the negation of all points. We learn an unregularised linear classifier from this data, which corresponds to a single scalar θ. We further illustrate the differences amongst methods on a 2D dataset inspired by Amid et al. (2019a). The data comprises 500 points, falling into two bands (Figure 11).
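Before turning to the 2D experiment, a tiny numerical illustration of Proposition 8 for the logistic link (a sketch under our assumed tangent-linearised form of the partially Huberised loss; function names are ours): as the outlier's norm grows, the logistic gradient norm grows linearly while the partially Huberised one vanishes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def grad_norm(loss_deriv, x_scale, theta=1.0, y=-1.0):
    """|d loss / d z| * ||x|| for a 1-d linear model, z = y * theta * x."""
    z = y * theta * x_scale
    return np.abs(loss_deriv(z)) * np.abs(x_scale)

logistic_deriv = lambda z: -sigmoid(-z)                  # d/dz log(1 + e^{-z})
tau = 3.0
phuber_deriv = lambda z: np.where(sigmoid(z) <= 1 / tau,  # linearised regime
                                  -tau * sigmoid(z) * sigmoid(-z),
                                  -sigmoid(-z))           # untouched regime

for scale in [1e1, 1e3, 1e5]:
    print(scale,
          grad_norm(logistic_deriv, scale),   # grows linearly with the outlier
          grad_norm(phuber_deriv, scale))     # decays to zero (Proposition 8)
```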
The decision boundaries for various losses, when trained with a linear model using explicit quadratic features, are shown in Figure 12. We subject the data to 45% symmetric label noise. We see that the logistic and generalised cross-entropy losses see marked changes in their decision boundaries. By contrast, the partially Huberised loss maintains the correct classification boundary.

Figure 10 (caption): The dataset of Long & Servedio (2010), which defeats any member of a broad family of convex losses. The data comprises six points, with the blue points labelled positive, and the red points labelled negative. The two "fat" points have twice as much probability mass as their "thin" counterparts. While the dataset is trivially linearly separable, minimising a broad range of convex losses with a linear model under any non-zero amount of symmetric label noise results in a predictor that is tantamount to random guessing.

Figure 12 (caption): On the clean version of the data, all losses yield roughly equitable decision boundaries. However, when adding 45% symmetric label noise, the logistic loss sees marked changes to its boundary. The partially Huberised loss maintains the correct classification boundary.
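For completeness, a compact end-to-end sketch of the Long & Servedio-style experiment described above; the blob scale, learning rate, step count, and the choice τ = 3 are our illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Six Gaussian blobs per the description above, gamma = 1/24.
gamma = 1 / 24
atoms = np.array([(1, 0), (gamma, 5 * gamma), (gamma, -gamma)])
atoms = np.vstack([atoms, -atoms])
mass = np.array([1/8, 1/8, 1/4, 1/8, 1/8, 1/4])
idx = rng.choice(6, size=2000, p=mass)
X = atoms[idx] + 0.01 * rng.normal(size=(2000, 2))
y = np.sign(X[:, 0])
flip = rng.random(2000) < 0.45                   # 45% symmetric label noise
y_noisy = np.where(flip, -y, y)

def fit(loss_grad, lr=0.1, steps=2000):
    """Plain gradient descent on a linear model for a margin loss."""
    theta = np.zeros(2)
    for _ in range(steps):
        z = y_noisy * (X @ theta)
        theta -= lr * np.mean((loss_grad(z) * y_noisy)[:, None] * X, axis=0)
    return theta

sigmoid = lambda z: 1 / (1 + np.exp(-np.clip(z, -30, 30)))
logistic_grad = lambda z: -sigmoid(-z)
tau = 3.0
phuber_grad = lambda z: np.where(sigmoid(z) <= 1 / tau,
                                 -tau * sigmoid(z) * sigmoid(-z), -sigmoid(-z))

for name, g in [("logistic", logistic_grad), ("phuber", phuber_grad)]:
    theta = fit(g)
    print(name, np.mean(np.sign(X @ theta) == y))   # accuracy on *clean* labels
```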
Gradient clipping doesn't endow robustness to label noise, but a simple loss-based variant does.
1,151
scitldr
Among deep generative models, flow-based models, simply referred as \emph{flow}s in this paper, differ from other models in that they provide tractable likelihood. Besides being an evaluation metric of synthesized data, flows are supposed to be robust against out-of-distribution~(OoD) inputs since they do not discard any information of the inputs. However, it has been observed that flows trained on FashionMNIST assign higher likelihoods to OoD samples from MNIST. This counter-intuitive observation raises the concern about the robustness of flows' likelihood. In this paper, we explore the correlation between flows' likelihood and image semantics. We choose two typical flows as the target models: Glow, based on coupling transformations, and pixelCNN, based on autoregressive transformations. Our experiments reveal surprisingly weak correlation between flows' likelihoods and image semantics: the predictive likelihoods of flows can be heavily affected by trivial transformations that keep the image semantics unchanged, which we call semantic-invariant transformations~(SITs). We explore three SITs~(all small pixel-level modifications): image pixel translation, random noise perturbation, latent factors zeroing~(limited to flows using multi-scale architecture, e.g. Glow). These findings, though counter-intuitive, resonate with the fact that the predictive likelihood of a flow is the joint probability of all the image pixels. So flows' likelihoods, modeling on pixel-level intensities, is not able to indicate the existence likelihood of the high-level image semantics. We call for attention that it may be \emph{abuse} if we use the predictive likelihoods of flows for OoD samples detection. Deep generative models have been very successful in image generation (; ;), natural language generation , audio synthesis and so on. Among them, generative adversarial networks (GANs) are implicit generative models that explicit likelihood function is not required, and are trained by playing a minimax game between the discriminator and the generator; Variational auto-encoders are latent variable generative models optimized by maximizing a lower bound, called evidence lower bound, of the data log-likelihood. Flow-based models (; differ from them in that they provide exact log-likelihood evaluation with change of variables theorem . A flow usually starts with a simple base probability distribution, e.g. diagonal Gaussian, then follows a chain of transformations in order to approximate complex distributions. Each transformation is parameterized by specially designed neural networks so that the log-determinant of its Jacobian can be efficiently computed. Most of the previous works focus on how to design more flexible transformations to achieve tighter log-likelihoods, and generate more realistic samples. It is also believed that flows can be used to detect out-of-distribution(OoD) samples by assigning low likelihoods on them. However, it has been observed that flows fail to do so. For example, flows trained on FashionMNIST surprisingly assign higher likelihoods on MNIST samples . Though analyses on pixel-level statistics are performed on this phenomenon , and density evaluation combined with uncertainty estimation is used to detect OoD samples , the reasons behind flows' counter-intuitive behaviours are still not clear. Humans easily discriminate MNIST images from FashionMNIST images, since their high-level image semantics are perceptually different. 
Accordingly, OoD detection requires metrics that can reflect high-level image semantics. In this paper, we empirically explore the correlation between flows' likelihoods and image semantics, and question the rationality and applicability of using the predictive likelihoods of flows for OoD detection. We first introduce the concept of a semantic-invariant transformation (SIT). An SIT transforms an input without changing its high-level semantics, e.g., a dog image through an SIT is still supposed to be recognized as a dog. We choose two typical flow-based models as target models: Glow, based on coupling transformations, and PixelCNN, based on autoregressive transformations. We evaluate on the image datasets MNIST and FashionMNIST under three trivial SITs: image translation, random noise perturbation, and latent factors zeroing (specific to invertible flows using multi-scale architectures, e.g., Glow). We demonstrate that the predictive likelihoods of the target models show weak correlation to image semantics in the following ways:
• Small pixel translations of test images can result in obvious likelihood decreases of Glow.
• Adding small random noise, unnoticeable to humans, to test images can lead to catastrophic likelihood decreases of the target models. This applies even if we keep the semantic object of a test image intact, and only add noise to the background.
• For an invertible flow using a multi-scale architecture, e.g., Glow, the inferred latent variables of an image are a list of gaussianized and standardized factors. We find that the contributions of a flow's blocks to the log-likelihood are constant and independent of inputs. Thus, by simply zeroing the preceding latent factors of a sample image and feeding them to the flow's reverse function, we can obtain new samples with surprisingly higher likelihoods, yet with perceptually unnoticeable changes from the original image.
We emphasize that all these SITs are small pixel-level modifications on test images, and undoubtedly have no influence on humans' recognition of the semantic objects in the images. However, they lead to obvious inconsistency of flows' likelihoods on test samples. Considering that the predictive likelihood of a flow is the joint probability of all the image pixels, it may not convincingly indicate the existence of a semantic object in an image. Thus it could be problematic to use flows for downstream tasks which require metrics that reflect image semantics, e.g., OoD detection.

2.1 CHANGE OF VARIABLES THEOREM
Given a random variable z with probability density function p(z), after applying an invertible function f: R^D → R^D to z, we get a new random variable z′ = f(z). The probability density function of the changed variable z′ is given by:

p(z′) = p(z) · |det(∂f/∂z)|⁻¹.

We can construct arbitrarily complex probability distributions by transforming a simple base distribution p(z₀) with a chain of mappings f_k of length K. Writing z_K = f_K ∘ ... ∘ f_1(z₀), we then have:

log p(z_K) = log p(z₀) − Σ_{k=1}^{K} log |det(∂f_k/∂z_{k−1})|.

Flow-based models are generative models designed by applying the above theorem, thus exact log-likelihood evaluation of data is feasible. The practical problem of building flow-based models applied to high-dimensional data, like images, then becomes how to design invertible transformations whose Jacobian determinant can be efficiently computed. Research on flows is very active and rapidly evolving. In this paper, we particularly focus on flow-based generative models on images and the behaviours of their likelihoods; a small numerical check of the change-of-variables computation follows below.
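As a minimal sketch of the change-of-variables computation (our example, using a simple affine map so the answer has a known closed form):

```python
import numpy as np
from scipy import stats

# Push z ~ N(0, 1) through f(z) = a*z + b, so x ~ N(b, a^2) exactly.
# Change of variables: log p(x) = log p_base(f^{-1}(x)) - log |det df/dz|.
a, b = 2.5, -1.0
x = np.linspace(-5, 5, 11)

z = (x - b) / a                                   # inverse transformation
log_px = stats.norm.logpdf(z) - np.log(abs(a))    # change-of-variables formula
log_px_direct = stats.norm.logpdf(x, loc=b, scale=abs(a))  # closed form

print(np.max(np.abs(log_px - log_px_direct)))     # ~0: the two agree
```

Practical flow architectures differ mainly in how the invertible transformations f_k are designed so that this log-determinant is cheap, as discussed next.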
They can roughly be divided into two categories according to the granularity of the transformation layers: The affine coupling is proposed in Real NVP , whose Jacobian is a lower triangular matrix that can be efficiently computed. An earlier and simpler version is additive coupling proposed in NICE , which can be obtained by simply removing the scale item exp s(x 1:d) in affine coupling. Additive coupling layer is volume-preserving and the log-determinant of its Jacobian is always 0. Glow improves Real NVP by replacing the fixed shuffling permutation with 1 × 1 invertible convolution. Since forward and inverse operation of a coupling layer have the same computational efficiency, both likelihood evaluation and sampling(or generation) for coupling flows are equally efficient. Autoregressive Flow As the building blocks of autoregressive flow, autoregressive transformations model the joint probability p(x) as the product of one-dimensional conditionals: where the probability of observation x i conditions only on its previous observations x <i. The autoregressive property of an autoregressive layer is enforced by specially designed mechanism, e.g. masking. In PixelCNN (van den, this is implemented as masked convolutional layers, which are inherently easier to be parallelized than its counterpart PixelRNN . The likelihood evaluation of PixelCNN takes only one-forward pass, but its inference, i.e. generation, takes O(D), since we have to sample pixel-by-pixel. PixelCNN can be further parallelized (b) to accelerate its inference speed. Flows for generation Serving as powerful decoders, PixelCNN can also be combined with other generative models, e.g. PixelGAN and PixelVAE . Variants of PixelCNN are also used to model audio, video , and text. PixelCNN, combining with attention, is also applied to few-shot autoregressive density estimation Reed et al. (2017a). proposes several improvements to coupling flow, reducing its gap to autoregressive flow in terms of density estimation. Autoregressive models for density estimation Autoregressive models can be specially designed for general-purpose density estimation. Masked Autoencoder for Distribution Estimation is a pioneering work that use masked neural networks to model the autoregressive density. MADE constitutes the building block of two popular normalizing flows: Inverse Autoregressive Flow and Masked Autoregressive Flow . IAF and MAF are similar but with different computational trade-offs. IAF, providing efficient sampling, is designed to improve the expressiveness of the approximate posterior of VAE. MAF is a more powerful density estimator which stacks multiple MADEs. Let g be a flow-based generative model trained on dataset sampled from some unknown p(x), and p g (x) be the g's predictive probability density function of sample x. Semantic-Invariant Transformation (SIT) Roughly speaking, SIT can be any transformation that do not change humans' recognition of image semantics. For example, suppose x is a dog image. After applying SIT T to x, T (x) is supposed to be high recognized as a dog image. As a proof of concept evaluation, we limit our evaluations to three trivial SITs: image translation, random noise perturbation, and latent factors zeroing (specific to Glow). We probe the correlation between the predictive likelihood of a flow g and image semantics by examining the influences of SITs on test samples' likelihoods. 
Specifically, a reasonable observation we should expect is that for an SIT T, |p_g(T(x)) − p_g(x)| < δ holds for a small positive scalar δ. We report bits-per-dim (BPD), which is given by:

BPD(x) = NLL(x) / (h × w × c × log 2),

where NLL is the negative log-likelihood of the test sample, and h, w, c are the height, width, and number of channels. Lower BPD implies higher likelihood. Throughout this paper, we use BPD and likelihood interchangeably. We refer to Supp. A for setup and training details of the target models. Translation invariance is a fundamental property in learning image representations that are robust for downstream tasks. In this section, we evaluate the influences of image translations on flows' likelihoods. The results in Fig. 1 and examples in Fig. 2 show that even a 1- or 2-pixel left translation can lead to an obvious increase of Glow's predictive BPDs, while PixelCNN's predictive BPDs are robust to pixel translation. This surprising difference can be attributed to the difference in their architectures. Glow, like other flows based on coupling transformation layers, models the joint probability of the pixels in a coarse-grained way. Such flows rely on a multi-scale architecture modeling different levels of abstraction in order to achieve competitive BPDs. At higher scale levels, the intermediate tensors have smaller spatial sizes and bigger channel sizes. This is performed at the starting point of each scale level with the squeeze operation, which trades spatial size for channel size by transforming an h × w × c tensor into an h/2 × w/2 × 4c tensor. Note that the squeeze operation actually destroys the spatial positions of adjacent pixels, and a 1-pixel translation can lead to quite different spatial partitions. For PixelCNN, in contrast, the intermediate tensors are not reshaped, and the spatial positions of pixels are kept still; furthermore, the prediction of each pixel conditions only on a neighborhood of (previous) pixels in the masked convolution, so translation invariance is preserved. Problematic Likelihood Comparisons. The foundation of using likelihood-based models for OoD detection is that they are supposed to assign much lower likelihoods to OoD samples x_out than to in-distribution samples x_in, i.e., p_g(x_in) ≫ p_g(x_out). However, it has been observed that flows assign higher likelihoods to OoD samples than even to training samples. Analyses of pixel-level dataset statistics in prior work show that this may be because OoD datasets just "sit inside of" in-distribution datasets, with roughly the same mean and smaller variance. Surprisingly, similar counter-intuitive likelihood assignment also occurs among in-distribution samples. For example, in Fig. 1, images with class label 1 consistently have significantly lower BPDs, i.e., higher likelihoods, than samples of other classes. In OoD detection, we assume that a sample with a higher likelihood is more likely to be an in-distribution sample. Following the same logic, this would imply that all images of class label 1 are more likely to be in-distribution samples than samples from other classes, which contradicts the fact that they are all in-distribution samples for sure. We may reasonably suspect that flows' counter-intuitive likelihood assignment is dominated by the inherent differences in pixel-level statistics associated with the image semantics, e.g., different digits. This kind of counter-intuitive likelihood comparison exists not only between in-distribution and OoD samples, but also within in-distribution samples from different classes.
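For reference, a minimal helper for the BPD conversion defined above, assuming the model reports NLL in nats (our naming):

```python
import numpy as np

def bits_per_dim(nll_nats, h, w, c):
    """BPD = NLL / (h * w * c * ln 2). Lower BPD means higher likelihood."""
    return nll_nats / (h * w * c * np.log(2.0))

# e.g. an NLL of 1800 nats on a 32x32x1 image:
print(bits_per_dim(1800.0, 32, 32, 1))   # ~2.54 bits/dim
```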
Prior work finds that a 1-pixel shift of an image can lead to quite different nearest neighbours from the training set, measured in Euclidean distance, which also demonstrates the gap between pixel-level metrics and humans' perception. Image pixels are discrete integers, and in practice the right amount of real-valued uniform noise is added to dequantize the pixels; for images with pixel values in [0, 1] quantized at intervals of 1/256, this amounts to adding uniform noise in [0, 1/256) during training. Humans can robustly recognize semantic objects in images regardless of the background. So we also evaluate the influence of adding random perturbations to only the background of test images. This can be simply implemented using a proper mask:

x̃ = x + ε · m ⊙ n,

where m is the mask, n is the noise, ⊙ denotes element-wise multiplication, and ε is a small scaling factor ensuring the noise is small enough. In our evaluations, we use unit Gaussian noise n ∼ N(0, I), and set the scaling factor ε = 0.001. Examples in Fig. 3 show that adding small noise catastrophically lowers samples' likelihoods. Compared to Glow, PixelCNN is more sensitive to the noise, because its pixel-wise modeling quickly augments and propagates the influence of the added noise. We get similar results even if we keep the semantic objects of test images intact and add noise only to the background. Note that the Gaussian noise ε·N we added is out of the coverage (with elements < 0) of the uniform noise added during training, so theoretically this is expected, since models are not optimized in those areas. However, it does reveal that flows are not aware of the image semantics, and treat the pixels of objects and the pixels of background with no discrimination. Other tested noises include ε·(1/256 + N) and ε·[−1/256, 0], and similar results were obtained.

Algorithm 1 Generate x* by zeroing the latent factors
1: Input: flow g, image x, number of factors k, (optional) label y
2: z ← g.infer(x) ▷ Infer the latent factors.
3: z* ← zero(z, k) ▷ Zero the preceding k factors.
4: x* ← g.reverse(z*, y) ▷ Reverse the zeroed latent factors.
5: return x*

Let us first decompose the Glow architecture into blocks and review their contributions to the final log-likelihood. A Glow consists of a sequence of modules at different scale levels. At each scale level, it starts with a squeeze operation, which reshapes the intermediate tensor without contributing to the log-likelihood. Following each squeeze operation is a stack of step-flow blocks. A step-flow block consists of three layers: actnorm, an invertible 1×1 convolutional layer, and a coupling layer. The log-determinants of actnorm and the 1×1 convolutional layer are input-independent, and depend only on their inner weights (see Table 1 of the Glow paper). An additive coupling layer is volume-preserving, with log-determinant 0, and thus is also input-independent. For an affine coupling layer, the log-determinant depends on the affined half, but is quantitatively small: compared to Glow with additive coupling layers, affine layers bring only a small improvement of < 0.05 BPD. Then, the intermediate tensor is split into two halves along the channel dimension. One half is gaussianized with a convolutional block, and the other half is factored out after being standardized. This procedure significantly reduces the amount of computation and memory; we refer to the original Glow paper for more details, due to limited space. So, for a Glow using additive couplings, the cumulative log-determinant of the flow blocks within a particular scale level is constant regardless of the sample; only the log-determinants of the gaussianized factorings (between the transitions of different scale levels) depend on the individual inputs. Denote z = {z₁, ..., z_L} as the latent variables of input image x, where each zᵢ is the standardized vector of the i-th scale level.
Combining these observations, the input-dependent part of the log-likelihood reduces to the log-density of the standardized factors, log p(x) = C − (1/2)·Σ_{i=1}^{L} ‖zᵢ‖², where C collects the input-independent log-determinants. This simply means that a sample x whose latent variables z are close to the center 0 will have a higher log-likelihood. Empirically, this also applies to Glow using affine coupling layers, since the influence of varying the latent variables on the log-determinants of the affine coupling layers is quantitatively small compared to the gain. We can make use of this property to generate samples with higher likelihoods via the invertibility of Glow, for free (see Algorithm 1; the label y is optional). We find that the semantic object of a test image depends heavily on the last factored latent z_L, rather than the preceding factors. Examples in Fig. 5 show that zeroing the preceding 1 or 2 latent factors gives us samples with obviously lower BPDs but without obvious changes (only slightly faded pixel intensities) of the semantic objects. Results in Fig. 4 show that zeroing the first latent factor gives the maximum increment in likelihood. We also evaluate the influences of these SITs on the performance of discriminative classifiers. In contrast to the obvious change in flows' likelihoods, these small perturbations decrease the test accuracies of classifiers only to an insignificant, or negligible (on MNIST), extent (see Table 1). The discriminative classifier used here is a shallow residual network of 8 layers on 32×32 images, in the structure specified in prior work. Difference to Adversarial Examples. Both the SITs we use in this paper and adversarial perturbations are small perturbations, but they are inherently different. Adversarial perturbations are intentionally crafted to cause misbehavior; they are usually specific to individual images, and easily fool a classifier down to almost 0% accuracy. In contrast, the three SITs above are universal transformations over all images that take no additional computation and basically come for free. What is the problem of likelihood-based generative models? Discriminative classifiers, trained to extract class-relevant features, are known to be vulnerable to adversarial examples, and give over-confident predictions even for OoD samples. Generative models are supposed to be more robust since they model every pixel of an image. However, likelihood modeling in high-dimensional space can be hard and lead to counter-intuitive observations. It has been observed that likelihood-based generative models can assign even higher likelihoods to OoD samples; prior work observes this phenomenon on both flows and VAEs. They decompose the change-of-variables theorem and investigate the influences of different transformation layers, finding that the phenomenon persists regardless of whether the transformation is volume-preserving or not. Their second-order analysis of pixel statistics suggests that OoD datasets, e.g., MNIST, just sit inside of in-distribution datasets, e.g., FashionMNIST, with roughly the same mean and smaller variance. They suspect that flows may simply fit the pixel intensities without really capturing the high-level semantics. Follow-up work finds that the likelihood of an image is mostly dominated by the irrelevant background pixels, and proposes a remedy that corrects the original likelihood with a likelihood ratio. Though this significantly improves the accuracy of OoD detection, it still fails to answer the question of whether the likelihood ratio shows high correlation with high-level semantics. This paper differs from previous works and steps further to explore the correlations between the likelihood of flow-based generative models and image semantics.
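A literal Python rendering of Algorithm 1 may be useful here; the flow object `g` with `infer`/`reverse` methods is a hypothetical interface matching the pseudocode, not the paper's actual code.

```python
import numpy as np

def zero_preceding_factors(g, x, k, y=None):
    """Algorithm 1 (sketch): zero the first k multi-scale latent factors of x
    and decode, yielding a sample with higher likelihood under the flow."""
    z = g.infer(x)                       # list of per-scale latent factors [z1..zL]
    z_star = [np.zeros_like(zi) if i < k else zi for i, zi in enumerate(z)]
    return g.reverse(z_star, y)          # decode the modified latents

# For additive-coupling Glow, the input-dependent part of log p(x) is just
# -0.5 * sum_i ||z_i||^2 (up to constants), so zeroing factors raises it:
def input_dependent_loglik(z):
    return -0.5 * sum(float(np.sum(zi ** 2)) for zi in z)
```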
Theoretical analyses in prior work point out an important argument: a generative model's ability to produce plausible samples is neither sufficient nor necessary for high likelihood. The results in this paper provide more experimental evidence for this simple argument: even for powerful exact likelihood-based generative models such as flows, the likelihoods of samples can be only weakly correlated with high-level image semantics. Thus, special attention should be paid to this argument before we apply likelihood-based generative models to downstream tasks. For example, considering the weak correlation between flows' likelihoods and image semantics, it may be inappropriate to use them for OoD sample detection. On the other hand, these counter-intuitive behaviours of flows raise our awareness of the gap between the predictive likelihoods of flows and the expectation that these likelihoods closely relate to semantics for OoD detection. What exactly is the likelihood of an image? We should keep in mind that the predictive likelihood of a flow is the joint probability of all the image pixels. There is no doubt that flows, trained by maximizing their likelihood, can generate impressive synthesized data. There seems to be no problem that, in terms of image generation, we expect every single generated pixel in an image to be the most likely one (hinging on its contextual pixels). However, the likelihood is explicitly modeled on pixels, so it can be easily influenced by pixel-level modifications. Images' likelihoods significantly decrease even when small noise is added to the background pixels. For downstream tasks that need some "likelihood" to indicate that the object in an image is a cat, rather than a car, the background pixels are almost irrelevant. This drives us to think that we may need to model likelihood in some kind of semantic space or with some "perceptual" metric, rather than on raw pixels. One promising direction is to define the likelihood of an image on its high-level representation, and successful examples of this approach exist in the literature. The target models Glow and PixelCNN are implemented in PyTorch 1.0, and the implementation of PixelCNN is based on https://github.com/pclucas14/pixel-cnn-pp. Though several modifications proposed in PixelCNN++ are used in our implementation, we still use PixelCNN to denote our model. We implement both unconditional and conditional versions of Glow, and report the results of conditional Glow in our experiments. For Glow, using a multi-scale architecture with more levels is critical to achieve lower BPDs. We resize the FashionMNIST and MNIST images from 28×28 to 32×32 and set the number of levels to 5, so the last factored latent variables will be 1×1 in spatial size. We provide pretrained models in the code package as supplementary material for reproducing the results (we only provide models on MNIST due to the upload size limit). For PixelCNN, both 28×28 and 32×32 versions are provided. In the image translation experiments, we report the BPDs of PixelCNN on MNIST images of size 28×28 in the paper. Unlike Glow, PixelCNN treats the pixels as discretized intensity levels, and resizing the images to 32×32 could lead to dangling intensity levels. We also find that if we perform the image translation experiment on PixelCNN (32×32), 1-pixel or 2-pixel translations also lead to considerable BPD decreases.
We show experimental evidence of the weak correlation between flows' likelihoods and image semantics.
1,152
scitldr
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular, when the network is trained on transformed images. The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent the defenses. Our best defense eliminates 60% of strong gray-box and 90% of strong black-box attacks by a variety of major attack methods. As the use of machine intelligence increases in security-sensitive applications BID2 BID0, robustness has become a critical feature to guarantee the reliability of deployed machine-learning systems. Unfortunately, recent research has demonstrated that existing models are not robust to small, adversarially designed perturbations of the input BID1 BID31 BID14 BID20 BID6. Adversarially perturbed examples have been deployed to attack image classification services BID22, speech recognition systems BID6, and robot vision BID25. The existence of these adversarial examples has motivated proposals for approaches that increase the robustness of learning systems to such examples BID28 BID20 BID7.The robustness of machine learning models to adversarial examples depends both on the properties of the model (i.e., Lipschitzness) and on the nature of the problem considered, e.g., on the input dimensionality and the Bayes error of the problem BID11. Consequently, defenses that aim to increase robustness against adversarial examples fall in one of two main categories. The first category comprises model-specific strategies that enforce model properties such as invariance and smoothness via the learning algorithm or regularization scheme BID30 BID20 BID7, potentially exploiting knowledge about the adversary's attack strategy BID14. The second category of defenses are model-agnostic: they try to remove adversarial perturbations from the input. For example, in the context of image classification, adversarial perturbations can be partly removed via JPEG compression BID9 or image re-scaling BID23. Hitherto, none of these defenses has been shown to be very effective. Specifically, model-agnostic defenses appear too simple to sufficiently remove adversarial perturbations from input images. By contrast, model-specific defenses make strong assumptions about the nature of the adversary (e.g., on the norm that the adversary minimizes or on the number of iterations it uses to generate the perturbation). Consequently, they do not satisfy BID18 principle: the adversary can alter its attack to circumvent such model-specific defenses. In this paper, we focus on increasing the effectiveness of model-agnostic defense strategies by developing approaches that remove the adversarial perturbations from input images, maintain sufficient information in input images to correctly classify them, and are still effective in settings in which the adversary has information on the defense strategy being used. 
We explore transformations based on image cropping and rescaling BID15, bit-depth reduction, JPEG compression, total variance minimization BID29, and image quilting BID10. We show that these defenses can be surprisingly effective against existing attacks, in particular, when the convolutional network is trained on images that are transformed in a similar way. The image transformations are good at countering the (iterative) fast gradient sign method BID20, DeepFool, and the BID5 attack, even in gray-box settings in which the model architecture and parameters are public. Our strongest defenses are based on total variation minimization and image quilting: these defenses are non-differentiable and inherently random, which makes it difficult for an adversary to get around them. Our best defenses eliminate 60% of gray-box attacks and 90% of black-box attacks by four major attack methods that perturb pixel values by 8% on average. We study defenses against non-targeted adversarial examples for image-recognition systems. Let X = [0, 1]^{H×W×C} be the image space. Given an image classifier h(·) and a source image x ∈ X, a non-targeted adversarial example of x is a perturbed image x′ ∈ X such that h(x′) ≠ h(x) and d(x, x′) ≤ ρ for some dissimilarity function d(·,·) and ρ ≥ 0. Ideally, d(·,·) measures the perceptual difference between x and x′, but in practice, the Euclidean distance d(x, x′) = ‖x − x′‖₂ or the Chebyshev distance d(x, x′) = ‖x − x′‖∞ is most commonly used. Given a set of N images {x₁, ..., x_N} and a target classifier h(·), an adversarial attack aims to generate {x′₁, ..., x′_N} such that each x′ₙ is an adversarial example for xₙ. The success rate of an attack is measured by the proportion of predictions that was altered by the attack:

(1/N) · Σₙ 1[h(x′ₙ) ≠ h(xₙ)].

The success rate is generally measured as a function of the magnitude of the perturbations performed by the attack, using the normalized L₂-dissimilarity:

(1/N) · Σₙ ‖x′ₙ − xₙ‖₂ / ‖xₙ‖₂.

A strong adversarial attack has a high success rate whilst its normalized L₂-dissimilarity is low. In most practical settings, an adversary does not have direct access to the model h(·) and has to perform a black-box attack. However, prior work has shown successful attacks that transfer adversarial examples generated for a separately trained model to an unknown target model BID22. Therefore, we investigate both the black-box setting and a more difficult gray-box attack setting: in our gray-box setting, the adversary has access to the model architecture and the model parameters, but is unaware of the defense strategy that is being used. A defense is an approach that aims to make the prediction on an adversarial example, h(x′), equal to the prediction on the corresponding clean example, h(x). In this study, we focus on image-transformation defenses g(x) that perform prediction via h(g(x)). Ideally, g(·) is a complex, non-differentiable, and potentially stochastic function: this makes it difficult for an adversary to attack the prediction model h(g(x)) even when the adversary knows both h(·) and g(·). One of the first successful attack methods is the fast gradient sign method (FGSM; BID14). Let ℓ(·,·) be the differentiable loss function that was used to train the classifier h(·), e.g., the cross-entropy loss. The FGSM adversarial example corresponding to a source input x and true label y is:

x′ = x + ε · sign(∇ₓ ℓ(x, y)),

for some ε > 0 that governs the perturbation magnitude.
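A minimal PyTorch sketch of the FGSM update above (our code, assuming a classifier trained with cross-entropy and inputs in [0, 1]):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM: x' = x + eps * sign(grad_x loss(x, y)),
    clipped back to the valid image range [0, 1]."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```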
A stronger variant of this attack, called iterative FGSM (I-FGSM; BID21), iteratively applies the FGSM update: Alternative attacks aim to minimize the Euclidean distance between the input and the adversarial example instead. For instance, assuming h(·) is a binary classifier, DeepFool projects x onto a linearization of the decision boundary defined by h(·) for M iterations: DISPLAYFORM1 DISPLAYFORM2 where x and x are defined as in I-FGSM. The multi-class variant of DeepFool performs the projection onto the nearest class boundaries. The linearization performed in DeepFool is particularly well suited for ReLU-networks, as these represent piecewise linear class boundaries. Carlini-Wagner's L 2 attack (CW-L2; BID5) is an optimization-based attack that combines a differentiable surrogate for the model's classification accuracy with an L 2 -penalty term. Let Z(x) be the operation that computes the logit vector (i.e., the output before the softmax layer) for an input x, and Z(x) k be the logit value corresponding to class k. The untargeted variant of CW-L2 finds a solution to the unconstrained optimization problem min DISPLAYFORM3 where κ denotes a margin parameter, and where the parameter λ f trades off the perturbation norm and the hinge loss of predicting a different class. We perform the minimization over x using the Adam optimizer BID19 for 100 iterations with an initial learning rate of 0.001.All of the aforementioned attacks enforce that x ∈ X by clipping values between 0 and 1. FIG0 shows adversarial images produced by all four attacks at five normalized L 2 -dissimilarity levels. Adversarial attacks alter particular statistics of the input image in order to change the model prediction. Indeed, adversarial perturbations x−x have a particular structure, as illustrated by FIG0. We design and experiment with image transformations that alter the structure of these perturbations, and investigate whether the alterations undo the effects of the adversarial attack. We investigate five image transformations: image cropping and rescaling, bit-depth reduction, JPEG compression, total variance minimization, and image quilting. Figure 2: Illustration of total variance minimization and image quilting applied to an original and an adversarial image (produced using I-FGSM with = 0.03, corresponding to a normalized L 2 -dissimilarity of 0.075). From left to right, the columns correspond to: no transformation, total variance minimization, and image quilting. From top to bottom, rows correspond to: the original image, the corresponding adversarial image produced by I-FGSM, and the absolute difference between the two images above. Difference images were multiplied by a constant scaling factor to increase visibility. We first introduce three simple image transformations: image cropping-rescaling BID15, bit-depth reduction, and JPEG compression and decompression BID9. Image croppingrescaling has the effect of altering the spatial positioning of the adversarial perturbation, which is important in making attacks successful. Following BID16, we crop and rescale images at training time as part of the data augmentation. At test time, we average predictions over random image crops. Bitdepth reduction ) perform a simple type of quantization that can removes small (adversarial) variations in pixel values from an image; we reduce images to 3 bits in our experiments. JPEG compression removes small perturbations in a similar way; we perform compression at quality level 75 (out of 100). 
An alternative way of removing adversarial perturbations is via a compressed sensing approach that combines pixel dropout with total variation minimization BID29. This approach randomly selects a small set of pixels, and reconstructs the "simplest" image that is consistent with the selected pixels. The reconstructed image does not contain the adversarial perturbations because these perturbations tend to be small and localized. Specifically, we first select a random set of pixels by sampling a Bernoulli random variable X(i, j, k) for each pixel location (i, j, k); we maintain a pixel when X(i, j, k) = 1. Next, we use total variation minimization to constructs an image z that is similar to the (perturbed) input image x for the selected set of pixels, whilst also being "simple" in terms of total variation by solving: DISPLAYFORM0 Herein, denotes element-wise multiplication, and TV p (z) represents the L p -total variation of z: DISPLAYFORM1 The total variation (TV) measures the amount of fine-scale variation in the image z, as a of which TV minimization encourages removal of small (adversarial) perturbations in the image. The objective function is convex in z, which makes solving for z straightforward. In our implementation, we set p = 2 and employ a special-purpose solver based on the split Bregman method BID13 ) to perform total variance minimization efficiently. The effectiveness of TV minimization is illustrated by the images in the middle column of Figure 2: in particular, note that the adversarial perturbations that were present in the for the nontransformed image (see bottom-left image) have nearly completely disappeared in the TV-minimized adversarial image (bottom-center image). As expected, TV minimization also changes image structure in non-homogeneous regions of the image, but as these perturbations were not adversarially designed we expect the negative effect of these changes to be limited. Image quilting BID10 ) is a non-parametric technique that synthesizes images by piecing together small patches that are taken from a database of image patches. The algorithm places appropriate patches in the database for a predefined set of grid points, and computes minimum graph cuts BID3 ) in all overlapping boundary regions to remove edge artifacts. Image quilting can be used to remove adversarial perturbations by constructing a patch database that only contains patches from "clean" images (without adversarial perturbations); the patches used to create the synthesized image are selected by finding the K nearest neighbors (in pixel space) of the corresponding patch from the adversarial image in the patch database, and picking one of these neighbors uniformly at random. The motivation for this defense is that the ing image only consists of pixels that were not modified by the adversary -the database of real patches is unlikely to contain the structures that appear in adversarial images. The right-most column of Figure 2 illustrates the effect of image quilting on adversarial images. Whilst interpretation of these images is more complicated due to the quantization errors that image quilting introduces, it is interesting to note that the absolute differences between quilted original and the quilted adversarial image appear to be smaller in non-homogeneous regions of the image. This suggests that TV minimization and image quilting lead to inherently different defenses. We performed five experiments to test the efficacy of our defenses. 
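Before turning to the experiments, here are minimal sketches of the simpler transformation defenses described above; the TV function uses standard Chambolle TV denoising from a recent scikit-image as a simplified stand-in for the paper's masked split-Bregman TV reconstruction, so it is illustrative rather than equivalent.

```python
import io
import numpy as np
from PIL import Image
from skimage.restoration import denoise_tv_chambolle

def bit_depth_reduce(x, bits=3):
    """Quantize pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def jpeg_defense(x, quality=75):
    """JPEG compress-decompress a [0, 1] float image (quality 75, as above)."""
    img = Image.fromarray((x * 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(buf)).astype(np.float32) / 255.0

def tv_defense(x, weight=0.1):
    """Plain TV denoising; the paper instead drops pixels at random (p = 0.5)
    and solves the masked TV-minimization problem with split Bregman."""
    return denoise_tv_chambolle(x, weight=weight, channel_axis=-1)
```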
The experiment in Section 5.2 considers gray-box attacks: it applies the defenses on adversarial images before using them as input into a convolutional network trained to classify "clean" images. In this setting, the adversary has access to the model architecture and parameters but is unaware of the defense strategy. The experiment in Section 5.3 focuses on a black-box setting: it replaces the convolutional network by networks that were trained on images with a particular input-transformation. The experiment in Section 5.4 combines our defenses with ensembling and model transfer. The experiment in Section 5.5 investigates to what extent networks trained on image-transformations can be attacked in a gray-box setting. The experiment in Section 5.6 compares our defenses with prior work. The setup of our gray-box and black-box experiments is illustrated in FIG1. Code to reproduce our is available at https://github.com/facebookresearch/adversarial_image_defenses. We performed experiments on the ImageNet image classification dataset. The dataset comprises 1.2 million training images and 50, 000 test images that correspond to one of 1, 000 classes. Our adversarial images are produced by attacking a ResNet-50 model BID16. We evaluate our defense strategies against the four adversarial attacks presented in Section 3. We measure the strength of an adversary in terms of its normalized L 2 -dissimilarity and report classification accu- racies as a function of the normalized L 2 -dissimilarity. To produce adversarial images like those in FIG0, we set the normalized L 2 -dissimilarity for each of the attacks as follows:• FGSM. Increasing the step size increases the normalized L 2 -dissimilarity.• I-FGSM. We fix M = 10, and increase to increase the normalized L 2 -dissimilarity.• DeepFool. We fix M = 5, and increase to increase the normalized L 2 -dissimilarity.• CW-L2. We fix κ = 0 and λ f = 10, and multiply the ing perturbation by an appropriately chosen ≥ 1 to alter the normalized L 2 -dissimilarity. We fixed the hyperparameters of our defenses in all experiments: specifically, we set pixel dropout probability p = 0.5 and the regularization parameter of the total variation minimizer λ TV = 0.03. We use a quilting patch size of 5×5 and a database of 1, 000, 000 patches that were randomly selected from the ImageNet training set. We use the nearest neighbor patch (i.e., K = 1) for experiments in Sections 5.2 and 5.3, and randomly select a patch from one of K = 10 nearest neighbors in all other experiments. In the cropping defense, we sample 30 crops of size 90×90 from the 224×224 input image, rescale the crops to 224×224, and average the model predictions over all crops. FIG2 shows the top-1 accuracy of a ResNet-50 tested on transformed adversarial images as a function of the adversary strength for each of the four attacks. Each plot shows for five different transformations we apply to the images at test time (viz., image cropping-rescaling, bitdepth reduction, JPEG compression, total variation minimization, and image quilting). The dotted line shows the classification error of the ResNet-50 model on images that are not adversarially perturbed, i.e., it gives an upper bound on the accuracy that defenses can achieve. In line with the reported in the literature, the four adversaries successfully attack the ResNet-50 model in nearly all cases (FGSM has a slightly lower favorable attack rate of 80−90%) when the input images are not transformed. 
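The crop-ensemble defense above is straightforward to sketch in PyTorch (our code; whether logits or softmax probabilities are averaged is our assumption):

```python
import torch
import torch.nn.functional as F

def crop_ensemble_predict(model, x, n_crops=30, crop=90, out=224):
    """Average softmax predictions over random crops: sample n_crops patches
    of size crop x crop, rescale each to out x out, and average."""
    _, _, H, W = x.shape
    probs = 0.0
    for _ in range(n_crops):
        i = torch.randint(0, H - crop + 1, (1,)).item()
        j = torch.randint(0, W - crop + 1, (1,)).item()
        patch = x[:, :, i:i + crop, j:j + crop]
        patch = F.interpolate(patch, size=(out, out), mode="bilinear",
                              align_corners=False)
        probs = probs + F.softmax(model(patch), dim=1)
    return probs / n_crops
```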
The results also show that the proposed image transformations are capable of partly eliminating the effect of the attacks. In particular, ensembling 30 predictions over different, random image crops is very efficient: these predictions are correct for 40−60% of the images (note that 76% is the highest accuracy that one can expect to achieve). This suggests that adversarial examples are susceptible to changes in the location and scale of the adversarial perturbations. While not as effective, image transformations based on total variation minimization and image quilting also successfully defend against adversarial examples from all four attacks: applying these transformations allows us to classify 30−40% of the images correctly. This suggests that total variation minimization and image quilting can successfully remove part of the perturbations from adversarial images. In particular, the accuracy of the image-quilting defense hardly deteriorates as the strength of the adversary increases. However, the quilting transformation does severely impact the model's accuracy on non-adversarial images. The high relative performance of image cropping-rescaling in 5.2 may be partly explained by the fact that the convolutional network was trained on randomly cropped-rescaled images, but not on any of the other transformations. This implies that, independent of whether an image is adversarial or not, the network is more robust to image cropping-rescaling than it is to the other transformations. The results in FIG2 suggest that this negatively affects the effectiveness of these defenses, even if the defenses are successful in removing the adversarial perturbation. To investigate this, we trained ResNet-50 models on transformed ImageNet training images. We adopt the standard data augmentation from BID16, but apply bit-depth reduction, JPEG compression, TV minimization, or image quilting on the resized image crop before feeding it to the network. We measure the classification accuracy of the resulting networks on the same adversarial images as before. Note that this implies that we assume a black-box setting in this experiment.

Table 1 (caption): Top-1 classification accuracy of ensemble and model transfer defenses (columns) against four black-box attacks (rows). The four networks we use to classify images are ResNet-50 (RN50), ResNet-101 (RN101), DenseNet-169 (DN169), and Inception-v4 (Iv4). Adversarial images are generated by running attacks against the ResNet-50 model, aiming for an average normalized L2-dissimilarity of 0.06. Higher is better. The best defense against each attack is typeset in boldface.
To ensemble the image quilting and TVM defenses, we average the image-quilting prediction (using a weight of 0.5) with model predictions for 10 different TVM reconstructions (with a weight of 0.05 each), re-sampling the pixels used to measure the reconstruction error each time. To combine cropping with other transformations, we first apply those transformations and average predictions over 10 random crops from the transformed images. The results of our ensembling experiments are presented in Table 1. The results show that gains of 1−2% in classification accuracy can be achieved by ensembling different defenses, whereas transferring attacks to different convolutional network architectures can lead to an improvement of 2−3%. Inception-v4 performs best in our experiments, but this may be partly due to that network having a higher accuracy even in non-adversarial settings. Our best black-box defense achieves an accuracy of about 71% against all four attacks: the attacks deteriorate the accuracy of our best classifier (which combines cropping, TVM, image quilting, and model transfer) by at most 6%. The previous experiments demonstrated the effectiveness of image transformations against adversarial images, in particular, when convolutional networks are re-trained to be robust to those image transformations. In this experiment, we investigate to what extent the resulting networks can be attacked in a gray-box setting in which the adversary has access to those networks (but does not have access to the input transformations applied at test time). We use the four attack methods to generate novel adversarial images against the transformation-robust networks trained in 5.3, and measure the accuracy of the networks on these novel adversarial images in FIG4. The results show that bit-depth reduction and JPEG compression are weak defenses in such a gray-box setting. Whilst their relative ordering varies between attack methods, image cropping and rescaling, total variation minimization, and image quilting are fairly robust defenses in this gray-box setting. Specifically, networks using these defenses classify up to 50% of adversarial images correctly. In our final set of experiments, we compare our defenses with the state-of-the-art ensemble adversarial training approach proposed by BID34. Ensemble adversarial training fits the parameters of a convolutional network on adversarial examples that were generated to attack an ensemble of pre-trained models. These adversarial examples are very diverse, which makes the convolutional network being trained robust to a variety of adversarial perturbations. In our experiments, we used the model released by BID34: an Inception-Resnet-v2 BID32 trained on adversarial examples generated by FGSM against Inception-Resnet-v2 and Inception-v3 models. We compare the model to our ResNet-50 models with image cropping, total variance minimization, and image quilting defenses. We note that there are two small differences in terms of the assumptions that ensemble adversarial training makes and the assumptions our defenses make: in contrast to ensemble adversarial training, our defenses assume that part of the defense strategy (viz., the input transformation) is unknown to the adversary, and in contrast to ensemble adversarial training, our defenses assume no prior knowledge of the attacks being used. The former difference is advantageous to our defenses, whereas the latter difference gives our defenses a disadvantage compared to ensemble adversarial training.
Table 2 compares the classification accuracies of the defense strategies on adversarial examples with a normalized L2-dissimilarity of 0.06. The results show that ensemble adversarial training works better against FGSM attacks (which it uses at training time), but is outperformed by each of the transformation-based defenses on all other attacks. Input transformations particularly outperform ensemble adversarial training against the iterative attacks: our defenses are 18−24× more robust than ensemble adversarial training against DeepFool attacks. Combining cropping, TVM, and quilting increases the accuracy of our defenses against DeepFool gray-box attacks to 51.51% (compared to 1.84% for ensemble adversarial training).

Table 2 (caption): Top-1 classification accuracy on images perturbed using attacks against ResNet-50 models trained on input-transformed images, and an Inception-v4 model trained using ensemble adversarial training. Adversarial images are generated by running attacks against the models, aiming for an average normalized L2-dissimilarity of 0.06. The best defense against each attack is typeset in boldface.

The results from this study suggest there exists a range of image transformations that have the potential to remove adversarial perturbations while preserving the visual content of the image: one merely has to train the convolutional network on images that were transformed in the same way. A critical property that governs which image transformations are most effective in practice is whether an adversary can incorporate the transformation in its attack. For instance, median filtering is likely a weak remedy because one can backpropagate through the median filter, which is sufficient to perform any of the attacks described in Section 3. A strong input-transformation defense should, therefore, be non-differentiable and randomized, a strategy that has previously been shown to be effective BID35. Two of our top defenses possess both properties: 1. Both total variation minimization and image quilting are difficult to differentiate through. Specifically, total variation minimization involves solving a complex minimization of a function that is inherently random. Image quilting involves a discrete variable that selects the patch from the database, which is a non-differentiable operation, and the graph-cut optimization complicates the use of differentiable approximations BID24. 2. Both total variation minimization and image quilting give rise to randomized defenses. Total variation minimization randomly selects the pixels it uses to measure reconstruction error when creating the denoised image. Image quilting randomly selects one of the K nearest neighbors uniformly at random. The inherent randomness of our defenses makes it difficult to attack the model: it implies the adversary has to find a perturbation that alters the prediction for the entire distribution of images that could be used as input, which is harder than perturbing a single image BID27. Our results with gray-box attacks suggest that randomness is particularly important in developing strong defenses. Therefore, we surmise that total variation minimization, image quilting, and related methods BID8 are stronger defenses than deterministic denoising procedures such as bit-depth reduction, JPEG compression, or non-local means BID4.
Defenses based on total variation minimization and image quilting also have an advantage over adversarial-training approaches BID20: an adversarially trained network is differentiable, which implies that it can be attacked using the methods in Section 3. An additional disadvantage of adversarial training is that it focuses on a particular attack; by contrast, transformation-based defenses generalize well across attack methods because they are model-agnostic. While our study focuses exclusively on image classification, we expect similar defenses to be useful in other domains for which successful attacks have been developed, such as semantic segmentation and speech recognition BID6 BID38. In speech recognition, for example, total variation minimization can be used to remove perturbations from waveforms, and one could develop "spectrogram quilting" techniques that reconstruct a spectrogram by concatenating "spectrogram patches" along the temporal dimension. We leave such extensions to future work. In future work, we also intend to study combinations of our input-transformation defenses with ensemble adversarial training BID34, and we intend to investigate new attack methods that are specifically designed to circumvent our input-transformation defenses.
We apply a model-agnostic defense strategy against adversarial examples and achieve 60% white-box accuracy and 90% black-box accuracy against major attack algorithms.
1,153
scitldr
In this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD). We prove A2BCD converges linearly to a solution of the convex minimization problem at the same rate as NU_ACDM, so long as the maximum delay is not too large. This is the first asynchronous Nesterov-accelerated algorithm that attains any provable speedup. Moreover, we then prove that these algorithms both have optimal complexity. Asynchronous algorithms complete much faster iterations, and A2BCD has optimal complexity. Hence we observe in experiments that A2BCD is the top-performing coordinate descent algorithm, converging up to 4-5x faster than NU_ACDM on some data sets in terms of wall-clock time. To motivate our theory and proof techniques, we also derive and analyze a continuous-time analog of our algorithm and prove it converges at the same rate. In this paper, we propose and prove the convergence of the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD), the first asynchronous Nesterov-accelerated algorithm that achieves optimal complexity. No previous attempts have been able to prove a speedup for asynchronous Nesterov acceleration. We aim to find the minimizer x* of the unconstrained minimization problem: min over x ∈ R^d of f(x) = f(x^(1), . . ., x^(n)), (1.1) where f is σ-strongly convex for σ > 0 with L-Lipschitz gradient ∇f = (∇_1 f, . . ., ∇_n f). x ∈ R^d is composed of coordinate blocks x^(1), . . ., x^(n). The coordinate blocks of the gradient ∇_i f are assumed L_i-Lipschitz with respect to the ith block. That is, ∀x, h ∈ R^d: ‖∇_i f(x + P_i h) − ∇_i f(x)‖ ≤ L_i ‖P_i h‖, (1.2) where P_i is the projection onto the ith block of R^d. Let L̄ = (1/n) Σ_{i=1}^n L_i be the average block Lipschitz constant. These conditions on f are assumed throughout this whole paper. Our algorithm can also be applied to non-strongly convex objectives (σ = 0) or non-smooth objectives using the black box reduction techniques proposed in BID1. Hence we consider only the coordinate smooth, strongly-convex case. Our algorithm can also be applied to the convex regularized ERM problem via the standard dual transformation (see, e.g., Section 5). Hence A2BCD can be used as an asynchronous Nesterov-accelerated finite-sum algorithm. Coordinate descent methods, in which a chosen coordinate block i_k is updated at every iteration, are a popular way to solve equation 1.1. Randomized block coordinate descent (RBCD) updates a uniformly randomly chosen coordinate block i_k with a gradient-descent-like step: x_{k+1} = x_k − (1/L_{i_k}) ∇_{i_k} f(x_k). The complexity K of an algorithm is defined as the number of iterations required to decrease the error E(f(x_k) − f(x*)) to less than ε(f(x_0) − f(x*)). Randomized coordinate descent has a complexity of K = O(n(L̄/σ) ln(1/ε)). Using a series of averaging and extrapolation steps, accelerated RBCD improves this iteration complexity to K = O(n√(L̄/σ) ln(1/ε)), which leads to much faster convergence when L̄/σ is large. This rate is optimal when all L_i are equal. Finally, using a special probability distribution for the random block index i_k, the non-uniform accelerated coordinate descent method BID2 (NU_ACDM) can further decrease the complexity to O((Σ_{i=1}^n √(L_i/σ)) ln(1/ε)), which can be up to √n times faster than accelerated RBCD, since some L_i can be significantly smaller than L. NU_ACDM is the current state-of-the-art coordinate descent algorithm for solving equation 1.1. Our A2BCD algorithm generalizes NU_ACDM to the asynchronous-parallel case.
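To make the nonuniform sampling concrete, a minimal sketch of one non-uniformly sampled block-coordinate step follows. The sampling law p_i ∝ √L_i is an assumption here (it is the standard NU_ACDM choice; the paper's exact distribution is its equation 2.1), blocks are treated as scalar coordinates for simplicity, and grad_block is a hypothetical helper returning the i-th partial gradient.

import numpy as np

def nu_rbcd_step(x, grad_block, L, rng):
    # Sample a coordinate with probability proportional to sqrt(L_i),
    # then take a 1/L_i gradient step on that coordinate.
    p = np.sqrt(L) / np.sqrt(L).sum()
    i = rng.choice(len(L), p=p)
    x[i] -= (1.0 / L[i]) * grad_block(x, i)
    return x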
We solve equation 1.1 with a collection of p computing nodes that continually read a shared-access solution vector y into local memory, then compute a block gradient ∇_i f, which is used to update shared solution vectors (x, y, v). Proving convergence in the asynchronous case requires extensive new technical machinery. A traditional synchronous-parallel implementation is organized into rounds of computation: every computing node must complete an update in order for the next iteration to begin. However, this synchronization process can be extremely costly, since the lateness of a single node can halt the entire system. This becomes increasingly problematic with scale, as differences in node computing speeds, load balancing, random network delays, and bandwidth constraints mean that a synchronous-parallel solver may spend more time waiting than computing a solution. Computing nodes in an asynchronous solver do not wait for others to complete and share their updates before starting the next iteration. They simply continue to update the solution vectors with the most recent information available, without any central coordination. This eliminates costly idle time, meaning that asynchronous algorithms can be much faster than traditional ones, since they have much faster iterations. For instance, random network delays cause asynchronous algorithms to complete iterations Ω(ln(p)) times faster than synchronous algorithms at scale. This and other factors that influence the speed of iterations are discussed in Hannah & Yin (2017a). However, since many iterations may occur between the time that a node reads the solution vector and the time that its computed update is applied, the solution vector is effectively being updated with outdated information. At iteration k, the block gradient ∇_{i_k} f is computed at a delayed iterate ŷ_k, defined blockwise as ŷ_k = (y^(1)_{k−j(k,1)}, . . ., y^(n)_{k−j(k,n)}), for delay parameters j(k, 1), . . ., j(k, n) ∈ N. Here j(k, i) denotes how many iterations out of date coordinate block i is at iteration k. Different blocks may be out of date by different amounts, which is known as an inconsistent read. We assume that j(k, i) ≤ τ for some constant τ < ∞. Asynchronous algorithms were first proposed to solve linear systems. General convergence results and theory were developed later in BID5 and Luo & Tseng (1992; 1993). There is also a rich body of work on asynchronous SGD. In the distributed setting, global convergence was shown for stochastic variationally coherent problems even when the delays grow at a polynomial rate. An asynchronous decentralized SGD was also proposed with the same optimal sublinear convergence rate as SGD and linear speedup with respect to the number of workers. Other authors obtained an asymptotic rate of convergence for asynchronous momentum SGD on streaming PCA, which provides insight into the tradeoff between asynchrony and momentum. Further work proves convergence results for asynchronous SGD that highlight the tradeoff between faster iterations and iteration complexity. Additional related work is discussed in Section 4. In this paper, we prove that A2BCD attains NU_ACDM's state-of-the-art iteration complexity to highest order for solving equation 1.1, so long as delays are not too large (see Section 2). The proof is very different from that of BID2, and involves significant technical innovations and complexity related to the analysis of asynchronicity.
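The inconsistent read is easy to simulate. The sketch below assembles the delayed iterate ŷ_k from a buffer of past iterates; blocks are scalar coordinates in this toy version, and history is a hypothetical list of past iterates with the current one last.

import numpy as np

def inconsistent_read(history, j):
    # Coordinate block i is read from iterate k - j[i], with j[i] <= tau.
    # With j = [0, ..., 0] this reduces to a consistent read of y_k.
    y_hat = np.empty_like(history[-1])
    for i, delay in enumerate(j):
        y_hat[i] = history[-1 - delay][i]
    return y_hat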
We also prove that A2BCD (and hence NU_ACDM) has optimal complexity to within a constant factor over a fairly general class of randomized block coordinate descent algorithms (see Section 2.1). This extends prior results to asynchronous algorithms with L_i not all equal. Since asynchronous algorithms complete faster iterations, and A2BCD has optimal complexity, we expect A2BCD to be faster than all existing coordinate descent algorithms. We confirm with numerical experiments that A2BCD is the current fastest coordinate descent algorithm (see Section 5). We are only aware of one previous and one contemporaneous attempt at proving convergence for asynchronous Nesterov-accelerated algorithms. However, the first is not accelerated and relies on extreme assumptions, and the second obtains no speedup. Therefore, we claim that our results are the first-ever analysis of asynchronous Nesterov-accelerated algorithms that attains a speedup. Moreover, our speedup is optimal for delays that are not too large. The work of Meng et al. claims to obtain square-root speedup for an asynchronous accelerated SVRG. In the case where all component functions have the same Lipschitz constant L, the complexity they obtain reduces to (n + κ) ln(1/ε) for κ = O(τ n²) (Corollary 4.4). Hence the authors do not even obtain accelerated rates. Their convergence condition on τ is also restrictive. In a contemporaneous preprint, the authors skillfully devised accelerated schemes for asynchronous coordinate descent and SVRG using momentum compensation techniques. Although their complexity results have the improved √κ dependence on the condition number, they do not prove any speedup. Their complexity is τ times larger than the serial complexity. Since τ is necessarily greater than p, their results imply that adding more computing nodes will increase running time. The authors claim that they can extend their results to linear speedup for asynchronous, accelerated SVRG under sparsity assumptions. And while we think this is quite likely, they have not yet provided proof. We also derive a second-order ordinary differential equation (ODE), which is the continuous-time limit of A2BCD (see Section 3). This extends the ODE found in prior work to an asynchronous accelerated algorithm minimizing a strongly convex function. We prove this ODE linearly converges to a solution with the same rate as A2BCD's, without needing to resort to restarting techniques. The ODE analysis motivates and clarifies our proof strategy for the main results. We consider functions f for which it is efficient to calculate blocks of the gradient, so that coordinate-wise parallelization is efficient. That is, the function should be "coordinate friendly" Peng et al. (2016b). This is a very wide class that includes regularized linear regression, logistic regression, etc. The L2-regularized empirical risk minimization problem is not coordinate friendly in general; however, the equivalent dual problem is, and hence can be solved efficiently by A2BCD (see Section 5). To calculate the (k+1)-th iteration of the algorithm from iteration k, we use only one block of the gradient ∇_{i_k} f. We assume that the delays j(k, i) are independent of the block sequence i_k, but otherwise arbitrary (this is a standard assumption found in the vast majority of papers, but can be relaxed). Definition 1. Asynchronous Accelerated Randomized Block Coordinate Descent (A2BCD). Let f be σ-strongly convex, and let its gradient ∇f be L-Lipschitz with block coordinate Lipschitz parameters L_i as in equation 1.2.
We define the condition number κ = L/σ, and let L = min i L i. Using these parameters, we sample i k in an independent and identically distributed (IID) fashion according to DISPLAYFORM0 Let τ be the maximum asynchronous delay. We define the dimensionless asynchronicity parameter ψ, which is proportional to τ, and quantifies how strongly asynchronicity will affect convergence: DISPLAYFORM1 We use the above system parameters and ψ to define the coefficients α, β, and γ via eqs. (2.3) to (2.5). Hence A2BCD algorithm is defined via the iterations: eqs. (2.6) to (2.8). DISPLAYFORM2 See Section A for a discussion of why it is practical and natural to have the gradient DISPLAYFORM3 DISPLAYFORM4 Here we define y k = y 0 for all k < 0. The determination of the coefficients c i is in general a very involved process of trial and error, intuition, and balancing competing requirements. The algorithm doesn't depend on the coefficients, however; they are only an analytical tool. We define DISPLAYFORM5 To simplify notation 4, we assume that the minimizer x * = 0, and that f (x *) = 0 with no loss in generality. We define the Lyapunov function: DISPLAYFORM6 We now present this paper's first main contribution. Theorem 1. Let f be σ-strongly convex with a gradient ∇f that is L-Lipschitz with block Lipschitz constants DISPLAYFORM7 ). Then for A2BCD we have: DISPLAYFORM8 To obtain E[ρ k] ≤ ρ 0, it takes K A2BCD iterations for: 13) where O(·) is asymptotic with respect to σ −1/2 S → ∞, and uniformly bounded. DISPLAYFORM9 This is proven in Section B. A stronger for L i ≡ L can be proven, but this adds to the complexity of the proof; see Section E for a discussion. In practice, asynchronous algorithms are far more resilient to delays than the theory predicts. τ can be much larger without negatively affecting the convergence rate and complexity. This is perhaps because we are limited to a worst-case analysis, which is not representative of the average-case performance. Allen-Zhu et al. FORMULA15 (Theorem 5.1) shows a linear convergence rate of 1 − 2/ 1 + 2σ −1/2 S for NU_ACDM, which leads to the corresponding iteration complexity of DISPLAYFORM10. Hence, we have: DISPLAYFORM11 We can assume x * = 0 with no loss in generality since we may translate the coordinate system so that x * is at the origin. We can assume f (x *) = 0 with no loss in generality, since we can replace f (x) with f (x)−f (x *). Without this assumption, the Lyapunov function simply becomes: DISPLAYFORM12 Published as a conference paper at ICLR 2019 DISPLAYFORM13, the complexity of A2BCD asymptotically matches that of NU_ACDM. Hence A2BCD combines state-of-the-art complexity with the faster iterations and superior scaling that asynchronous iterations allow. We now present some special cases of the conditions on the maximum delay τ required for good complexity. Remark 1. Reduction to synchronous case. Notice that when τ = 0, we have ψ = 0, c i ≡ 0 and hence A k ≡ 0. Thus A2BCD becomes equivalent to NU_ACDM, the Lyapunov function 5 ρ k becomes equivalent to one found in BID2 (pg. 9), and Theorem 1 yields the same complexity. The maximum delay τ will be a function τ (p) of p, number of computing nodes. Clearly τ ≥ p, and experimentally it has been observed that τ = O(p). Let gradient complexity K(, τ) be the number of gradients required for an asynchronous algorithm with maximum delay τ to attain suboptimality. τ = 0, since with only 1 computing node there can be no delay. This corresponds to the serial complexity. 
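Since the coefficient formulas (equations 2.3-2.5) did not survive extraction from this source, the following is only a schematic serial sketch of the iteration structure (2.6)-(2.8): alpha, beta, gamma, and the sampling law p are passed in as opaque parameters, and grad_block is a hypothetical partial-gradient helper. It is meant to show the shape of the update, not the paper's exact algorithm.

import numpy as np

def a2bcd_schematic(y, v, grad_block, L, p, alpha, beta, gamma, T, rng):
    # Accelerated coordinate-descent skeleton: a gradient step, a
    # momentum-style averaging step, and an extrapolation step per iteration.
    for _ in range(T):
        i = rng.choice(len(L), p=p)
        g = grad_block(y, i)               # async version uses a delayed read of y
        x = y.copy()
        x[i] -= (1.0 / L[i]) * g           # gradient step, cf. (2.7)
        v = beta * v + (1.0 - beta) * y    # averaging, cf. (2.8)
        v[i] -= gamma * g
        y = alpha * v + (1.0 - alpha) * x  # extrapolation, cf. (2.6)
    return y, v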
We say that an asynchronous algorithm attains a complexity speedup if DISPLAYFORM0 is increasing in p. We say it attains linear complexity speedup if DISPLAYFORM1 In Theorem 1, we obtain a linear complexity speedup (for p not too large), whereas no other prior attempt can attain even a complexity speedup with Nesterov acceleration. In the ideal scenario where the rate at which gradients are calculated increases linearly with p, algorithms that have linear complexity speedup will have a linear decrease in wall-clock time. However in practice, when the number of computing nodes is sufficiently large, the rate at which gradients are calculated will no longer be linear. This is due to many parallel overhead factors including too many nodes sharing the same memory read/write bandwidth, and network bandwidth. However we note that even with these issues, we obtain much faster convergence than the synchronous counterpart experimentally. NU_ACDM and hence A2BCD are in fact optimal in some sense. That is, among a fairly wide class of coordinate descent algorithms A, they have the best-possible worst-case complexity to highest order. We extend the work in to encompass algorithms are asynchronous and have unequal L i. For a subset S ∈ R d, we let IC(S) (inconsistent read) denote the set of vectors v whose components are a combination of components of vectors in the set S. DISPLAYFORM0 Definition 4. Asynchronous Randomized Incremental Algorithms. Consider the unconstrained minimization problem equation 1.1 for function f satisfying the conditions stated in Section 1. We define the class A as algorithms G on this problem such that: DISPLAYFORM1 This is a rather general class: x k+1 can be constructed from any inconsistent reading of past iterates IC(X k), and any past gradient of an inconsistent read ∇ ij f (IC(X j)). DISPLAYFORM2 Hence A has a complexity lower bound: DISPLAYFORM3 Our proof in Section D follows very similar lines to;. In this section we present and analyze an ODE which is the continuous-time limit of A2BCD. This ODE is a strongly convex, and asynchronous version of the ODE found in. For simplicity, assume L i = L, ∀i. We rescale (I.e. we replace f (x) with 1 σ f.) f so that σ = 1, and hence κ = L/σ = L. Taking the discrete limit of synchronous A2BCD (i.e. accelerated RBCD), we can derive the following ODE 6 (see Section equation C.1): DISPLAYFORM0 We define the parameter η nκ 1/2, and the energy: DISPLAYFORM1. This is very similar to the Lyapunov function discussed in equation 2.11, with DISPLAYFORM2 the role of v k 2, and A k = 0 (since there is no delay yet). Much like the traditional analysis in the proof of Theorem 1, we can derive a linear convergence with a similar rate. See Section C.2. We may also analyze an asynchronous version of equation 3.1 to motivate the proof of our main theorem. HereŶ (t) is a delayed version of Y (t) with the delay bounded by τ. DISPLAYFORM0 Unfortunately, this energy satisfies (see Section equation C.4, equation C.7): DISPLAYFORM1 Hence this energy E(t) may not be decreasing in general. But, we may add a continuous-time asynchronicity error (see), much like in Definition 2, to create a decreasing energy. Let c 0 ≥ 0 and r > 0 be arbitrary constants that will be set later. Define: DISPLAYFORM2 Lemma 6. When rτ ≤ 1 2, the asynchronicity error A(t) satisfies: DISPLAYFORM3 DISPLAYFORM4 Hence f (Y (t)) convergence linearly to f (x *) with rate O exp −t/(nκ 1/2) Notice how this convergence condition is similar to Corollary 3, but a little looser. 
The convergence condition in Theorem 1 can actually be improved to approximately match this (see Section E).Proof. DISPLAYFORM5 The preceding should hopefully elucidate the logic and general strategy of the proof of Theorem 1. We now discuss related work that was not addressed in Section 1. Nesterov acceleration is a method for improving an algorithm's iteration complexity's dependence the condition number κ. FORMULA15 showed that many of the assumptions used in prior work (such as bounded delay τ < ∞) were unrealistic and unnecessary in general. In Hannah & Yin (2017a) the authors showed that asynchronous iterations will complete far more iterations per second, and that a wide class of asynchronous algorithms, including asynchronous RBCD, have the same iteration complexity as their synchronous counterparts. Hence certain asynchronous algorithms can be expected to significantly outperform traditional ones. authors propose a novel asynchronous catalyst-accelerated BID6 primal-dual algorithmic framework to solve regularized ERM problems. They structure the parallel updates so that the data that an update depends on is up to date (though the rest of the data may not be). However catalyst acceleration incurs a log(κ) penalty over Nesterov acceleration in general. In BID0, the author argues that the inner iterations of catalyst acceleration are hard to tune, making it less practical than Nesterov acceleration. To investigate the performance of A2BCD, we solve the ridge regression problem. Consider the following primal and corresponding dual objective (see for instance): DISPLAYFORM0 where A ∈ R d×n is a matrix of n samples and d features, and l is a label vector. We let A = [A 1, . . ., A m] where A i are the column blocks of A. We compare A2BCD (which is asynchronous accelerated), synchronous NU_ACDM (which is synchronous accelerated), and asynchronous RBCD (which is asynchronous non-accelerated). Nodes randomly select a coordinate block according to equation 2.1, calculate the corresponding block gradient, and use it to apply an update to the shared solution vectors. synchronous NU_ACDM is implemented in a batch fashion, with batch size p (1 block per computing node). Nodes in synchronous NU_ACDM implementation must wait until all nodes apply their computed gradients before they can start the next iteration, but the asynchronous algorithms simply compute with the most up-to-date information available. We use the datasets w1a (47272 samples, 300 features), wxa which combines the data from from w1a to w8a (293201 samples, 300 features), and aloi (108000 samples, 128 features) from. The algorithm is implemented in a multi-threaded fashion using C++11 and GNU Scientific Library with a shared memory architecture. We use 40 threads on two 2.5GHz 10-core Intel Xeon E5-2670v2 processors. See Section A.1 for a discussion of parameter tuning and estimation. The parameters for each algorithm are tuned to give the fastest performance, so that a fair comparison is possible. A critical ingredient in the efficient implementation of A2BCD and NU_ACDM for this problem is the efficient update scheme discussed in Lee & Sidford (2013b; a). In linear regression applications such as this, it is essential to be able to efficiently maintain or recover Ay. This is because calculating block gradients requires the vector A T i Ay, and without an efficient way to recover Ay, block gradient evaluations are essentially 50% as expensive as full-gradient calculations. 
Unfortunately, every accelerated iteration results in dense updates to y_k because of the averaging step in equation 2.6. Hence Ay must be recalculated from scratch. However, Lee & Sidford (2013a) introduces a linear transformation that allows for an equivalent iteration that results in sparse updates to new iteration variables p and q. The original purpose of this transformation was to ensure that the averaging steps (e.g. equation 2.6) do not dominate the computational cost for sparse problems. However, we find a more important secondary use which applies to both sparse and dense problems. Since the updates to p and q are sparse coordinate-block updates, the vectors Ap and Aq can be efficiently maintained, and therefore block gradients can be efficiently calculated. The specifics of this efficient implementation are discussed in Section A.2. In Table 5, we plot the sub-optimality vs. time for decreasing values of λ, which corresponds to increasingly large condition numbers κ. When κ is small, acceleration doesn't result in a significantly better convergence rate, and hence A2BCD and async-RBCD both outperform sync-NU_ACDM since they complete faster iterations at similar complexity. Acceleration for low κ has unnecessary overhead, which means async-RBCD can be quite competitive. When κ becomes large, async-RBCD is no longer competitive, since it has a poor convergence rate. We observe that A2BCD and sync-NU_ACDM have essentially the same convergence rate, but A2BCD is up to 4−5× faster than sync-NU_ACDM because it completes much faster iterations. We observe this advantage despite the fact that we are in an ideal environment for synchronous computation: a small, homogeneous, high-bandwidth, low-latency cluster. In large-scale heterogeneous systems with greater synchronization overhead, bandwidth constraints, and latency, we expect A2BCD's advantage to be much larger.
(TAB4: Sub-optimality f(y_k) − f(x*) (y-axis) vs. time in seconds (x-axis) for A2BCD, synchronous NU_ACDM, and asynchronous RBCD for data sets w1a, wxa and aloi for various values of λ.)
An efficient implementation will have coordinate blocks of size greater than 1. This is to ensure the efficiency of linear algebra subroutines. Especially because of this, the bulk of the computation for each iteration is computing ∇_{i_k} f(ŷ_k), and not the averaging steps. Hence the computing nodes only need a local copy of y_k in order to do the bulk of an iteration's computation. Given this gradient ∇_{i_k} f(ŷ_k), updating y_k and v_k is extremely fast (x_k can simply be eliminated). Hence it is natural to simply store y_k and v_k centrally, and update them when the delayed gradients ∇_{i_k} f(ŷ_k) arrive. Given the above, a write mutex over (y, v) has minuscule overhead (which we confirm with experiments), and makes the labeling of iterates unambiguous. This also ensures that v_k and y_k are always up to date when (y, v) are being updated, whereas the gradient ∇_{i_k} f(ŷ_k) may at the same time be out of date, since it has been calculated with an outdated version of y_k. However, a write mutex is not necessary in practice, and does not appear to affect convergence rates or computation time. Also, it is possible to prove convergence under more general asynchronicity. When defining the coefficients, σ may be underestimated, and L, L_1, . . ., L_n may be overestimated if exact values are unavailable. Notice that x_k can be eliminated from the above iteration, and the block gradient ∇_{i_k} f(ŷ_k) only needs to be calculated once per iteration.
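A minimal sketch of the caching idea that makes block gradients cheap (this is only the simple incremental-product trick, not the full Lee & Sidford p, q transformation, which additionally handles the dense averaging steps). The column-block partition A_blocks is an assumed data layout.

import numpy as np

def apply_block_update(y_blocks, Ay, A_blocks, i, delta):
    # Keep the cached product Ay consistent under a single block update
    # y_i <- y_i + delta, so the block gradient (which needs A_i^T (A y))
    # never requires recomputing A y from scratch.
    y_blocks[i] = y_blocks[i] + delta
    Ay += A_blocks[i] @ delta
    return y_blocks, Ay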
A larger (or overestimated) maximum delay τ will cause a larger asynchronicity parameter ψ, which leads to more conservative step sizes to compensate. To estimate ψ, one can first performed a dry run with all coefficient set to 0 to estimate τ. All function parameters can be calculated exactly for this problem in terms of the data matrix and λ. We can then use these parameters and this tau to calculate ψ. ψ and τ merely change the parameters, and do not change execution patterns of the processors. Hence their parameter specification doesn't affect the observed delay. Through simple tuning though, we found that ψ = 0.25 ed in good performance. In tuning for general problems, there are theoretical reasons why it is difficult to attain acceleration without some prior knowledge of σ, the strong convexity modulus BID3. Ideally σ is pre-specified for instance in a regularization term. If the Lipschitz constants L i cannot be calculated directly (which is rarely the case for the classic dual problem of empirical risk minimization objectives), the line-search method discussed in Section 4 can be used. As mentioned in Section 5, authors in Lee & Sidford (2013a) proposed a linear transformation of an accelerated RBCD scheme that in sparse coordinate updates. Our proposed algorithm can be given a similar efficient implementation. We may eliminate x k from A2BCD, and derive the equivalent iteration below: DISPLAYFORM0 where C and Q k are defined in the obvious way. Hence we define auxiliary variables p k, q k defined via: DISPLAYFORM0 These clearly follow the iteration: DISPLAYFORM1 Since the vector Q k is sparse, we can evolve variables p k, and q k in a sparse manner, and recover the original iteration variables at the end of the algorithm via A.1.The gradient of the dual function is given by: DISPLAYFORM2 As mentioned before, it is necessary to maintain or recover Ay k to calculate block gradients. Since Ay k can be recovered via the linear relation in equation A.1, and the gradient is an affine function, we maintain the auxiliary vectors Ap k and Aq k instead. Hence we propose the following efficient implementation in Algorithm 1. We used this to generate the in Table 5. We also note also that it can improve performance to periodically recover v k and y k, reset the values of p k, q k, and C to v k, y k, and I respectively, and restarting the scheme (which can be done cheaply in time O(d)).We let B ∈ R 2×2 represent C k, and b represent B −1. ⊗ is the Kronecker product. Each computing node has local outdated versions of p, q, Ap, Aq which we denotep,q,Âp,Âq respectively. We also find it convenient to define: DISPLAYFORM3 Algorithm 1 Shared-memory implementation of A2BCD Randomly select block i via equation 2.1. Read shared data into local memory:p ← p,q ← q,Âp ← Ap,Âq ← Aq,B ← B. Compute block gradient: DISPLAYFORM0 10: DISPLAYFORM1 11: DISPLAYFORM0 12: DISPLAYFORM1 Increase iteration count: k ← k + 1 14: end while 15: Recover original iteration variables: DISPLAYFORM2 We first recall a couple of inequalities for convex functions. Lemma 7. Let f be σ-strongly convex with L-Lipschitz gradient. Then we have: DISPLAYFORM0 We also find it convenient to define the norm: DISPLAYFORM1 B.1 Starting point First notice that using the definition equation 2.8 of v k+1 we have: DISPLAYFORM0 We have the following general identity: DISPLAYFORM1 It can also easily be verified from equation 2.6 that we have: DISPLAYFORM2 DISPLAYFORM3 This inequality is our starting point. 
We analyze the terms on the second line in the next section. To analyze these terms, we need a small lemma. This lemma is fundamental in allowing us to deal with asynchronicity. Lemma 8. Let χ, A > 0. Let the delay be bounded by τ. Then: DISPLAYFORM0 Proof. See Hannah & Yin (2017a). We have: DISPLAYFORM0 The terms in bold in equation B.8 and equation B.9 are a of the asynchronicity, and are identically 0 in its absence. Our strategy is to separately analyze terms that appear in the traditional analysis of Nesterov FORMULA15, and the terms that from asynchronicity. We first prove equation B.8: FIG5 equation B.10 follows from strong convexity (equation B.2 with x = y k and y = x *), and the fact that ∇f is L-Lipschitz. The term due to asynchronicity becomes: DISPLAYFORM0 DISPLAYFORM1 using Lemma 8 with χ = κψ −1, A = y k. Combining this with equation B.10 completes the proof of equation B.8.We now prove equation B.9: DISPLAYFORM2 Here the last line follows from Lemma 8 with χ = κψ DISPLAYFORM3 We can complete the proof using the following identity that can be easily obtained from equation 2.6: DISPLAYFORM4 Much like , we need a f (x k) term in the Lyapunov function (see the middle of page 357). However we additionally need to consider asynchronicity when analyzing the growth of this term. Again terms due to asynchronicity are emboldened. Lemma 10. We have: DISPLAYFORM0 Proof. From the definition equation 2.7 of x k+1, we can see that x k+1 − y k is supported on block i k. Since each gradient block ∇ i f is L i Lipschitz with respect to changes to block i, we can use equation B.1 to obtain: DISPLAYFORM1 Here the last line followed from the definition equation B.3 of the norm · * 1/2. We now analyze the middle term: DISPLAYFORM2 We then apply Lemma 8 to this with χ = 2h DISPLAYFORM3 Finally to complete the proof, we combine equation B.11, with equation B.12. The previous inequalities produced difference terms of the form y k+1−j − y k−j 2. The following lemma shows how these errors can be incorporated into a Lyapunov function. Lemma 11. Let 0 < r < 1 and consider the asynchronicity error and corresponding coefficients: DISPLAYFORM0 Remark 2. Interpretation. This means that an asynchronicity error term A k can negate a series of difference terms − ∞ j=1 s j y k+1−j − y k−j 2 at the cost of producing an additional error c 1 E k y k+1 − y k 2, while maintaining a convergence rate of r. This essentially converts difference terms, which are hard to deal with, into a y k+1 − y k 2 term which can be negated by other terms in the Lyapunov function. The proof is straightforward. Proof. DISPLAYFORM0 Noting the following completes the proof: DISPLAYFORM1 Given that A k allows us to negate difference terms, we now analyze the cost c 1 E k y k+1 − y k 2 of this negation. We have: DISPLAYFORM0 Proof. DISPLAYFORM1 Here equation B.13 following from equation 2.8, the definition of v k+1. equation B.14 follows from the inequality x + y 2 ≤ 2 x 2 + 2 y 2. The rest is simple algebraic manipulation. DISPLAYFORM2 (definitions of h and α: equation 2.3, and equation 2.5) = 1 DISPLAYFORM3 Rearranging the definition of ψ, we have: DISPLAYFORM4 Using this on equation B.15, we have: DISPLAYFORM5 This completes the proof. We are finally in a position to bring together all the all the previous together into a master inequality for the Lyapunov function ρ k (defined in equation 2.11). 
After this lemma is proven, we will prove that the right hand size is negative, which will imply that ρ k linearly converges to 0 with rate β. Lemma 13. Master inequality. We have: DISPLAYFORM0 Proof. DISPLAYFORM1 We now collect and organize the similar terms of this inequality. DISPLAYFORM2 Now finally, we add the function-value and asynchronicity terms to our analysis. We use Lemma 11 is with r = 1 − σ 1/2 S −1, and DISPLAYFORM3 Notice that this choice of s i will recover the coefficient formula given in equation 2.9. Hence we have: DISPLAYFORM4 (Lemmas 11 and 12) + c 1 2α In the next section, we will prove that every coefficient on the right hand side of equation B.16 is 0 or less, which will complete the proof of Theorem 1. DISPLAYFORM5 DISPLAYFORM6 Here the last line followed since ψ ≤ 1 2 and σ 1/2 S −1 ≤ 1. We now analyze the coefficient of DISPLAYFORM7 Proof. DISPLAYFORM8 in Lemma 13 is non-positive. Proof. We first need to bound c 1.(equation B.18 and equation 2.9) c 1 = s DISPLAYFORM9 It can be easily verified that if x ≤ 1 2 and y ≥ 0, then (1 − x) −y ≤ exp(2xy). Using this fact with x = σ 1/2 S −1 and y = τ, we have: DISPLAYFORM10 (since ψ ≤ 3/7 and hence τ σ DISPLAYFORM11 We now analyze the coefficient of ∇f (ŷ k) DISPLAYFORM12 Proof. DISPLAYFORM13 Here the last inequality follows since β ≤ 1 and α ≤ σ 1/2 S −1. We now rearrange the definition of ψ to yield the identity: DISPLAYFORM14 Using this, we have: DISPLAYFORM15 Here the last line followed since L ≤ L, ψ ≤ 3 7, and τ ≥ 1. Hence the proof is complete. Proof of Theorem 1. Using the master inequality 13 in combination with the previous Lemmas 14, 15, 16, and 17, we have: DISPLAYFORM16 When we have: DISPLAYFORM17 then the Lyapunov function ρ k has decreased below ρ 0 in expectation. Hence the complexity K satisfies: DISPLAYFORM18 Now it can be shown that for 0 < x ≤ 1 2, we have: DISPLAYFORM19 Since n ≥ 2, we have σ 1/2 S −1 ≤ 1 2. Hence: DISPLAYFORM20 An expression for K NU_ACDM , the complexity of NU_ACDM follows by similar reasoning. DISPLAYFORM21 Finally we have: DISPLAYFORM22 which completes the proof. C.1 Derivation of ODE for synchronous A2BCDIf we take expectations with respect to E k, then synchronous (no delay) A2BCD becomes: DISPLAYFORM0 We find it convenient to define η = nκ 1/2. Inspired by this, we consider the following iteration: DISPLAYFORM1 C.1 Derivation of ODE for synchronous A2BCDfor coefficients: DISPLAYFORM2 s is a discretization scale parameter that will be sent to 0 to obtain an ODE analogue of synchronous A2BCD. We first use equation DISPLAYFORM3 The proof of convergence is completed in Section 3. For parameter set σ, L 1,..., L n, n, we construct a block-separable function f on the space R As mentioned, a stronger than Theorem 1 is possible. In the case when L i = L for all i, we can consider a slight modification of the coefficients: DISPLAYFORM0 (E.1) DISPLAYFORM1 for the asynchronicity parameter: DISPLAYFORM2 This leads to complexity: DISPLAYFORM3 Here there is no restriction on ψ as in Theorem 1, and hence there is no restriction on τ. Assuming ψ ≤ 1 gives optimal complexity to within a constant factor. Notice then that the ing condition of τ τ ≤ 1 6 nκ −1/2 (E.9) now essentially matches the one in Theorem 3 in Section 3. While this is stronger, it increases the complexity of the proof substantially. So in the interests of space and simplicity, we do not prove this stronger .
We give the first-ever convergence proof of an asynchronous accelerated algorithm that attains a speedup.
1,154
scitldr
A framework for efficient Bayesian inference in probabilistic programs is introduced by embedding a sampler inside a variational posterior approximation. Its strength lies in both ease of implementation and automatically tuning sampler parameters to speed up mixing time. Several strategies to approximate the evidence lower bound (ELBO) computation are introduced, including a rewriting of the ELBO objective. Experimental evidence is shown by performing experiments on an unconditional VAE on density estimation tasks; solving an influence diagram in a high-dimensional space with a conditional variational autoencoder (cVAE) as a deep Bayes classifier; and state-space models for time-series data. We consider a probabilistic program (PP) to define a distribution p(x, z), where x are observations and z both latent variables and parameters, and ask queries involving the posterior p(z|x). This distribution is typically intractable but, conveniently, probabilistic programming languages (PPLs) provide inference engines to approximate it using Monte Carlo methods (e.g. particle Markov Chain Monte Carlo (MCMC) or Hamiltonian Monte Carlo (HMC)) or variational approximations (e.g. Automatic Differentiation Variational Inference (ADVI)). Whereas the latter are biased and underestimate uncertainty, the former may be exceedingly slow depending on the target distribution. For this reason, over recent years there has been an increasing interest in developing more efficient posterior approximations. It is known that the performance of a sampling method depends on the parameters used. Here we propose a framework to automatically adapt the posterior shape and tune the parameters of a posterior sampler with the aim of boosting Bayesian inference in PPs. Our framework constitutes a principled way to enhance the flexibility of the variational posterior approximation, yet can also be seen as a procedure to tune the parameters of an MCMC sampler. Our contribution is a new flexible and unbiased variational approximation to the posterior, which improves an initial variational approximation with a stochastic process that is learnable via automatic differentiation. Appendix A discusses related work. In standard VI, the variational approximation q_φ(z|x) is analytically tractable and typically chosen as a factorized Gaussian distribution. We propose to use a more flexible approximating posterior by embedding a sampler through q_{φ,η}(z|x) = ∫ Q_{η,T}(z|z_0) q_{0,φ}(z_0|x) dz_0, where q_{0,φ}(z|x) is the initial and tractable density (i.e., the starting state for the sampler). We refer to q_{φ,η}(z|x) as the refined variational approximation. The distribution Q_{η,T}(z|z_0) refers to a stochastic process parameterized by η used to evolve the original density q_{0,φ}(z|x) and achieve greater flexibility; we describe particular forms of it below. When T = 0, no refinement steps are performed, and the refined variational approximation coincides with the original one, q_{φ,η}(z|x) = q_{0,φ}(z|x). As T increases, the variational approximation will be closer to the exact posterior, provided that Q_{η,T} is a valid MCMC sampler. Next, we maximize a refined ELBO objective, ELBO(q) = E_{q_{φ,η}(z|x)}[log p(x, z) − log q_{φ,η}(z|x)], to minimize the divergence KL(q_{φ,η}(z|x)||p(z|x)). The first term of the ELBO only requires sampling from q_{φ,η}(z|x); however, the second term, −E_{q_{φ,η}(z|x)}[log q_{φ,η}(z|x)], also requires evaluating the evolving density. Regarding Q_{η,T}(z|z_0), we consider the following families of sampling algorithms.
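A minimal sketch of drawing from the refined variational approximation: sample z_0 from the initial Gaussian via the reparameterization trick, then apply T SGLD refinement steps targeting log p(x, z). Here log_joint is a hypothetical helper returning log p(x, z) per sample; each step detaches the graph, so this sketch only draws samples (to train φ and η through the sampler, keep the graph instead of detaching).

import torch

def refined_sample(mu, log_var, log_joint, eta=1e-3, T=5):
    # z0 ~ q_{0,phi}(z|x), reparameterized
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
    for _ in range(T):
        z = z.detach().requires_grad_(True)
        grad = torch.autograd.grad(log_joint(z).sum(), z)[0]
        # SGLD step: gradient ascent on log p(x, z) plus N(0, 2*eta*I) noise
        z = z + eta * grad + (2.0 * eta) ** 0.5 * torch.randn_like(z)
    return z.detach()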
When the latent variables z are continuous (z ∈ R^d), we evolve the original variational density q_{0,φ}(z|x) through a stochastic diffusion process. To make it tractable, we discretize the Langevin dynamics using the Euler-Maruyama scheme, arriving at the stochastic gradient Langevin dynamics (SGLD) sampler. We then follow the process Q_{η,T}(z|z_0) (representing T iterations of an MCMC sampler). As an example, for the SGLD sampler z_i = z_{i−1} + η∇ log p(x, z_{i−1}) + ξ_i, where i iterates from 1 to T; in this case, the only parameter of the SGLD sampler is the learning rate η. The noise for SGLD is ξ_i ∼ N(0, 2ηI). The initial variational distribution q_{0,φ}(z|x) is a Gaussian parameterized by a deep neural network (NN). Then, T iterations of a sampler Q parameterized by η are applied, leading to q_{φ,η}. An alternative is given by ignoring the noise vector ξ, thus refining the initial variational approximation with just stochastic gradient descent (SGD). Moreover, we can use Stein variational gradient descent (SVGD) or a stochastic version to apply repulsion between particles and promote a more extensive exploration of the latent space. We propose a set of guidelines for the ELBO optimization using the refined variational approximation. Particle approximation. We can consider the flow Q_{η,T}(z|z_0) as a mixture of Dirac deltas (i.e., we approximate it with a finite set of particles); that is, we sample z_1, . . ., z_K ∼ Q_{η,T}(z|z_0). Alternatively, we may use the factorization Q̂_{η,T}(z|z_0) = ∏_{i=1}^T q_η(z_i|z_{i−1}) q_{0,φ}(z_0|x), for which the entropy of each factor can be straightforwardly computed; i.e., for the case of SGLD, q_η(z_i|z_{i−1}) = N(z_{i−1} + η∇ log p(x, z_{i−1}), 2ηI). This approximation keeps track of a better estimate of the entropy than the particle approximation. Deterministic flows. If using a deterministic flow (such as SGD or SVGD), we can keep track of the change in entropy at each iteration using the change of variable formula. However, this requires a costly Jacobian computation, making it unfeasible to combine with our backpropagation-through-the-sampler scheme (Sec. 2.3) for moderately complex problems. In standard VI, the variational approximation q(z|x; φ) is parameterized by φ. The parameters are learned using SGD or variants such as Adam, using ∇_φ ELBO(q). Since we have shown how to embed a sampler inside the variational guide, it is also possible to compute a gradient of the objective with respect to the sampler parameters η. For instance, we can compute a gradient with respect to the learning rate η of the SGLD or SGD process from Section 2.1, ∇_η ELBO(q), to search for an optimal step size at every VI iteration. This is an additional step apart from using the gradient ∇_φ ELBO(q) employed to learn a good initial sampling distribution. See Appendix D.3 for a discussion of the two modes of automatic differentiation that can be used. Code is released at https://github.com/vicgalle/vis. The VIS framework was implemented using PyTorch, though we also release a notebook for the first experiment using Jax to highlight its simple implementation. Appendix B contains additional experiments; Appendix C, implementation details. Funnel density. As a preliminary experiment, we test the VIS framework on a synthetic yet complex bi-dimensional target distribution. As a variational approximation we take the usual diagonal Gaussian. For the VIS case, we refine it for T = 1 step using SGLD. Results are in Figure 1.
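The gradient ∇_η ELBO(q) can be obtained simply by keeping the refinement step on the autodiff graph. Below is a minimal sketch of tuning the step size η this way (the Full AD mode of Appendix D.3). log_joint is a hypothetical log p(x, z) function, z0 must be a reparameterized draw that requires grad, and the entropy term is omitted for brevity, as in the Dirac delta approximation.

import torch

eta = torch.tensor(1e-3, requires_grad=True)
optimizer = torch.optim.Adam([eta], lr=1e-4)

def vis_step(z0, log_joint):
    z0 = z0.detach().requires_grad_(True)
    g = torch.autograd.grad(log_joint(z0).sum(), z0, create_graph=True)[0]
    z = z0 + eta * g                # one SGD refinement step, kept on the graph
    loss = -log_joint(z).mean()     # negative (approximate) refined ELBO
    optimizer.zero_grad()
    loss.backward()                 # gradient flows into eta through Delta_z
    optimizer.step()
    return loss.item()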
Clearly, our refined version achieves a tighter bound; the VIS variant is placed nearer to the mean of the true distribution and is more dispersed than the original variational approximation, confirming that the refinement step helps in attaining more flexible posterior approximations. State-space model (DLM). We now test the VIS framework on the Mauna Loa monthly CO2 time series data. As the training set, we take the first 10 years, and we evaluate over the next 2 years. We use a dynamic linear model (DLM) composed of a local linear trend plus a seasonality block of periodicity 12. The full model specification can be checked in Appendix C.1. As a preprocessing step, we standardize the time series to zero mean and unit deviation. To guarantee the same computational time budget, the model without refining is run for 10 epochs, whereas the model with refinement is run for 4 epochs. We use the particle approximation from Sec. 2.2. We report mean absolute error (MAE) and predictive entropy in Table 1. In addition, we compute the interval score, a strictly proper scoring rule. As can be seen, for similar wall-clock times, the refined model not only achieves lower MAE, but its predictive intervals are also narrower than those of the non-refined counterpart. Variational Autoencoder. We aim to check whether VIS is competitive with respect to other recent algorithms. We test our approach on a Variational Autoencoder (VAE) model, which is the building block of more complex models and tasks. The VAE defines a conditional distribution p_θ(x|z), generating an observation x from a latent variable z. We are interested in modelling two 28 × 28 image distributions, MNIST and fashion-MNIST. To perform inference (learn parameters θ), the VAE introduces a variational approximation q_φ(z|x). In the standard setting, this is Gaussian; we instead use the refined variational approximation with various values of T. We used the MC approximation, though we achieved similar results using the Gaussian one. As the experimental setup, we reproduce the setting from previous work; results are reported in Table 2. To guarantee a fair comparison, we trained the VIS-5-10 variant for 10 epochs, whereas all the other variants were trained for 15 (fMNIST) or 20 epochs (MNIST), so that the VAE performance is comparable to earlier reported results. Although VIS is trained for fewer epochs, by increasing the number of MCMC iterations T we dramatically improve the test log-likelihood. In terms of computational complexity, the average time per epoch using T = 5 is 10.46s, whereas with no refinement (T = 0) it is 6.10s (hence our decision to train the refined variant for fewer epochs): a moderate increase in computing time buys a dramatic increase in log-likelihood while not introducing new parameters, except for the learning rate η. We also compare our results with the contrastive divergence approach. Figure 2 displays ten random samples of reconstructed digit images as a visual check. Discussion. We have proposed a flexible and efficient framework to perform inference in probabilistic programs defining wide classes of models. Our framework can be seen as a general way of tuning SG-MCMC sampler parameters, adapting the initial distributions and the learning rate. Key to the success and applicability of the VIS framework are the approximations introduced for the intractable parts of the refined variational approximations, which are computationally cheap but convenient.
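For reference, the interval score used in the DLM experiment above can be computed as follows. This is the Gneiting & Raftery (2007) interval score for a central (1 − alpha) predictive interval; the default alpha = 0.05 is an assumption, as the paper does not state its choice here.

import numpy as np

def interval_score(lower, upper, y, alpha=0.05):
    # Penalizes wide intervals, plus a 2/alpha charge per unit of
    # violation when the observation falls outside [lower, upper].
    width = upper - lower
    below = (2.0 / alpha) * np.maximum(lower - y, 0.0)
    above = (2.0 / alpha) * np.maximum(y - upper, 0.0)
    return np.mean(width + below + above)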
The idea of preconditioning the posterior distribution to speed up the mixing time of an MCMC sampler has recently been explored in two related works, where a reparameterization is learned before performing the sampling via HMC. Both papers extend seminal earlier work by learning an efficient and expressive deep, non-linear transformation instead of a polynomial regression. However, they do not account for tuning the parameters of the sampler as we introduce in Section 2, where a fully end-to-end differentiable sampling scheme is proposed. Prior work also introduced a general framework for constructing more flexible variational distributions, called normalizing flows. These transformations are one of the main techniques to improve the flexibility of current VI approaches and have recently pervaded the literature on approximate Bayesian inference, with developments such as continuous-time normalizing flows, which extend an initial simple variational posterior with a discretization of Langevin dynamics. However, they require a generative adversarial network (GAN) to learn the posterior, which can be unstable in high-dimensional spaces. We overcome this issue with the novel formulation stated in Section 2. Our framework is also compatible with different optimizers, not only those derived from Langevin dynamics. Other recent proposals to create more flexible variational posteriors are based on implicit approaches, which typically require a GAN (Huszár, 2017) or implicit schemes such as UIVI or SIVI. Our variational approximation is also implicit, but we use a sampling algorithm to drive the evolution of the density, combined with a Dirac delta approximation, to derive an efficient variational approximation, as we report in the extensive experiments of Section 3. Closely related to our framework is work in which a VAE is learned using HMC. We use a similar compound distribution as the variational approximation, though our framework allows for any SG-MCMC sampler (via the entropy approximation strategies introduced) and also the tuning of sampler parameters via gradient descent. Our work is also related to the recent idea of amortization of samplers. A common problem with these approaches is that they incur an additional error, the amortization gap. We alleviate this by evolving a set of particles z_i with a stochastic process in the latent space after learning a good initial distribution. Hence, the bias generated by the initial approximation is significantly reduced after several iterations of the process. A recent related article defines a compound distribution similar to our framework. However, we focus on an efficient approximation using the reverse KL divergence, the standard and well-understood divergence used in variational inference, which allows for tuning sampler parameters and achieving competitive results. With the final experiments we show that the VIS framework can deal with more general probabilistic graphical models. Influence diagrams are one of the most familiar representations of a decision analysis problem. There is a long history of bridging the gap between influence diagrams and probabilistic graphical models, so better tools for Bayesian inference can automatically be leveraged to solve influence diagrams. We showcase the flexibility of the proposed scheme to solve inference problems in an experiment with a classification task in a high-dimensional setting.
As the dataset, the MNIST handwritten digit classification task is chosen, in which grey-scale 28 × 28 images have to be classified into one of the ten classes Y = {0, 1, . . ., 9}. More concretely, we extend the VAE model to condition it on a discrete variable y, leading to the conditional VAE (cVAE). A cVAE defines a decoder distribution p_θ(x|z, y) on an input space x ∈ R^D given class label y ∈ Y and latent variable z ∈ R^d. To perform inference, a variational posterior is learned as an encoder q_φ(z|x, y), with prior p(z) = N(0, I). Leveraging the conditional structure on y, we use the generative model as a classifier via Bayes rule: p(y|x) ∝ p(y) (1/K) Σ_{k=1}^K p_θ(x|z^(k), y), where we use K Monte Carlo samples z^(k) ∼ q_φ(z|x, y). In the experiments we set K = 5. Given a test sample x, the label ŷ with highest probability p(y|x) is predicted. Figure 5 in the Appendix depicts the corresponding influence diagram. Additional details regarding the model architecture and hyperparameters can be found in Appendix C. For comparison purposes, we perform various experiments changing T for the transition distribution Q_{η,T} in the refined variational approximation. Results are in Table 3. We report the test accuracy achieved at the end of training. Note we are comparing different values of T depending on whether we are in the training or testing phase (in the latter, the model and variational parameters are kept frozen). The model with T_tr = 5 was trained for 10 epochs, whereas the other settings were trained for 15 epochs, in order to give all settings similar training times. Results are averaged over 3 runs with different random seeds. From the results it is clear that the effect of using the refined variational approximation (the cases when T > 0) is crucially beneficial to achieving higher accuracy. The effect of learning a good initial distribution and inner learning rate by using the gradients ∇_φ ELBO(q) and ∇_η ELBO(q) has a highly positive impact on the accuracy obtained. On a final note, we have not included the case of only using an SGD or SGLD sampler (i.e., without learning an initial distribution q_{0,φ}(z|x)), since the results were much worse than the ones in Table 3 for a comparable computational budget. This strongly suggests that, for a comparable computational budget, learning a good initial distribution is essential.
We test our variational approximation on two state-space models, one for discrete data and the other for continuous observations. All the experiments in this subsection use the Fast AD version from Section D.3, since it was not necessary to further tune the sampler parameters to obtain competitive results. The HMM follows the standard factorization p(x_{1:τ}, z_{1:τ}, θ) = p(θ) ∏_t p(z_t|z_{t−1}, θ_tr) p(x_t|z_t, θ_em), where each conditional is a Categorical distribution taking 5 different classes, and the prior p(θ) = p(θ_em)p(θ_tr) is a product of two Dirichlet distributions over the emission and transition probabilities, respectively. We perform inference on the parameters θ. The DLM equations are the same as in the HMM case, though the conditional distributions are now Gaussian and the parameters θ refer to the emission and transition variances. As before, we perform inference over θ. The full model implementations can be checked in Appendix C.1, based on funsor, a PPL on top of the PyTorch autodiff framework. For each model, we generate a synthetic dataset, and use the refined variational approximation with T = 0, 1, 2. As the original variational approximation to the parameters θ, we use a Dirac delta. Performing VI with this approximation corresponds to MAP estimation using the Kalman filter in the DLM case and the Baum-Welch algorithm in the HMM case, since we marginalize out the latent variables z_{1:τ}.
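A sketch of the deep Bayes classifier defined by the p(y|x) equation above. The helper interfaces encoder(x, y) and decoder(z, y), which return torch.distributions objects, are assumptions for illustration.

import torch

def cvae_classify(x, encoder, decoder, log_prior, K=5, n_classes=10):
    # p(y|x) ∝ p(y) * (1/K) * sum_k p_theta(x | z_k, y), z_k ~ q_phi(z|x, y).
    scores = []
    for y in range(n_classes):
        q = encoder(x, y)
        log_px = torch.stack([decoder(q.rsample(), y).log_prob(x).sum(-1)
                              for _ in range(K)])
        # log of the Monte Carlo average of the K likelihoods
        avg = torch.logsumexp(log_px, dim=0) - torch.log(torch.tensor(float(K)))
        scores.append(avg + log_prior[y])
    return torch.stack(scores, dim=-1).argmax(dim=-1)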
Model details are given in Appendix C.1.1. Figure 3 shows the results. The first row reports the experiments related to the HMM; the second one, to the DLM. While in all graphs we report the evolution of the log-likelihood during inference, in the first column we report the number of ELBO iterations, whereas in the second column we measure wall-clock time as the optimization takes place. We confirm that VIS (T > 0) achieves better results than regular optimization with VI (T = 0) for a similar amount of time. With the aim of assessing whether ELBO optimization helps in attaining better auxiliary scores, we also report on a prediction task. We generate a synthetic time series of alternating 0s and 1s for τ = 105 timesteps. We train the HMM model from before on the first 100 points, and report in Table 4 the accuracy of the predictive distribution p(y_t) averaged over the last 5 time-steps. We also report the predictive entropy, since it helps in assessing the confidence of the model in its forecast and is a strictly proper scoring rule. To guarantee the same computational time budget and a fair comparison, the model without refining is run for 50 epochs, whereas the model with refinement is run for 20 epochs. We see that the refined model achieves higher accuracy than its counterpart; in addition, it is correctly more confident in its predictions. The synthetic DLM data are generated with emissions x_t ∼ N(3.0 z_t + 0.5, σ_em), with z_0 = 0.0. The DLM model is comprised of a linear trend component plus a seasonal block of period 12. The trend is specified as a local linear trend. With respect to the seasonal component, the main idea is to cycle the state: suppose θ_t ∈ R^p, with p being the seasonal period. Then, at each timestep, the model focuses on the first component of the state vector. Thus, we can specify the seasonal component via an observation vector F and a transition matrix G, where F is a p-dimensional vector selecting the first component and G is a p × p matrix that cyclically permutes the components of the state. The encoder and decoder networks are as follows:
def encode(self, x):
    h1_mu = F.relu(self.fc1_mu(x))
    h1_cov = F.relu(self.fc1_cov(x))
    h1_mu = F.relu(self.fc12_mu(h1_mu))
    h1_cov = F.relu(self.fc12_cov(h1_cov))
    # we work in the logvar-domain
    return self.fc2_mu(h1_mu), torch.log(F.softplus(self.fc2_cov(h1_cov)))

def decode(self, z):
    h3 = F.relu(self.fc3(z))
    h3 = F.relu(self.fc32(h3))
    return torch.sigmoid(self.fc4(h3))
The VAE model is implemented with PyTorch. The prior distribution p(z) for the latent variables z ∈ R^10 is a standard factorized Gaussian. The decoder distribution p_θ(x|z) and the encoder distribution (initial variational approximation) q_{0,φ}(z|x) are parameterized by two feed-forward neural networks whose details can be checked in Figure 4. The optimizer Adam is used in all experiments, with a learning rate λ = 0.001. We also set η = 0.001. We train for 15 epochs (fMNIST) and 20 epochs (MNIST), in order to achieve performance similar to the explicit VAE case. For the VIS-5-10 setting, we train for only 10 epochs, to allow for a fair computational comparison (similar computing times). The cVAE model is implemented with PyTorch. The prior distribution p(z) for the latent variables z ∈ R^10 is a standard factorized Gaussian. The decoder distribution p_θ(x|y, z) and the encoder distribution (initial variational approximation) q_{0,φ}(z|x, y) are parameterized by two feed-forward neural networks whose details can be checked in Figure 6. The integral is approximated with 1 MC sample from the variational approximation in all experimental settings. The optimizer Adam is used in all the experiments, with a learning rate λ = 0.01. We set the initial η = 5e−5.
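Returning to the seasonal DLM block described above: the exact F and G matrices did not survive extraction, so the construction below is the standard seasonal-cycling choice (F = e_1, G a cyclic permutation), stated here as an assumption.

import numpy as np

def seasonal_block(p=12):
    # Observation picks out the first state component; the transition
    # cyclically shifts the p seasonal effects at each timestep.
    F = np.zeros(p)
    F[0] = 1.0
    G = np.roll(np.eye(p), -1, axis=0)   # G @ theta = (theta_2, ..., theta_p, theta_1)
    return F, G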
In this section we study in detail key properties of the proposed VIS framework. Performing variational inference with the refined variational approximation can be regarded as using the original variational guide while optimizing an alternative, tighter ELBO. Note that for a refined guide of the form q(z|z_0)q(z_0|x), the objective function can be written as a nested expectation over q(z_0|x) and q(z|z_0). Using the Dirac Delta approximation for q(z|z_0) and noting that z = z_0 + η∇ log p(x, z_0) when using SGD with T = 1, we arrive at the modified objective E_{q(z_0|x)}[log p(x, z_0 + η∇ log p(x, z_0)) − log q(z_0|x)], which is the refined ELBO. Since we are perturbing the latent variables in the steepest ascent direction, it is straightforward to show that, for moderate η, the previous bound is tighter than the one for the original variational guide q(z_0|x), E_{q(z_0|x)}[log p(x, z_0) − log q(z_0|x)]. This reformulation of the ELBO is also convenient since it provides a clear way of implementing our refined variational inference framework in any PPL supporting algorithmic differentiation.

From the results in subsection D.1, we can further restrict to the case when the original variational approximation is also a Dirac point mass. Then, the original ELBO optimization reduces to standard maximum likelihood estimation, i.e., max_z log p(x, z). Within the VIS framework, we optimize instead max_z log p(x, z + ∆z), where ∆z is one iteration of the sampler, i.e., ∆z = η∇ log p(x, z) in the SGD case. For notational clarity we consider the case T = 1, but a similar analysis can be straightforwardly done if more refinement steps are performed. We may now perform a first-order Taylor expansion of the refined objective as log p(x, z + ∆z) ≈ log p(x, z) + (∆z)^T ∇ log p(x, z). Taking gradients of the first-order approximation w.r.t. the latent variables z, we arrive at ∇_z log p(x, z) + η ∇²_z log p(x, z) ∇_z log p(x, z), where we have not computed the gradient through the ∆z term. That is, the refined gradient can be regarded as the original gradient plus a second-order correction. Instead of being modulated by a constant learning rate, this correction is adapted by the chosen sampler. In the experiments in Section B.1 we show that this is beneficial for the optimization, as it can take fewer iterations to achieve lower losses. By further taking gradients through the ∆z term, we may tune sampler parameters such as the learning rate, as described in Section 2.3. Consequently, the next subsection describes both modes of differentiation.

Here we describe how to implement two variants of the ELBO objective. First, we define a stop-gradient operator ⊥ that sets the gradient of its operand to zero, i.e., ∇_x ⊥(x) = 0, whereas in the forward pass it acts as the identity function, that is, ⊥(x) = x. Then, the two variants of the ELBO objective are E_q[log p(x, z + ∆z) − log q(z + ∆z|x)] (Full AD) and E_q[log p(x, z + ⊥(∆z)) − log q(z + ⊥(∆z)|x)] (Fast AD); a code sketch is given at the end of this section. The Full AD ELBO makes it possible to further compute a gradient w.r.t. sampler parameters inside ∆z, at the cost of a slight increase in the computational burden. Note that for a large class of models (including HMMs and DLMs) we can marginalize out z_{1:τ} and obtain reduced variance by iterating with θ ← θ + ∇_θ log p(x_{1:τ}|θ) + ξ, where the latent variables z_{1:τ} have been marginalized out using the sum-product algorithm. For linear-Gaussian models we can also compute the exact form of the refined posterior, since all terms in Eq. 5 are linear w.r.t. the latent variables θ.
However, inference in these linear models is exact when using conjugate distributions, so the proposed framework is better suited to state-space models containing non-linear (or non-conjugate) components. For these families of models, we resort to using either a gradient estimator of the entropy or the Delta approximation from Section 2.1.
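A minimal PyTorch sketch of the two differentiation modes defined above, for T = 1 and an SGD inner sampler. Here log_joint and log_q are illustrative stand-ins for log p(x, z) and log q(z|x); the stop-gradient operator ⊥ corresponds to detach().

import torch

def refined_elbo(x, z0, log_joint, log_q, eta=1e-3, fast_ad=True):
    # One refinement step: z = z0 + eta * grad log p(x, z0)
    z0 = z0.requires_grad_(True)
    grad = torch.autograd.grad(log_joint(x, z0).sum(), z0,
                               create_graph=not fast_ad)[0]
    dz = eta * grad
    if fast_ad:
        dz = dz.detach()  # Fast AD: no gradients flow through the sampler step
    z = z0 + dz
    return (log_joint(x, z) - log_q(z, x)).mean()

With fast_ad=False (Full AD), create_graph=True keeps the sampler step in the graph, so gradients w.r.t. eta and other sampler parameters become available, at extra cost.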
We embed SG-MCMC samplers inside a variational approximation
1,155
scitldr
The point estimates of ReLU classification networks, arguably the most widely used neural network architecture, have recently been shown to have arbitrarily high confidence far away from the training data. This architecture is thus not robust, e.g., against out-of-distribution data. Approximate Bayesian posteriors on the weight space have been empirically demonstrated to improve predictive uncertainty in deep learning. The theoretical analysis of such Bayesian approximations is limited, including for ReLU classification networks. We present an analysis of approximate Gaussian posterior distributions on the weights of ReLU networks. We show that even a simplistic (thus cheap), non-Bayesian Gaussian distribution fixes the asymptotic overconfidence issue. Furthermore, when a Bayesian method, even a simple one, is employed to obtain the Gaussian, the confidence becomes better calibrated. This theoretical result motivates a range of Laplace approximations along a fidelity-cost trade-off. We validate these findings empirically via experiments using common deep ReLU networks.

As neural networks have been successfully applied in ever more domains, including safety-critical ones, the robustness of their predictions and the calibration of their predictive uncertainty have moved into focus, subsumed under the notion of AI safety. A principal goal of uncertainty calibration is that learning machines (and neural networks in particular) should assign low confidence to test cases not explained well by the training data or prior information. The most obvious such instances are test points that lie "far away" from the training data. Many methods to achieve this goal have been proposed, both Bayesian and non-Bayesian. ReLU networks are currently among the most widely used neural architectures. This class comprises any network that can be written as a composition of linear layers (including fully-connected, convolutional, and residual layers) and a ReLU activation function. But while ReLU networks often achieve high accuracy, the uncertainty of their predictions has been shown to be miscalibrated. It has been demonstrated that ReLU networks are always overconfident "far away from the data": scaling a training point x (a vector in a Euclidean input space) by a scalar δ yields predictions of arbitrarily high confidence in the limit δ → ∞. This means ReLU networks are susceptible to adversarial or out-of-distribution (OOD) examples. Bayesian methods have long been known empirically to improve predictive uncertainty calibration. It has been demonstrated empirically that the predictive uncertainty of Bayesian neural networks will naturally be high in regions not covered by training data. Results like this raise the hope that the overconfidence problem of ReLU networks, too, might be mitigated by the use of Bayesian methods. This paper offers a theoretical analysis of the binary classification case of ReLU networks with a logistic output layer. We show that equipping such networks with virtually any Gaussian probability distribution (i.e., regardless of whether it is motivated in a Bayesian fashion or not) mitigates the aforementioned theoretical problem, so that predictive confidence far away from the training data approaches a known constant, bounded away from one, whose value is controlled by the covariance (cf. Figure 1). At the same time, this treatment does not change the decision boundary of the trained network, so it has no negative effect on the predictive performance.
Figure 1: Binary classification on a toy dataset using a MAP estimate (a) and various Gaussian approximations over the weights, sorted by their complexity of inverting the precision matrix. These approximations are carried out only at the last layer of the network, and d denotes the number of hidden units at that layer. The shade of color represents the confidence of the prediction (darker shade means higher confidence). The decision boundary is in thick black. Even an arbitrary (i.e., non-Bayesian) isotropic (b) or diagonal (c) covariance makes the confidence bounded away from one. Using the data in a more Bayesian fashion (d) calibrates the uncertainty further, in particular in regions close to the data.

A central aspect of our results is that asymptotic overconfidence can be mitigated with an essentially arbitrary Gaussian distribution on the weight space, including one of simple diagonal or even scalar covariance, and one whose covariance need not even depend on the training data. Achieving calibration at finite distances from the training data requires increasing levels of fidelity towards full Bayesian inference, for which our results also give some quantification. Our results thus answer a question about "how Bayesian" one needs to be to achieve certain levels of calibration. This is valuable because even approximate Bayesian treatments of deep learning, such as through Laplace approximations, can have high computational cost. We empirically validate our results through a simple Laplace approximation to only the last layer of deep ReLU architectures, and find that this cheap procedure is already competitive with recently proposed non-Bayesian methods specifically constructed to overcome the overconfidence problem of ReLU networks. We also show that this cheap Bayesian approach yields good performance in the multi-class classification setting, indicating that our analysis may carry over to this case. Section 2 begins with a rigorous problem statement and assumptions, then develops the main theoretical results. We discuss related work in Section 3, while empirical results are in Section 4.

Definitions. We call a function f: R^n → R piecewise affine if there exists a finite set of polytopes {Q_r}_{r=1}^R, referred to as linear regions of f, such that ∪_{r=1}^R Q_r = R^n and f|_{Q_r} is an affine function for every Q_r. ReLU networks are networks that result in piecewise affine classifier functions, which include networks with fully-connected, convolutional, and residual layers where just ReLU or leaky-ReLU are used as activation functions and max or average pooling layers are used. Let D = {(x_i, t_i)}_{i=1}^N be a dataset, where the targets are t_i ∈ {0, 1} or t_i ∈ {1, . . ., k} for the binary and multi-class case, respectively. We define the logistic (sigmoid) function as σ(z) := 1/(1 + exp(−z)) for z ∈ R and the softmax function as softmax(z, i) := exp(z_i)/Σ_j exp(z_j) for z ∈ R^k. Given a linear classifier, we will consider probability distributions p(w|D) or p(W|D) over the weight vector and matrix, respectively. We call these distributions posterior if they arise from Bayes' theorem or an approximation thereof. The predictive distribution (also called the marginalized prediction) is p(y = 1|x, D) = ∫ σ(w^T x) p(w|D) dw and p(y = i|x, D) = ∫ softmax(Wx, i) p(W|D) dW for the binary and multi-class cases, respectively. For Euclidean spaces we use the standard inner product and norm. Finally, λ_i(·), λ_max(·), and λ_min(·) return the i-th, maximum, and minimum eigenvalue (which are assumed to exist) of their matrix argument, respectively.
Problem statement The following theorem from shows that ReLU networks exhibit arbitrarily high confidence far away from the training data: If a training point x ∈ R n is scaled by a sufficiently large scalar δ > 0, the input δx attains arbitrarily high confidence. Q r and f (x) = V r x + a r be the piecewise affine representation of the output of a ReLU network on Q r. Suppose that V r does not contain identical rows for all r = 1,..., R, then for almost any x ∈ R n and > 0 there exists an δ > 0 and a class i ∈ {1, . . ., K} such that it holds softmax(f (δx), i) ≥ 1 −. Moreover, lim δ→∞ softmax(f (δx), i) = 1. For binary classification tasks, it is standard to treat neural networks as probabilistic models of the conditional distribution p(y|x, w). Standard deep training involves assigning a maximum a posteriori (MAP) value w MAP to the weights. Doing so ignores potential uncertainty on w. We will show that this lack of uncertainty is the primary cause of the overconfidence discussed in. Unfortunately, there is generally no analytic solution for eq.. But for the logistic link function, good approximations exist when the distribution over the weights is Gaussian p(w|D) = N (w; µ, Σ) with mean µ and covariance Σ. One such approximation is constructed by scaling the input of the probit function 2 Φ by a constant λ = π/8. Using this approximation and the Gaussian assumption, if we let a:= w T x, we get where the last step uses the approximation Φ(π/8 x) ≈ σ(x) a second time, with In the case of µ = w MAP, eq. can be seen as the "softened" version of the MAP prediction of the classifier, using the covariance of the Gaussian. The principal aspect of interest of in this paper will be not so much any philosophical point about Bayesian inference, but that the approximate probabilistic Gaussian formalism as outlined in eqs. and introduces the second set of parameters in the form of Σ. We will find that at least asymptotic overconfidence problems can be fixed by setting Σ to virtually any sensible value, regardless of whether they are motivated in a Bayesian fashion or not. As a first notable property of this approximation, we show below that, in contrast to some other methods for uncertainty quantification (e.g. Monte Carlo dropout ) it preserves the decision boundary induced by the MAP estimate. Moreover, this property still holds even if we use any feature map φ and define the linear classifier on the image of this map instead. The implication is important in practice, as this gives a guarantee that if we apply this approximation to the last layer of any MAP pre-trained neural networks, then the classification accuracy of the marginalized prediction is exactly the same as the MAP classification accuracy.. The confidence of the marginalized prediction of a linear classifier is the highest in the direction of the lowest curvature, as described by Σ. Then we can obtain a Gaussian approximation p(w|D) ≈ N (w|µ, Σ) of the posterior by setting µ = w MAP and, the inverse Hessian of the negative log-posterior. In our binary classification case, p(y|x, w) is assumed to be Bernoulli(σ(w T x)) while p(w) is assumed to be N (w|0, σ 2 0 I), leading to the standard 2 -regularized binary cross-entropy loss. As our central theoretical contribution, we show that, far away from the training points, z(x) goes to a quantity that only depends on the mean and covariance of the Gaussian over the weights. 
This implies that we can make p(y = 1|x, D) closer to one-half far away from the training points if we can make z(x) closer to zero by controlling the Gaussian. Proposition 2.3 below shows this in the case of linear classifiers (also cf. Figure 2), while Theorem 2.4 shows that the analysis actually also holds in the case of ReLU networks. Proposition 2.3. Let f: R n → R be a binary linear classifier defined by f (x):= w T x and p(w|D):= N (w|µ, Σ) be the distribution over w. Then for any x ∈ R n, Furthermore, if x ∈ R n then as δ > 0 goes to infinity Proof. See Proposition A.3 in Appendix A. Recall from the definition, φ is a piecewise affine function. Thus, we can write the input space as R n = ∪ R r=1 Q r and for every Q r, the restriction φ| Qr: is an affine function φ| Qr (x):= V r x + a r for some V r ∈ R d×n and a r ∈ R d. Note that if i, j ∈ {1, . . ., M} with i = j then in general V i = V j and a i = a j. Using this definition, we can also show a similar to Proposition 2.3, in the case when x is replaced by any feature vector in the image of φ. ReLU network and let p(w|D):= N (w|µ, Σ) be the distribution over w. Then for any x ∈ R n, where V ∈ R d×n and a ∈ R d are some matrix and vector that depend on x. Furthermore, as δ > 0 goes to infinity Proof. See Theorem A.5 in Appendix A. Given a target upper bound on the logit and confidence values of a ReLU network, we can concretely pick the covariance Σ that respects the asymptotic bound of Theorem 2.4. Corollary 2.5 (Σ from a desired upper confidence bound on ReLU networks). Let f • φ, with φ: and N (w|µ, Σ) be the distribution over w where the mean µ is fixed and Σ is any SPD matrix. Then: (i) For any > 0 there exists Σ such that for any x ∈ R n far away from the training data, we have that |z • φ(x)| ≤. (ii) For any 0.5 < p < 1 there exists Σ such that for any x ∈ R n far away from the training data, we have that σ(|z Proof. See Corollary A.6 in Appendix A. Proposition 2.3 and Theorem 2.4 imply that the confidence of a binary linear classifier with ReLU features can be bound closer to one-half by increasing the minimum eigenvalue of the posterior covariance. In this section, we will move towards the Bayesian setting (i.e. using an explicit prior and likelihood, not just an imposed probability measure on the weights). Specifically, we will present a way to control the posterior through the prior in a Laplace approximation. Concretely, the following proposition and its immediate corollary point out that the eigenvalues of the posterior covariance can be increased (bringing |z(δx)| closer to zero) by increasing the prior variance.. Let p(w|D):= N (w|µ, Σ) be the posterior over w, obtained via a Laplace approximation with prior N (w|0, σ 2 0 I). Suppose H is the Hessian w.r.t. w at µ of the negative log-likelihood of the model. Then (ii) For each i = 1,..., d, the ith eigenvalue λ i (Σ) of Σ is a non-decreasing function of σ Proof. See Proposition A.7 in Appendix A. We get an immediate corollary from Proposition 2.6 that relates its to Theorem 2.4.. Let p(w|D):= N (w|µ, Σ) be the posterior over w, obtained via a Laplace approximation with prior N (w|0, σ is a non-increasing function of σ 2 0 with limits where H is as defined in (i) of Proposition 2.6. Proof. See Corollary A.8 in Appendix A. Lastly, the following corollary formalizes the intuition that the marginalized prediction with the inverse empirical features covariance C −1 as Σ will naturally have high uncertainty far away from the training data. 
Furthermore, this property can also be observed for Laplace approximation if the spectral properties of the Hessian ((i) of Proposition 2.6) are not too different to those of C. (ii) Σ is obtained via a Laplace approximation w.r.t. a prior N (w|0, σ 2 0 I) with σ 2 0 → ∞ and suppose H defined in (i) of Proposition 2.6 is invertible and the ordering of its eigenvalues is the same as that of C, while the eigenvectors are the same as those of C, then on any level set of µ T φ(x), the confidence decreases faster in the direction where the training data are sparser in the feature space R d. Proof. See Corollary A.9 in Appendix A. Similar statements for multi-class classifiers are not as straight-forward due to the lack of a good closed-form approximation of the integral of softmax under a Gaussian measure. However, as can be seen in Appendix C, at least the application of the above analysis can easily be generalized to the multi-class case. In fact, in the experiments (Section 4), we mainly use multi-class classifiers and show empirically that they are effective in mitigating issues that arise from the overconfidence problem. The overconfidence problem of deep neural networks, and thus ReLU networks, has long been known in the deep learning community . However, only recently this issue was demonstrated formally . Many methods have been proposed to combat or at least detect this issue. Post-hoc heuristics based on temperature or Platt scaling are unable to detect inputs with arbitrarily high confidence far away from the training data . proposed enhanced training objectives based on robust optimization to mitigate this issue. Bayesian methods have long been thought to mitigate the overconfidence problem on any neural network . Empirical evidence supporting this intuition has also been presented (; , etc.). Our complement these with a theoretical justification for the ReLU-logistic case. But while our work is theoretical in nature, we believe its application has practical value since it shows that a full Bayesian (expensive) treatment is not necessary if one is only worried about overconfidence. Indeed, fully Bayesian neural networks are often intractable and crude approximations have to be used, ing in undesirable . In this section we validate our theoretical by applying a Laplace approximation only to the last layer of various widely used ReLU networks and call this method last-layer Laplace approximation (LLLA). We refer the reader to Appendix C for details. Note that since LLLA is the simplest Laplace approximation that we can apply to deep networks, our should also hold for more general Laplace methods, e.g. Kronecker-factored Laplace (KFLA) , where not only the linear classifier's posterior but also the posterior of the feature map is approximated. Note however, these fully-Bayesian methods are significantly more expensive and require a significant amount of implementation effort. We will present our empirical on (i) a 2D toy classification task and (ii) out-of-distribution (OOD) data detection experiments. For the OOD experiment, we find the optimal prior variance σ 2 0 via a heuristic that follows directly from Corollary 2.7. Concretely, we pick the largest positive integer that makes the drop on the mean maximum confidence (MMC) of the in-distribution dataset to be within around 0.03 of the MAP's MMC. Thus, we only set this once without seeing any of the OOD datasets. The dataset is constructed by sampling the input points from k Gaussians. 
The corresponding targets indicate from which Gaussian the point was sampled. We use a 5-layer ReLU network with 100 hidden units at each layer as the feature map φ. The classifier, along with this feature map is trained jointly. We show the for the binary and multi-class (k = 4) case in Figure 3. As we can see, the MAP predictions have high confidence (low entropy) everywhere except at the region close to the decision boundary. The widely used MC-dropout does not remedy this issue. While ACET remedies the overconfidence issue, it is expensive and in general does not preserve the decision boundary. In contrast, LLLA yields better calibrated predictions: high confidence close to the training points and high uncertainty otherwise, while maintaining the MAP's decision boundary. We furthermore show the zoomed-out version of LLLA prediction we have presented in Figure 1d, along with the contour of the denominator of z (eq.) in Figure 4. We see that the covariance acts as a "moderator" for the MAP predictions: As a test point moves away from the training data, the denominator of z becomes larger and the marginalized prediction goes to a constant close to one-half. Table 1, LLLA yields competitive performance compared to both CEDA and ACET. We have shown that even an extremely approximate and virtually non-Bayesian probabilistic Gaussian treatment mitigates the most extreme aspects of overconfidence in ReLU networks. Our analytical bound the confidence of the Bayesian prediction of linear classifiers and ReLU networks far away from the training data away from one. This motivates a spectrum of approximations, from ad-hoc isotropic to "full Bayesian" Laplace approximations. In the Laplace approximation case, the bound asymptotically converges to a constant whose value can be controlled via the prior. We validated our experimentally by constructing a simple Laplace method that can still capture the properties we have shown, specifically by only approximating the last-layer's posterior distribution. In contrast to other approximations, this method is cheap and simple to implement, yet already yields competitive performance compared to the more expensive, recently proposed non-Bayesian method for combating the overconfidence problem. While more elaborate Laplace approximations can improve fidelity the further, our provide virtually any ReLU network with a simple and computationally lightweight way to mitigate overconfidence. 1/2 = 0. Notice, the denominator of the l.h.s. is positive. Thus, it follows that µ f must be 0, implying that σ(µ f) = 0.5. Lemma A.2. Let x ∈ R n be a vector and A ∈ R n×n be an SPD matrix. If λ min (A) is the minimum eigenvalue of A, then x T Ax ≥ λ min x 2. Proof. Since A is SPD, it admits an eigendecomposition A = QΛQ T and Λ = Λ 1 2 Λ 1 2 makes sense. Therefore, by keeping in mind that Q T x is a vector in R n, we have where the last equality is obtained as Q T x 2 = x T Q T Qx and noting that Q is an orthogonal matrix. Proposition A.3. Let f: R n → R be a binary linear classifier defined by f (x):= w T x and p(w|D):= N (w|µ, Σ) be the distribution over w. Then for any x ∈ R n, Furthermore, if x ∈ R n then as δ > 0 goes to infinity Proof. The first follows directly from Lemma A.2 and by noting that the denominator of eq. is positive since Σ is symmetric positive-definite (SPD) by definition. For the second , let x ∈ R n be arbitrary. By computation and again since the denominator of eq. 
is positive, we have We would like to inspect the asymptotic behavior of z(δx) with respect to δ. First, for the sake of completeness, we can compute that lim δ→0 |z(δx)| = 0. This reflects the case when δx goes to the decision boundary. Now, for the case when δ → ∞, we can see that since 1/δ 2 → 0 as δ → ∞. Therefore, using Lemma A.2 and Cauchy-Schwarz inequality, we have thus the proof is complete. Under review as a conference paper at ICLR 2020 Lemma A.4 . Let {Q i} R l=1 be the set of linear regions associated to the ReLU network φ: R n → R n. For any x ∈ R n there exists α ∈ R with α > 0 and t ∈ {1, . . ., R} such that δx ∈ Q t for all β ≥ α. Furthermore, the restriction of φ to Q t can be written as an affine function. Theorem A.5. Let f: R d → R be a binary linear classifier defined by f • φ(x):= w T φ(x) where φ: R n → R d is a ReLU network and let p(w|D):= N (w|µ, Σ) be the distribution over w. Then for any x ∈ R n, where V ∈ R d×n and a ∈ R d are some matrix and vector that depend on x. Furthermore, as δ > 0 goes to infinity such that x ∈ Q and φ| Q (x):= Vx + a. Applying eq. to φ| Q (x) and following the proof of Proposition 2.3 yield thus the first is obtained., such that for any δ ≥ α, we have that δx ∈ R and the restriction φ| R can be written as Ux + c. Therefore, for any such δ, Now, notice that as δ → ∞, 1/δ 2 and 1/δ goes to zero. So, in the limit, we have that Again, following the proof of Proposition 2.3 (i.e. using Cauchy-Schwarz and Lemma A.2), we can upper-bound this limit with which concludes the proof. Corollary A.6 (λ min (Σ) from a desired upper confidence bound on ReLU networks). Let f • φ, with φ: R n → R d and f: R d → R, be a ReLU network defined by f • φ(x):= w T φ(x) and N (w|µ, Σ) be the distribution over w where the mean µ is fixed and Σ is any SPD matrix. Then: (i) For any > 0 there exists Σ such that for any x ∈ R n far away from the training data, we have that |z • φ(x)| ≤. (ii) For any 0.5 < p < 1 there exists Σ such that for any x ∈ R n far away from the training data, we have that σ(|z • φ(x)|) ≤ p. Proof. We begin with (i). Let > 0 and δ = 8 π µ 2. Pick any Σ SPD with λ min (Σ) = δ. Then, by eq. of Theorem 2.4 and our choice of λ min (Σ), for any z ∈ R n, asymptotically we have that which is the desired . For (ii), let 0.5 < p < 1 be arbitrary. Observe that the inverse logistic function is given by σ −1 (x):= log x/(1 − x) for 0 < x < 1 and it is positive for 0.5 < x < 1. Therefore by setting in (i) with 2 and verify that for any x ∈ R n this gives |z(x)| ≤ σ −1 (p). Thus, for any x ∈ R n far away from the training data, since σ is monotonic, we have that and the proof is complete.. Let p(w|D):= N (w|µ, Σ) be the posterior over w, obtained via a Laplace approximation with prior N (w|0, σ 2 0 I). Suppose H is the Hessian w.r.t. w at µ of the negative log-likelihood of the model. Then (ii) For each i = 1,..., d, the ith eigenvalue λ i (Σ) of Σ is a non-decreasing function of σ Proof. The negative log-likelihood of Bernoulli distribution is given by Now, observing that σ (x) = σ(x)(1 − σ(x)) for all x ∈ R, we can compute T. T, since t ∈ {0, 1} by assumption. By considering all x, t ∈ D, we get (i). For (ii), first we assume that all Hessians mentioned below are w.r.t. w. We note that the assumption on the prior implies − log p(w) = 1/2 w T (1/σ 2 0 I)w + const, which has Hessian 1/σ 2 0 I. Thus, the Hessian of the negative log posterior − log p(w|D) = − log p(w) − log x,t∈D p(y|x, w) is 1/σ 2 0 I + H. 
This implies that the posterior covariance Σ of the Laplace approximation is given by Therefore, the ith eigenvalue of Σ for any i = 1,..., n is. For all i = 1,..., n, the derivative of λ i (Σ) w.r.t. σ T for some saturating h: R → R, which has lim x→∞ h(x) = l. Let also φ: R n → R d defined as φ(x):= g(Vx + a) for some V ∈ R d×n and a ∈ R d be a feature map. Suppose p(w|D):= N (w|µ, Σ) is the distribution over w. Then for any x ∈ D, as δ > 0 goes to infinity Proof. By definition, By definition of g, lim δ→∞ g(δVx + a) = (l, . . ., l) T =: l, which implies The theoretical in the main text essentially tell us that if we have a Gaussian approximate posterior that comes from a Laplace approximation, then using eq. (and eq.) when making a prediction can remedy the overconfidence problem on any ReLU network. In this section we describe a simple Laplace method that can still capture the properties that we have presented in Section 2. Concretely, we apply the Laplace approximation only to the linear last layer of ReLU networks, that have been trained via MAP estimation. For the sake of clarity, we omit the bias in the following and revisit the case where the bias is included at the end of this section. For the binary classification case, let g: R n → R be a MAP-trained deep ReLU neural network with a linear last-layer. We can decompose g into a feature map φ: R n → R d and a linear classifier f:. Based on Proposition 2.6, we can simply perform a Laplace approximation to get the posterior of the weight of the linear classifier f, i.e. p(w|D) = N (w|w MAP, H −1) where H is the Hessian of the negative log-posterior w.r.t. w at w MAP. This Hessian could be obtained via automatic differentiation or via the explicit formula stated in (i) of Proposition 2.6. We emphasize that we only deal with the weight at the last layer of g, i.e. the weight of f, and not the weight of the whole network, thus the inversion of H is rarely a problem. For instance, large models such as DenseNet-201 and ResNet-152 (a) In the case of multi-class classification, we now have f:. We obtain the posterior over a random matrix W ∈ R k×d in the form N (vec(W)|vec(W MAP), Σ) for some Σ ∈ R dk×dk SPD. The procedure is still similar to the one described above, since the exact Hessian of the linear multi-class classifier can still be easily and efficiently obtained via automatic differentiation. Note that in this case we need to invert a dk × dk matrix, which, depending on the size of k, can be quite large. For a more efficient procedure, we can make a further approximation to the posterior in the multiclass case by assuming the posterior is a matrix Gaussian distribution. We can use the Kroneckerfactored Laplace approximation (KFLA) , but only for the last layer of the network. That is, we find the Kronecker factorization of the Hessian Then by definition of a matrix Gaussian , we immediately 4 Based on the implementations available in the TorchVision package. 5 For example, the ImageNet dataset has k = 1000. 6 In practice, we take the running average of the Kronecker factors of the Hessian over the mini-batches. obtain the posterior MN (W|W MAP, U, V). The distribution of the latent functions is Gaussian, since f:= Wφ(x) and p(W|D) = MN (W|W MAP, U, V) imply where the last equality follows since (φ(x) T Vφ(x)) is a scalar. We then have the following integral which can be approximated via MC-integration. While one can always assume that the bias trick is already used, i.e. 
it is absorbed in the weight matrix/vector, in practice when dealing with pre-trained networks, one does not have such liberty. In this case, one can simply assume that the bias b or b is independent of the weight w or W, respectively in the two-and multi-class cases. By using the same Laplace approximation procedure, one can easily get p(b|D):. This implies w T φ(x)+b =: f and Wφ(x) + b =: f are also Gaussians given by. Similarly, in the case when the Kronecker-factored approximation is used, we have Because of the construction above, which is simply done by applying Laplace approximation on the last layer of a ReLU network, we call this method last layer Laplace approximation or LLLA for short. We present the pseudocodes of LLLA in Algorithms 1 and 2. Algorithm 1 LLLA with exact Hessian for binary classification. We train all networks we use in Table 1 for 100 epochs with batch size of 128. The initial learning rates are 0.001 and 0.1 for MNIST and CIFAR-10 experiments, respectively, and we divide them by 10 at epoch 50, 75, and 95. We use ADAM and SGD with 0.9 momentum, respectively. Standard data augmentations, i.e. random crop and standardization are also used for training the network on CIFAR-10. Meanwhile, for LLLA, we use the Kronecker-factored Hessian. Algorithm 2 LLLA with Kronecker-factored Hessian for multi-class classification. A pre-trained network f • φ with W MAP as the weight of f, (averaged) cross-entropy loss L, training set D train, test set D test, mini-batch size m, number of samples s, running average weighting ρ, and prior precision τ 0 = 1/σ 2 0. Predictions P containing p(y = i|x, D train) ∀x ∈ D test ∀i ∈ {1, . . ., k}. We further compare the OOD detection performance of LLLA to the temperature scaling method. To find the optimal temperature, we follow the method of. In particular, we use the implementation provided by https://github.com/JonathanWenger/pycalib.
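A compact sketch of the binary-case LLLA construction described above: fit a Gaussian over the last-layer weight w only, with Hessian Σ_i s_i(1 − s_i) φ_i φ_i^T as in (i) of Proposition 2.6, then predict with the probit approximation. The closed form of z(x) is garbled in the extraction above; the standard probit approximation σ(μ^T φ / √(1 + (π/8) φ^T Σ φ)) is assumed here, and all function and variable names are illustrative, not the paper's actual code.

import torch

def lllaplace_binary(phi_train, w_map, prior_var=1.0):
    # Laplace posterior N(w_map, Sigma) over the last-layer weight,
    # keeping the feature map phi(x) fixed (phi_train has shape (N, d)).
    s = torch.sigmoid(phi_train @ w_map)
    H = (phi_train * (s * (1 - s)).unsqueeze(1)).T @ phi_train
    Sigma = torch.inverse(torch.eye(len(w_map)) / prior_var + H)
    return Sigma

def predict(phi_x, w_map, Sigma):
    # probit-approximated marginalized prediction; far from the data the
    # denominator grows with ||phi_x||, keeping confidence away from one
    z = (phi_x @ w_map) / torch.sqrt(1 + (torch.pi / 8) * phi_x @ Sigma @ phi_x)
    return torch.sigmoid(z)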
We argue theoretically that simply assuming the weights of a ReLU network to be Gaussian distributed (without even a Bayesian formalism) can fix this issue; for more calibrated uncertainty, a simple Bayesian method can already be sufficient.
1,156
scitldr
Word alignments are useful for tasks like statistical and neural machine translation (NMT) and annotation projection. Statistical word aligners perform well, as do methods that extract alignments jointly with translations in NMT. However, most approaches require parallel training data and quality decreases as less training data is available. We propose word alignment methods that require little or no parallel data. The key idea is to leverage multilingual word embeddings – both static and contextualized – for word alignment. Our multilingual embeddings are created from monolingual data only without relying on any parallel data or dictionaries. We find that traditional statistical aligners are outperformed by contextualized embeddings – even in scenarios with abundant parallel data. For example, for a set of 100k parallel sentences, contextualized embeddings achieve a word alignment F1 that is more than 5% higher (absolute) than eflomal. Word alignment is essential for statistical machine translation and useful in NMT, e.g., for imposing priors on attention matrices (; ;) or for decoding . Further, word alignments have been successfully used in a range of tasks such as typological analysis (; Östling, 2015), annotation projection (; Padó and) and creating multilingual embeddings . Statistical word aligners such as the IBM models and their successors (e.g., fastalign , GIZA++ , eflomal (Östling and) ) are widely used for alignment. With the rise of NMT , attempts have been made to interpret attention matrices as soft word alignments . Several methods create alignments from attention matrices (; ;) or pursue a multitask approach for alignment and translation . However, most systems require parallel data and their performance deteriorates when parallel text is scarce (cf. Tables 1-2 in ). Recent unsupervised multilingual embedding algorithms that use only monolingual data provide high quality static and contextualized embeddings (; ;). Our key idea is to leverage these embeddings for word alignments -without relying on parallel data. Requiring no or little parallel data is advantageous in many scenarios, e.g., in the low-resource case and in domain-specific settings without parallel data. A lack of parallel data cannot be easily remedied: mining parallel sentences is possible (cf. ) but assumes that monolingual corpora contain parallel sentences. Contributions: We propose two new alignment methods based on the matrix of embedding similarities. We propose two post-processing algorithms that handle null words and integrate positional information. We show that word alignments obtained from multilingual BERT outperform strong statistical word aligners like eflomal. We investigate the differences between word and subword processing for alignments and find subword processing to be preferable. Upon acceptance we will publish the source code. Consider parallel sentences s (e), s (f), with lengths l e, l f in languages e, f. Assume we have access to some embedding function E that assigns each word in a sentence a d-dimensional vector, i.e., E(s (k) ) ∈ R l k ×d for k ∈ {e, f}. Let E(s (k) ) i denote the vector of the i-th word in sentence s (k). We define the similarity matrix as the matrix S ∈ le×l f induced by the embeddings where S ij:= sim E(s (e) ) i, E(s (f) ) j is some normalized measure of similarity, e.g., cosine-similarity normalized to be between 0 and 1. We now introduce methods for extracting alignments from S, i.e., obtaining a binary matrix A ∈ {0, 1} le×l f. Argmax. 
A simple baseline is to align each word in sentence s^(e) with the most similar word in s^(f) and vice versa. That is, we define A_ij = 1 if and only if i = argmax_k S_kj and j = argmax_k S_ik, and A_ij = 0 otherwise. Similar methods have been applied to Dice coefficients and attention matrices.

Match. Argmax finds a local, not a global optimum. To address this, we frame alignment as an assignment problem: we search for a maximum-weight maximal matching in the bipartite weighted graph induced by the similarity matrix. This optimization problem is given by max_A Σ_ij A_ij S_ij, subject to A being a valid maximal matching (i.e., every word in the shorter sentence is aligned). There are known algorithms that solve the above problem in polynomial time. Note that alignments generated with the matching method are inherently bidirectional and do not require any symmetrization as post-processing.

Distortion Correction [Dist]. Distortion, as introduced in IBM Model 2, is essential for alignments based on non-contextualized embeddings, since the similarity of two words is solely based on their surface form, independent of position. To penalize high distortion, we multiply the similarity matrix S componentwise with the matrix P defined by P_ij = 1 − κ(i/l_e − j/l_f)², where κ is a hyperparameter that scales the entries between [(1 − κ), 1]. We use κ = 0.5 (see the supplementary for different values). We can interpret this as imposing a locality-preserving prior: given a choice, a word should be aligned to a word with a similar relative position ((i/l_e − j/l_f)² close to 0) rather than a more distant word (large (i/l_e − j/l_f)²).

Null. Null words model untranslated words and are an important part of alignment models. We remove alignment edges when the normalized entropy of the similarity distribution is above a threshold τ, a hyperparameter. Intuitively, if a word is not similar to any of the words in the target sentence, we do not align it. That is, we set A_ij = 0 whenever the normalized entropy of the corresponding similarity distribution exceeds τ.

Traditional word alignment models create forward and backward alignments and then symmetrize them. We compared grow-diag-final-and (GDFA) and intersection and found them to perform comparably. We use GDFA throughout the paper. We investigate both subword segmentations, such as BPE/wordpiece, and word-level processing. (A code sketch of the Argmax, Match, and Dist steps is given at the end of this subsection.)

Our test data are three language pairs in different domains. We use Europarl gold alignments for English-German, Bible gold alignments for English-French and published gold alignments for English-Persian (domain: books). We select additional parallel training data that is consistent with the target domain where available: Europarl for English-German and the Parallel Bible Corpus (PBC) for English-French and English-Persian. For fast-align, GIZA++ and eflomal we add 10,000 parallel sentences (simulating a mid-resource setting) to the gold standard as training data. We show the effect of adding more or less training data in Figure 1. Since mBERT is pretrained on Wikipedia, we train fastText embeddings on Wikipedia as well. For hyperparameters of all models see the supplementary. Our evaluation measures are precision, recall, F1 and alignment error rate (AER).

Overall. Table 1 shows that mBERT performs consistently best. Eflomal has high precision and is on par for ENG-FRA. Surprisingly, fastText outperforms fast-align in two out of three languages (e.g., F1 65 vs. 61 for ENG-DEU) despite not having access to parallel data. Among the statistical baselines, eflomal outperforms GIZA++, while GIZA++ is better than fast-align, as expected.
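As referenced above, a minimal numpy/scipy sketch of extracting alignments from the similarity matrix; E_src and E_tgt are illustrative placeholders for the (sub)word embedding matrices of the two sentences, and this is a simplified reading of the methods, not the authors' released code.

import numpy as np
from scipy.optimize import linear_sum_assignment

def extract_alignments(E_src, E_tgt, kappa=0.5, use_match=False):
    # cosine similarity, rescaled to [0, 1]
    ns = np.linalg.norm(E_src, axis=1, keepdims=True)
    nt = np.linalg.norm(E_tgt, axis=1, keepdims=True)
    S = (E_src @ E_tgt.T) / (ns @ nt.T + 1e-9)
    S = (S + 1.0) / 2.0
    le, lf = S.shape
    # distortion prior [Dist]: penalize dissimilar relative positions
    i, j = np.meshgrid(np.arange(le), np.arange(lf), indexing="ij")
    S = S * (1.0 - kappa * (i / le - j / lf) ** 2)
    if use_match:  # Match: maximum-weight maximal matching
        A = np.zeros_like(S, dtype=bool)
        rows, cols = linear_sum_assignment(-S)
        A[rows, cols] = True
    else:          # Argmax: mutual row/column argmax
        A = (S == S.max(axis=1, keepdims=True)) & (S == S.max(axis=0, keepdims=True))
    return A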
Parallel Data. Figure 1 shows that fast-align and eflomal improve with more training data, with eflomal outperforming fast-align, as expected. However, even with 10^6 parallel sentences, mBERT outperforms both statistical baselines. fastText becomes competitive when fewer than 1000 parallel sentences are available. The main takeaway is that mBERT-based alignments, a method that does not need any parallel training data, outperform state-of-the-art aligners, even in the high-resource case.

Word vs. Subword. In Table 1, subword processing mainly benefits fast-align, GIZA++ and eflomal (except for ENG-FRA). fastText is harmed by subword processing. We use VecMap to match (sub)word distributions across languages. We hypothesize that it is harder to match subword than word distributions; this effect is strongest for Persian, probably due to different scripts and thus different subword distributions. For mBERT, words and subwords perform about the same.

Table 4 compares alignment and post-processing methods. Argmax generally yields higher precision, whereas Match has higher recall. For fastText, using Argmax with Dist yields the best F1 on two languages. Adding a distortion prior boosts performance for static embeddings, e.g., from .46 to .61 F1 for ENG-FRA. Null-word processing increases precision, e.g., from .91 to .96 for ENG-DEU, but does not increase F1. For mBERT, Argmax performs best in two out of three language pairs. Dist has little and sometimes harmful effect on mBERT, indicating that mBERT's contextualized representations already match well across languages.

mBERT Layers. Figure 2 shows a parabolic trend, with layer 8 of mBERT yielding the best performance. This is consistent with other work: in the first layers the contextualization is too weak for high-quality alignments, while the last layers are too specialized on the pretraining task (masked language modeling).

The best-known statistical word aligners are the IBM models. More recent aligners, often based on the IBM models, include fast-align, GIZA++ and eflomal. All of these models are trained on parallel text. Our method instead aligns based on embeddings that are induced from monolingual data only. Prior work on using learned representations for alignment computes similarity matrices of encoder-decoder representations that are leveraged for word alignments, together with supervised learning, which requires manually annotated alignments. In contrast to our work, they all require parallel data. We presented word aligners based on contextualized (resp. static) embeddings that perform better than (resp. comparably with) statistical word aligners. Our method is the first that does not require parallel data and is particularly useful for scenarios where a medium number of parallel sentences need to be aligned, but no additional parallel data is available. For a set of 100k parallel sentences, contextualized embeddings achieve an alignment F1 that is 5% higher (absolute) than eflomal.

Given a set of predicted alignment edges A and a set of sure (possible) gold-standard edges S (P), we computed our evaluation measures as follows: precision = |A ∩ P| / |A|, recall = |A ∩ S| / |S|, F1 = 2 · precision · recall / (precision + recall), and AER = 1 − (|A ∩ S| + |A ∩ P|) / (|A| + |S|), where |·| denotes the cardinality of a set. This is the usual way of evaluating alignments (a code sketch of these metrics is given at the end of the appendix). For asymmetric alignments, different symmetrization methods exist. fast-align provides an overview and implementation of these methods, which we use. We compare intersection and grow-diag-final-and (GDFA) in Table 3. In terms of F1, GDFA performs better (intersection wins once, GDFA five times, three ties).
As expected, intersection yields higher precision while GDFA yields higher recall. Thus intersection is preferable for tasks like annotation projection, whereas GDFA is typically used in statistical machine translation. The analogous numbers from Table 1 in the main paper at the word level can be found in Table 4. Again, distortion is essential for fastText and not necessary for mBERT. Adding Null helps especially for mBERT. Overall the takeaways are consistent with the results at the subword level. We provide a list of the customized hyperparameters used in our computations; for all other hyperparameters we used default values as provided in the corresponding implementations (see the respective links to the code repositories). In the main paper we introduced the hyperparameter κ. In Figure 3 we plot performance for different values of κ. We observe that introducing distortion indeed helps (i.e., κ > 0), but the exact value is not decisive for performance. This is rather intuitive, as a small adjustment to the similarities is sufficient, while larger adjustments do not necessarily hurt or change the Argmax result or the optimal point of the matching algorithm. In the main paper we chose κ = 0.5. For τ in the null-word post-processing we need to use high values, as the similarity distributions tend to be quite uniform in high-dimensional spaces. See Figure 4 for different values of τ. As expected, for τ = 1 no edges are removed and thus performance is unchanged compared to not applying null-word post-processing. With decreasing τ, precision increases and recall goes down. We use τ = 0.999 for fastText and τ = 0.9995 for mBERT. Upon acceptance we will publish the code together with instructions on how to reproduce the results. Table 6 provides an overview of the data used in the main paper together with download links.
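A tiny sketch of the standard metrics given above, with A, S, P as Python sets of (i, j) edge tuples (predicted, sure gold, possible gold; S ⊆ P). This is a hypothetical helper, not the paper's evaluation script.

def alignment_scores(A, S, P):
    prec = len(A & P) / len(A)
    rec = len(A & S) / len(S)
    f1 = 2 * prec * rec / (prec + rec)
    aer = 1 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    return prec, rec, f1, aer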
We use representations trained without any parallel data for creating word alignments.
1,157
scitldr
Recent studies have shown the vulnerability of reinforcement learning (RL) models in noisy settings. The sources of noise differ across scenarios. For instance, in practice, the observed reward channel is often subject to noise (e.g., when observed rewards are collected through sensors), and thus observed rewards may not be credible as a result. Also, in applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated to produce arbitrary errors. In this paper, we consider noisy RL problems where the rewards observed by RL agents are generated with a reward confusion matrix. We call such observed rewards perturbed rewards. We develop an unbiased-reward-estimator-aided robust RL framework that enables RL agents to learn in noisy environments while observing only perturbed rewards. Our framework draws upon approaches for supervised learning with noisy data. The core ideas of our solution include estimating a reward confusion matrix and defining a set of unbiased surrogate rewards. We prove the convergence and sample complexity of our approach. Extensive experiments on different DRL platforms show that policies based on our estimated surrogate reward can achieve higher expected rewards, and converge faster than existing baselines. For instance, the state-of-the-art PPO algorithm is able to obtain 67.5% and 46.7% improvements on average on five Atari games, when the error rates are 10% and 30%, respectively.

Designing a suitable reward function plays a critical role in building reinforcement learning models for real-world applications. Ideally, one would want to customize reward functions to achieve application-specific goals. In practice, however, it is difficult to design a function that produces credible rewards in the presence of noise. This is because the output from any reward function is subject to multiple kinds of randomness:

• Inherent Noise. For instance, sensors on a robot will be affected by physical conditions such as temperature and lighting, and therefore will report back noisy observed rewards.

• Application-Specific Noise. In machine teaching tasks (BID13), when an RL agent receives feedback/instructions from people, different human instructors might provide drastically different feedback due to their personal styles and capabilities. This way the RL agent (machine) will obtain biased rewards.

• Adversarial Noise. Adversarial perturbation has been widely explored in different learning tasks and shows strong attack power against different machine learning models. For instance, it has been shown that by adding adversarial perturbations to each frame of a game, RL policies can be misled arbitrarily.

Assuming an arbitrary noise model makes solving this noisy RL problem extremely challenging. Instead, we focus on a specific noisy reward model which we call perturbed rewards, where the rewards observed by RL agents are generated according to a reward confusion matrix. This is not a very restrictive setting to start with, even considering that the noise could be adversarial: given that arbitrary pixel-value manipulation attacks in RL are not very practical, adversaries in the real world have high incentives to inject adversarial perturbation into the reward value by slightly modifying it. For instance, adversaries can manipulate sensors to reverse the reward value. In this paper, we develop an unbiased-reward-estimator-aided robust framework that enables an RL agent to learn in a noisy environment while observing only perturbed rewards.
Our solution framework builds on existing reinforcement learning algorithms, including the recently developed DRL ones: Q-Learning (BID19; BID18), Cross-Entropy Method (CEM) (BID11), Deep SARSA (BID10), Deep Q-Network (DQN) (BID6), Dueling DQN (DDQN) (BID17), Deep Deterministic Policy Gradient (DDPG), Continuous DQN (NAF) and Proximal Policy Optimization (PPO) (BID4). The main challenge is that the observed rewards are likely to be biased, and in RL or DRL the accumulated errors could amplify the reward estimation error over time. We do not require any assumption about the true distribution of rewards or the adversarial strategies, other than that the generation of noise follows an unknown reward confusion matrix. Instead, we address the issue of estimating the reward confusion matrices by proposing an efficient and flexible estimation module. Prior work provided preliminary studies of the noisy reward problem and gave some general negative results, proving a No Free Lunch theorem: without any assumption about what the reward corruption is, all agents can be misled. Our results do not contradict the results therein, as we consider a specific noise generation model (one that leads to a set of perturbed rewards). We analyze the convergence and sample complexity of the policy trained based on our proposed method using surrogate rewards in RL, using Q-Learning as an example. We conduct extensive experiments on OpenAI Gym (AirRaid, Alien, Carnival, MsPacman, Pong, Phoenix, Seaquest) and show that the proposed reward-robust RL method achieves comparable performance with the policy trained using the true rewards. In some cases, our method even achieves higher cumulative reward; this is surprising to us at first, but we conjecture that the inserted noise together with our noise-removing unbiased estimator adds another layer of exploration, which proves to be beneficial in some settings. This merits a future study. Our contributions are summarized as follows: (1) We adapt and generalize the idea of defining a simple but effective unbiased estimator for true rewards using observed, perturbed rewards to the reinforcement learning setting. The proposed estimator helps guarantee convergence to the optimal policy even when the RL agents only have noisy observations of the rewards. (2) We analyze the convergence to the optimal policy and the finite sample complexity of our reward-robust RL methods, using Q-Learning as the running example. (3) Extensive experiments on OpenAI Gym show that our proposed algorithms perform robustly even at high noise rates.

Robust Reinforcement Learning. It is known that RL algorithms are vulnerable to noisy environments. Recent studies show that learned RL policies can be easily misled with small perturbations in observations. The presence of noise is very common in real-world environments, especially in robotics-relevant applications. Consequently, robust (adversarial) reinforcement learning (RRL/RARL) algorithms have been widely studied, aiming to train a robust policy that is capable of withstanding perturbed observations (BID12) or transferring to unseen environments (BID1). However, these robust RL algorithms mainly focus on noisy vision observations, instead of the observed rewards. A couple of recent works (BID3) have also looked into the rather parallel question of training robust RL algorithms with uncertainty in models.
Learning with Noisy Data. Learning appropriately with biased data has received quite a bit of attention in recent machine learning studies (BID7; BID6; BID9; BID16). The idea in this line of work is to define an unbiased surrogate loss function that recovers the true loss using knowledge of the noise. We adapt these approaches to reinforcement learning. Though intuitively the idea should apply in our RL setting, our work is the first to formally establish this extension both theoretically and empirically. Our quantitative understanding will provide practical insights when implementing reinforcement learning algorithms in noisy environments.

In this section, we define our problem of learning from perturbed rewards in reinforcement learning. Throughout this paper, we will use perturbed reward and noisy reward interchangeably, as each time step of our sequential decision-making setting is similar to the "learning with noisy data" setting in supervised learning (BID7; BID6; BID9). Our RL agent interacts with an unknown environment and attempts to maximize the total of its collected reward. The environment is formalized as a Markov Decision Process (MDP), denoted M = ⟨S, A, R, P, γ⟩. At each time t, the agent in state s_t ∈ S takes an action a_t ∈ A, which returns a reward r(s_t, a_t, s_{t+1}) ∈ R (which we will also shorthand as r_t), and leads to the next state s_{t+1} ∈ S according to a transition probability kernel P, which encodes the probability P_a(s_t, s_{t+1}). Commonly P is unknown to the agent. The agent's goal is to learn the optimal policy, a conditional distribution π(a|s) that maximizes the state's value function. The value function calculates the cumulative reward the agent is expected to receive given it follows the current policy π after observing the current state s: V^π(s) = E[Σ_{t=0}^∞ γ^t r_t | s_0 = s], where 0 ≤ γ ≤ 1 is a discount factor. Intuitively, the agent evaluates how preferable each state is given the current policy. From the Bellman equation, the optimal value function is given by V*(s) = max_{a∈A} Σ_{s_{t+1}∈S} P_a(s_t, s_{t+1}) [r_t + γ V*(s_{t+1})]. It is standard practice for RL algorithms to learn a state-action value function, also called the Q-function. The Q-function denotes the expected cumulative reward if the agent chooses a in the current state and follows π thereafter: Q^π(s, a) = E[Σ_{t=0}^∞ γ^t r_t | s_0 = s, a_0 = a].

In many practical settings, our RL agent does not observe the reward feedback perfectly. We consider the following MDP with perturbed reward, denoted M̃ = ⟨S, A, R̃, C, P, γ⟩: instead of observing r_t ∈ R at each time t directly (following its action), our RL agent only observes a perturbed version of r_t, denoted r̃_t ∈ R̃. For most of our presentation, we focus on the cases where R and R̃ are finite sets, but our results generalize to continuous reward settings. The generation of r̃ follows a certain function C: S × R → R̃. To keep our presentation focused, we consider the following simple state-independent flipping error rate model: if the rewards are binary (consider r+ and r−), r̃(s_t, a_t, s_{t+1}) (shorthand r̃_t) can be characterized by the noise rate parameters e+ and e−: e+ = P(r̃(s_t, a_t, s_{t+1}) = r− | r(s_t, a_t, s_{t+1}) = r+), e− = P(r̃(s_t, a_t, s_{t+1}) = r+ | r(s_t, a_t, s_{t+1}) = r−).
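A small numerical sketch of the binary perturbed-reward channel just defined, together with the unbiased surrogate construction that Section 3.1 develops next (surrogate values solve C · R̂ = R); the 0.1/0.3 error rates are illustrative, not from the paper's experiments.

import numpy as np

rng = np.random.default_rng(0)
levels = np.array([-1.0, +1.0])        # r-, r+
e_plus, e_minus = 0.1, 0.3             # flipping probabilities
C = np.array([[1 - e_minus, e_minus],  # row: true r-; cols: observed r-, r+
              [e_plus, 1 - e_plus]])   # row: true r+

def perturb(r):
    j = 0 if r == levels[0] else 1
    return rng.choice(levels, p=C[j])  # observed (perturbed) reward

R_hat = np.linalg.solve(C, levels)     # surrogate values: C @ R_hat = levels

# Empirical check of unbiasedness: the surrogate of the noisy observation
# averages to the true reward.
true_r = levels[1]
samples = [R_hat[int(perturb(true_r) == levels[1])] for _ in range(100000)]
print(np.mean(samples))                # approx. +1.0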
When the signal levels are beyond binary, suppose there are M outcomes in total, denoted [R_0, R_1, · · ·, R_{M−1}]. Then r̃_t will be generated according to the following confusion matrix C_{M×M}, where each entry c_{j,k} indicates the flipping probability for generating a perturbed outcome: c_{j,k} = P(r̃_t = R_k | r_t = R_j). Again, we would like to note that we focus on settings with finite reward levels for most of our paper, but we provide discussions in Section 3.1 on how to handle continuous rewards via discretization. In the paper, we do not assume knowledge of the noise rates (i.e., the reward confusion matrices), which differs from the assumption of knowing them adopted in many supervised learning works. Instead, we estimate the confusion matrices (Section 3.3).

In this section, we first introduce an unbiased estimator for binary rewards in our reinforcement learning setting when the error rates are known. This idea is inspired by the literature on learning with noisy labels, but we extend the method to the multi-outcome as well as the continuous reward settings. With knowledge of the noise rates (reward confusion matrices), we are able to establish an unbiased approximation of the true reward, in a similar way as done in supervised learning. We call such a constructed unbiased reward a surrogate reward. To give an intuition, we start by replicating the result for binary rewards R = {r−, r+} in our RL setting:

Lemma 1. Let r be bounded. Then, if we define the surrogate reward r̂ as r̂(s_t, a_t, s_{t+1}) := ((1 − e−) · r+ − e+ · r−) / (1 − e+ − e−) if r̃(s_t, a_t, s_{t+1}) = r+, and r̂(s_t, a_t, s_{t+1}) := ((1 − e+) · r− − e− · r+) / (1 − e+ − e−) if r̃(s_t, a_t, s_{t+1}) = r−, we have for any r(s_t, a_t, s_{t+1}): E_{r̃|r}[r̂(s_t, a_t, s_{t+1})] = r(s_t, a_t, s_{t+1}).

In the standard supervised learning setting, the above property guarantees convergence: as more training data is collected, the empirical surrogate risk converges to its expectation, which is the same as the expectation of the true risk (due to the unbiased estimator). This is also the intuition for why we replace the reward terms with surrogate rewards in our RL algorithms. The above idea can be generalized to the multi-outcome setting in a fairly straightforward way. Define R̂ := [r̂(r̃ = R_0), r̂(r̃ = R_1), ..., r̂(r̃ = R_{M−1})], where r̂(r̃ = R_m) denotes the value of the surrogate reward when the observed reward is R_m. If the confusion matrix C is invertible and we choose R̂ to satisfy C · R̂ = R, i.e., R̂ := C^{−1} · R with R := [R_0, R_1, . . ., R_{M−1}]^T, we have for any r(s_t, a_t, s_{t+1}): E_{r̃|r}[r̂(s_t, a_t, s_{t+1})] = r(s_t, a_t, s_{t+1}).

Continuous reward. When the reward signal is continuous, we discretize it into M intervals and view each interval as a reward level, with its value approximated by its middle point. With increasing M, this quantization error can be made arbitrarily small. Our method is then the same as the solution for the multi-outcome setting, except for replacing rewards with discretized ones. Note that the finer the quantization we take, the smaller the quantization error, but we would then suffer from having to learn a bigger reward confusion matrix. This is a trade-off question that can be addressed empirically. So far we have assumed knowledge of the confusion matrices, but we will address this additional estimation issue in Section 3.3 and present our complete algorithm therein.

We now analyze the convergence and sample complexity of our surrogate-reward-based RL algorithms (assuming C is known), taking Q-Learning as an example. Convergence guarantee. First, the convergence guarantee is stated in the following theorem: Theorem 1.
We now analyze the convergence and sample complexity of our surrogate-reward-based RL algorithms (assuming $C$ is known), taking Q-Learning as an example.

Convergence guarantee. First, the convergence guarantee is stated in the following theorem:

Theorem 1. Given a finite MDP, denoted $\tilde{\mathcal{M}} = \langle \mathcal{S}, \mathcal{A}, \hat{\mathcal{R}}, \mathcal{P}, \gamma \rangle$, the Q-learning algorithm with surrogate rewards, given by the update rule
$$Q_{t+1}(s_t, a_t) = (1 - \alpha_t)\, Q_t(s_t, a_t) + \alpha_t \Big[\hat{r}_t + \gamma \max_{b \in \mathcal{A}} Q_t(s_{t+1}, b)\Big],$$
converges w.p.1 to the optimal Q-function as long as $\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$.

Note that the reward term on the right-hand side of the update rule is the surrogate reward $\hat{r}$ constructed via Lemma 1 and Lemma 2. Theorem 1 states that agents will converge to the optimal policy w.p.1 when the rewards are replaced with surrogate rewards, despite the noise in the observed rewards. This is not surprising: though the surrogate rewards introduce larger variance, their unbiasedness is exactly what grants us convergence. In other words, the perturbation of the reward does not destroy the convergence guarantee of Q-Learning.

Sample complexity. To establish our sample complexity results, we first introduce a generative model following previous literature. This is a practical MDP setting that simplifies the analysis.

Definition 1. A generative model $G(\mathcal{M})$ for an MDP $\mathcal{M}$ is a sampling model which takes a state-action pair $(s_t, a_t)$ as input, and outputs the corresponding reward $r(s_t, a_t)$ and the next state $s_{t+1}$ randomly with probability $\mathbb{P}_a(s_t, s_{t+1})$, i.e., $s_{t+1} \sim \mathbb{P}(\cdot|s, a)$.

Exact value iteration is impractical if the agents follow the above generative model exactly. Consequently, we introduce a phased Q-Learning, similar to those presented in prior work, for the convenience of proving our sample complexity results. We briefly outline phased Q-Learning as follows; the complete description (Algorithm 2) can be found in Appendix A.

Definition 2. The Phased Q-Learning algorithm takes $m$ samples per phase by calling the generative model $G(\mathcal{M})$. It uses the collected $m$ samples to estimate the transition probabilities $\mathbb{P}$ and updates the estimated value function once per phase. Here, calling the generative model $G(\tilde{\mathcal{M}})$ means that surrogate rewards are returned and used to update the value function.

The sample complexity of Phased Q-Learning is given as follows:

Theorem 2 (Upper bound). Let $r \in [0, R_{\max}]$ be bounded reward and let $C$ be an invertible reward confusion matrix with $\det(C)$ denoting its determinant. For an appropriate choice of $m$, the Phased Q-Learning algorithm calls the generative model $G(\tilde{\mathcal{M}})$
$$O\left(\frac{|\mathcal{S}||\mathcal{A}|T}{\epsilon^2 (1-\gamma)^2} \cdot \frac{M^2 R_{\max}^2}{\det(C)^2} \log \frac{|\mathcal{S}||\mathcal{A}|T}{\delta}\right)$$
times in $T$ epochs, and returns a policy such that for all states $s \in \mathcal{S}$,
$$|V^{\pi}(s) - V^*(s)| \le \epsilon, \quad \epsilon > 0, \ \text{w.p.} \ge 1 - \delta, \ 0 < \delta < 1.$$

Theorem 2 states that, to guarantee convergence to the optimal policy, the number of samples needed is no more than $O(1/\det(C)^2)$ times the number needed when the RL agent observes the true rewards perfectly. This additional constant is the price we pay for the noise present in our learning environment. When the noise level is high, we expect a much larger $1/\det(C)^2$; in a low-noise regime, Q-Learning with surrogate rewards remains efficient. Note that Theorem 2 gives the upper bound for the discounted MDP setting; for the undiscounted setting ($\gamma = 1$), the upper bound is of order $O\left(\frac{|\mathcal{S}||\mathcal{A}|T^3}{\epsilon^2} \cdot \frac{M^2 R_{\max}^2}{\det(C)^2} \log \frac{|\mathcal{S}||\mathcal{A}|T}{\delta}\right)$, since the per-step error must then be bounded by $\epsilon/T$. A matching lower bound is omitted due to lack of space; the idea of constructing an MDP in which learning is difficult, so that the algorithm must make $|\mathcal{S}||\mathcal{A}|T \log \frac{1}{\delta}$ calls to $G(\tilde{\mathcal{M}})$, is similar to prior work.
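To make Theorem 1 concrete, here is a minimal tabular sketch of Q-learning with surrogate rewards. It assumes a discrete-state, Gym-style environment with the classic 4-tuple `step` API, and the $1/n$ visit-count learning rate is simply one convenient choice satisfying $\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$:

```python
import numpy as np

def q_learning_surrogate(env, C, R_levels, episodes=500, gamma=0.99, eps=0.1, seed=0):
    """Tabular Q-learning (Theorem 1): every observed noisy reward is mapped
    to its surrogate value before the Bellman update."""
    rng = np.random.default_rng(seed)
    R_hat = np.linalg.solve(C, np.asarray(R_levels, dtype=float))
    level = {r: k for k, r in enumerate(R_levels)}   # observed reward -> index
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    visits = np.zeros_like(Q)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = env.action_space.sample() if rng.random() < eps else int(Q[s].argmax())
            s2, r_tilde, done, _ = env.step(a)   # r_tilde must be one of R_levels
            visits[s, a] += 1
            alpha = 1.0 / visits[s, a]           # sum(alpha) = inf, sum(alpha^2) < inf
            target = R_hat[level[r_tilde]] + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```

The only difference from vanilla Q-learning is the single line mapping the observed noisy reward to its surrogate value.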
While the surrogate reward guarantees unbiasedness, we sacrifice variance at each learning step, and this in turn delays convergence (as is also evidenced in the sample complexity bound). It can be verified that the variance of the surrogate reward is bounded when $C$ is invertible, and it is always at least the variance of the true reward. This is summarized in the following theorem:

Theorem 3. Let $r \in [0, R_{\max}]$ be bounded reward and suppose the confusion matrix $C$ is invertible. Then the variance of the surrogate reward $\hat{r}$ is bounded as follows:
$$\mathrm{Var}(r) \le \mathrm{Var}(\hat{r}) \le \frac{M^2 R_{\max}^2}{\det(C)^2}.$$

To give an intuition for the bound: when we have binary rewards, $M = 2$ and $\det(C) = 1 - e_+ - e_-$, so the variance of the surrogate reward is bounded by $\frac{4 R_{\max}^2}{(1-e_+-e_-)^2}$. As $e_- + e_+ \rightarrow 1$, the variance becomes unbounded and the proposed estimator is no longer effective, nor will it be well-defined. In practice, the bias-variance trade-off can be tuned via a linear combination of the noisy and surrogate rewards, i.e., $R_{\text{proxy}} = \eta \tilde{R} + (1 - \eta) \hat{R}$, choosing an appropriate $\eta \in [0, 1]$.

In Section 3.1 we assumed knowledge of the reward confusion matrices in order to compute the surrogate reward. This knowledge is often not available in practice. Estimating these confusion matrices is challenging without any ground truth reward information; however, efficient algorithms have been developed to estimate confusion matrices in supervised learning settings (BID0). The idea in these algorithms is to dynamically refine the error rates based on aggregated answers, similar in spirit to the inference methods used for aggregating crowdsourced labels. We adapt this idea to our reinforcement learning setting, as detailed below. At each training step, the RL agent collects the noisy reward and the current state-action pair. Then, for each pair in $\mathcal{S} \times \mathcal{A}$, the agent predicts the true reward based on the accumulated historical reward observations for the corresponding state-action pair via, e.g., averaging (majority voting). Finally, with the predicted true reward and the accuracy (error rate) for each state-action pair, the estimated reward confusion matrix $\tilde{C}$ is given by
$$\tilde{c}_{i,j} = \frac{\#\big[\bar{r}(s,a) = R_i \,\wedge\, \tilde{r}(s,a) = R_j\big]}{\#\big[\bar{r}(s,a) = R_i\big]}, \qquad \text{(Eqn. 4)}$$
where $\#[\cdot]$ denotes the number of reward observations in the collected sets $\tilde{R}(s,a)$ (see Algorithms 1 and 3) satisfying the condition $[\cdot]$; $\bar{r}(s,a)$ and $\tilde{r}(s,a)$ denote the predicted true rewards (using majority voting) and the observed rewards for state-action pair $(s,a)$. The above procedure of updating $\tilde{c}_{i,j}$ continues indefinitely as more observations arrive.

Algorithm 1 Reward Robust RL (sketch)
  Initialize value function $Q(s, a)$ arbitrarily.
  while $Q$ is not converged do
    Initialize state $s \in \mathcal{S}$
    while $s$ is not terminal do
      Choose $a$ from $s$ using a policy derived from $Q$
      Take action $a$, observe $s'$ and noisy reward $\tilde{r}$
      if enough $\tilde{r}$ have been collected for every $\mathcal{S} \times \mathcal{A}$ pair then
        Get predicted true rewards $\bar{r}$ using majority voting
        Estimate the confusion matrix $\tilde{C}$ based on $\bar{r}$ and $\tilde{r}$ (Eqn. 4)
      Obtain the surrogate reward $\dot{r}$ from $\tilde{C}^{-1}$ and update $Q(s, a)$ accordingly
      $s \leftarrow s'$

Our final definition of the surrogate reward replaces the known reward confusion matrix $C$ in Lemma 2 with our estimated one $\tilde{C}$. We denote this estimated surrogate reward by $\dot{r}$. We present our method (Reward Robust RL) in Algorithm 1. Note that the algorithm is rather generic: we can plug any existing RL algorithm into our reward-robust one, with the only change being the replacement of rewards with our estimated surrogate rewards.
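Below is a small, self-contained sketch of the estimation step (Eqn. 4): majority voting per state-action pair followed by row-normalized co-occurrence counts. The class interface is ours, for illustration only:

```python
import numpy as np
from collections import defaultdict, Counter

class ConfusionEstimator:
    """Estimate C~ from noisy observations (Eqn. 4): per (s, a) pair, predict
    the true reward by majority vote, then row-normalize the co-occurrence
    counts between predicted levels R_i and observed levels R_j."""
    def __init__(self, reward_levels):
        self.levels = list(reward_levels)
        self.idx = {r: i for i, r in enumerate(self.levels)}
        self.obs = defaultdict(list)              # (s, a) -> noisy rewards seen

    def record(self, s, a, r_tilde):
        self.obs[(s, a)].append(r_tilde)

    def estimate(self):
        M = len(self.levels)
        counts = np.zeros((M, M))
        for rewards in self.obs.values():
            i = self.idx[Counter(rewards).most_common(1)[0][0]]  # majority vote
            for r_tilde in rewards:
                counts[i, self.idx[r_tilde]] += 1
        return counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
```

Feeding the output of `estimate()` into the matrix inversion of Lemma 2 then yields the estimated surrogate rewards $\dot{r}$ used in Algorithm 1.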
In this section, reward robust RL is tested in different games under different noise settings. Due to space limits, more experimental results can be found in Appendix D.

Environments and RL Algorithms. To fully test performance under different environments, we evaluate the proposed reward robust RL method on two classic control games (CartPole, Pendulum) and seven Atari 2600 games (AirRaid, Alien, Carnival, MsPacman, Pong, Phoenix, Seaquest), which encompass a large variety of environments as well as rewards. Specifically, the rewards can be unary (CartPole), binary (most Atari games), multivariate (Pong) and even continuous (Pendulum). A set of state-of-the-art reinforcement learning algorithms is experimented with while training under different amounts of noise (see TAB4). For each game and algorithm, three policies are trained from different random initializations to decrease the variance.

Reward Post-Processing. For each game and RL algorithm, we test the performance of learning with true rewards, learning with noisy rewards and learning with surrogate rewards. Both symmetric and asymmetric noise settings with different noise levels are tested. For symmetric noise, the confusion matrices are symmetric. As for asymmetric noise, two types of random noise are tested: 1) rand-one, where each reward level can only be perturbed into one other reward; 2) rand-all, where each reward can be perturbed into any other reward, via adding a random noise matrix. To measure the amount of noise w.r.t. the confusion matrices, we define the weight of noise $\omega$ in Appendix B.2. The larger $\omega$ is, the higher the noise rates are.

CartPole. The goal in CartPole is to prevent the pole from falling by controlling the cart's direction and velocity. The reward is +1 for every step taken, including the termination step. When the cart or pole deviates too much, or the episode length exceeds 200, the episode terminates. Due to the unary reward {+1} in CartPole, a corrupted reward -1 is added as the unexpected error ($e_- = 0$). As a result, the reward space $\mathcal{R}$ is extended to {+1, -1}. Five algorithms (Q-Learning, CEM, Deep SARSA, DQN and Dueling DQN; see Appendix B.1) are evaluated. In FIG1, we show that our estimator successfully produces meaningful surrogate rewards that adapt the underlying RL algorithms to the noisy settings, without any assumption on the true distribution of rewards. As the noise rate increases (from 0.1 to 0.9), the models with noisy rewards converge more slowly due to larger biases. However, we observe that the models always converge to the best score of 200 with the help of surrogate rewards. In some circumstances (slight noise; see FIG8, 6b, 6c, 6d), the surrogate rewards even lead to faster convergence. This points to an interesting observation: learning with the surrogate reward can even outperform the case of observing the true reward. We conjecture that adding noise and then removing the bias introduces implicit exploration. This implies that even in settings with true rewards, we might consider manually adding noise and then removing it in expectation.

Pendulum. The goal in Pendulum is to keep a frictionless pendulum standing up. Unlike the CartPole setting, the rewards in Pendulum are continuous: $r \in (-16.28, 0.0]$. The closer the reward is to zero, the better the performance. Following our extension (see Section 3.1), the range $(-17, 0]$ is first discretized into 17 intervals: $(-17, -16], (-16, -15], \cdots, (-1, 0]$, with each interval's value approximated by its maximum point. After this quantization step, the surrogate rewards can be estimated using the multi-outcome extension presented in Section 3.1. We experiment with two popular algorithms, DDPG and NAF, in this game. In FIG2, both algorithms perform well with surrogate rewards under different amounts of noise. In most cases, the biases were corrected in the long term, even when the amount of noise is extensive (e.g., $\omega = 0.7$). The quantitative scores on CartPole and Pendulum are given in TAB1 and TAB2. Our reward robust RL method is able to achieve consistently good scores.
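The symmetric and asymmetric noise used above is generated from the confusion-matrix recipe of Appendix B.2. Here is a sketch of that construction, assuming the convex combination $C = (1-\omega) I + \omega N$ implied by the description there:

```python
import numpy as np

def confusion_matrix(M, omega, mode, rng):
    """Build C = (1 - omega) * I + omega * N for M reward levels.
    'symmetric' uses the anti-identity for N; 'rand-one' perturbs each level
    to exactly one other level; 'rand-all' draws a random stochastic row."""
    I, N = np.eye(M), np.zeros((M, M))
    if mode == 'symmetric':
        N = np.fliplr(np.eye(M))                  # anti-identity
    elif mode == 'rand-one':
        for i in range(M):
            j = rng.choice([k for k in range(M) if k != i])
            N[i, j] = 1.0
    elif mode == 'rand-all':
        N = rng.random((M, M))
        N /= N.sum(axis=1, keepdims=True)         # each row sums to one
    return (1 - omega) * I + omega * N
```

For $M = 2$ and `mode='symmetric'`, this reproduces the example matrix $[[0.8, 0.2], [0.2, 0.8]]$ at $\omega = 0.2$.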
Atari We validate our algorithm on seven Atari 2600 games using the state-of-the-art algorithm PPO BID4. The games are chosen to cover a variety of environments. The rewards in the Atari games are clipped into {−1, 0, 1}. We leave the detailed settings to Appendix B. Results for PPO on Pong-v4 in symmetric noise setting are presented in FIG3. Due to limited space, more on other Atari games and noise settings are given in Appendix D.3. Similar to previous , our surrogate estimator performs consistently well and helps PPO converge to the optimal policy. TAB2 shows the average scores of PPO on five selected Atari games with different amounts of noise (symmetric & asymmetric). In particular, when the noise rates e + = e − > 0.3, agents with surrogate rewards obtain significant amounts of improvements in average scores. We do not present the for the case with unknown C because the state-space (image-input) is very large for Atari games, which is difficult to handle with the solution given in Section 3.3. DISPLAYFORM1 Only an underwhelming amount of reinforcement learning studies have focused on the settings with perturbed and noisy rewards, despite the fact that such noises are common when exploring a realworld scenario, that faces sensor errors or adversarial examples. We adapt the ideas from supervised Er |r (r) = Pr |r (r =r −)r − + Pr |r (r =r +)r +.When r = r +, from the definition in Lemma 1:Pr |r (r =r −) = e +, Pr |r (r =r +) = 1 − e +. Taking the definition of surrogate rewards Eqn. FORMULA2 DISPLAYFORM0 Similarly, when r = r −, it also verifies Er |r [r(s t, a t, s t+1)] = r(s t, a t, s t+1).Proof of Lemma 2. The idea of constructing unbiased estimator is easily adapted to multi-outcome reward settings via writing out the conditions for the unbiasedness property (s.t. Er |r [r] = r.). For simplicity, we shorthandr(r = R i) asR i in the following proofs. Similar to Lemma 1, we need to solve the following set of functions to obtainr: DISPLAYFORM1 whereR i denotes the value of the surrogate reward when the observed reward is R i. Define R:= [R 0 ; R 1 ; · · · ; R M −1], andR:= [R 0,R 1, ...,R M −1], then the above equations are equivalent to: R = C ·R. If the confusion matrix C is invertible, we obtain the surrogate reward: DISPLAYFORM2 According to above definition, for any true reward level R i, i = 0, 1, · · ·, M − 1, we have DISPLAYFORM3 Furthermore, the probabilities for observing surrogate rewards can be written as follows: DISPLAYFORM4 wherep i = j p j c j,i, andp i, p i represent the probabilities of occurrence for surrogate rewardR i and true reward R i respectively. Corollary 1. Letp i and p i denote the probabilities of occurrence for surrogate rewardr(r = R i) and true reward R i. Then the surrogate reward satisfies, DISPLAYFORM5 Proof of Corollary 1. From Lemma 2, we have, DISPLAYFORM6 Consequently, DISPLAYFORM7 To establish Theorem 1, we need an auxiliary (Lemma 3) from stochastic process approximation, which is widely adopted for the convergence proof for Q-Learning (; BID14 . Lemma 3. The random process {∆ t} taking values in R n and defined as DISPLAYFORM8 converges to zero w.p.1 under the following assumptions: DISPLAYFORM9 Here F t = {∆ t, ∆ t−1, · · ·, F t−1 · · ·, α t, · · ·} stands for the past at step t, α t (x) is allowed to depend on the past insofar as the above conditions remain valid. The notation || · || W refers to some weighted maximum norm. Proof of Lemma 3. See previous literature (; BID14 .Proof of Theorem 1. 
For simplicity, we abbreviate s t, s t+1, Q t, Q t+1, r t,r t and α t as s, s, Q, Q, r,r, and α, respectively. Subtracting from both sides the quantity Q * (s, a) in Eqn.: DISPLAYFORM10 In consequence, DISPLAYFORM11 Finally, DISPLAYFORM12 Becauser is bounded, it can be clearly verified that DISPLAYFORM13 for some constant C. Then, due to the Lemma 3, ∆ t converges to zero w.p.1, i.e., Q (s, a) converges to Q * (s, a).The procedure of Phased Q-Learning is described as Algorithm 2: DISPLAYFORM14 DISPLAYFORM15 Note thatP here is the estimated transition probability, which is different from P in Eqn. FORMULA22.To obtain the sample complexity , the range of our surrogate reward needs to be known. Assuming reward r is bounded in [0, R max], Lemma 4 below states that the surrogate reward is also bounded, when the confusion matrices are invertible:Lemma 4. Let r ∈ [0, R max] be bounded, where R max is a constant; suppose C M ×M, the confusion matrix, is invertible with its determinant denoting as det(C). Then the surrogate reward satisfies DISPLAYFORM16 Proof of Lemma 4. From Eqn. FORMULA4, we have, DISPLAYFORM17 where adj(C) is the adjugate matrix of C; det(C) is the determinant of C. It is known from linear algebra that, DISPLAYFORM18 where M ji is the determinant of the (M − 1) × (M − 1) matrix that from deleting row j and column i of C. Therefore, M ji is also bounded: DISPLAYFORM19 where the sum is computed over all permutations σ of the set {0, 1, · · ·, M − 2}; c is the element of M ji; sgn(σ) returns a value that is +1 whenever the reordering given by σ can be achieved by successively interchanging two entries an even number of times, and −1 whenever it can not. Consequently, DISPLAYFORM20 Proof of Theorem 2. From Hoeffding's inequality, we obtain: DISPLAYFORM21 In the same way,r t is bounded by M det(C) · R max from Lemma 4. We then have, DISPLAYFORM22 Further, due to the unbiasedness of surrogate rewards, we have st+1∈S P a (s t, s t+1)r t = st+1∈S;rt∈R P a (s t, s t+1,r t)r t.As a , DISPLAYFORM23 In the same way, DISPLAYFORM24 Recursing the two equations in two directions (0 → T), we get DISPLAYFORM25 Combining these two inequalities above we have: DISPLAYFORM26 For arbitrarily small, by choosing m appropriately, there always exists 1 = 2 =(1−γ) 2(1+γ) such that the policy error is bounded within. That is to say, the Phased Q-Learning algorithm can converge to the near optimal policy within finite steps using our proposed surrogate rewards. Finally, there are |S||A|T transitions under which these conditions must hold, where | · | represent the number of elements in a specific set. Using a union bound, the probability of failure in any condition is smaller than DISPLAYFORM27 We set the error rate less than δ, and m should satisfy that DISPLAYFORM28 In consequence, after m|S||A|T calls, which is, O DISPLAYFORM29, the value function converges to the optimal one for every state s, with probability greater than 1 − δ. The above bound is for discounted MDP setting with 0 ≤ γ < 1. For undiscounted setting γ = 1, since the total error (for entire trajectory of T time-steps) has to be bounded by, therefore, the error for each time step has to be bounded by T. Repeating our anayslis, we obtain the following upper bound: DISPLAYFORM30 Proof of Theorem 3. DISPLAYFORM31 Using the CauchySchwarz inequality, DISPLAYFORM32 So we get, Var(r) − Var(r) ≥ 0. In addition, DISPLAYFORM33 We set up our experiments within the popular OpenAI baselines BID4 and kerasrl framework. 
Specifically, we integrate the algorithms and interact with OpenAI Gym environments TAB4. A set of state-of-the-art reinforcement learning algorithms are experimented with while training under different amounts of noise, including Q-Learning BID19 BID18, Cross-Entropy Method (CEM) BID11, Deep SARSA BID10, Deep Q-Network (DQN) (; BID6, Dueling DQN (DDQN) BID17, Deep Deterministic Policy Gradient (DDPG) , Continuous DQN (NAF) and Proximal Policy Optimization (PPO) BID4 algorithms. For each game and algorithm, three policies are trained based on different random initialization to decrease the variance in experiments. We explore both symmetric and asymmetric noise of different noise levels. For symmetric noise, the confusion matrices are symmetric, which means the probabilities of corruption for each reward choice are equivalent. For instance, a confusion matrix DISPLAYFORM0 says that r 1 could be corrupted into r 2 with a probability of 0.2 and so does r 2 (weight = 0.2).As for asymmetric noise, two types of random noise are tested: 1) rand-one, each reward level can only be perturbed into another reward; 2) rand-all, each reward could be perturbed to any other reward. To measure the amount of noise w.r.t confusion matrices, we define the weight of noise as follows: DISPLAYFORM1, where ω controls the weight of noise; I and N denote the identity and noise matrix respectively. Suppose there are M outcomes for true rewards, N writes as: DISPLAYFORM2 where for each row i, 1) rand-one: randomly choose j, s.t n i,j = 1 and n i,k = 0 if k = j; 2) randall: generate M random numbers that sum to 1, i.e., j n i,j = 1. For the simplicity, for symmetric noise, we choose N as an anti-identity matrix. As a , c i,j = 0, if i = j or i + j = M. To obtain an intuitive view of the reward perturbation model, where the observed rewards are generated based on a reward confusion matrix, we constructed a simple MDP and evaluated the performance of robust reward Q-Learning (Algorithm 1) on different noise ratios (both symmetric and asymmetric). The finite MDP is formulated as FIG6: when the agent reaches state 5, it gets an instant reward of r + = 1, otherwise a zero reward r − = 0. During the explorations, the rewards are perturbed according to the confusion matrix C 2×2 = [1 − e −, e − ; e +, 1 − e +]. There are two experiments conducted in this setting: 1) performance of Q-Learning under different noise rates TAB5; 2) robustness of estimation module in time-variant noise (FIG6). As shown in TAB5, Q-Learning achieved better consistently with the guidance of surrogate rewards and the confusion matrix estimation algorithm. For time-variant noise, we generated varying amount of noise at different training stages: 1) e − = 0.1, e + = 0.3 (0 to 1e 4 steps); 2) e − = 0.2, e + = 0.1 (1e 4 to 3e 4 steps); 3) e − = 0.3, e + = 0.2 (3e 4 to 5e 4 steps); 4) e − = 0.1, e + = 0.2 (5e 4 to 7e 4 steps). In FIG6, we show that Algorithm 1 is robust against time-variant noise, which dynamically adjusts the estimatedC after the noise distribution changes. Note that we set a maximum memory size for collected noisy rewards to let the agents only learn with recent observations. CartPole and Pendulum The policies use the default network from keras-rl framework. which is a five-layer fully connected network 6. There are three hidden layers, each of which has 16 units and followed by a rectified nonlinearity. The last output layer is activated by the linear function. 
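For reference, the policy network just described amounts to the following. We transcribe it in PyTorch rather than the Keras used by keras-rl, so treat it as an equivalent sketch:

```python
import torch.nn as nn

def cartpole_net(obs_dim=4, n_actions=2):
    """The default keras-rl policy network described above: three 16-unit
    hidden layers, each followed by ReLU, then a linear output layer."""
    return nn.Sequential(
        nn.Linear(obs_dim, 16), nn.ReLU(),
        nn.Linear(16, 16), nn.ReLU(),
        nn.Linear(16, 16), nn.ReLU(),
        nn.Linear(16, n_actions),
    )
```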
For CartPole, We trained the models using Adam optimizer with the learning rate of 1e −3 for 10,000 steps. The exploration strategy is Boltzmann policy. For DQN and Dueling-DQN, the update rate of target model and the memory size are 1e −2 and 50, 000. For Pendulum, We trained DDPG and NAF using Adam optimizer with the learning rate of 5e −4 for 150, 000 steps. the update rate of target model and the memory size are 1e −3 and 100, 000.Atari Games We adopt the pre-processing steps as well as the network architecture from. Specifically, the input to the network is 84×84×4, which is a concatenation of the last 4 frames and converted into 84 × 84 gray-scale. The network comprises three convolutional layers and two fully connected layers 7. The kernel size of three convolutional layer are 8 × 8 with stride 4 (32 filters), 4 × 4 with stride 2 (64 filters) and 3 × 3 with stride 1 (64 filters), respectively. Each hidden layer is followed by a rectified nonlinearity. Except for Pong where we train the policies for 3e 7 steps, all the games are trained for 5e 7 steps with the learning rate of 3e −4. Note that the rewards in the Atari games are discrete and clipped into {−1, 0, 1}. Except for Pong game, in which r = −1 means missing the ball hit by the adversary, the agents in other games attempt to get higher scores in the episode with binary rewards 0 and 1. C.1 REWARD ROBUST RL ALGORITHMS As stated in Section 3.3, the confusion matrix can be estimated dynamically based on the aggregated answers, similar to previous literature in supervised learning . To get a concrete view, we take Q-Learning for an example, and the algorithm is called Reward Robust Q-Learning (Algorithm 3). Note that is can be extended to other RL algorithms by plugging confusion matrix estimation steps and the computed surrogate rewards, as shown in the experiments FIG8 ). Get predicted true rewardr(s, a) using majority voting in everyR(s, a) Estimate confusion matrixC based onr(s, a) andr(s, a) (Eqn. FORMULA11) Empty all the sets of observed rewardsR(s, a) Obtain surrogate rewardṙ(s, a) using DISPLAYFORM0 In Algorithm 3, the predicted true rewardr(s, a) is derived from majority voting in collected noisy setsR(s, a) for every state-action pair (s, a) ∈ S × A, which is a simple but efficient way of leveraging the expectation of aggregated rewards without assumptions on prior distribution of noise. In the following, we adopt standard Expectation-Maximization (EM) idea in the our estimation framework (arguably a simple version of it), inspired by previous works BID20.Assuming the observed noisy rewards are independent conditional on the true reward, we can compute the posterior probability of true reward from the Bayes' theorem: DISPLAYFORM0 where P(r = R j) is the prior of true rewards, and P(r = R k |r = R j) is estimated by current estimated confusion matrixC: P(r = R k |r = R j) =c j,i. Note that the inference should be conducted for each state-action pair (s, a) ∈ S × A in every iteration, i.e., P(r(s, a) = R i |r(s, a, 1) = R 1, · · ·,r(s, a, n) = R n ), abbreviated as P(r(s, a) = R i ), which requires relatively greater computation costs compared to the majority voting policy. It also points out an interesting direction to check online EM algorithms for our perturbed-RL problem. After the inference steps in Eqn., the confusion matrixC is then updated based on the posterior probabilities:c DISPLAYFORM1 where P(r(s, a) = R i ) denotes the inference probabilities of true rewards based on collected noisy rewards setsR(s, a). 
To utilize EM algorithms in the robust reward algorithms (e.g., Algorithm 3), we need to replace Eqn. by Eqn. for the estimation of reward confusion matrix. In previous sections, to let our presentation stay focused, we consider the state-independent perturbed reward environments, which share the same confusion matrix for all states. In other words, the noise for different states is generated within the same distribution. More generally, the generation ofr follows a certain function C: S × R →R, where different states may correspond to varied noise distributions (also varied confusion matrices). However, our algorithm is still applicable except for maintaining different confusion matrices C s for different states. It is worthy to notice that Theorem 1 holds because the surrogate rewards produce an unbiased estimation of true rewards for each state, i.e., Er |r,st [r(s t, a t, s t+1)] = r(s t, a t, s t+1). Furthermore, Theorem 2 and 3 can be revised as:Theorem 4. (Upper bound) Let r ∈ [0, R max] be bounded reward, C s be invertible reward confusion matrices with det(C s) denoting its determinant. For an appropriate choice of m, the Phased Q-Learning algorithm calls the generative model DISPLAYFORM0 times in T epochs, and returns a policy such that for all state s ∈ S, |V π (s) − V * (s)| ≤, > 0, w.p. ≥ 1 − δ, 0 < δ < 1.Theorem 5. Let r ∈ [0, R max] be bounded reward and all confusion matrices C s are invertible. Then, the variance of surrogate rewardr is bounded as follows: DISPLAYFORM1 As illustrated in Theorem 3, our surrogate rewards introduce larger variance while conducting unbiased estimation which are likely to decrease the stability of RL algorithms. Apart from the linear combination idea (appropriate trade-off), some variance reduction techniques in statistics (e.g., correlated sampling) can also be applied into our surrogate rewards. Specially, BID2 proposed to a reward estimator to compensate for stochastic corrupted reward signals. It is worthy to notice that their method is designed for variance reduction under stochastic (zero-mean) noise, which is no longer efficacious in more general perturbed-reward setting. However, it is potential to integrate their method with our robust-reward RL framework because surrogate rewards guarantee unbiasedness in reward expectation. To verify this idea, we repeated the experiments of Cartpole in Section 4.2 but included variance reduction step for estimated surrogate rewards. Following BID2, we adopted sample mean as a simple approximator during the training and set sequence length as 100. As shown in Figure 5, the models with only variance reduction technique (red lines) suffer from huge biases when the noise is large, and cannot converge to the optimal policies like those under noisy rewards. Nevertheless, they benefits from variance reduction for surrogate rewards (purple lines), which achieve faster convergence or better performance in many cases (e.g., Figure 5a (ω = 0.7), 5b (ω = 0.3)).It is also not surprising that the integrated algorithm (purple lines) outperforms better as the noise rate increases (indicating larger variance from Theorem 3, e.g., ω = 0.9). Similarly, TAB6 provides quantitative which show that our surrogate benefits from variance reduction techniques ("ours + VRT"), especially when the noise rate is large. 
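The "ours + VRT" variant above combines the surrogate mapping with a simple sample-mean smoother with sequence length 100, following BID2. A sketch; keying the history by state-action pair is our reading of the setup, not a detail stated in the text:

```python
from collections import defaultdict, deque

class SampleMeanSmoother:
    """Variance reduction in the spirit of BID2: replace each surrogate
    reward by the running mean of the most recent `seq_len` surrogate
    rewards. The per-(s, a) bookkeeping is an assumption on our part."""
    def __init__(self, seq_len=100):
        self.hist = defaultdict(lambda: deque(maxlen=seq_len))

    def __call__(self, s, a, r_hat):
        h = self.hist[(s, a)]
        h.append(r_hat)
        return sum(h) / len(h)
```

Because the surrogate reward is unbiased in expectation, averaging reduces its variance without reintroducing the bias that plain noisy-reward averaging suffers from.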
[Figure 5: Learning curves from five reward robust RL algorithms (see Algorithm 3) on CartPole with true rewards ($r$), noisy rewards ($\tilde{r}$) ($\eta = 1$), sample-mean noisy rewards ($\eta = 1$), estimated surrogate rewards ($\dot{r}$) and sample-mean estimated surrogate rewards. Note that the confusion matrices $C$ are unknown to the agents here. From top to bottom, the noise rates are 0.1, 0.3, 0.7 and 0.9. Each experiment is repeated 10 times with different random seeds; we plot the 10% to 90% percentile area with its mean highlighted.]

To validate the effectiveness of the reward robust algorithms (like Algorithm 3) when the noise rates are unknown to the agents, we conduct extensive experiments on CartPole; note that the noise rates remain unknown throughout the agents' exploration. Figure 6 provides learning curves from five algorithms with different kinds of rewards. The proposed estimation algorithm successfully obtains approximate confusion matrices and is robust in unknown-noise environments. From FIG9, we observe that the estimates of the confusion matrices converge very fast. These results are encouraging because we do not assume any additional knowledge about the noise or the true reward distribution in the implementation.

[Figure 6: Learning curves from five algorithms on CartPole with true rewards ($r$), noisy rewards ($\tilde{r}$) and estimated surrogate rewards ($\dot{r}$). Note that the confusion matrices $C$ are unknown to the agents here. From top to bottom, the noise rates are 0.1, 0.3, 0.7 and 0.9. Each experiment is repeated 10 times with different random seeds; we plot the 10% to 90% percentile area with its mean highlighted.]
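The EM-based refinement of $\tilde{C}$ described in Appendix C.1 above can be sketched as one iteration over per-pair observation counts. The count-matrix interface, uniform-prior default and log-space computation are our choices; the E and M steps follow the posterior and update equations described there:

```python
import numpy as np

def em_step(noisy_counts, C_tilde, prior):
    """One EM-style refinement of the confusion matrix (Appendix C.1).
    noisy_counts: (num_pairs, M); row p counts how often each reward level
    was observed at state-action pair p. prior: (M,) prior over true rewards.
    E-step: posterior over each pair's true reward under the current C~.
    M-step: re-estimate C~ from the posteriors and the observations."""
    # E-step: P(r = R_i | obs) ∝ prior_i * prod_j c~_{i,j}^{n_j}, in log space
    log_post = np.log(prior)[None, :] + noisy_counts @ np.log(C_tilde.T + 1e-12)
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)          # (num_pairs, M)
    # M-step: c~_{i,j} ∝ sum over pairs of P(r = R_i) * #{observed R_j}
    C_new = post.T @ noisy_counts                    # (M, M)
    C_new /= np.maximum(C_new.sum(axis=1, keepdims=True), 1e-12)
    return C_new, post
```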
A new approach for learning with noisy rewards in reinforcement learning
1,158
scitldr
Training recurrent neural networks (RNNs) on long sequences using backpropagation through time (BPTT) remains a fundamental challenge. It has been shown that adding a local unsupervised loss term into the optimization objective makes the training of RNNs on long sequences more effective. While the importance of an unsupervised task can in principle be controlled by a coefficient in the objective function, the gradients with respect to the unsupervised loss term still influence all the hidden state dimensions, which might cause important information about the supervised task to be degraded or erased. Compared to existing semi-supervised sequence learning methods, this paper focuses upon a traditionally overlooked mechanism -- an architecture with explicitly designed private and shared hidden units designed to mitigate the detrimental influence of the auxiliary unsupervised loss over the main supervised task. We achieve this by dividing RNN hidden space into a private space for the supervised task and a shared space for both the supervised and unsupervised tasks. We present extensive experiments with the proposed framework on several long sequence modeling benchmark datasets. Results indicate that the proposed framework can yield performance gains in RNN models where long term dependencies are notoriously challenging to deal with. Recurrent neural networks (RNNs) are widely considered the de facto tool for modeling sequences with a deep learning approach. Training RNNs usually relies on the use of backpropagation through time (BPTT). It is well known that unfortunately, it becomes difficult for BPTT to transmit gradients through very long computational graphs, as gradients tend to explode or vanish BID6. FIG0 -(a) gives an example in an oversimplified setting, where the hidden state at the first time-step does not receive gradients. To make the BPTT-based training more effective, architectures such as the long short-term memory (LSTM) BID5 ) RNN and gated recurrent unit (GRU) BID2 ) RNNs, use parameterized gates which can make gradient flow more effective over long sequences. Recently, strong evidence in BID14 suggests that simultaneously learning supervised and unsupervised tasks can also enhance an RNN's ability to capture long-term dependencies. By injecting unsupervised tasks locally along the sequence the unsupervised tasks can be harnessed to provide local and reliable gradients to more effectively optimize RNNs for long sequence learning tasks. Recent work using this strategy BID10 BID4 BID14, could be characterized as employing semi-supervised learning architectures consisting of two distinct types of RNNs, one for the primary supervised task and another for the auxiliary unsupervised tasks which are injected locally along the sequence. More concretely, the RNN for an unsupervised task updates is instantiated periodically along the sequence and its hidden states are reinitialized occasionally; whereas, the RNN for the supervised tasks operates at every time-step. FIG0 -(b) shows how gradients flow in this architecture. Despite the ability of these new semi-supervised architectures to mitigate the problem of long-distance BPTT, these approaches risk impairing the training of the main task by contaminating the entire representation-space with the unsupervised loss gradients. The challenge we address here is how to properly coordinate supervised and unsupervised tasks. Common wisdom for semi-supervised learning BID10 typically follows one of the two procedures discussed below. 
The first widely used approach is to weight supervised and unsupervised loss functions with varying coefficients empirically. However this method cannot radically address aforementioned problem since representations for supervised and unsupervised learning are still entangled in same space. It is true that the contribution of the unsupervised task can in principle be controlled by a coefficient in the objective function, but the gradients with respect to the unsupervised loss term still influence all the hidden state dimensions, which might cause important information about the supervised task to be erased accidentally. The second approach coordinates these two types of learning by specifying a training order and separating them into different learning phases. For example, these approaches usually first pre-train a model under unsupervised setting, then use the model for supervised learning BID11.While these methods can provide rich auxiliary knowledge which are potentially useful for the main task, there is no guarantee that this asynchronous learning fashion could let the main task utilize the auxiliary information well, and therefore long-term dependencies are still difficult to capture. It is thus crucial to ask: how exactly can auxiliary unsupervised tasks best serve the main supervised learning task for long-term dependencies learning?On the other hand, it has been demonstrated that dividing an RNN's representational space into different groups is useful for modeling long-term dependencies. One such example is clockwork RNNs BID7, where each group is responsible for a subset of hidden states and each processes input at different clock speeds. It is also possible to let each layer represent a group, and each group may run at different time scales BID12 BID3.With the above analysis in mind, we propose to solve the long-term dependency problem by enabling the two RNNs to have a shared feature space for both supervised and unsupervised tasks, and allowing an RNN to have a private space dedicated for the supervised task. The key insight is to associate different time-scale updating operations of distinct RNNs with different representational spaces. Through the shared feature space, the RNNs form an interface to exchange features useful for both of them with less inference. As a side-product, the proposed variant of RNNs trains and evaluates slightly faster since the architecture by design introduced an inductive bias that the modules for auxiliary tasks should have less parameters. FIG0 -(c) shows how the gradients flow through the hidden states during the backward pass of BPTT for the proposed architecture. It is clear that the lower (blue) space is not allowed to receive gradients from the unsupervised task. Our primary contribution is introducing a private-and-shared feature space architecture for semisupervised sequential learning tasks, which is motivated through the lens of gradient flows. While the modification is simple, its application on modeling long-term dependencies has shown significant improvement over other state-of-the-art algorithms to our knowledge, and thus we believe it will be of broad interest to the community. In Section 3, we describe the proposed method. In section 4, we present the experiments. In section 5, we give an analysis of our method and experiments.2 RELATED WORK BID1 show that a generic temporal convolutional network (TCN) outperforms some RNN variants on several benchmark datasets. 
However, compared with TCNs, RNNs require lower memory for inference and can handle potential parameter change for a transfer of domain BID1. Furthermore, BID14 show that RNNs with an auxiliary unsupervised loss still outperform TCNs in terms of accuracy on long sequence learning tasks. More importantly, RNNs can model, in principle, infinitely long dependencies with a finite number of parameters. Unsupervised learning is often introduced in a pre-training fashion. For example, BID11 show that for natural language understanding tasks, generative pre-training of a language model on a large amount of unlabeled text followed by discriminative fine-tuning leads to significant improvement. As another example, in BID10, after pretraining the model with unlabeled data, the authors fix the weights and add additional task-specific model capacity, making it possible to leverage large, rich and universal representations for the downstream tasks. It should be noted that BID10 utilizes additional datasets, whereas we do not. BID14 propose RNN-AE (AutoEncoder) to form an auxiliary unsupervised task to aid RNNs in handling long sequencse, i.e. r-RNN (reconstruction) and p-RNN (prediction). The r-RNN approach tries to reconstruct the original input from the internal representation. Skim-RNN BID13 ) dynamically decides to update only a small fraction of the hidden state for relatively unimportant input tokens. A skim-RNN contains multiple RNNs that have a shared space and also have private spaces on which they operate. However a skim-RNN only considers supervised tasks and aims at accelerating the inference speed. In contrast, our method is specifically designed for long sequence learning problems with an unsupervised loss. Furthermore, a skim-RNN only uses one RNN at each time-step which is determined by a reinforcement learning agent, whereas ours always use multiple RNNs. As a side-effect of not relying on reinforcement learning algorithms, our method is easier to train in practice. The hidden state of our proposed RNN also has multiple subsets, but they run at the same clock speed. Even more importantly, we introduce an inductive bias that different hidden sub-spaces should be responsible for different tasks. We will discuss these methods in detail. We first briefly explain our version of the RNN-AE and its key differences with that introduced by BID14; then, we dive into our method of inducing a private-and-shared structure. Following BID14, we add the unsupervised tasks of local sequence reconstruction and prediction at various anchor points within the input sequence. We sample n anchors at locations a 1, a 2,..., a n among the input sequence and at anchor a i, we obtain an unsupervised loss. This loss is generated through local reconstruction and/or prediction within a neighbourhood of the input x ai. It uses an auxiliary RNN with GRUs, initialized by f (h ai) which is a function of the hidden state at the anchor time step. In BID14, f is simply the identity function f (x) = x. This generates a total unsupervised loss as the sum of auxiliary loss at each anchor location, and its gradient flows back to each anchor neighbourhood to improve long-term dependency. There are some important differences in our implementation: Instead of randomly sampling the anchor locations over the entire sequence, we evenly divide the input into as many sub-sequences as there are anchors and only sample each anchor within its corresponding region. 
This is to ensure that reconstruction spans most, if not all, of the input sequence; this way, gradient flows back to a higher percentage of the input. During reconstruction, we ask the r-RNN to reconstruct the local sequence backward (as we found that backward reconstruction worked better in practice). Furthermore, in various task settings, we include both unsupervised prediction and reconstruction instead of using only one of the two. And lastly, and most importantly, rather than using the entire hidden state of the anchor to do reconstruction and prediction, we only use a part of the state vector for these unsupervised tasks; this is the point we expand on next.

We propose to divide the hidden space of the main RNN into task-specific sub-spaces. The intuition here is that we want to disentangle the feature space so that the features learned from the unsupervised tasks do not affect the entire state vector, which is later used for the supervised task. In doing so, we create an uncontaminated region learned solely through the supervised task; naturally, it captures features more specific to the supervised task than the shared region used for both the unsupervised and supervised tasks. This way, we overcome the negative side effects of the RNN-AE while retaining its ability to introduce gradients at all time steps. Thus we are able to facilitate the learning of long-term dependencies without hindering the model's ability to perform the supervised task.

The experiments are designed to answer a key question: since we divide hidden states into sub-spaces, is our proposed method indeed more effective than RNNs with a holistic hidden space? We evaluate our proposed methods on two benchmark tasks: image classification and sentiment analysis. For image classification, we use pixel-level sequential MNIST, Fashion-MNIST and CIFAR-10 as our datasets. For sentiment analysis, we use the IMDB movie reviews corpus. Detailed information about the datasets is given in TAB1. In order to compare with the state of the art fairly, we re-implement and re-run all the baseline methods. For all of our experiments, including the baselines and re-implementations, we grid-search our hyperparameters using a validation set to find the optimal values. For faster convergence, we adopt SGDR (BID9) as the optimizer throughout this paper. We incorporate early stopping with a patience of 50 epochs to avoid overfitting, which is also based on a validation set.

As shown in Table 3, our proposed method achieves better outcomes on both MNIST and CIFAR-10 than previous competitive results. Table 2 provides a more comprehensive list of the experiments along with the hyperparameters used for each one. The top row(s) of each section, the ones without parameters, are baselines run with a GRU. As the share-proportion parameter reaches 100%, our model reduces to the RNN-AE model introduced in BID14; thus the rows with "100%" as the value of the "shared" hyperparameter are for BID14. As previously mentioned, the hyperparameters for these cases are also grid-searched to ensure a fair comparison.

Table 2: Performances of our models on MNIST, Fashion-MNIST, CIFAR-10, and IMDB datasets. Note that when the shared proportion reaches 100%, the model reduces to the one proposed in BID14.

In our experiments, we are able to achieve better accuracy with either fewer or a comparable number of parameters; a code sketch of the private-shared wiring described above is given below.
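This is a minimal PyTorch sketch of one possible wiring, not the authors' exact implementation: a single GRU produces the hidden state, the supervised head reads the full state, and the auxiliary decoder is initialized only from the shared block, so the unsupervised gradients never touch the private block. The zero-input backward decoder and the single-layer GRUs are our simplifications:

```python
import torch
import torch.nn as nn

class PrivateSharedRNN(nn.Module):
    """First `shared` hidden coordinates feed the auxiliary reconstruction
    task; the remaining coordinates stay private to the supervised task."""
    def __init__(self, in_dim, hidden, shared, n_classes):
        super().__init__()
        self.shared = shared
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)            # supervised head
        self.aux_rnn = nn.GRU(in_dim, shared, batch_first=True)   # auxiliary decoder
        self.aux_out = nn.Linear(shared, in_dim)

    def forward(self, x, anchors, recon_len):
        h_seq, h_last = self.rnn(x)                  # h_seq: (B, T, hidden)
        logits = self.classifier(h_last.squeeze(0))  # uses the full hidden state
        aux_loss = 0.0
        for t in anchors:
            # initialize the decoder from the *shared* block only
            h0 = h_seq[:, t, :self.shared].unsqueeze(0).contiguous()
            lo = max(t - recon_len + 1, 0)
            target = x[:, lo:t + 1].flip(1)          # reconstruct backward
            inp = torch.zeros_like(target)           # teacher-free decoding sketch
            dec, _ = self.aux_rnn(inp, h0)
            aux_loss = aux_loss + ((self.aux_out(dec) - target) ** 2).mean()
        return logits, aux_loss / max(len(anchors), 1)
```

Because only `h_seq[..., :shared]` enters the auxiliary branch, backpropagating the auxiliary loss leaves the private coordinates untouched by construction.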
On MNIST in particular, we achieve this with less than one-tenth of the parameters compared to BID14. Overall, from these tables, we can study the effect of key hyperparameters such as the private-vs-shared proportion, the number of anchors and the reconstruction length. We see that entirely sharing the hidden space is not optimal, which leads us to a follow-up question: how much of the space should be shared? In the next section, we describe our analysis of this very inquiry.

5.1

In this section, we present two crucial analyses of the private-shared structure. First, we examine how the shared proportion affects our model's outcomes. Then, we visualize both the private and the shared hidden space learned by our model to highlight the distinct features of the two.

Table 3: Comparing test accuracy on sequential MNIST and sequential CIFAR10.
  Model                                  | MNIST | CIFAR-10
  iRNN                                   | 97%   | N/A
  uRNN (BID0)                            | 95.1% | N/A
  RNN with Aux. Loss (BID14)             | 98.4% | 72.2%
  RNN with private-shared space (ours)   | 98.9% | 76.2%

Table 4: Experiments on shared proportions (S.%) performed on CIFAR; columns are S.%, Acc., L Main, L Aux. and # Param. Accuracy is reported in percentage. L Main and L Aux. are the main loss and the auxiliary loss, respectively.

To observe exactly what effect different sharing proportions have on classification tasks, we run our model on CIFAR-10 with varying share percentages from 0% to 100%, as listed in Table 4. The results are given in FIG1. With 0% shared, the model is essentially an RNN without an unsupervised loss, as no part of the hidden state is sent to the auxiliary RNN. In this setting, our model achieves the lowest accuracy of the group. As we slowly increase the shared percentage, the accuracy rises. From this, we confirm that the addition of the auxiliary loss is indeed helpful in modeling long-term dependencies. Remarkably, there is a pronounced peak at the 50% mark (where half of the state is shared with the auxiliary RNN and half is reserved for the classification task), after which model performance, in terms of both cross-entropy loss and classification accuracy, begins to worsen. One may suspect that this is caused by overfitting, since the number of parameters steadily increases. However, in this set of experiments, the capacity of the main RNN is fixed, as the dimension of the hidden state remains the same. As we include more of the hidden state in the unsupervised task, the auxiliary RNN increases in capacity, which causes the overall parameter count to increase. Furthermore, the auxiliary test loss decreases progressively, suggesting that there is no overfitting in the auxiliary RNN either. In this case, the change in performance should be attributed to the difference in shared proportion. This finding agrees with what we posited earlier: the knowledge learned from the unsupervised task is not all helpful to the supervised task, and this information may actually hurt performance by overwriting knowledge that is important to the main task. What is somewhat surprising is that at 100% shared, the model's classification is barely comparable to the 10% version. We did not expect such a big drop in classification rate (~3%) at the maximum shared level. However, this further corroborates the importance of disentangling the hidden space through a private-shared framework.

To better understand how the use of our framework influences representation learning, we analyze the hidden vectors of the main RNN. For this analysis, we retrain our MNIST model with a 50% shared proportion and a hidden space size of only 16.
In other words, the first 8 dimensions are shared, and the last 8 are private. We hypothesize that the small representation-space would force the RNN to learn important features; it would also allow us to see how knowledge from the two tasks might contend with each other if there existed any competition. To visualize the hidden space, we collect the RNN's hidden state vectors {h t ∈ R d×1 | 1 ≤ t ≤ 784} after it receives an MNIST image (784 pixels) as input. We concatenate them horizontally to obtain H = [h 1 ||h 2 ||... ||h 784] ∈ R d×784. Then, we look at how the n th dimension of the hidden state changes across time by picking out the n th row of H. We reshape each row into a 28 × 28 image to better compare the state elements. FIG2 shows 3 instances of visualization of the hidden state vector across time, which are generated using the same network and different input image at different stages of training. Respectively from left to right, the images are generated when training begins, progresses, and converges. Each small square in the image corresponds to a single dimension of the hidden state from t = 1 to t = 784. In this setting, the left 2 columns of each subplot contain the 8 shared dimensions and the right 2 columns corresponds to the 8 private dimensions. Noticeably, as training progresses the hidden state changes how it responds to input. Moreover, there are dimensions in the shared region of the hidden state that originally display a response similar to those in the private region, only to be replaced later. For example, in the beginning of training, there are several similar responses in multiple dimensions of the state vector. In both the designated shared and private region, some dimensions seem to have a high correlation with the input image thus ing in a readable number. As the model refines itself, the aforementioned response begins to fade from the shared representation; however, it still exists in the private representation. In the end, we see the a complete absence of such response in the shared representation, replaced by more abstract features. One possible explanation would be that this type of response is more important to the supervised task than to the unsupervised one. By generating a large non-zero value when the input is non-zero, a dimension of the hidden state allows the RNN to propagate this information along to later times steps. This could be very helpful for digit classification, as we might need to know what is written in the beginning to decide which digit it is. However, it is not required for the auxiliary task which concerns with only local pixels and has less need to propagate a strong signal when a particular input is given. In this paper, we have presented a semi-supervised RNN architecture with explicitly designed private and shared hidden representations. This architecture allows information transfer between the supervised and unsupervised tasks in a hitherto unexplored way. Compared with other similar semi-supervised RNN techniques, our experiments on widely used and competitive benchmark data sets suggest that our formulation indeed yields performance gains. We conjecture that these gains come from the desirable properties of both gradient and information flow in architectures with shared and private representations. 
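The visualization procedure just described is easy to reproduce. The sketch below assumes a hypothetical `model.step(x_t, h)` single-step interface (not from any library) and the 16-dimensional hidden state used in this analysis:

```python
import numpy as np
import matplotlib.pyplot as plt

def visualize_hidden(model, image_784):
    """Collect h_t for t = 1..784, stack into H of shape (d, 784), and
    reshape each row (one hidden dimension across time) into a 28x28 image.
    `model.step(x_t, h)` is a hypothetical single-step RNN interface."""
    h, cols = None, []
    for t in range(784):
        h = model.step(image_784[t], h)     # h: (d,) numpy array
        cols.append(h.copy())
    H = np.stack(cols, axis=1)              # H = [h_1 || h_2 || ... || h_784]
    d = H.shape[0]                          # assumed divisible by 4 (here d = 16)
    fig, axes = plt.subplots(4, d // 4, figsize=(d // 4 * 2, 8))
    for n, ax in enumerate(axes.ravel()):
        ax.imshow(H[n].reshape(28, 28))     # dimension n across time
        ax.set_title(f'dim {n}')
        ax.axis('off')
    plt.show()
```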
As a side-product, our proposed architecture trains and evaluates faster than the related alternatives that we have explored since the architecture introduces an inductive bias that the modules for auxiliary tasks should have fewer parameters.
This paper focuses upon a traditionally overlooked mechanism -- an architecture with explicitly designed private and shared hidden units designed to mitigate the detrimental influence of the auxiliary unsupervised loss over the main supervised task.
1,159
scitldr
We investigate multi-task learning approaches which use a shared feature representation for all tasks. To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task. We study the theory of this setting on linear and ReLU-activated models. Our key observation is that whether or not tasks' data are well-aligned can significantly affect the performance of multi-task learning. We show that misalignment between task data can cause negative transfer (or hurt performance) and provide sufficient conditions for positive transfer. Inspired by the theoretical insights, we show that aligning tasks' embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, we obtained a 2.35% GLUE score average improvement on 5 GLUE tasks over BERT LARGE using our alignment method. We also design an SVD-based task re-weighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset. Multi-task learning has recently emerged as a powerful paradigm in deep learning to obtain language (; Liu et al. (2019a; b) ) and visual representations from large-scale data. By leveraging supervised data from related tasks, multi-task learning approaches reduce the expensive cost of curating the massive per-task training data sets needed by deep learning methods and provide a shared representation which is also more efficient for learning over multiple tasks. While in some cases, great improvements have been reported compared to single-task learning , practitioners have also observed problematic outcomes, where the performances of certain tasks have decreased due to task interference (; Bingel and Søgaard ). Predicting when and for which tasks this occurs is a challenge exacerbated by the lack of analytic tools. In this work, we investigate key components to determine whether tasks interfere constructively or destructively from theoretical and empirical perspectives. Based on these insights, we develop methods to improve the effectiveness and robustness of multi-task training. There has been a large body of algorithmic and theoretical studies for kernel-based multi-task learning, but less is known for neural networks. The conceptual message from the earlier work (; ; ;) show that multi-task learning is effective over "similar" tasks, where the notion of similarity is based on the single-task models (e.g. decision boundaries are close). The work on structural correspondence learning uses alternating minimization to learn a shared parameter and separate task parameters. use a parameter vector for each task and learn task relationships via l 2 regularization, which implicitly controls the capacity of the model. These are difficult to apply to neural networks: it is unclear how to reason about neural networks whose feature space is given by layer-wise embeddings. To determine whether two tasks interfere constructively or destructively, we investigate an architecture with a shared module for all tasks and a separate output module for each task . See Figure 1 for an illustration. Our motivating observation is that in addition to model similarity which affects the type of interference, task data similarity plays a second-order effect after controlling model similarity. 
To illustrate the idea, we consider three tasks with the same number of data samples where task 2 and 3 have the same decision boundary but different data distributions (see Figure 2 for an illustration). We observe that training task 1 with task 2 or task 3 can either improve or hurt task 1's performance, depending on the amount of contributing data along the decision boundary! This observation shows that by measuring the similarities of the task data and the models separately, we can analyze the interference of tasks and attribute the cause more precisely. Motivated by the above observation, we study the theory of multi-task learning through the shared module in linear and ReLU-activated settings. Our theoretical contribution involves three components: the capacity of the shared module, task covariance, and the per-task weight of the training procedure. The capacity plays a fundamental role because, if the shared module's capacity is too large, there is no interference between tasks; if it is too small, there can be destructive interference. Then, we show how to determine interference by proposing a more fine-grained notion called task covariance which can be used to measure the alignment of task data. By varying task covariances, we observe both positive and negative transfers from one task to another! We then provide sufficient conditions which guarantee that one task can transfer positively to another task, provided with sufficiently many data points from the contributor task. Finally, we study how to assign per-task weights for settings where different tasks share the same data but have different labels. Our theory leads to the design of two algorithms with practical interest. First, we propose to align the covariances of the task embedding layers and present empirical evaluations on well-known benchmarks and tasks. On 5 tasks from the General Language Understanding Evaluation (GLUE) benchmark (Wang et al. (2018b) ) trained with the BERT LARGE model by , our method improves the of BERT LARGE by a 2.35% average GLUE score, which is the standard metric for the benchmark. Further, we show that our method is applicable to transfer learning settings; we observe up to 2.5% higher accuracy by transferring between six sentiment analysis tasks using the LSTM model of. Second, we propose an SVD-based task reweighting scheme to improve multi-task training for settings where different tasks have the same data but different labels. On the ChestX-ray14 image classification dataset, we compare our method to the unweighted scheme and observe an improvement of 5.6 AUC score for all tasks. In , these evaluations confirm that our theoretical insights are applicable to a broad range of settings and applications. We study multi-task learning (MTL) models with a shared module for all tasks and a separate output module for each task. We ask: What are the key components to determine whether or not MTL is better than single-task learning (STL)? In response, we identify three components: model capacity, task covariance, and optimization scheme. After setting up the model, we briefly describe the role of model capacity. We then introduce the notion of task covariance, which comprises the bulk of the section. We finish by showing the implications of our for choosing optimization schemes. We are given k tasks. Let m i denote the number of data samples of task i. For task i, let X i ∈ R mi×d denote its covariates and let y i ∈ R mi denote its labels, where d is the dimension of the data. 
We have assumed that all the tasks have the same input dimension d. This is not a restrictive assumption and is typically satisfied, e.g. for word embeddings on BERT, or by padding zeros to the input otherwise. Our model assumes the output label is 1-dimensional. We can also model a multi-label problem with k types of labels by having k tasks with the same covariates but different labels. We consider an MTL model with a shared module B ∈ R d×r and a separate output module A i ∈ R r for task i, where r denotes the output dimension of B. See Figure 1 for the illustration. We define the objective of finding an MTL model as minimizing the following equation over B and the A i's: where L is a loss function such as the squared loss. The activation function g: R → R is applied on every entry of X i B. In equation 1, all data samples contribute equally. Because of the differences between tasks such as data size, it is natural to re-weight tasks during training: This setup is an abstraction of the hard parameter sharing architecture . The shared module B provides a universal representation (e.g., an LSTM for encoding sentences) for all tasks. Each task-specific module A i is optimized for its output. We focus on two models as follows. The single-task linear model. The labels y of each task follow a linear model with parameter θ ∈ R d: y = Xθ + ε. Every entry of ε follows the normal distribution N (0, σ 2) with variance σ 2. The function g(XB) = XB. This is a well-studied setting for linear regression . The single-task ReLU model. Denote by ReLU(x) = max(x, 0) for any x ∈ R. We will also consider a non-linear model where Xθ goes through the ReLU activation function with a ∈ R and θ ∈ R d: y = a · ReLU(Xθ) + ε, which applies the ReLU activation on Xθ entrywise. The encoding function g(XB) then maps to ReLU(XB). Positive vs. negative transfer. For a source task and a target task, we say the source task transfers positively to the target task, if training both through equation 1 improves over just training the target task (measured on its validation set). Negative transfer is the converse of positive transfer. Our goal is to analyze the three components to determine positive vs. negative transfer between tasks: model capacity (r), task covariances and the per-task weights ). We focus on regression tasks under the squared loss but we also provide synthetic experiments on classification tasks to validate our theory. Notations. For a matrix X, its column span is the set of all linear combinations of the column vectors of X. Let X † denote its pseudoinverse. We begin by revisiting the role of model capacity, i.e. the output dimension of B (denoted by r). We show that as a rule of thumb, r should be smaller than the sum of capacities of the STL modules. Example. Suppose we have k linear regression tasks using the squared loss, equation 1 becomes: The optimal solution of equation 3 for task i is. Hence a capacity of 1 suffices for each task. We show that if r ≥ k, then there is no transfer between any two tasks. Proposition 1. Let r ≥ k. There exists an optimum B and {A i} k i=1 of equation 3 where B A i = θ i, for all i = 1, 2,..., k. To illustrate the idea, as long as B contains {θ i} k i=1 in its column span, there exists A i such that B A i = θ i, which is optimal for equation 3 with minimum error. But this means no transfer among any two tasks. 
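A quick numerical illustration of Proposition 1, assuming noiseless linear tasks: when the shared module's capacity r is at least the number of tasks k (here r = k), stacking the single-task solutions θ_i as the columns of B and letting A_i select the i-th column attains zero training error for every task, so nothing is shared between tasks.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, m = 3, 20, 50
thetas = [rng.normal(size=d) for _ in range(k)]
tasks = [(X := rng.normal(size=(m, d)), X @ th) for th in thetas]  # y_i = X_i theta_i

B = np.column_stack(thetas)          # d x k: columns span all STL solutions
for i, (X, y) in enumerate(tasks):
    A_i = np.eye(k)[i]               # output module selects the i-th column
    err = np.linalg.norm(X @ (B @ A_i) - y)
    print(f"task {i}: training error = {err:.2e}")  # ~0 for every task
```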
This can hurt generalization if a task has limited data, in which case its STL solution overfits training data, whereas the MTL solution can leverage other tasks' data to improve generalization. The proof of Proposition 1 and its extension to ReLU settings are in Appendix B.1. Figure 3: Performance improvement of a target task (Task 1) by MTL with a source task vs. STL. Red: positive transfer when the source is Task 2, which has the same covariance matrix with target. Green: negative (to positive) transfer when the source is Task 3, which has a different covariance from the target, as its # of samples increases. See the example below for the definition of each task. To show how to quantify task data similarity, we illustrate with two regression tasks under the linear model without noise: y 1 = X 1 θ 1 and y 2 = X 2 θ 2. By Section 2.2, it is necessary to limit the capacity of the shared module to enforce information transfer. Therefore, we consider the case of r = 1. Hence, the shared module B is now a d-dimensional vector, and A 1, A 2 are both scalars. A natural requirement of task similarity is for the STL models to be similar, i.e. |cos(θ 1, θ 2)| to be large. To see this, the optimal STL model for task 1 is (X 1 X 1) −1 X 1 y 1 = θ 1. Hence if |cos(θ 1, θ 2)| is 1, then tasks 1 and 2 can share a model B ∈ R d which is either θ 1 or −θ 1. The scalar A 1 and A 2 can then transform B to be equal to θ 1 and θ 2. Is this requirement sufficient? Recall that in equation 3, the task data X 1 and X 2 are both multiplied by B. If they are poorly "aligned" geometrically, the performance could suffer. How do we formalize the geometry between task alignment? In the following, we show that the covariance matrices of X 1 and X 2, which we define to be X 1 X 1 and X 2 X 2, captures the geometry. We fix |cos(θ 1, θ 2)| to be close to 1 to examine the effects of task covariances. In Appendix B.2.1 we fix task covariances to examine the effects of model cosine similarity. Concretely, equation 3 reduces to: where we apply the first-order optimality condition on A 1 and A 2 and simplify the equation. Specifically, we focus on a scenario where task 1 is the source and task 2 is the target. Our goal is to determine when the source transfers to the target positively or negatively in MTL. Determining the type of transfer from task 2 to task 1 can be done similarly. Answering the question boils down to studying the angle or cosine similarity between the optimum of equation 4 and θ 2. Example. In Figure 3, we show that by varying task covariances and the number of samples, we can observe both positive and negative transfers. The conceptual message is the same as Figure 2; we describe the data generation process in more detail. We use 3 tasks and measure the type of transfer from the source to the target. The x-axis is the number of data samples from the source. The y-axis is the target's performance improvement measured on its validation set between MTL minus STL. Data generation. We have |cos(θ 1, θ 2)| ≈ 1 (say 0.96). For i ∈ {1, 2, 3}, let R i ⊆ R mi×d denote a random Gaussian matrix drawn from N. Let S 1, S 2 ⊆ {1, 2, . . ., d} be two disjoint sets of size d/10. For i = 1, 2, let D i be a diagonal matrix whose entries are equal to a large value κ (e.g. κ = 100) for coordinates in S i and 1 otherwise. Let Q i ⊆ R d×d denote an orthonormal matrix, i.e. Q i Q i is equal to the identity matrix, orthogonalized from a random Gaussian matrix. Then, we define the 3 tasks as follows. 
(i) Task 1 (target): X 1 = R 1 Q 1 D 1 and y 1 = X 1 θ 1. (ii) Task 2 (source task for red line): X 2 = R 2 Q 1 D 1 and y 2 = X 2 θ 2. (iii) Task 3 (source task for green line): X 3 = R 3 Q 2 D 2 and y 3 = X 3 θ 2. Task 1 and 2 have the same covariance matrices but task 1 and 3 have different covariance matrices. Intuitively, the signals of task 1 and 3 lie in different subspaces, which arise from the difference in the diagonals of D i and the orthonormal matrices. Analysis. Unless the source task has lots of samples to estimate θ 2, which is much more than the samples needed to estimate only the coordinates of S 1, the effect of transferring to the target is small. We observe similar for logistic regression tasks and for ReLU-activated regression tasks. Require: Task embedding layers X1 ∈ R m 1 ×d, X2 ∈ R m 2 ×d,..., X k ∈ R m k ×d, shared module B Parameter: Alignment matrices R1, R2,..., R k ∈ R d×d and output modules A1, A2..., A k ∈ R r 1: Let Zi = XiRi, for 1 ≤ i ≤ k. Consider the following modified loss (with B being fixed): Minimizef by alternatively applying a gradient descent update on Ai and Ri, given a sampled data batch from task i. Other implementation details are described in Appendix C.3. Theory. We rigorously quantify how many data points is needed to guarantee positive transfer. The folklore in MTL is that when a source task has a lot of data but the related target task has limited data, then the source can often transfer positively to the target task. Our previous example shows that by varying the source's number of samples and its covariance, we can observe both types of transfer. How much data do we need from the source to guarantee a positive transfer to the target? We show that this depends on the condition numbers of both tasks' covariances. Theorem 2 (informal). For i = 1, 2, let y i = X i θ i + ε i denote two linear regression tasks with parameters θ i ∈ R d and m i number of samples. Suppose that each row of the source task X 1 is drawn independently from a distribution with covariance Σ 1 ⊆ R d×d and bounded l 2 -norm. Let c = κ(X 2)sin(θ 1, θ 2) and assume that c ≤ 1/3. Denote by (B, A 1, A 2) the optimal MTL solution. With high probability, when m 1 is at least on the order of (κ Recall that for a matrix X, κ(X) denotes its condition number. Theorem 2 quantifies the trend in Figure 3, where the improvements for task 2 reaches the plateau when m 1 becomes large enough. The formal statement, its proof and discussions on the assumptions are deferred to Appendix B.2.2. The ReLU model. We show a similar for the ReLU model, which requires resolving the challenge of analyzing the ReLU function. We use a geometric characterization for the ReLU function under distributional input assumptions by. The is deferred to Appendix B.2.3. Algorithmic consequence. An implication of our theory is a covariance alignment method to improve multi-task training. For the i-th task, we add an alignment matrix R i before its input X i passes through the shared module B. Algorithm 1 shows the procedure. We also propose a metric called covariance similarity score to measure the similarity between two tasks. Given X 1 ∈ R m1×d and X 2 ∈ R m2×d, we measure their similarity in three steps: (a) The covariance matrix is X 1 X 1. (b) Find the best rank-r 1 approximation to be U 1,r1 D 1,r1 U 1,r1, where r 1 is chosen to contain 99% of the singular values. 
(c) Apply step (a),(b) to X 2, compute the score, given in equation 6. The nice property of the score is that it is invariant to rotations of the columns of X 1 and X 2. 2.4 OPTIMIZATION SCHEME Lastly, we consider the effect of re-weighting the tasks (or their losses in equation 2). When does reweighting the tasks help? In this part, we show a use case for improving the robustness of multi-task training in the presence of label noise. Settings involving label noise can arise when some tasks only have weakly-supervised labels, which has been studied before in the literature. We start by describing a motivating example. Consider two tasks where task 1 is y 1 = Xθ and task 2 is y 2 = Xθ + ε 2. If we train the two tasks together, the error ε 2 will add noise to the trained model. However, by up-weighting task 1, we reduce the noise from task 2 and get better performance. To rigorously study the effect of task weights, we consider a setting where all the tasks have the same data but different labels. This setting arises for example in multi-label image tasks. We derive the optimal solution in the linear model. Proposition 3. Let the shared module have capacity r ≤ k, and consider k tasks with the same covariates. Let X be full rank and U DV be its SVD. Let Q r Q r be the best rank-r approximation to U ZZ U, where Z = [√α 1 y 1, . . ., √α k y k] stacks the reweighted labels column-wise, and let B ∈ R d×r be an optimal solution for the re-weighted loss. Then the column span of B is equal to the column span of (X X) −1 V DQ r. We can also extend Proposition 3 to show that all local minima of equation 3 are global minima in the linear setting. We leave the proof to Appendix B.3. We remark that this does not extend to the non-linear ReLU setting and leave this for future work. Based on Proposition 3, we provide a rigorous proof of the previous example: supposing that X is full rank, as we increase α 1, cos(B, θ) increases closer to 1. Algorithm 2 An SVD-based task reweighting scheme Input: k tasks: (X, y i) ∈ (R m×d, R m); a rank parameter r ∈ {1, 2, . . ., k} Output: A weight vector: {α 1, α 2, . . ., α k} 1: Let θ i = X y i. 2: U r, D r, V r = SVD r (θ 1, θ 2, . . ., θ k), i.e. the best rank-r approximation to the θ i's. Algorithmic consequence. Inspired by our theory, we describe a re-weighting scheme in the presence of label noise. We compute the per-task weights by computing the SVD over X y i, for 1 ≤ i ≤ k. The intuition is that if the label vector y i of a task is noisy, then its alignment with the shared principal directions is small. Therefore, we would like to design a procedure that removes the noise. The SVD procedure does this, where the weight of a task is calculated by its projection onto the principal r directions. See Algorithm 2 for the description. We describe connections between our theoretical results and practical problems of interest. We show three claims on real-world datasets. (i) The shared MTL module performs best when its capacity is smaller than the total capacities of the single-task models. (ii) Our proposed covariance alignment method improves multi-task training in a variety of settings, including the GLUE benchmark and six sentiment analysis tasks. Our method can be naturally extended to transfer learning settings, and we validate this as well. (iii) Our SVD-based reweighting scheme is more robust than the standard unweighted scheme on multi-label image classification tasks in the presence of label noise. Datasets and models. We describe the datasets and models we use in the experiments.
GLUE: GLUE is a natural language understanding dataset including question answering, sentiment analysis, text similarity and textual entailment problems. We choose BERT LARGE as our model, which is a 24-layer Transformer network. Sentiment Analysis: This dataset includes six tasks: movie review sentiment (MR), sentence subjectivity (SUBJ), customer reviews polarity (CR), question type (TREC), opinion polarity (MPQA), and the Stanford sentiment treebank (SST) tasks. For each task, the goal is to categorize sentiment opinions expressed in the text. We use an embedding layer followed by an LSTM layer proposed in prior work. We use the GloVe embeddings (http://nlp.stanford.edu/data/wordvecs/glove.6B.zip). ChestX-ray14: This dataset contains 112,120 frontal-view X-ray images and each image has up to 14 diseases. This is a 14-task multi-label image classification problem. We use the CheXNet model, a 121-layer convolutional neural network, on all tasks. For all models, we share the main module across all tasks (BERT LARGE for GLUE, LSTM for sentiment analysis, CheXNet for ChestX-ray14) and assign a separate regression or classification layer on top of the shared module for each task. Comparison methods. For the experiment on multi-task training, we compare training with Algorithm 1 against training without it. Specifically, we apply the alignment procedure on the task embedding layers. See Figure 4 for an illustration, where E i denotes the embedding of task i, R i denotes its alignment module and Z i = E i R i is the rotated embedding. For transfer learning, we first train an STL model on the source task by tuning its model capacity (e.g. the output dimension of the LSTM layer). Then, we fine-tune the STL model on the target task for 5-10 epochs. To apply Algorithm 1, we add an alignment module for the target task during fine-tuning. Figure 4: Illustration of the covariance alignment module on task embeddings. For the experiment on reweighted schemes, we first compute the per-task weights as described in Algorithm 2. Then, we reweight the loss function as in equation 2. We compare with an existing reweighting technique from the literature. Informally, the latter uses the Gaussian likelihood to model classification outputs. The weights, defined as inversely proportional to the variances of the Gaussian, are optimized during training. We also compare with the unweighted loss (cf. equation 1) as a baseline. Metric. We measure performance on the GLUE benchmark using a standard metric called the GLUE score, which contains accuracy and correlation scores for each task. For the sentiment analysis tasks, we measure the accuracy of predicting the sentiment opinion. For the image classification task, we measure the area under the curve (AUC) score. We run five different random seeds and report the average results. The result of an MTL experiment is averaged over the results of all the tasks, unless specified otherwise. For the training procedures and other details on the setup, we refer the reader to Appendix C. We present use cases of our methods on open-source datasets. We expected to see improvements via our methods in multi-task and other settings, and indeed we saw such gains across a variety of tasks. Improving multi-task training. We apply Algorithm 1 on five tasks (CoLA, MRPC, QNLI, RTE, SST-2) from the GLUE benchmark using a state-of-the-art language model BERT LARGE. We compare the average performance over all five tasks and find that our method outperforms BERT LARGE by 2.35% average GLUE score for the five tasks.
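As a concrete reading of Algorithm 1 (the covariance alignment method of Section 2.3), the sketch below inserts a trainable alignment matrix R_i in front of the shared module for each task and updates (R_i, A_i) on that task's batches while B stays fixed, matching the alternating procedure described in the pseudocode. Initializing R_i to the identity is our assumption, not stated in the extract.

```python
import torch
import torch.nn as nn

def train_with_alignment(B, tasks, d, r, steps=1000, lr=1e-2):
    """B: frozen shared module tensor of shape (d, r).
    tasks: list of (X_i, y_i) tensors, X_i of shape (m_i, d)."""
    k = len(tasks)
    R = [nn.Parameter(torch.eye(d)) for _ in range(k)]    # alignment matrices
    A = [nn.Parameter(torch.zeros(r)) for _ in range(k)]  # output modules
    opt = torch.optim.SGD([*R, *A], lr=lr)
    for step in range(steps):
        i = step % k                                # cycle over task batches
        X, y = tasks[i]
        # Z_i = X_i R_i is passed through the (fixed) shared module B,
        # then the activation g (here ReLU) and the output module A_i.
        pred = torch.relu((X @ R[i]) @ B) @ A[i]
        loss = nn.functional.mse_loss(pred, y)
        opt.zero_grad(); loss.backward(); opt.step()
    return R, A
```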
For the particular setting of training two tasks, our method outperforms BERT LARGE on 7 of the 10 task pairs. See Figure 5a for the results. Improving transfer learning. While our study has focused on multi-task learning, transfer learning is a naturally related goal -and we find that our method is also useful in this case. We validate this by training an LSTM on sentiment analysis. Figure 5b shows the results with SST being the source task and the rest being the target tasks. Algorithm 1 improves accuracy on four tasks by up to 2.5%. Reweighting training for the same task covariates. We evaluate Algorithm 2 on the ChestX-ray14 dataset. This setting satisfies the assumption of Algorithm 2, which requires different tasks to have the same input data. Across all 14 tasks, we find that our reweighting method improves the existing reweighting technique by 1.3% AUC score. Compared to training with the unweighted loss, our method improves performance by 5.6% AUC score over all tasks. Model capacity. We verify our hypothesis that the capacity of the MTL model should not exceed the total capacities of the STL models. We show this on an LSTM model with the sentiment analysis tasks. Recall that the capacity of an LSTM model is its output dimension (before the last classification layer). First, we train an MTL model with all tasks and vary the shared module's capacity to find the optimal setting (from 5 to 500). Then, we train an STL model for each task and find the optimal setting similarly. In Figure 6, we find that the performance of MTL peaks when the shared module has capacity 100. This is much smaller than the total capacities of all the STL models. This result confirms that constraining the shared module's capacity is crucial to achieve the ideal performance. Extended results on CNN/MLP to support our hypothesis are shown in Appendix C.5. Task covariance. We apply our metric of task covariance similarity score from Section 2.3 to provide an in-depth study of the covariance alignment method. The hypothesis is that: (a) aligning the covariances helps, which we have shown in Figure 5a; (b) the similarity score between two tasks increases after applying the alignment. We verify the hypothesis on the sentiment analysis tasks. We use the single-task model's embedding before the LSTM layer to compute the covariance. First, we measure the similarity score using equation 6 between all six single-task models. Then, for each task pair, we train an MTL model using Algorithm 1. We measure the similarity score on the trained MTL model. Our results confirm the hypothesis (Figure 7): (a) we observe increased accuracy on 13 of 15 task pairs by up to 4.1%; (b) the similarity score increases for all 15 task pairs. Optimization scheme. We verify the robustness of Algorithm 2. After selecting two tasks from the ChestX-ray14 dataset, we test our method by assigning random labels to 20% of the data on one task. On 20 randomly selected pairs, our method improves over the unweighted scheme by an average 2.4% AUC score and over the existing reweighting technique by an average 0.5% AUC score. There has been a large body of recent work on using the multi-task learning approach to train deep neural networks. Of particular relevance to this work are those that study the theory of multi-task learning. Earlier works are among the first to formally study the importance of task relatedness for learning multiple tasks; see also the follow-up work. In this work, we studied the theory of multi-task learning in linear and ReLU-activated settings.
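A sketch of Algorithm 2 (the SVD-based task reweighting scheme evaluated above). Steps 1-2 follow the pseudocode in Section 2.4; the final weighting step is only described in prose in this extract, so computing α_i as the norm of θ_i's projection onto the top-r left singular directions (normalized across tasks) is our reading, not necessarily the paper's exact formula.

```python
import numpy as np

def svd_task_weights(X, ys, r):
    """X: (m, d) shared covariates; ys: list of k label vectors; r <= k."""
    thetas = np.stack([X.T @ y for y in ys], axis=1)   # d x k, theta_i = X^T y_i
    U, _, _ = np.linalg.svd(thetas, full_matrices=False)
    U_r = U[:, :r]                                     # principal r directions
    # Assumed weighting: how much of theta_i survives projection onto U_r.
    # A noisy task's theta aligns poorly with the shared principal
    # directions, so it receives a smaller weight in the reweighted loss.
    alphas = np.array([np.linalg.norm(U_r.T @ thetas[:, i]) /
                       (np.linalg.norm(thetas[:, i]) + 1e-12)
                       for i in range(thetas.shape[1])])
    return alphas / alphas.sum()                       # assumed normalization
```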
We verified our theory and its practical implications through extensive synthetic and real world experiments. Our work opens up many interesting future questions. First, could we extend the guarantees for choosing optimization schemes to non-linear settings? Second, a limitation of our SVD-based optimization scheduler is that it only applies to settings with the same data. Could we extend the method for heterogeneous task data? More broadly, we hope our work inspires further studies to better understand multi-task learning in neural networks and to guide its practice. Hard parameter sharing vs soft parameter sharing. The architecture that we study in this work is also known as the hard parameter sharing architecture. There is another kind of architecture called soft parameter sharing. The idea is that each task has its own parameters and modules. The relationships between these parameters are regularized in order to encourage the parameters to be similar. Other architectures that have been studied before include the work of , where the authors explore trainable architectures for convolutional neural networks. Domain adaptation. Another closely related line of work is on domain adaptation. The acute reader may notice the similarity between our study in Section 2.3 and domain adaptation. The crucial difference here is that we are minimizing the multi-task learning objective, whereas in domain adaptation the objective is typically to minimize the objective on the target task. See Ben We fill in the missing details left from Section 2. In Section B.1, we provide rigorous arguments regarding the capacity of the shared module. In Section B.2, we fill in the details left from Section 2.3, including the proof of Theorem 2 and its extension to the ReLU model. In Section B.3, we provide the proof of Proposition 3 on the task reweighting schemes. We first describe the notations. Notations. We define the notations to be used later on. We denote f (x) g(x) if there exists an absolute constant Suppose A ∈ R m×n, then λ max (A) denotes its largest singular value and λ min (A) denotes its min{m, n}-th largest singular value. Alternatively, we have λ min (A) = min x: x =1 Ax. Let κ(A) = λ max (A)/λ min (A) denote the condition number of A. Let Id denotes the identity matrix. Let U † denote the Moore-Penrose pseudo-inverse of the matrix U. Let · denote the Euclidean norm for vectors and spectral norm for matrices. Let · F denote the Frobenius norm of a matrix. Let A, B, = Tr(A B) denote the inner product of two matrices. The sine function is define as sin(u, v) = 1 − cos(u, v) 2, where we assume that sin(u, v) ≥ 0 which is without loss of generality for our study. We describe the full detail to show that our model setup captures the phenomenon that the shared module should be smaller than the sum of capacities of the single-task models. We state the following proposition which shows that the quality of the subspace B in equation 1 determines the performance of multi-task learning. This supplements the of Proposition 1. Proposition 4. In the optimum of f (·) (equation 1), each A i selects the vector v within the column span of g B (X i) to minimize L(v, y i). As a corollary, in the linear setting, the optimal B can be achieved at a rotation matrix B ⊆ R d×r by maximizing Furthermore, any B which contains {θ i} k i=1 in its column subspace is optimal. In particular, for such a B, there exists {A i} so that B A i = θ i for all 1 ≤ i ≤ k. Proof. 
Recall the MTL objective in the linear setting from equation 3 as follows: Note that the linear layer A i can pick any combination within the subspace of B. Therefore, we could assume without loss of generality that B is a rotation matrix. i.e. B B = Id. After fixing B, since objective f (·) is linear in A i for all i, by the local optimality condition, we obtain that Replacing the solution of A i to f (·), we obtain an objective over B. Next, note that where we used the fact that The above on linear regression suggests the intuition that optimizing an MTL model reduces to optimizing over the span of B. The intuition can be easily extended to linear classification tasks as well as mixtures of regression and classification tasks. Extension to the ReLU setting. If the shared module's capacity is larger than the total capacities of the STL models, then we can put all the STL model parameters into the shared module. As in the linear setting, the final output layer A i can pick out the optimal parameter for the i-th task. This remains an optimal solution to the MTL problem in the ReLU setting. Furthermore, there is no transfer between any two tasks through the shared module. We consider the effect of varying the cosine similarity between single task models in multi-task learning. We first describe the following proposition to solve the multi-task learning objective when the covariances of the task data are the same. The idea is similar to the work of and we adapt it here for our study. where C C is the best rank-r approximation subspace of As a corollary, denote by λ 1, λ 2,..., λ k as the singular values of Proof. Note that B is obtained by maximizing Clearly, there is a one to one mapping between B and C. And we have B = V D −1 C. Hence the above is equivalent to maximizing over C ⊆ R d×r with Note that C(C C) −1 C is a projection matrix onto a subspace of dimension r. Hence the maximum (denote by C) is attained at the best rank-r approximation subspace of To illustrate the above proposition, consider a simple setting where X i is identity for every 1 ≤ i ≤ k, and y i = e i, i.e. the i-th basis vector. Note that the optimal solution for the i-th task is (X i X i) −1 X i y i = y i. Hence the optimal solutions are orthogonal to each other for all the tasks, with λ i = 1 for all 1 ≤ i ≤ k. And the minimum STL error is zero for all tasks. Consider the MTL model with hidden dimension r. By Proposition 5, the minimum MTL error is achieved by the best rank-r approximation subspace to Denote the optimum as B r. The MTL error is: Different data covariance. We provide upper bounds on the quality of MTL solutions for different data covariance, which depend on the relatedness of all the tasks. The following procedure gives the precise statement. Consider k regression tasks with data {( † X i y i denote the optimal solution of each regression task. Let W ⊆ R d×k denote the matrix where the i-th column is equal to θ i . Consider the following procedure for orthogonalizing W for 1 ≤ i ≤ k. Step a). Proposition 6. Suppose that r ≤ d. Let B denote the optimal MTL solution of capacity r in the shared module. Denote by Proof. It suffices to show that OP T is equal to k i=1 λ i. The then follows since h(B) is less than the error given by W 1,..., W k, which is equal to OP T − d i=r+1 λ i. We fill in the proof of Theorem 2. First, we restate the rigorously as follows. Theorem 2. For i = 1, 2, let (X i, y i) ∈ (R mi×d, R mi) denote two linear regression tasks with parameters θ i ∈ R d. 
Suppose that each row of X 1 is drawn independently from a distribution with covariance Σ 1 ⊆ R d×d and bounded l 2 -norm √ L. Assume that θ 1 Σ 1 θ 1 = 1 w.l.o.g. Let c ∈ [κ(X 2) sin(θ 1, θ 2), 1/3] denote the desired error margin. Denote by (B, A 1, A 2) the optimal MTL solution. With probability 1 − δ over the randomness of (X 1, y 1), when we have that B A 2 − θ 2 / θ 2 ≤ 6c + 1 1−3c ε 2 / X 2 θ 2. We make several remarks to provide more insight on Theorem 2. • Theorem 2 guarantees positive transfers in MTL, when the source and target models are close and the number of source samples is large. While the intuition is folklore in MTL, we provide a formal justification in the linear and ReLU models to quantify the phenomenon. • The error bound decreases with c, hence the smaller c is the better. On the other hand, the required number of data points m 1 increases. Hence there is a trade-off between accuracy and the amount of data. • c is assumed to be at most 1/3. This assumption arises when we deal with the label noise of task 2. If there is no noise for task 2, then this assumption is not needed. If there is noise for task 2, this assumption is satisfied when sin(θ 1, θ 2) is less than 1/(3κ(X 2)). In synthetic experiments, we observe that the dependence on κ(X 2) and sin(θ 1, θ 2) both arise in the performance of task 2, cf. Figure 3 and Figure 8, respectively. The proof of Theorem 2 consists of two steps. a) We show that the angle between B and θ 1 will be small. Once this is established, we get a bound on the angle between B and θ 2 via the triangle inequality. b) We bound the distance between B A 2 and θ 2. The distance consists of two parts. One part comes from B, i.e. the angle between B and θ 2. The second part comes from A 2, i.e. the estimation error of the norm of θ 2, which involves the signal to noise ratio of task two. We first show the following geometric fact, which will be used later in the proof. Fact 7. Let a, b ∈ R d denote two unit vectors. Suppose that X ∈ R m×d has full column rank with condition number denoted by κ = κ(X). Then we have Proof. Let X = U DV be the SVD of X. Since X has full column rank by assumption, we have X X = XX = Id. Clearly, we have sin(Xa, Xb) = sin(DV a, DV b). Denote by a = V a and b = V b. We also have that a and b are both unit vectors, and sin(a, b) = sin(a, b). Let λ 1,..., λ d denote the singular values of X. Then, This concludes the proof. We first show the following Lemma, which bounds the angle between B and θ 2. Lemma 8. In the setting of Theorem 2, with probability 1 − δ over the randomness of task one, we have that |sin(B, θ 2)| ≤ sin(θ 1, θ 2) + c/κ(X 2). Proof. We note that h(B) ≥ y 1 2 by the optimality of B. Furthermore, X2B X2B, y 2 ≤ y 2 2. Hence we obtain that For the left hand side, Note that the second term is a chi-squared random variable with expectation σ 2 1. Hence it is bounded by σ 2 1 log 1 δ with probability at least 1 − δ. Similarly, the third term is bounded by 2 X 1 θ 1 σ 1 log 1 δ with probability 1 − δ. Therefore, we obtain the following: Therefore, By matrix Bernstein inequality (see e.g.), when m 1 ≥ 10 Σ 1 log d δ /λ 2 min (Σ 1), we have that: Hence we obtain that κ 2 (X 1) ≤ 3κ(Σ 1) and X 1 θ 1 2 ≥ m 1 · θ 1 Σ 1 θ 1 /2 ≥ m 1 /2 (where we assumed that θ 1 Σ 1 θ 1 = 1). Therefore, which is at most c 2 /κ 2 (X 2) by our setting of m 1. Therefore, the follows by triangle inequality (noting that both c and sin(θ 1, θ 2) are less than 1/2). Based on the above Lemma, we are now to ready to prove Theorem 2. 
Proof of Theorem 2. Note that in the MTL model, after obtaining B, we then solve the linear layer for each task. For task 2, this gives weight value A 2:= X 2θ, y 2 / X 2θ 2. Thus the regression coefficients for task 2 is B A 2. For the rest of the proof, we focus on bounding the distance between B A 2 and θ 2. By triangle inequality, Note that the second term of equation 8 is equal to The first term of equation 8 is bounded by. Lastly, we have that By Lemma 8, we have Therefore, we conclude that equation 9 is at most Thus equation 8 is at most the following. Hence we obtain the desired estimation error of BA 2. In this part, we extend Theorem 2 to the ReLU model. Note that the problem is reduced to the following objective. We make a crucial assumption that task 1's input X 1 follows the Gaussian distribution. Note that making distributional assumptions is necessary because for worst-case inputs, even optimizing a single ReLU function under the squared loss is NP-hard . We state our formally as follows. Theorem 9. Let (X 1, y 1) ∈ (R m1×d, R m1) and (X 2, y 2) ∈ (R m2×d, R m2) denote two tasks. Suppose that each row of X 1 is drawn from the standard Gaussian distribution. And y i = a i · ReLU(X i θ i) + ε i are generated via the ReLU model with 2 j = 1 for every 1 ≤ j ≤ m 1 without loss of generality, and let σ 2 1 denote the variance of every entry of ε 1. Suppose that c ≥ sin(θ 1, θ 2)/κ(X 2). Denote by (B, A 1, A 2) the optimal MTL solution of equation 10. With probability 1 − δ over the randomness of (X 1, y 1), when we have that the estimation error is at most: Proof. The proof follows a similar structure to that of Theorem 2. Without loss of generality, we can assume that θ 1, θ 2 are both unit vectors. We first bound the angle between B and θ 1. By the optimality of B, we have that: From this we obtain: Note that each entry of ReLU(X 1 θ 1) is a truncated Gaussian random variable. By the Hoeffding bound, with probability 1 − δ we have As for ReLU(X 1 B), ReLU(X 1 θ 1), we will use an epsilon-net argument over B to show the concentration. For a fixed B, we note that this is a sum of independent random variables that are all bounded within O(log m1 δ) with probability 1 − δ. Denote by φ the angle between B and θ 1, a standard geometric fact states that (see e.g. Lemma 1 of) for a random Gaussian vector Therefore, by applying Bernstein's inequality and union bound, with probability 1 − η we have: By standard arguments, there exists a set of d O(d) unit vectors S such that for any other unit vector and take union bound over all unit vectors in S, we have that there existsû ∈ S satisfying B −û ≤ min(1/d 3, c 2 /κ 2 (X 2)) and the following: where φ is the angle betweenû and θ 1. Note that Together we have shown that Combined with equation 11, by our setting of m 1, it is not hard to show that Overall, we conclude that For the estimation of a 2, we have Similarly, we can show that the second part is at most O(c). Therefore, the proof is complete. In this part, we present the proof of Proposition 3. In fact, we present a more refined , by showing that all local minima are global minima for the reweighted loss in the linear case. The key is to reduce the MTL objective f (·) to low rank matrix approximation, and apply recent by which show that there is no spurious local minima for the latter problem. Lemma 10. Assume that X i X i = α i Σ with α i > 0 for all 1 ≤ i ≤ k. Then all the local minima of f (A 1, . . ., A k ; B) are global minima of equation 3. Proof. 
We first transform the problem from the space of B to the space of C. Note that this is without loss of generality, since there is a one to one mapping between B and C with C = DV B. In this case, the corresponding objective becomes the following. The latter expression is a constant. Hence it does not affect the optimization solution. For the former, denote by A ∈ R r×k as stacking the √ α i A i's together column-wise. Similarly, denote by Z ∈ R d×k as stacking √ α i U i y i together column-wise. Then minimizing g(·) reduces solving low rank matrix approximation: CA − Z times the best rank-r approximation to α i U y i y i U, where we denote the SVD of X as U DV. Denote by Q r Q r as the best rank-r approximation to U ZZ U, where we denote by Z = [√ α 1 y 1, √ α 2 y 2, . . ., √ α k y k] as stacking the k vectors to a d by k matrix. Hence the of Proposition 5 shows that the optimal solution B is V D −1 Q r, which is equal to (X X) −1 XQ r. By Proposition 4, the optimality of B is the same up to transformations on the column space. Hence the proof is complete. To show that all local minima are also equal to (X X) −1 XQ r, we can simply apply Lemma 10 and Proposition 3. Remark. This only applies to the linear model and does not work on ReLU models. The question of characterizing the optimization landscape in non-linear ReLU models is not well-understood based on the current theoretical understanding of neural networks. We leave this for future work. We fill in the details left from our experimental section. In Appendix C.1, we review the datasets used in our experiments. In Appendix C.2, we describe the models we use on each dataset. In Appendix C.3, we describe the training procedures for all experiments. In Appendix C.4 and Appendix C.5, we show extended synthetic and real world experiments to support our claims. We describe the synthetic settings and the datasets Sentiment Analysis, General Language Understanding Evaluation (GLUE) benchmark, and ChestX-ray14 used in the experiments. Synthetic settings. For the synthetic experiments, we draw 10,000 random data samples with dimension d = 100 from the standard Gaussian N and calculate the corresponding labels based on the model described in experiment. We split the data samples into training and validation sets with 9,000 and 1,000 samples in each. For classification tasks, we generate the labels by applying a sigmoid function and then thresholding the value to binary labels at 0.5. For ReLU regression tasks, we apply the ReLU activation function on the real-valued labels. The number of data samples used in the experiments varies depending on the specification. Specifically, for the task covariance experiment of Figure 3, we fix task 1's data with m 1 = 9, 000 training data and vary task 2's data under three settings: (i) same rotation Sentiment analysis. For the sentiment analysis task, the goal is to understand the sentiment opinions expressed in the text based on the context provided. This is a popular text classification task which is usually formulated as a multi-label classification task over different ratings such as positive (+1), negative (-1), or neutral. We use six sentiment analysis benchmarks in our experiments: • Movie review sentiment (MR): In the MR dataset , each movie review consists of a single sentence. The goal is to detect positive vs. negative reviews. • Sentence subjectivity (SUBJ): The SUBJ dataset is proposed in and the goal is to classify whether a given sentence is subjective or objective. 
• Customer reviews polarity (CR): The CR dataset provides customer reviews of various products. The goal is to categorize positive and negative reviews. • Question type (TREC): The TREC dataset is collected by. The aim is to classify a question into 6 question types. • Opinion polarity (MPQA): The MPQA dataset detects whether an opinion is polarized or not . • Stanford sentiment treebank (SST): The SST dataset, created by , is an extension of the MR dataset. The General Language Understanding Evaluation (GLUE) benchmark. GLUE is a collection of NLP tasks including question answering, sentiment analysis, text similarity and textual entailment problems. The GLUE benchmark is a state-of-the-art MTL benchmark for both academia and industry. We select five representative tasks including CoLA, MRPC, QNLI, RTE, and SST-2 to validate our proposed method. We emphasize that the goal of this work is not to come up with a state-of-the-art but rather to provide insights into the working of multi-task learning. It is conceivable that our can be extended to the entire dataset as well. This is left for future work. More details about the GLUE benchmark can be found in the original paper (Wang et al. (2018a) ). ChestX-ray14. The ChestX-ray14 dataset is the largest publicly available chest X-ray dataset. It contains 112,120 frontal-view X-ray images of 30,805 unique patients. Each image contains up to 14 different thoracic pathology labels using automatic extraction methods on radiology reports. This can be formulated as a 14-task multi-label image classification problem. The ChestX-ray14 dataset is a representative dataset in the medical imaging domain as well as in computer vision. We use this dataset to examine our proposed task reweighting scheme since it satisfies the assumption that all tasks have the same input data but different labels. Synthetic settings. For the synthetic experiments, we use the linear regression model, the logistic regression model and a one-layer neural network with the ReLU activation function. Sentiment analysis. For the sentiment analysis experiments, we consider three different models including multi-layer perceptron (MLP), LSTM, CNN: • For the MLP model, we average the word embeddings of a sentence and feed the into a two layer perceptron, followed by a classification layer. • For the LSTM model, we use the standard one-layer single direction LSTM as proposed by , followed by a classification layer. • For the CNN model, we use the model proposed by which uses one convolutional layer with multiple filters, followed by a ReLU layer, max-pooling layer, and classification layer. We follow the protocol of and set the filter size as {3, 4, 5}. We use the pre-trained GLoVe embeddings trained on Wikipedia 2014 and Gigaword 5 corpora 2. We fine-tune the entire model in our experiments. In the multi-task learning setting, the shared modules include the embedding layer and the feature extraction layer (i.e. the MLP, LSTM, or CNN model). Each task has its separate output module. For the experiments on the GLUE benchmark, we use a state-of-the-art language model called BERT . For each task, we add a classification/regression layer on top it as our model. For all the experiments, we use the BERT LARGE uncased model, which is a 24 layer network as described in. For the multi-task learning setting, we follow the work of Liu et al. (2019a) and use BERT LARGE as the shared module. ChestX-ray14. 
For the experiments on the ChestX-ray14 dataset, we use the DenseNet model proposed by as the shared module, which is a 121 layer network. For each task, we use a separate classification output layer. We use the pre-trained model 3 in our experiments. In this subsection, we describe the training procedures for our experiments. Mini-batch SGD. We describe the details of task data sampling in our SGD implementation. • For tasks with different features such as GLUE, we first divide each task data into small batches. Then, we mix all the batches from all tasks and shuffle randomly. During every epoch, a SGD step is applied on every batch over the corresponding task. If the current batch is for task i, then the SGD is applied on A i, and possibly R i or B depending on the setup. The other parameters for other tasks are fixed. • For tasks with the same features such as ChestX-ray14, the SGD is applied on all the tasks jointly to update all the A i's and B together. For classification tasks, we use accuracy as the metric. We report the average model performance over two tasks. The x-axis denotes the cosine distance, i.e. 1 − cos(θ 1, θ 2). Synthetic settings. For the synthetic experiments, we do a grid search over the learning rate from {1e − 4, 1e − 3, 1e − 2, 1e − 1} and the number of epochs from {10, 20, 30, 40, 50}. We pick the best for all the experiments. We choose the learning rate to be 1e − 3, the number of epochs to be 30, and the batch size to be 50. For regression task, we report the Spearman's correlation score For classification task, we report the classification accuracy. Sentiment analysis. For the sentiment analysis experiments, we randomly split the data into training, dev and test sets with percentages 80%, 10%, and 10% respectively. We follow the protocol of to set up our model for the sentiment analysis experiments. The default hidden dimension of the model (e.g. LSTM) is set to be 200, but we vary this parameter for the model capacity experiments. We report the accuracy score on the test set as the performance metric. GLUE. For the GLUE experiments, the training procedure is used on the alignment modules and the output modules. Due to the complexity of the BERT LARGE module, which involves 24 layers of non-linear transformations, we fix the BERT LARGE module during the training process to examine the effect of adding the alignment modules to the training process. In general, even after fine-tuning the BERT LARGE module on a set of tasks, it is always possible to add our alignment modules and apply Algorithm 1. For the training parameters, we apply grid search to tune the learning rate from {2e−5, 3e−5, 1e−5} and the number of epochs from {2, 3, 5, 10}. We choose the learning rate to be 2e−5, the number of epochs to be 5, and with batch size 16 for all the experiments. We use the GLUE evaluation metric (cf. Wang et al. (2018b) ) and report the scores on the development set as the performance metric. ChestX-ray14. For the ChestX-ray14 experiments, we use the configuration suggested by and report the AUC score on the test set after fine-tuning the model for 20 epochs. Varying cosine similarity on linear and ReLU models. We demonstrate the effect of cosine similarity in synthetic settings for both regression and classification tasks. Synthetic tasks. We start with linear settings. We generate 20 synthetic task datasets (either for regression tasks, or classification tasks) based on data generation procedure and vary the task similarity between task 1 and task i. 
We run the experiment with a different dataset pairs (dataset 1 and dataset i). We compare the performance gap between MTL and STL model. Figure 8a and Figure 8a, we find that for both regression and classification settings, with the larger task similarity the MTL outperforms more than STL model and the negative transfer could occur if the task similarity is too small. (b) Classification tasks with non-linearity Figure 9: The performance improvement on the target task (MTL minus STL) by varying the cosine similarity of the two tasks' STL models. We observe that higher similarity between the STL models leads to better improvement on the target task. ReLU settings. We also consider a ReLU-activated model. We use the same setup as the linear setting, but apply a ReLU activation to generate the data. Similar are shown in Figure 8c, 8d. Higher rank regimes for ReLU settings. We provide further validation of our on ReLUactivated models. Synthetic tasks. In this synthetic experiment, there are two sets of model parameters Θ 1 ⊆ R d×r and Θ 2 ⊆ R d×r (d = 100 and r = 10). Θ 1 is a fixed random rotation matrix and there are m 1 = 100 data points for task 1. Task 2's model parameter is Θ 2 = αΘ 1 + (1 − α)Θ, where Θ is also a fixed rotation matrix that is orthogonal to Θ 1. Note that α is the cosine value/similarity of the principal angle between Θ 1 and Θ 2. We then generate X 1 ⊆ R m1×d and X 2 ⊆ R m2×d from Gaussian. For each task, the labels are y i = ReLU(X i Θ i)e + ε i, where e ∈ R r is the all ones vector and ε i is a random Gaussian noise. Given the two tasks, we use MTL with ReLU activations and capacity H = 10 to co-train the two tasks. The goal is to see how different levels of α or similarity affects the transfer from task two to task one. Note that this setting parallels the ReLU setting of Theorem 9 but applies to rank r = 5. Results. In Figure 9 we show that the data size, the cosine similarity between the STL solutions and the alignment of covariances continue to affect the rate of transfer in the new settings. The study shows that our conceptual are applicable to a wide range of settings. Evaluating Algorithm 1 on linear and ReLU-activated models. We consider the synthetic example in Section 2.3 to compare Algorithm 1 and the baseline MTL training. Recall that in the example, when the source and target tasks have different covariance matrices, MTL causes negative transfer on the target task. Our hypothesis in this experiment is to show that Algorithm 1 can correct the misalignment and the negative transfer. Synthetic tasks. We evaluate on both linear and ReLU regression tasks. The linear case follows the example in Section 2.3. For the ReLU case, the data is generated according to the previous example. Results. Figure 10 confirms the hypothesis. We observe that Algorithm 1 corrects the negative transfer in the regime where the source task only has limited amount of data. Furthermore, Algorithm 1 matches the baseline MTL training when the source task has sufficiently many data points. Cross validation for choosing model capacities. We provide an cross validation experiment to indicate how we choose the best performing model capacities in Figure 6. This is done on the six sentiment analysis tasks trained with an LSTM layer. In Figure 11, we vary the model capacities to plot the validation accuracies of the MTL model trained with all six tasks and the STL model for each task. The complements Figure 6 in Section 3.3. Choosing model capacities for CNN and MLP. 
Next we verify our results on model capacities for CNN and MLP models. We select the SST and MR datasets from the sentiment analysis tasks for this experiment. We train all three models (CNN, MLP and LSTM) while varying the capacities. Results. From Figure 12 we observe that, for all three models, the best performing MTL model capacity is less than the total capacity of the best performing STL models. The effect of label noise on Algorithm 2. To evaluate the robustness of Algorithm 2 in the presence of label noise, we conduct the following experiment. First, we select two tasks from the ChestX-ray14 dataset. Then, we randomly pick one task and add 20% noise to its labels by randomly flipping them with probability 0.2. We compare the performance of training both tasks using our reweighting scheme (Algorithm 2) vs. the existing reweighting technique and the unweighted loss scheme. Results. On 20 randomly chosen task pairs, our method improves over the unweighted training scheme by 2.4% AUC score and over the existing reweighting technique by 0.5% AUC score, averaged over the 20 task pairs. Figure 13 shows 5 example task pairs from our evaluation.
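For reference, the 20% label corruption used in this robustness experiment amounts to a simple independent flip of binary labels, e.g.:

```python
import numpy as np

def corrupt_labels(y, p=0.2, seed=0):
    """Flip each binary label in y independently with probability p."""
    rng = np.random.default_rng(seed)
    flip = rng.random(len(y)) < p
    return np.where(flip, 1 - y, y)
```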
A Theoretical Study of Multi-Task Learning with Practical Implications for Improving Multi-Task Training and Transfer Learning
1,160
scitldr
We review three limitations of BLEU and ROUGE – the most popular metrics used to assess hypothesis summaries against reference summaries – come up with criteria for how a good metric should behave, propose concrete ways to assess the performance of a metric in detail, and show the potential of Transformer-based language models to assess hypothesis summaries against reference summaries. Evaluation metrics play a central role in the machine learning community. They direct the efforts of the research community and are used to define the state of the art models. In machine translation and summarization, the two most common metrics used for evaluating similarity between candidate and reference texts are BLEU and ROUGE. Both approaches rely on counting the n-grams in the candidate summary that match n-grams in the reference text. BLEU is precision focused while ROUGE is recall focused. These metrics have serious limitations and have already been criticized by the academic community. In this work, we formulate an empirical criticism of BLEU and ROUGE, establish criteria that a sound evaluation metric should satisfy and propose concrete ways to test any metric against these criteria. We also use recent advances in NLP to design a data-driven metric addressing the weaknesses found in BLEU and ROUGE and scoring high on the criteria for a sound evaluation metric. 2 Related Work 2.1 BLEU, ROUGE and n-gram matching approaches BLEU (Bilingual Evaluation Understudy) and ROUGE (Recall-Oriented Understudy for Gisting Evaluation) have been used to evaluate many NLP tasks for almost two decades. The general acceptance of these methods depends on many factors, including their simplicity and their intuitive interpretability. Yet the main factor is the claim that they highly correlate with human judgement. This has been criticised extensively in the literature and the shortcomings of these methods have been widely studied. Reiter, in his structured review of BLEU, finds a low correlation between BLEU and human judgment. Callison-Burch et al. examine BLEU in the context of machine translation and find that BLEU correlates with human judgment neither on adequacy (whether the hypothesis sentence adequately captures the meaning of the reference sentence) nor on fluency (the quality of language in a sentence). Sulem et al. examine BLEU in the context of text simplification on grammaticality, meaning preservation and simplicity, and report that BLEU has very low or in some cases negative correlation with human judgment. Language modeling has become an important NLP technique thanks to the ability to apply it to various NLP tasks, as explained in Radford et al. There are two leading architectures for language modeling: Recurrent Neural Networks (RNNs) and Transformers. RNNs handle the input tokens, words or characters, one by one through time to learn the relationship between them, whereas Transformers receive a segment of tokens and learn the dependencies between them using an attention mechanism. While BLEU and ROUGE are defined in a discrete space, new evaluation metrics can be defined in a continuous space. BERTscore uses word embeddings and cosine similarity to create a score array and uses greedy matching to maximize the similarity score. Sentence Mover's Similarity uses the mover similarity, i.e. Wasserstein distance, between sentence embeddings generated from averaging the word embeddings in a sentence.
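These failure modes are easy to reproduce. A small demonstration with NLTK's BLEU implementation (the sentences are illustrative; smoothing is added so short sentences get nonzero scores):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1
ref = "the acting was great and the plot was engaging".split()

negated  = "the acting was not great and the plot was engaging".split()
shuffled = "engaging was acting great the plot the was and".split()

# Semantically opposite hypothesis keeps a high BLEU-4 (~0.6-0.7).
print(sentence_bleu([ref], negated, smoothing_function=smooth))

# Unintelligible word salad gets a perfect BLEU-1 (same bag of words).
print(sentence_bleu([ref], shuffled, weights=(1, 0, 0, 0)))
```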
Another proposed evaluation method is RUSE, which embeds both sentences separately and pools them to a given size. A pre-trained MLP is then used to predict on different tasks. This quality-estimator metric is proposed for use in language evaluation. Our proposed methodology is to take neural language evaluation beyond architecture specifications: we propose a framework in which an evaluator's success can be determined. In this part, we discuss three significant limitations of BLEU and ROUGE. These metrics can assign: (i) high scores to semantically opposite translations/summaries; (ii) low scores to semantically related translations/summaries; and (iii) high scores to unintelligible translations/summaries. Suppose that we have a reference summary s1. By adding a few negation terms to s1, one can create a summary s2 which is semantically opposite to s1 yet has a high BLEU/ROUGE score. In addition to being insensitive to negation, BLEU and ROUGE can give low scores to sentences with equivalent meaning. If s2 is a paraphrase of s1, the meaning will be the same; however, the overlap between words in s1 and s2 will not necessarily be significant. A third weakness of BLEU and ROUGE is that in their simplest implementations, they are insensitive to word permutation and can give very high scores to unintelligible sentences. Although higher-order BLEU scores are expected to mitigate this effect, they make the metric more sensitive to paraphrasing. To overcome the previously highlighted challenges and provide a framework by which metrics comparing reference summaries/translations can be assessed and improved, we established first-principles criteria for what a good evaluator should do. The first is that it should be highly correlated with human judgement of semantic similarity. The second is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third is that, given semantically similar s1 and s2, eval(s1, s2) > eval(s1, s2(corrupted)) > eval(s1, s2(more corrupted)), where corruption includes removing words, adding noise to the word order or introducing grammatical mistakes. We will now give a more detailed example of how the scorecard can be implemented. For every dimension of the scorecard, the experiments are done with three metrics: BLEU with equal weights on 1- to 4-grams; ROUGE, averaging ROUGE-1 and ROUGE-2; and a neural evaluator. The evaluator is the RoBERTa large pre-trained model, which we fine-tune to predict sentence similarity (0-5 scale) on the STS-B benchmark dataset (8,628 sentence pairs). The first expectation of a good similarity metric is to correlate highly with human judgment in assessing semantic similarity. Here we assessed BLEU and ROUGE on the STS-B benchmark and compared their performance to a RoBERTa model fine-tuned for semantic similarity (Table 1). Another characteristic of a good metric is to identify the argument, i.e. the core meaning, in a sentence and take it into account when assessing hypothesis text against references. Here we used the MNLI dataset, where for each premise we have three hypothesis texts representing contradiction, neutral and entailment. We expect a good metric to rank entailment higher than neutral, and both of them higher than contradiction. To assess the quality of a metric we propose to use Spearman's ranked correlation, and we also experiment with Kendall's τ.
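The ranking test of the second criterion can be scored as follows, assuming for each premise an expected ordering entailment > neutral > contradiction; Spearman's ρ and Kendall's τ then measure how well a metric's scores respect that ordering (the scores below are hypothetical):

```python
from scipy.stats import spearmanr, kendalltau

# Expected ordering per premise: contradiction < neutral < entailment.
expected = [0, 1, 2]

# Hypothetical similarity scores a metric assigns to the three hypotheses;
# here it wrongly ranks neutral below contradiction.
metric_scores = [0.41, 0.35, 0.62]

rho, _ = spearmanr(expected, metric_scores)
tau, _ = kendalltau(expected, metric_scores)
print(f"Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```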
Here we observe that the RoBERTa model remarkably outperforms BLEU and ROUGE, and that both of these metrics show very little correlation with human judgment. For assessing the third criterion, we start with 3479 sentence pairs from the MNLI dataset that are labelled as entailment. We introduce random corruptions such as random insertion, deletion and grammatical errors as in []. We use two different sets of parameters for different corruption levels, and expect that a good metric would rank the original similar sentence higher than the less corrupted one, and both higher than the more corrupted sentence. Here we also propose to use Spearman's ranked correlation and also experiment with Kendall's τ, and we report the results of both. In this work, we have established a framework to assess metrics comparing the quality of reference and hypothesis summaries/translations. Based on these criteria, we compare evaluators using recent Transformers to BLEU and ROUGE and highlight their potential to replace BLEU and ROUGE.
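To make the negation weakness concrete, the following is a minimal Python sketch, assuming the nltk and rouge-score packages are installed; the example sentences are our own illustration, while the scoring setup (uniform 1- to 4-gram BLEU, averaged ROUGE-1/ROUGE-2) mirrors the one described above.

    # Minimal sketch of the negation weakness of n-gram metrics,
    # assuming the `nltk` and `rouge-score` packages are available.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
    from rouge_score import rouge_scorer

    reference = "the proposed model improves the summarization quality".split()
    negated = "the proposed model never improves the summarization quality".split()

    # BLEU with equal weights over 1- to 4-grams, as in the experiments above.
    smooth = SmoothingFunction().method1
    bleu = sentence_bleu([reference], negated,
                         weights=(0.25, 0.25, 0.25, 0.25),
                         smoothing_function=smooth)

    # Average of ROUGE-1 and ROUGE-2 F-scores, as in the experiments above.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)
    scores = scorer.score(" ".join(reference), " ".join(negated))
    rouge = (scores["rouge1"].fmeasure + scores["rouge2"].fmeasure) / 2

    # Both scores stay high although "never" reverses the meaning.
    print(f"BLEU:  {bleu:.3f}")
    print(f"ROUGE: {rouge:.3f}")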
New method for assessing the quality of similarity evaluators and showing potential of Transformer-based language models in replacing BLEU and ROUGE.
1,161
scitldr
In this paper, we explore meta-learning for few-shot text classification. Meta-learning has shown strong performance in computer vision, where low-level patterns are transferable across learning tasks. However, directly applying this approach to text is challenging: lexical features highly informative for one task may be insignificant for another. Thus, rather than learning solely from words, our model also leverages their distributional signatures, which encode pertinent word occurrence patterns. Our model is trained within a meta-learning framework to map these signatures into attention scores, which are then used to weight the lexical representations of words. We demonstrate that our model consistently outperforms prototypical networks learned on lexical knowledge in both few-shot text classification and relation classification by a significant margin across six benchmark datasets (19.96% on average in 1-shot classification).
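The core mechanism in this abstract, mapping distributional statistics to attention scores that weight word embeddings, can be sketched roughly as follows; the inverse-frequency signature and all names here are illustrative assumptions rather than the paper's exact formulation, which learns the mapping within a meta-learning framework.

    # Rough sketch: distributional signature -> attention -> weighted embedding.
    import numpy as np

    def attention_weighted_representation(tokens, embeddings, word_freq, eps=1e-3):
        # Signature: rare words tend to be more discriminative, so score
        # each token by an inverse-frequency statistic (an assumption here).
        sig = np.array([eps / (eps + word_freq.get(t, eps)) for t in tokens])
        # Map signatures to attention scores with a softmax.
        att = np.exp(sig) / np.exp(sig).sum()
        # Weight the lexical (embedding) representations by attention.
        vecs = np.stack([embeddings[t] for t in tokens])  # shape (T, d)
        return att @ vecs                                 # shape (d,)

    # Toy usage with random 5-d embeddings (illustrative only).
    rng = np.random.default_rng(0)
    emb = {w: rng.normal(size=5) for w in ["the", "film", "was", "mesmerizing"]}
    freq = {"the": 0.06, "was": 0.01, "film": 0.001, "mesmerizing": 1e-5}
    rep = attention_weighted_representation(["the", "film", "was", "mesmerizing"], emb, freq)
    print(rep.shape)  # (5,)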
Meta-learning methods used for vision, directly applied to NLP, perform worse than nearest neighbors on new classes; we can do better with distributional signatures.
1,162
scitldr
The description of neural computations in the field of neuroscience relies on two competing views: (i) a classical single-cell view that relates the activity of individual neurons to sensory or behavioural variables, and focuses on how different cell classes map onto computations; (ii) a more recent population view that instead characterises computations in terms of collective neural trajectories, and focuses on the dimensionality of these trajectories as animals perform tasks. How the two key concepts of cell classes and low-dimensional trajectories interact to shape neural computations is however currently not understood. Here we address this question by combining machine-learning tools for training RNNs with reverse-engineering and theoretical analyses of network dynamics. We introduce a novel class of theoretically tractable recurrent networks: low-rank, mixture-of-Gaussian RNNs. In these networks, the rank of the connectivity controls the dimensionality of the dynamics, while the number of components in the Gaussian mixture corresponds to the number of cell classes. Using back-propagation, we determine the minimum rank and number of cell classes needed to implement neuroscience tasks of increasing complexity. We then exploit mean-field theory to reverse-engineer the obtained solutions and identify the respective roles of dimensionality and cell classes. We show that the rank determines the phase-space available for dynamics that implement input-output mappings, while having multiple cell classes allows networks to flexibly switch between different types of dynamics in the available phase-space. Our results have implications for the analysis of neuroscience experiments and the development of explainable AI. With recent advances in deep learning, the novel approach of training and reverse-engineering RNNs on neuroscience tasks has led to insights on the implementation of cognitive processes (see for a review). Reverse-engineering methods have however provided only partial understanding so far, by either focusing on the characterization of neural dynamics and leaving aside the description of learnt connectivity, or the converse. Taking advantage of recent theoretical results on low-rank networks, we present a reverse-engineering approach that leads to analytically tractable classes of RNNs performing various tasks. Crucially, these classes of models exhibit well-defined dimensionality and number of cell classes, allowing us to identify the roles of these two properties in neural computation. We consider recurrent networks of N tanh rate units with dynamics defined by ẋ_i(t) = −x_i(t) + Σ_j J_ij tanh(x_j(t)) + W^in_i u(t), with readout z(t) = (1/N) Σ_i W^out_i tanh(x_i(t)), where u(t) represents inputs to the network and z is a scalar readout modeling the network's output. The directions of the network's inputs and output are defined by the column vectors of W in and W out, while the recurrent connectivity is given by the rank-K matrix J_ij = (1/N) Σ_{r=1}^{K} m^(r)_i n^(r)_j. For such networks, recurrently generated activity lies in a space of dimension K, and is well described by the dynamics of a set of K internal (latent) variables. In the case where connectivity and input vectors are drawn from a joint Gaussian distribution (with entry-independent covariances σ_ab between entries of vectors a and b), a previous study developed a mean-field theory for the dynamics of the internal variables.
As an example, for K = 2 and a single input u(t), the dynamics can be described by two internal variables κ_1 and κ_2: κ̇_r = −κ_r + Σ_s σ̃_{n_r m_s} κ_s + σ̃_{n_r I} u(t) for r = 1, 2, where the functional connectivities σ̃_ab are the product of a structural component σ_ab and an activity-dependent term, namely the population-averaged gain of the neurons ⟨φ′⟩. In the present work we extend this framework to describe networks with neurons assigned to P populations, which can be described with connectivity vectors drawn from P-mixtures of Gaussians: each neuron i belongs to one of the P populations, and the entries of the structure vectors a_i and b_i are drawn from a joint Gaussian distribution with covariance σ^k_ab, k = 1,..., P. Thus cell classes are defined in terms of connectivity profiles. In the mean-field theory, the functional connectivities become σ̃_ab = Σ_k α_k σ^k_ab ⟨φ′⟩_k, where ⟨·⟩_k is an average over the entries assigned to population k of the mixture and α_k is the fraction of neurons in that population. We consider a series of classical neuroscience tasks, and for each we determine a mixture-of-Gaussians network model with minimal-rank connectivity and a minimal number of cell classes. Our first step is to train a RNN in a supervised manner, using BPTT and the ADAM algorithm, by optimizing a task-defined loss. We thus look for solutions in the restricted space of networks whose connectivity matrix is rank K, without imposing well-defined Gaussian statistics. We train networks with various values of K and identify the minimal value K_min for which a solution can be found. After training, we exploit the mean-field theory for low-rank RNNs to reverse-engineer the trained networks. We first relate the internal variables to the task being performed, which allows us to obtain a dynamical-system description of the cognitive components at hand. We then extract relevant statistical features of the trained vectors to relate this dynamical-system description to the learnt connectivity structure. Guided by this analysis we are able to reconstruct rank-K_min RNNs whose connectivity vectors can be described by a P-mixture of Gaussians, and to determine the minimal P for which a solution can be found. Our approach allowed us to identify two general principles for the roles of dimensionality and cell classes. Below we summarize and illustrate them on a subset of the studied tasks, and then exploit them to build networks that perform multiple tasks. We first consider a perceptual integration task (random dot motion task, RDM, Figure 1A), where a network is exposed to a noisy input signal and is asked to report the sign of the temporally averaged signal. We find that a network with rank K = 1 and P = 1 population is able to perform this task (Figure 1B). The internal variable κ is easily interpreted in terms of the computation performed by the network: it integrates the input signal before converging to either one of two fixed points encoding the positive/negative decision (Figure 1C). This internal variable closely matches the accumulation of evidence in drift-diffusion models that have been proposed to model this type of perceptual integration task. We next consider a parametric working-memory task (Romo task, Figure 1D), where two scalar signals are successively presented, interleaved by a delay period. The task of the network is to report the difference between the values of the two stimuli. Doing so requires two computational variables: one that memorizes the first stimulus, and a second that encodes the difference between the two stimuli. Accordingly, we find that the rank is required to be at least K = 2, while having P = 1 population is sufficient (Figure 1E).
In Figure 1F we analyse the dynamics of a reconstructed network, showing how the two internal variables κ_1 and κ_2 implement the two computational variables required for solving the task. Overall, the rank of the network determines the dimensionality of the phase space of the recurrent dynamics, and therefore the number of internal variables available to implement the computation. 3.2 Multiple populations allow multiple operations to be performed on available internal variables We now consider a context-dependent perceptual integration task (Mante task, Figure 2A), where two fluctuating scalar signals are presented and the network is asked to integrate only one of the two signals, depending on a contextual cue. The task is a more complex version of RDM, where in addition to an accumulation-of-evidence mechanism, an attention mechanism is required to flexibly select the integrated signal. We find that a network with a single population is not able to implement this task, whatever the rank. In contrast, a rank-1 network with P = 2 is sufficient. The corresponding single internal variable corresponds to integrated evidence, the only computational variable required for solving the task (Figure 2B). Having two populations however allows the network to switch between two operations performed by this variable. This is achieved by reconfiguring the dynamical landscape of the internal variable in a context-dependent manner, as illustrated in Figure 2C. Analytical examination of the reconstructed network reveals the underlying mechanism: contextual inputs selectively modulate the gains of the populations (Figure 2D), controlling the network's functional connectivities (see the expression for σ̃_ab above). More generally, we find that having multiple populations allows the network to flexibly switch between different dynamics in the phase space, and therefore to implement several operations on the available internal variables. Here we draw some perspectives on how these two principles enable the construction of networks performing multiple tasks. On one hand, increasing dimensionality allows multiple internal variables to process inputs in parallel. On the other hand, increasing the number of populations allows for the selective modulation of more functional connectivities, increasing the flexibility with which the dynamics of an internal variable can be controlled. We illustrate these two principles by constructing networks that perform multiple tasks in parallel, by summing the rank-1 matrix solving RDM, the rank-1 matrix solving the Mante task and the rank-2 matrix solving the Romo task to get a single network performing those three tasks simultaneously (Figure 3A); and by constructing a rank-1 network of P populations that solves a generalization of the Mante task, with P input streams, with a single internal variable (Figure 3B). (Figure 3: A. Performance of the reconstructed multi-tasking network (black). B. Psychometric curves for networks performing the Mante task with P input streams and P corresponding contextual inputs.) In this work we have provided an abstract description of computations performed in recurrent neural networks. By focusing on simple tasks, this has allowed us to identify the complementary roles of the two important aspects that are dimensionality (section 3.1) and cell classes (section 3.2).
Beyond these simple tasks, we have been able to use this understanding to build networks solving multiple tasks (section 3.3), and we expect these principles of neural computation to be important for the development of procedures aiming at reverse-engineering networks trained on more complex, real-world tasks.
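A minimal simulation of a rank-one network of the kind described above, integrating a noisy scalar input as in the RDM task, might look as follows; the vector statistics and parameter values are illustrative choices, not the trained solutions from the paper.

    # Sketch of a rank-1 tanh rate network integrating a noisy input (RDM-like).
    # Connectivity J = m n^T / N; all parameter choices are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    N, T, dt = 512, 2000, 0.1

    # Correlated Gaussian connectivity and input vectors (one population).
    m = rng.normal(0, 1, N)
    w_in = rng.normal(0, 1, N)
    n = 1.2 * m + 0.8 * w_in + rng.normal(0, 1, N)  # overlaps with m and w_in
    J = np.outer(m, n) / N

    x = np.zeros(N)
    coherence = 0.04                  # mean drift of the noisy evidence stream
    kappa = []
    for t in range(T):
        u = coherence + rng.normal(0, 0.5)           # noisy scalar input
        x += dt * (-x + J @ np.tanh(x) + w_in * u)   # rate dynamics
        kappa.append(m @ np.tanh(x) / N)             # internal variable kappa

    # kappa integrates the evidence and settles near one of two fixed points;
    # the sign of its final value is the network's decision.
    print(f"final kappa: {kappa[-1]:+.3f}")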
A theoretical analysis of a new class of RNNs, trained on neuroscience tasks, allows us to identify the role of dynamical dimensionality and cell classes in neural computations.
1,163
scitldr
We propose the fusion discriminator, a single unified framework for incorporating conditional information into a generative adversarial network (GAN) for a variety of distinct structured prediction tasks, including image synthesis, semantic segmentation, and depth estimation. Much like commonly used convolutional neural network - conditional Markov random field (CNN-CRF) models, the proposed method is able to enforce higher-order consistency in the model, but without being limited to a very specific class of potentials. The method is conceptually simple and flexible, and our experimental results demonstrate improvement on several diverse structured prediction tasks. Convolutional neural networks (CNNs) have demonstrated groundbreaking results on a variety of different learning tasks. However, on tasks where high-dimensional structure in the data needs to be preserved, per-pixel regression losses typically result in unstructured outputs, since they do not take into consideration non-local dependencies in the data. Structured prediction frameworks such as graphical models and joint CNN-graphical-model-based architectures, e.g. CNN-CRFs, have been used for imposing spatial contiguity using non-local information BID13 BID2 BID25. The motivation to use CNN-CRF models stems from their ability to capture some structured information from second-order statistics using the pairwise part. However, statistical interactions beyond the second order are tedious to incorporate and render the models complicated BID0 BID12. Generative models provide another way to represent the structure and spatial contiguity in large high-dimensional datasets with complex dependencies. Implicit generative models specify a stochastic procedure to produce outputs from a probability distribution. Such models are appealing because they do not demand parametrization of the probability distribution they are trying to model. Recently, there has been great interest in CNN-based implicit generative models using autoregressive BID4 and adversarial training frameworks BID16. Generative adversarial networks (GANs) BID7 can be seen as a two-player minimax game where the first player, the generator, is tasked with transforming a random input to a specific distribution such that the second player, the discriminator, cannot distinguish between the true and synthesized distributions. The most distinctive feature of adversarial networks is the discriminator, which assesses the discrepancy between the current and target distributions. The discriminator acts as a progressively precise critic of an increasingly accurate generator. Despite their structured prediction capabilities, such a training paradigm is often unstable. However, recent work on spectral normalization (SN) and gradient penalty has significantly increased training stability BID8. Conditional GANs (cGANs) BID19 incorporate conditional image information in the discriminator and have been widely used for class-conditioned image generation. To that effect, unlike in standard GANs, a discriminator for cGANs discriminates between the generated distribution and the target distribution on pairs of samples y and conditional information x. For class conditioning, several unique strategies have been presented to incorporate class information in the discriminator BID24 BID23. (Figure 1: Discriminator models for image conditioning: (a) concatenated image conditioning of x and y with an adversarial loss; (b) the proposed fusion of x and y with an adversarial loss.
We propose fusing the features of the input and the ground-truth or generated image rather than concatenating them.) However, a cGAN can also be conditioned by structured data such as an image. Such conditioning is much more useful for structured prediction problems. Since the discriminator in an image-conditioned GAN has access to large portions of the image, the adversarial loss can be interpreted as a learned loss that incorporates higher-order statistics, essentially eliminating the need to manually design higher-order loss functions. This variation of cGANs has extensively been used for image-to-image translation tasks. However, the best way of incorporating conditional image information into a GAN is not clear, and methods of feeding generated and conditional images to the discriminator tend to use a naive concatenation approach. In this work we address this gap by proposing a discriminator architecture specifically designed for image conditioning. Such a discriminator contributes to the promise of generalization that GANs bring to structured prediction problems by providing a single and simple setup for capturing higher-order non-local structural information from high-dimensional data without complicated modeling of energy functions. Contributions. We propose an approach to incorporating conditional information into a cGAN using a fusion discriminator architecture (Fig. 1b). In particular, we make the following key contributions: 1. We propose a novel discriminator architecture designed to incorporate conditional information for structured prediction tasks. The method is designed to incorporate conditional information in feature space in a way that allows the discriminator to enforce higher-order consistency in the model, and is conceptually simpler than alternative structured prediction methods such as CNN-CRFs, where higher-order potentials have to be manually incorporated in the loss function. 2. We demonstrate the effectiveness of this method on a variety of distinct structured prediction tasks including semantic segmentation, depth estimation, and generating real images from semantic masks. Our empirical study demonstrates that the fusion discriminator is effective in preserving high-order statistics and structural information in the data and is flexible enough to be used successfully for many structured prediction tasks. 2 RELATED WORK 2.1 CNN-CRF MODELS Models for structured prediction have been extensively studied in computer vision. In the past these models often entailed the construction of hand-engineered features. In 2015, BID15 demonstrated that a fully convolutional approach to semantic segmentation could yield state-of-the-art results at that time with no need for hand-engineered features. BID1 showed that post-processing the results of a CNN with a conditional Markov random field led to significant improvements. Subsequent work by many authors has refined this approach by incorporating the CRF as a layer within a deep network, thereby enabling the parameters of both models to be learnt simultaneously BID11. Many researchers have used this approach for other structured prediction problems, including image-to-image translation and depth estimation BID14. In most cases CNN-CRF models only incorporate unary and pairwise potentials. BID0 investigated incorporating higher-order potentials into CNN-based models for semantic segmentation, and found that while it is possible to learn the parameters of these potentials, they can be tedious to incorporate and render the model quite complex.
Thus there is a need to develop methods that can incorporate higher-order statistical information without requiring manual modeling of higher-order potentials. Adversarial Training. Generative adversarial networks were introduced in BID7. A GAN consists of a pair of models (G, D), where G attempts to model the distribution of the source domain and D attempts to evaluate the divergence between the generative distribution q and the true distribution p. GANs are trained by training the discriminator and the generator in turn, iteratively refining both the quality of the generated data and the discriminator's ability to distinguish between p and q. The result is that D and G compete to reach a Nash equilibrium that can be expressed by the training procedure. While GAN training is often unstable and prone to issues such as mode collapse, recent developments such as spectral normalization and gradient penalty have increased GAN training stability BID8. Furthermore, GANs have the advantage of being able to access the joint configuration of many variables, thus enabling a GAN to enforce higher-order consistency that is difficult to enforce via other methods BID16. Conditional GANs. A conditional GAN (cGAN) is a GAN designed to incorporate conditional information BID19. cGANs have shown promise for several tasks such as class-conditional image synthesis and image-to-image translation BID19. There are several advantages to using the cGAN model for structured prediction, including the simplicity of the framework. Image-conditioned cGANs can be seen as a structured prediction problem tasked with learning a new representation of an input image while making use of non-local dependencies. However, the method by which the conditional information should be incorporated into the model is often unmotivated. Usually, the conditional data is concatenated to some layers in the discriminator (often the input layers). A notable exception to this methodology is the projection cGAN, where the data is assumed to follow certain simple distributions, allowing a hard mathematical rule for incorporating conditional data to be derived from the underlying probabilistic graphical model. As mentioned in, the method is less likely to produce good results if the data does not follow one of the prescribed distributions. For structured prediction tasks involving conditioning with image data, this is often not the case. In the following section we introduce the fusion discriminator and explain the motivation behind it. As mentioned, the most significant part of cGANs for structured prediction is the discriminator. The discriminator has continuous access to pairs of the generated data or real data y and the conditional information (i.e. the image) x. The cGAN discriminator can then be defined as D_cGAN(x, y, θ) := A(f(x, y, θ)), where A is the activation function, f is a function of x and y, and θ represents the parameters of f. Let p and q designate the true and the generated distributions. The adversarial loss for the discriminator can then be defined as L(D) = −E_{(x,y)∼p}[log D(x, y)] − E_{x∼p}[log(1 − D(x, G(x)))]. Here, A is the sigmoid function, D is the conditional discriminator, and G is the generator. By design, this framework allows the discriminator to significantly affect the generator. The most common approach currently in use to incorporate conditional image information into a GAN is to concatenate the conditional image information to the input of the discriminator at some layer, often the first.
Other approaches for conditional-information fusion are limited to class-conditional fusion, where the conditional information is often a one-hot vector rather than higher-dimensional structured data. Since the discriminator classifies pairs of input and output images, concatenating high-dimensional data may not exploit the inherent dependencies in the structure of the data. Fusing the input and output information in an intuitive way, such as to preserve the dependencies, is instrumental in designing an adversarial framework with high structural capacity. We propose the use of a fusion discriminator architecture with two branches. The branches of this discriminator are convolutional neural networks with identical architectures, say ψ(x) and φ(y), that learn representations from both the conditional data (ψ(x)) and the generated or real data (φ(y)) respectively. The learned representations are then fused at various stages (FIG0). This architecture is similar to the encoder portion of the FuseNet architecture, which has previously been used to incorporate depth information from RGB-D images for semantic segmentation BID9. In FIG0, we illustrate a four-layer and a VGG16-style fusion discriminator, in which both branches are similar in depth and structure to the VGG16 model BID27. The key ingredient of the fusion discriminator architecture is the fusion block, which combines the learned representations of x and y. The fusion layer (red, FIG0) is implemented as element-wise summation and is always inserted after a convolution → spectral normalization → ReLU instance. The fusion layer modifies the signal passed through the ψ branch by adding in learned representations from the φ branch. This preserves representations from both x and y. For structured prediction tasks, x and y will often have learned representations that complement each other; for instance, in tasks like depth estimation, semantic segmentation, and image synthesis, x and y all have highly complementary features. Theoretical Motivation. When data is passed through two networks with identical architectures and the activations at corresponding layers are added, the effect is to pass through the combined network (the upper branch in FIG0) a stronger signal than would be passed forward by applying an activation to concatenated data. To see this in the case of the ReLU activation function, denote the k-th feature map in the l-th layer by h^(l)_k, let the weights and biases for this feature and layer be W^(l)_k and b^(l)_k, and write U^(l)_k(·) = W^(l)_k · + b^(l)_k for the corresponding pre-activation, where x and y represent the learned features from the conditional and the real or generated data respectively. Then ReLU(U^(l)_k(x)) + ReLU(U^(l)_k(y)) ≥ ReLU(U^(l)_k(x) + U^(l)_k(y)). (4) Eq. 4 demonstrates that the fusion of the activations in ψ(x) and φ(y) produces a signal at least as strong as the activation on concatenated inputs. Strengthening some activations does not guarantee improved performance in general; however, in the context of structured prediction the fusing operation results in the strongest signals being passed through the discriminator specifically at those places where the model finds useful information simultaneously in both the conditional data and the real or generated data. A similar mechanism can be found at work in many other successful models that require higher-order structural information to be preserved; to take one example, consider the neural algorithm of artistic style proposed by BID6.
This algorithm successfully transfers highly structured data from an existing image x onto a randomly initialized image y by minimizing the content loss function L_content(x, y, l) = (1/2) Σ_{i,j} (F^l_{ij} − P^l_{ij})², where F^l_{ij} and P^l_{ij} denote the activations at locations i, j in layer l of x and y respectively. The loss-function mechanism used here differs from the fusing mechanism used in the fusion discriminator, but the underlying principle of capturing high-level structural information from a pair of images by combining signals from common layers in parallel networks is the same. The neural algorithm of artistic style succeeds in content transfer by ensuring that the activations containing information of structural importance are similar in both the generated image and the content image. In the case of image-conditioned cGAN training, it can be assumed that the activations of the real or generated data and the conditional data will be similar, and by fusing these activations and passing forward a strengthened signal the network is better able to attend to those locations containing important structural information in both the real or generated data and the conditional data; c.f. Fig. 3. Empirical Motivation. We use gradient-weighted Class Activation Mapping (Grad-CAM) BID26, which uses the class-specific gradient information going into the final convolutional layer of a trained CNN to produce a coarse localization map of the important regions in the image. We visualized the outputs of a fusion and a concatenated discriminator for several different tasks to observe the structure and strength of the signal being passed forward. We observed that the fusion discriminator architecture always had a visually strong signal at important features for the given task. Representative images from classifying x and y pairs as 'real' for two different structured prediction tasks are shown in Fig. 3. This provides visual evidence that a fusion discriminator preserves more structural information from the input and output image pairs and classifies overlapping patches based on that information. Indeed, this is not evidence that a stronger signal will lead to a more accurate classification, but it is a heuristic justification that more representative features from x and y will be used to make the determination. In order to evaluate the effectiveness of the proposed fusion discriminator we conducted three sets of experiments on structured prediction problems: 1) generating real images from semantic masks (Cityscapes); 2) semantic segmentation (Cityscapes); 3) depth estimation (NYU v2). For all three tasks we used a U-Net-based generator. We applied spectral normalization to all weights of the generator and discriminator to regularize the Lipschitz constant. The Adam optimizer was used for all experiments with hyper-parameters α = 0.0002, β1 = 0, β2 = 0.9. In order to demonstrate the structure-preserving abilities of our discriminator we use the proposed setup in the image-to-image translation setting. We focus on the application of generating realistic images from semantic labels. This application has recently been studied for generating realistic synthetic data for self-driving cars BID29 BID3. Unlike recent approaches where the objective is to generate increasingly realistic high-definition (HD) images, the purpose of this experiment is to explore whether a generic fusion discriminator can outperform a concatenated discriminator when using a simple generator.
We used 2,975 training images from the Cityscapes dataset BID5 and re-scaled them to 256 × 256 for computational efficiency. The provided Cityscapes test set with 500 images was used for testing. Our ablation study focused on changing the discriminator between a standard 4-layer concatenation discriminator used in the seminal image-to-image translation work, a combination of this 4-layer discriminator with spectral normalization (SN), a VGG-16 concatenation discriminator, and the proposed 4-layer and VGG-16 fusion discriminators. (Figure 5: A comparative analysis of concatenation, projection and fusion discriminators on three different structured prediction tasks, i.e., image synthesis, semantic segmentation, and depth estimation. Table 1: PSPNet-based semantic segmentation IoU and accuracy scores using generated images from different discriminators; our results outperform concatenation-based methods by a large margin and are close to the accuracy and IoU on actual images (GT/Oracle).) Since standard GAN evaluation metrics such as the inception score and FID cannot directly be applied to image-to-image translation tasks, we use an evaluation technique previously used for such image synthesis. To quantitatively evaluate the effectiveness of our proposed discriminator architecture we perform semantic segmentation on synthesized images and compare the similarity between the predicted segments and the input. The intuition behind this kind of experimentation is that if the generated images correspond to the input label map, an existing semantic segmentation model such as a PSPNet BID30 should be able to predict the input segmentation mask. Similar experimentation has been suggested in and. Table 1 reports both pixel-wise segmentation accuracy and overall intersection-over-union (IoU). The proposed fusion discriminator outperforms the concatenated discriminator by a large margin. Our result is closer to the theoretical upper bound achieved by real images. This confirms that the fusion discriminator contributes to structure preservation in the output image. The fusion discriminator could be used with high-definition images; however, such analysis is beyond the scope of the current study. Representative images for this task are shown in Fig. 4. The projection discriminator was modified for image conditioning according to the explanation given in for the super-resolution task. Fig. 5 shows a comparative analysis of the concatenation, projection and fusion discriminators in an ablation study up to 550k iterations. Semantic segmentation is vital for visual scene understanding and is often formulated as a dense labeling problem where the objective is to predict the category label for each individual pixel. Semantic segmentation is a classical structured prediction problem, and CNNs with pixel-wise losses often fail to make accurate predictions BID16. Much better results have been achieved by incorporating higher-order statistics in the image using CRFs as a post-processing step or jointly training them with CNNs BID2. It has been shown that incorporating higher-order potentials continues to improve semantic segmentation, making this an ideal task for evaluating the structured prediction capabilities of GANs and their enhancement using our proposed discriminator. Here, we empirically validate that the adversarial framework with the fusion discriminator can preserve more spatial context in comparison to CNN-CRF setups.
We demonstrate that our proposed fusion discriminator is equipped with the ability to preserve higher-order details. For comparative analysis we compare relatively shallow and deep architectures for both concatenation and fusion discriminators. We also conduct an ablation study to analyze the effect of spectral normalization. The generator for all semantic segmentation experiments was a U-Net. For the experiment without spectral normalization, we trained each model for 950k iterations, which was sufficient for the training of the concatenated discriminator to stabilize. For all other experiments, we trained for 800k iterations. The discriminator was trained twice as much as the generator. Depth estimation is another structured prediction task that has been extensively studied because of its widespread applications in computer vision. As with semantic segmentation, both per-pixel losses and non-local losses such as CNN-CRFs have been widely used for depth estimation. State-of-the-art results in depth estimation have been achieved using a hierarchical chain of non-local losses. We argue that it is possible to incorporate higher-order information using a simple adversarial loss with a fusion discriminator. In order to validate our claims we conducted a series of experiments with different discriminators, similar to the series of experiments conducted for semantic segmentation. We used the Eigen test-train split for the NYU v2 dataset, containing 1449 images for training and 464 images for testing. We observed that, as with image synthesis and semantic segmentation, the fusion discriminator outperforms concatenation-based methods and pairwise CNN-CRF methods every time. Structured prediction problems can be posed as image-conditioned GAN problems. The discriminator plays a crucial role in incorporating non-local information in adversarial training setups for structured prediction problems. Image-conditioned GANs usually feed concatenated input and output pairs to the discriminator. In this research, we proposed a model for the discriminator of cGANs that involves fusing features from both the input and the output image in feature space. This method provides the discriminator a hierarchy of features at different scales from the conditional data, and thereby allows the discriminator to capture higher-order statistics from the data. We qualitatively demonstrate and empirically validate that this simple modification can significantly improve the general adversarial framework for structured prediction tasks. The results presented in this paper strongly suggest that the mechanism of feeding paired information into the discriminator in image-conditioned GAN problems is of paramount importance. 6 SUPPLEMENTARY MATERIAL The objective function for a conditional GAN can be defined as L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))]. (6) The generator G tries to minimize the loss expressed by equation 6 while the discriminator D tries to maximize it. In addition, we impose an L1 reconstruction loss: L_L1(G) = E_{x,y,z}[‖y − G(x, z)‖_1], (7) leading to the objective G* = arg min_G max_D L_cGAN(G, D) + λ L_L1(G). (8) 6.2 GENERATOR ARCHITECTURE We adapt our network architectures from those explained in. Let CSRk denote a Convolution-SpectralNorm-ReLU layer with k filters. Let CSRDk denote a similar layer with dropout at a rate of 0.5. All convolutions chosen are 4 × 4 spatial filters applied with a stride of 2, and in decoders they are up-sampled by 2. All networks were trained from scratch and weights were initialized from a Gaussian distribution of mean 0 and standard deviation of 0.02.
All images were cropped and rescaled to 256 × 256, were up-sampled to 286 × 286 and then randomly cropped back to 256 × 256 to incorporate random jitter into the model. Decoder: CSRD512→CSRD1024→CSRD1024→CSR1024→CSR1024→CSR512→CSR256→CSR128. The last layer in the decoder is followed by a convolution to map to the number of output channels (3 in the case of image synthesis and semantic labels, 1 in the case of depth estimation), followed by a Tanh function. Leaky ReLUs were used throughout the encoder with a slope of 0.2; regular ReLUs were used in the decoder. Skip connections are placed between each layer l in the encoder and layer n − l in the decoder, where n is the total number of layers; they concatenate activations from the l-th layer to the (n − l)-th layer. Equations 2-4 of Section 3.1 illustrate that when the ReLU activation is used in a fusion block, the fusing operation results in a positive signal at least as large as that obtained by concatenation. For activations with negative branches, a similar claim holds: for a Leaky ReLU with negative slope α, when U^(l)_k(x) ≥ 0 and U^(l)_k(y) ≤ 0 with U^(l)_k(x) + U^(l)_k(y) ≤ 0, fusing leads to the activation U^(l)_k(x) + αU^(l)_k(y), while concatenation results in the activation −α|U^(l)_k(x) + U^(l)_k(y)|. The value of α plays a significant role in shaping the combined activation, and in some instances fusing can lead to a stronger signal than concatenation despite the disagreement in the incoming signals.
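A minimal PyTorch sketch of the two-branch fusion discriminator idea, with convolution → spectral normalization → ReLU blocks in each branch followed by element-wise summation, is given below; the layer widths and the final patch head are illustrative assumptions, not the exact 4-layer or VGG16-style configurations from the paper.

    # Sketch of a fusion discriminator: two CSR branches process the
    # conditional image x and the real/generated image y; their activations
    # are fused by element-wise summation after each CSR block.
    import torch
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    def csr(in_ch, out_ch):
        # Convolution -> Spectral Normalization -> ReLU block.
        return nn.Sequential(
            spectral_norm(nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)),
            nn.ReLU(inplace=True),
        )

    class FusionDiscriminator(nn.Module):
        def __init__(self, x_ch=3, y_ch=3, widths=(64, 128, 256)):
            super().__init__()
            self.psi = nn.ModuleList(
                csr(i, o) for i, o in zip((x_ch,) + widths[:-1], widths))
            self.phi = nn.ModuleList(
                csr(i, o) for i, o in zip((y_ch,) + widths[:-1], widths))
            self.head = spectral_norm(nn.Conv2d(widths[-1], 1, kernel_size=4, padding=1))

        def forward(self, x, y):
            hx, hy = x, y
            for px, py in zip(self.psi, self.phi):
                hx, hy = px(hx), py(hy)
                hy = hy + hx  # fusion layer: element-wise summation of activations
            return self.head(hy)  # patch-level real/fake logits

    # Toy usage: condition x and image y of the same spatial size.
    d = FusionDiscriminator()
    logits = d(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
    print(logits.shape)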
We propose the fusion discriminator, a novel architecture for incorporating conditional information into the discriminator of GANs for structured prediction tasks.
1,164
scitldr
Model-based reinforcement learning (MBRL) has been shown to be a powerful framework for data-efficiently learning control of continuous tasks. Recent work in MBRL has mostly focused on using more advanced function approximators and planning schemes, leaving the general framework virtually unchanged since its conception. In this paper, we identify a fundamental issue of the standard MBRL framework -- what we call the objective mismatch issue. Objective mismatch arises when one objective is optimized in the hope that a second, often uncorrelated, metric will also be optimized. In the context of MBRL, we characterize the objective mismatch between training the forward dynamics model w.r.t. the likelihood of the one-step-ahead prediction, and the overall goal of improving performance on a downstream control task. For example, this issue can emerge with the realization that dynamics models effective for a specific task do not necessarily need to be globally accurate, and, vice versa, globally accurate models might not be sufficiently accurate locally to obtain good control performance on a specific task. In our experiments, we study this objective mismatch issue and demonstrate that the likelihood of the one-step-ahead prediction is not always correlated with downstream control performance. This observation highlights a critical flaw in the current MBRL framework which will require further research to be fully understood and addressed. We propose an initial method to mitigate the mismatch issue by re-weighting dynamics model training. Building on it, we conclude with a discussion about other potential directions of future research for addressing this issue.
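As a rough illustration of the re-weighting idea mentioned at the end of this abstract, a weighted one-step negative log-likelihood might look as follows; the Gaussian model interface and the choice of weights are assumptions made for the sketch, not the paper's actual method.

    # Sketch of re-weighted dynamics-model training: transitions that matter
    # for the control task (e.g., near states visited by a good policy) get
    # larger weight in the NLL objective. Weighting scheme is an assumption.
    import torch

    def reweighted_model_loss(model, states, actions, next_states, weights):
        # Gaussian one-step dynamics model: predicted mean and log-variance.
        mean, logvar = model(states, actions)
        # Per-transition negative log-likelihood (up to additive constants).
        nll = 0.5 * (logvar + (next_states - mean) ** 2 / logvar.exp()).sum(dim=-1)
        # Normalize weights so the loss scale is comparable to plain MLE.
        w = weights / weights.sum()
        return (w * nll).sum()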
We define, explore, and begin to address the objective mismatch issue in model-based reinforcement learning.
1,165
scitldr
There has recently been a heated debate (e.g. , , ,) about measuring the information flow in Deep Neural Networks using techniques from information theory. It is claimed that Deep Neural Networks in general have good generalization capabilities since they not only learn how to map from an input to an output but also how to compress information about the training data input. That is, they abstract the input information and strip away any unnecessary or over-specific information. If so, the message compression method, the Information Bottleneck (IB), could be used as a natural comparator for network performance, since this method gives an optimal information compression boundary. This claim was then later denounced as well as reaffirmed (e.g. , ,), as the employed method of mutual information measuring is not actually measuring information but the clustering of the internal layer representations. In this paper, we will present a detailed explanation of the development in the Information Plane (IP), a plot type that compares mutual information to judge compression, when noise is retroactively added (using binning estimation). We also explain why different activation functions show different trajectories in the IP. Further, we have looked into the effect of clustering on the network loss through early and perfect stopping using the Information Plane, and how clustering can be used to help network pruning. Deep Neural Networks (DNNs) have recently achieved promising results in many areas, especially computer vision and natural language processing. Yet, the learning process and the design principles for configuring DNN architectures are under-investigated. There are some recent attempts towards addressing this challenge. From an information-theoretic viewpoint, have investigated the learning dynamics of DNNs: how the mutual information (MI) of the layer activations with input and target develops over the course of training. The finding is that DNNs generally first increase the MI of the layers with both, but then reduce the MI with the input. This perceived compression has led to promising results of DNNs in many applications. This compression behaviour resembles the IB method, a constrained method which aims to retain maximum information content for given compression levels; these possible maxima are depicted by the IB bound. Through this similarity, the IB bound could be used as a way to judge network architecture: the closer to the IB bound, the better the NN is likely to perform. However, this finding is controversial; it has been supported by e.g. ; ; and denied. Most prominently, have argued that this does not generalize for all activation functions and that compression does not necessarily lead to good generalization. , , , Banerjee & Montúfar and have tried to implement the IB constraint as an optimization parameter for DNN training, leading to promising results. criticize these attempts, claiming that they were not really sticking to the IB for their optimization process, since in deterministic NNs the mutual information is either infinite or constant. Hence, the IB cannot produce optimizable gradients. They therefore reason that the results of these authors were only possible by giving up a hard IB constraint. Recent successes with fully invertible neural networks (which cannot experience any form of compression) cast doubt on the notion of compression being a necessary factor for good generalization (e.g. , , , ).
Finally, a recent paper by assessed that measuring MI in this scenario actually tracks how clustered the internal layer representations of the samples are. Building on , this work attempts to explain the trajectories in the IP created through MI estimation using binning. Through this we shed more light on the investigation of the learning dynamics of DNNs through usage of the IP. Section 2.2.1 shows that the smaller the bin size for the binning estimator, the more the layers drift towards a fixed point in the IP. Section 2.2.2 highlights that higher layers strongly influence the shape of lower layers in the IP. Section 2.3 explains why the IP looks the way it does. Clustering is then examined as a design parameter. This is done by investigating the connection between the loss function and clustering through the usage of early and perfect stopping in Section 2.4.1; here, no clear connection is found. Lastly, network pruning is attempted in Section 2.5 using the IP, where slightly positive indications are found. First, though, the experimental setup is outlined. This section will introduce the experiments undertaken with the framework. Firstly, the set goals are explained. Secondly, the experimental design is laid out. Lastly, the results are presented. To make the experiments replicable and comparable, we use the same experiment setup as and. To do so, networks with the following designs have been chosen: • 5 fully connected intermediate layers with 10-7-5-4-3 neurons; • input and output layer sizes depend on the dataset used; that is, 12-2 for and 784-(number of classes) for MNIST; • Softmax activation function for the output, as all experiments are classifications. The activations of each layer are tracked for each iteration: full tracking is used in the beginning, since the fastest movement is to be expected there, with increased gaps for later iterations to help reduce computational complexity. Then the activations are binned. To judge the clustering of the activations, bin histograms are used for different layers. We also follow the same design by using and comparing networks with TanH and ReLU activation functions. Additionally, combinations of the two are used to check for potential interference. Hence these four combinations are used: the first one is the activation setup by and the second one is by. All of the networks do not include any form of regularization and use SGD. Initially the experiments are run on the same dataset as in. Additionally, the MNIST dataset is used, as in, to have a more complicated case with more features. Looking at the results of the mixed networks, one can see that ReLU-layers influence deeper TanH-layers. Figure 2a shows the IP of a network where the higher layers are ReLU and the lower TanH. One can see that the layers following the ReLU-layers changed their pathway to a shape similar to that of the ReLU-layer itself, and not the "normal" pathway of two compression phases. This effect gets weaker deeper in the network. Figure 2b shows the IP for a network with TanH in higher and ReLU in lower layers. Here, a significant impact of the TanH-layers on the ReLU-layers can be found: the ReLU-layers take on a behaviour similar to the TanH-layers, even though the TanH-layers show no compression at all. This effect also gets weaker the further the layer is from the TanH-layers. These effects have not always been present or prominent every time mixture networks have been trained (see fig. 9 in appendix B and fig. 12a in appendix D).
Nonetheless, there seems to be an influence of the previous layers on the compression in the lower layers. According to , in TanH-networks the mutual information is either infinite (for continuous input features) or constant (for discrete input features). Because the datasets in use here are both discrete, the mutual information has to be constant; that is, the MI of the input with the output I(X; Y) has to be the same as the MI of the layers with the output I(T_i; Y). Also, given the input X, the conditional entropy of the layer activities H(T|X) is 0, which is explained in Appendix A.1. Therefore, we calculate these values for comparison, and Table 1 shows the results for the binning estimator. Comparing these values to the top-right points in figures 1 and 11 (MNIST), one can see that these values seem to exactly represent the most extreme point of the information plane at the top right, towards which the layers drift if the bin size is reduced (H(X) as max I(X; T) and I(X; Y) as max I(T; Y)). Hence, this gives evidence that the mutual information is, as claimed, constant and that the conditional entropy H(T|X) = 0, which results in I(X; T) = H(X). Binning makes activations within the same bin indistinguishable (e.g., if one bins the activation 2.54 into the 2.55 bin, one cannot trace which one is which afterwards). The smaller the bin size, and therefore the more bins there are, the better the estimation is by default (see appendix G). Section 2.2.1 establishes that the more bins the estimator uses, the more the layers drift towards the top right, which indicates that the mutual information is indeed constant. Referring to Appendix A.1, the mutual information of the input X with a layer T is defined as I(X; T) = H(X) − H(X|T). Since H(X) is a constant, a change in mutual information with the layer has to be a result of a change in H(X|T). Because H(X|T) is only 0 if X is a function of T, one can infer that if there is no direct mapping between X and T, there will be an increase in H(X|T), which will reduce the mutual information between the two. The same applies to the output Y. Figure 3 shows different sample cases of how one can lose mutual information through binning. In the first case there are two samples from two different classes placed into one bin. This makes it impossible to differentiate between the classes given only the bin number. This leads to a decrease in the mutual information of the layer with the output, since no direct mapping between the array which contains this bin and an output class is possible (see Appendix G, where array building is explained). Also, since there are two samples in one bin, one cannot differentiate between the two samples, and therefore one loses mutual information with the input of the network, since it is not possible to directly map between the input and the bin. The second and third cases have two samples of the same class; hence, they will only lead to a decrease in I(X; T). For example, two samples from two different classes produce the activations 0.1, 0.33, 0.4 and 0.1, 0.34, 0.4. If no binning was performed, this could lead to the arrays 0103304 and 0103404 (this is just an explanatory example; in reality they get transformed to binary first). Each sample is uniquely identifiable and each sample can get a class allocated. However, if there was binning in 0.05 steps, one would end up with the two arrays 0103504 and 0103504, and no differentiation is possible. If one looks at the bin histograms (fig. 5) and compares them with the movement in the information plane (see fig. 4), one can see that this roughly seems to fit this notion, as the layers which show compression with respect to the input also show that their number of empty bins increases over the course of training and their clustering into smaller numbers of bins increases.
As the layers which show compression with respect to the input also show that their amount of empty bins increases over the course of the training and their clustering into smaller amounts of bins increases. So why do the bins empty out? This is driven by the Softmax output-layer whose neurons during training are going to be driven to output either 1 or 0 for each sample to learn to correctly classify. Hence, by definition this layer is going to reduce its filled bins down to two bins -0 and 1. This explains where the output layer is drifting towards I(X; T) = 1 since the entropy will be 1 and there is only 1 bit of information needed to find out which state the activation is in. Figure 5 shows that the deeper the layer, the more the activations get binned into the extreme bins for TanH and the more training proceeds the more bins are emptied out. The output layer seems to force the next higher layer to take on extreme values as well as itself gets more and more decisive on the sample classes. This gets propagated to the next layer and so on, but loses intensity. Figure 4 demonstrates that this matches the corresponding information plane. The two layers that get clustered show compression while the others do not. The situation with the output Y is different, since information is only lost when there are samples of different classes in one bin leading to equal arrays (see above example). In the beginning we have very low mutual information of the layers and the output. This is because we have random weight initialization and the samples are more or less randomly allocated to the bins. This will cause a lot of bins to have samples of different classes. As training goes on, the network learns to better differentiate between classes and will be more likely to put samples from the same classes in the same binning areas. The best example is the output layer where the network either produces 0 or 1 at the end of training. Since Softmax is used this will produce an array of multiple zeros (depending on the number of classes -1) and one 1 (see appendix G: activations get transformed into continuous array for binning). This allows to uniquely identify which class the sample is allocated to by the network. The next question about the pathway of the layers in the information plane is: why do ReLU-layers take such a different path than TanH-layers?. The reason is that ReLUs has an inbuilt bin, because activations can only be non negative. The weights of the network get initialized randomly and often are negative values. This make the input of the ReLU activation function negative and because of that the will be 0. Hence, a lot of the activations are going to be 0 in the beginning. This means that the total range of our bins will have less activations available to fill bins and for the same argument as before this will lead to a lower entropy which will lead to less estimated mutual information. Over the course of training more and more of the activation functions are leaving the 0-bin and filling previously empty bins and this "frees" information which in the ReLU-layers path towards the top right. This can be observed in the plots 5b, 5d and 5f in Figure 5 together with the IP in Figure 4b where there is a wider spread of activations in bins. As training goes on, the amount of empty bins is decreasing for the ReLU-layers. Thus entropy is increasing, which in return increases the estimated mutual information. 
Figures 5e and 5f also show why the output layer initially drifts to an increase in MI with the input: there is an initial drop in empty bins, since in the beginning the spread across the bins is very small (a big aggregation in the center at the beginning). Figures 13 and 14 present the same trend for the MNIST dataset (here the bin size has been increased, since a too small bin size results in very thin bins that are not visible). The remaining question is why a larger bin size results in the layers drifting away from the "real" mutual information. Naturally, larger bins result in fewer total bins, which makes it more likely that multiple activations land in the same bin. (It is important to note, though, that empty bins are just an indicator: there could be few empty bins but some with large sample concentrations, with the same result.) Thus, it is more likely that the built arrays are the same, which in return leads to information loss. Another way to show that clustering is at play is to insert a bottleneck into the NN. Figure 6 shows the IP for a network with a 12-3-2-12-2-2 layer-neuron composition. The 3rd layer, which is the bottleneck, actually shows higher compression than the 4th. Considering the data processing inequality this should be impossible, but it is purely a result of many samples being in the same bins. In summary, there seems to be a strong indication that the information plane is merely tracking clustering, as suggested by. Hence, the advantages of clustering should be investigated. This is done in the following section. Since it does not seem that the highest level of compression is beneficial for generalization (see appendix F or ), it makes sense to see what happens when one uses a fairly standard way of trying to achieve good generalization: early stopping. Figure 7a shows the results of early stopping and fig. 7b of perfect stopping, compared to the full 8000 iterations for a TanH-network. It seems that early stopping occurs when the network is starting to enter the compression phase of the output layer (the early-stop point is in the turning point of the curve). The same happens for a ReLU network (see fig. 10a). This stopping does not generalize to the MNIST dataset (see Figure 15 in Appendix D). Since early stopping does not guarantee stopping at the global minimum, we decided to see what the information plane shows at the global minimum for a "perfect" stop (Figure 7b). Therefore, there seems to be a connection, though not a general one, between compression and the first minimum, but no connection to the global minimum. We believe that in deep neural networks, layers at the end of the network are more susceptible to being pushed into a clustered state by the output layer when they are less needed by the network to map the input to the output, since the network is then essentially just passing the information through without refining it. It has been shown that sometimes much smaller networks can achieve generalization performance similar to bigger ones (e.g. , , ,). Hence, clustering could potentially be used to identify less effective layers at the end, which could be safely removed to reduce computational complexity. To investigate this, the number of hidden layers is doubled by adding additional 3-neuron layers at the end, assuming that these are potentially prune-able. Additionally, early stopping is performed to have a more realistic scenario. The resulting IPs can be seen in Figure 8. The goal is to remove layers that have similar trajectories from the network, e.g. layers 5, 6 and 7 in Figure 8a.
This process is repeated until there are no similar layers left, and after each pruning step we re-plot the IP. Table 2 presents the scores after each pruning step. Both network types showed prunable layers until all added layers had been removed, and none of the pruning actions resulted in a decreased score. We are aware that this is a very much designed scenario, but it shows that this clustering-based pruning might be worthy of further investigation. It has been reaffirmed that the information plane only seems to track clustering of the activations; we will now revisit and summarise the previous findings. We have highlighted the effect of mixing activation functions. Adding ReLU-layers before TanH-layers changes the behaviour of the TanH-layers towards the one often observed for ReLU-layers. With clustering in mind, the reason can be that in the beginning of training ReLU-layers often have 0 as activation, which is then transmitted to the TanH neurons. No matter what the weight of the connection is, the TanH neuron is going to receive a 0. This will lead to a clustering of the TanH activations in the 0-bin, which will lead to an initially reduced mutual information with the input and the output. The longer training goes on, the more the ReLU-layer will usually diversify its activations, which in turn will also lead to a diversification of the TanH activations. Hence I(X; T) and I(T; Y) will increase. When there is a TanH-layer in front of a ReLU-layer, there will initially be many negative activations sent to the ReLU-layer, since TanH ranges from -1 to 1, and this will force more activations of the ReLU-layer to be 0. Random weight initialization will sometimes make this effect stronger or weaker (see Appendix B). This will lead to an initial decrease in the MI before the network stabilizes and learns to adjust the weights accordingly. Therefore, the composition of the network will influence the clustering in the network. On the one hand there seems to be a connection between clustering in the output layer and early stopping; on the other hand this does not apply to global minima. The results on the MNIST dataset have shown that this connection does not generalize across datasets. This is another indication that the original claims do not generally hold, even when translated from compression to clustering. Nonetheless, the pruning experiments have shown that clustering might be useful for NN design. This paper studies the information plane as a neural network analysis tool. We have looked into the influence of different bin sizes for binning estimation, which has led to a detailed explanation of why certain behaviour occurs in the information plane, and thereby to new strong evidence that the information plane only tracks clustering, as previously suggested. The usefulness of measuring clustering has been investigated using early stopping and perfect stopping, but we have not been able to generalise the findings across different datasets. Clustering can be used to design a NN in terms of pruning, which might be worthy of further investigation. The information plane holds value as a measure of clustering and could potentially lead to advancements in Deep Learning. One aspect that has not been part of the discussion so far is that, in contrast to non-linearly saturating activation functions like TanH, which have no binning during the real training process, ReLU in fact has a bin.
The 0-bin could actually lead to a loss of mutual information, because the injectiveness of the activation function gets lost (it is not invertible anymore) and the mutual information is no longer bound to be constant or infinite. Therefore, networks with ReLU could experience a form of compression. ReLU does in general show better generalization capabilities than TanH, which could partially support the claim that compressed neural networks generalize better. A well known problem of ReLUs is called "dying ReLUs", which could be a case of "too high" compression that disturbs the mapping between input and output. Taking out the binning of ReLUs, as in LeakyReLUs, is almost always favorable compared to standard ReLUs in terms of generalization. Since LeakyReLUs restore the invertibility of the activation function and therefore prevent compression, this also indicates that compression does not necessarily lead to better generalization in DNNs. How this can be explained in detail remains a task for future investigations. The following addresses some necessary aspects of information theory, in order to then briefly introduce the IB method and excerpts of the debate about its applicability in deep learning. Information theory is a mathematical way to represent the transmission and processing of information, mostly founded on the work of Claude Shannon. Information theory is of interest when trying to understand neural networks, since in a neural network the information of an input X about an output Y gets propagated through the network, which attempts to learn this mapping and to then generalize it to unseen data. For the latter, two important concepts, "entropy" and "mutual information", have to be understood; they are briefly outlined in the following, after which the IB method is summarised. Entropy is a term coined by Clausius. It originates from thermodynamics and is a measure of disorder in a system, or of how uncertain events are. It denotes the average amount of information needed to describe a random variable. For a discrete random variable X, it can be expressed the following way:

H(X) = -\sum_{x} p(x) \log_2 p(x),     (1)

where p(x) is the probability mass function and the result is given in bits of information. Conditional entropy is the entropy of a random variable conditioned on knowledge of another random variable. Between two distributions X and Y it is calculated as:

H(X|Y) = -\sum_{x,y} p(x,y) \log_2 p(x|y).     (2)

An important property of conditional entropy is that H(X|Y) = 0 if and only if X is a deterministic function of Y. Relative entropy, also known as Kullback-Leibler divergence, is a measure of distance between two distributions. It can also be interpreted as how inefficient it is to assume the probability distribution q if the real one is p: if one falsely assumes q, the average description length changes to H(p) + D(p||q), which is the sum of the entropy and the Kullback-Leibler divergence. The Kullback-Leibler divergence is defined as follows:

D(p||q) = \sum_{x} p(x) \log_2 \frac{p(x)}{q(x)}.     (3)

It is always non-negative and equals 0 only if p = q. The other relevant metric is Mutual Information (MI). MI measures how much information one variable carries about another. It is therefore a measure of dependency, and in contrast to the linear correlation coefficient it also captures dependencies that do not show up in the covariance. An important property is that it is invariant to reparametrizations: for variables X and Y, if there is a direct invertible mapping between X and a reparametrized form X' of it, then I(X; Y) = I(X'; Y).
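Before giving the formula for MI, the definitions so far can be sanity-checked numerically. The sketch below (ours) computes H(X), H(X|Y) and D(p||q) from an explicit joint probability table.

```python
import numpy as np

def entropy(p):
    """H(X) = -sum p(x) log2 p(x), ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def conditional_entropy(pxy):
    """H(X|Y) = -sum p(x,y) log2 p(x|y) for a joint table pxy[x, y]."""
    py = pxy.sum(axis=0)                       # marginal p(y)
    h = 0.0
    for x in range(pxy.shape[0]):
        for y in range(pxy.shape[1]):
            if pxy[x, y] > 0:
                h -= pxy[x, y] * np.log2(pxy[x, y] / py[y])
    return h

def kl(p, q):
    """D(p||q) = sum p(x) log2(p(x)/q(x)); assumes q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# X is a deterministic function of Y here, so H(X|Y) = 0.
pxy = np.array([[0.5, 0.0],
                [0.0, 0.5]])
print(entropy(pxy.sum(axis=1)))     # H(X) = 1 bit
print(conditional_entropy(pxy))     # 0 bits
print(kl(np.array([0.5, 0.5]), np.array([0.9, 0.1])))   # > 0
```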
MI can be calculated as follows:

I(X; Y) = \sum_{x,y} p(x,y) \log_2 \frac{p(x,y)}{p(x)\,p(y)} = H(X) - H(X|Y).     (4)

Mutual information can also be used as a measure of independence between two variables, as it is 0 only if the two variables are strictly independent. Shannon's information theory focused on the transmission of information in signal processing. Signal transmission is usually subject to some form of loss, as not 100% of the information reaches its recipient. A related principle is the Data Processing Inequality (DPI), which states that the information content of some input cannot be increased through data manipulation (e.g. by reparameterization). It is possible that one does not need all of the information to reconstruct the initial message; only the relevant part of the information has to reach the recipient. This means that the original message can be compressed until it only conveys this relevant part, which still allows a direct mapping back to the original message (it is invertible). Sometimes one wants to compress information because information transfer is expensive (e.g. sending a JPEG image is a lot faster and cheaper than sending a raw file). The standard way to determine how compressed a message is, is to calculate the mutual information I(X; X'), where X' denotes the compressed message and X the original (see eq. 4). The standard way to express how far a message can be compressed is rate-distortion theory, which can be used to derive the optimal compression for a given distortion rate. The smallest possible representation is called a "minimal sufficient statistic"; it allows mapping the output back to the input without mistakes. Tishby et al. introduced a way to not only compress the message, but to compress it such that it retains a maximum of its information about an output Y, measured through the mutual information of the input with the output, I(X; Y). Thereby X' becomes a minimal sufficient statistic of X with respect to Y, i.e. the least complex mapping of X capturing the mutual information I(X; Y). This method is called the IB method. The main advantage is that this optimization problem has an exact solution, the Information Bound, in the IP (see appendix E, fig. 16 for examples). It has further been asserted that deep neural networks can be interpreted as Markov chains (a series of memoryless stochastic events), since each layer depends only on the previous layer's output. Thus, the DPI would apply, and each additional layer (ignoring single neurons) would have at most equal or less (mutual) information about input and output than the previous one. On this basis it is assumed that a DNN compresses relevant information, much like the IB method, down to something close to a minimal sufficient statistic, and it is claimed that the IB limit, the Information Bound, serves as a natural comparator of DNN architecture performance - the closer to the IB Bound, the more optimal the architecture. These examples show that the trajectory of the information plane is very susceptible to how the weights are initialised. Concerning the results for MNIST using binning (see figure 11), one can see that the general trend of the layer movement in the IP is the same as found for the other dataset. Another difference is that the mixture networks are less susceptible to the different activation functions in the higher layers: there is no noticeable influence if ReLU is in the higher layers (see the plots in figure 12a).
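Returning to eq. 4, a small sketch (ours) computes I(X;Y) from a joint table and also checks the reparametrization invariance mentioned above: relabeling the values of X through an invertible map leaves I(X;Y) unchanged.

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x) p(y)) )."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log2((pxy / (px * py))[mask]))

pxy = np.array([[0.3, 0.1],
                [0.1, 0.5]])
i1 = mutual_information(pxy)
i2 = mutual_information(pxy[::-1])   # invertible relabeling of X (row swap)
print(i1, i2)                        # identical values
```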
For leading TanH-layers there is still a significant influence perceivable in the early periods at the bottom for the green and the yellow layer (see the plots in fig. 12b). The biggest difference is that no distinct behaviour for early stopping can be found on this dataset (see fig. 15). Hence, the connection between the paths in the information plane and early stopping does not seem to be a general one. What does generalize perfectly, though, is that the smaller the bin size used to estimate mutual information, the closer the layers tend to be to the maximum point in the top right. Hence the next section investigates the reasons for this. Figures 13 and 14 also show that the bins empty out and redistribute themselves in the same way as for the other dataset. It is obvious that these models are very overfitted. What is shown in the plots is, on the one hand, that the model already reached a very good accuracy after roughly 1000 iterations, which then simply continues to be very high. On the other hand, the cross-entropy loss starts to increase around the same region. This means that the model is actually losing its confidence: it still assigns almost all the samples correctly as before, but is much more likely to change its mind and assign a sample to the wrong class. Therefore, the model is worse at 8000 iterations than it was at 1000. Hence, the maximum compression, which is usually reached at the maximum epoch, cannot be equated with good generalization.

G The original authors use a binning process to estimate the probability distribution of the activations of the different layers with TanH. The idea is to separate the continuous activation range into discrete bins. An example is shown in Figure 18, where a doubly linearly saturating function is split into 12 bin compartments. For activation functions with known bounds, like TanH, these bounds are used to define the "span-width", i.e. the outer borders of the domain. For functions without defined bounds, these are approximated using the maximum and minimum occurring in the samples. For each epoch and layer, the recorded activations are then allocated to the bins, and the number of samples in each bin is typically used to estimate the probability distribution. This is a very common approach, as it is easy to use. Each sample of the inputs is used with all of its feature values as a whole: one does not look at the probability of a feature having a certain manifestation, but at the probability of the sample being this feature combination as a whole. To do so, the feature manifestations of each sample get "lined up" as a binary array, and the array's probability is the one actually used. The same happens with the output array and the now discrete (binned) values of the activations. With these, the respective probabilities P(X), P(Y) and P(T) can be calculated. The conditional probabilities P(T, X) and P(T, Y) are calculated using the inverse of the X and Y arrays and their respective, already calculated probabilities. The process is as follows:
1. Take the activation data where the inverse of X equals the manifestation of X (meaning where the inverse equals the index i, for all indices of the probability vector P(X)).
2. Convert these activations into continuous binary arrays as before.
3. Calculate the probability of these arrays as P(T, X).
With the now estimated probabilities, one can calculate the entropies using the formulas displayed in equations 1 and 2, and with these one can calculate mutual information using equation 4.
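Putting the pieces of this procedure together, the following is a compact reconstruction (our sketch under the stated assumptions, not the authors' code) of the binning estimator for I(X;T) and I(T;Y), using the identity I(A;B) = H(A) + H(B) - H(A,B).

```python
import numpy as np

def discrete_mi(a, b):
    """I(A;B) for two arrays of row-states (shape: samples x dims)."""
    def probs(z):
        _, inv, cnt = np.unique(z, axis=0, return_inverse=True,
                                return_counts=True)
        return inv, cnt / len(z)
    ia, pa = probs(a)
    ib, pb = probs(b)
    _, cj = np.unique(np.stack([ia, ib], axis=1), axis=0, return_counts=True)
    pj = cj / len(a)
    h = lambda p: -np.sum(p * np.log2(p))
    return h(pa) + h(pb) - h(pj)      # I = H(A) + H(B) - H(A,B)

def layer_mi(x, t, y, n_bins=30, lo=-1.0, hi=1.0):
    """Estimate (I(X;T), I(T;Y)) by binning the activations T."""
    edges = np.linspace(lo, hi, n_bins + 1)
    t_binned = np.digitize(t, edges)
    return discrete_mi(x, t_binned), discrete_mi(t_binned, y)

# Hypothetical data: 12 binary input features, a 4-neuron TanH layer, labels.
rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=(1000, 12))
t = np.tanh(x @ rng.normal(size=(12, 4)))
y = (x.sum(axis=1) % 2).reshape(-1, 1)
print(layer_mi(x, t, y))
```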
We give a detailed explanation of the trajectories in the information plane and investigate its usage for neural network design (pruning).
Formal understanding of the inductive bias behind deep convolutional networks, i.e. the relation between the network's architectural features and the functions it is able to model, is limited. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning, and use it for obtaining novel theoretical observations regarding the inductive bias of convolutional networks. Specifically, we show a structural equivalence between the function realized by a convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which facilitates the use of quantum entanglement measures as quantifiers of a deep network's expressive ability to model correlations. Furthermore, the construction of a deep ConvAC in terms of a quantum Tensor Network is enabled. This allows us to perform a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in its underlying graph. We demonstrate a practical outcome in the form of a direct control over the inductive bias via the number of channels (width) of each layer. We empirically validate our findings on standard convolutional networks which involve ReLU activations and max pooling. The description of a deep convolutional network in well-defined graph-theoretic tools and the structural connection to quantum entanglement are two interdisciplinary bridges that are brought forth by this work. A central factor in the application of machine learning to a given task is the restriction of the hypothesis space of learned functions known as inductive bias. In deep convolutional networks, inductive bias manifests itself in architectural features such as the number of layers, the number of channels per layer, and more (BID17). Formal understanding of the inductive bias behind convolutional networks is limited - the assumptions encoded into these models, which seem to form an excellent prior knowledge for different types of data (e.g. BID16; BID15; van den Oord et al.), are for the most part a mystery. An important aspect of the influence that a certain architectural feature has on the inductive bias is its effect on the network's ability to model correlations between regions of its input. In this regard, one typically considers partitions that divide input regions into disjoint sets, and asks how far the function realized by the network is from being separable with respect to these partitions (BID5). For example, BID5 show that when separability is measured through the algebraic notion of separation-rank, deep Convolutional Arithmetic Circuits (ConvACs) (BID7) support exponential (in network size) separation-ranks for certain input partitions, while being limited to polynomial separation-ranks for others. ConvACs are a special class of convolutional networks, characterized by linear activations and product pooling, which have served a key role in theoretical analyses of convolutional networks by virtue of their algebraic structure. In this work, we draw upon formal similarities between how physicists describe a system of many particles as a quantum mechanical wave function, and how machine learning practitioners map a high-dimensional input (e.g. an image) to a set of output labels through a deep network. In particular, we show that there is a structural equivalence between a function modeled by a ConvAC and a many-body quantum wave function, which relies on their underlying tensorial structure.
This allows employment of the well-established physical notion of quantum entanglement measures, which subsumes other algebraic notions of separability such as the separation-rank mentioned above, for the analysis of correlations modeled by deep convolutional networks. Importantly, quantum entanglement is used by physicists as prior knowledge to form compact representations of many-body wave functions in what is known as Tensor Networks (TNs) (Östlund and Rommer; BID11). In the domain of machine learning, a network in the form of a ConvAC is effectively a compact representation of a multi-dimensional array related to the convolutional weights. This has been analyzed to date via tensor decompositions, where the representations are based on linear combinations of outer-products between lower-order tensors (BID7). A TN, on the other hand, is a way to compactly represent a higher-order tensor through inner-products among lower-order tensors, which allows a natural representation of TNs through an underlying graph. Although the fundamental language is different, we show that a ConvAC can be mapped to a TN, and thus a graph-theoretic setting for studying functions modeled by deep convolutional networks is brought forth. In particular, notions of max-flow/min-cut are shown to convey important meaning. The results we present connect the inductive bias of deep convolutional networks to the number of channels in each layer, and indicate how these should be set in order to satisfy prior knowledge on the task at hand. Specifically, the ability of a ConvAC to represent correlations between input regions is shown to be related to a min-cut over all edge-cut sets that separate the corresponding input nodes in the associated TN. Such results enable one to avoid bottle-necks and adequately tailor the network architecture through application of prior knowledge. Our results are theoretically proven for a deep ConvAC architecture; their applicability to a conventional deep convolutional network architecture, which involves ReLU activations and max pooling, is demonstrated through experiments. Some empirical reasoning regarding the influence of the channel numbers on the network's performance has been suggested, mainly regarding the issue of bottle-necks, which is naturally explained via our theoretical analysis below. Such insights on the architectural design of deep networks are new to the machine learning literature, and rely on TN bounds recently derived in the physics literature, referred to as 'quantum min-cut max-flow' (BID8). The mapping we present between ConvACs and TNs indicates new possibilities for the use of graph theory in deep networks, where min-cut analysis could be just the beginning. Additionally, the connections we derive to quantum entanglement and quantum TNs may open the door to further well-established physical insights regarding correlation structures modeled by deep networks. The use of TNs in machine learning has appeared in an empirical context, in which a matrix product state (MPS) TN architecture was trained to perform supervised learning tasks on the MNIST data-set. Additionally, there is a growing interest in the physics community in RBM based forms for variational many-body wave functions (e.g. BID1). BID2 present a theoretical mapping between RBMs and TNs which allows them to connect the entanglement bounds of a TN state to the expressiveness of the corresponding RBM. We provide below the minimal tensor analysis required for following the analyses of ConvACs and TNs that are carried out in this paper.
The core concept in tensor analysis is a tensor, which may be thought of as a multi-dimensional array. The order of a tensor is defined to be the number of indexing entries in the array, which are referred to as modes. The dimension of a tensor in a particular mode is defined as the number of values that may be taken by the index in that mode. If A is a tensor of order N and dimension M_i in each mode i ∈ [N], its entries are denoted A_{d_1...d_N}, where the index in each mode takes values between 1 and the appropriate dimension, d_i ∈ [M_i]. Suppose A is a tensor of order N, and let (A, B) be a partition of [N] := {1, ..., N}, i.e. A and B are disjoint subsets of [N] whose union covers the entire set. The matricization of A w.r.t. the partition (A, B), denoted ⟦A⟧_{A,B}, is essentially the arrangement of the tensor elements as a matrix whose rows correspond to A and columns to B (see appendix A for the exact definition). A TN (see the overview in Orús) is a weighted graph, where each node corresponds to a tensor whose order is equal to the degree of the node in the graph. Accordingly, the edges emanating out of a node, also referred to as its legs, represent the different modes of the corresponding tensor. The weight of each edge in the graph is equal to the dimension of the appropriate tensor mode.

(Figure 2: The Convolutional Arithmetic Circuit (ConvAC) network, BID7.)

Moving on to the connectivity properties of a TN, edges which connect two nodes in the TN represent an operation between the two corresponding tensors. An index which represents such an edge is called a contracted index, and the operation of contracting that index is a summation over all of the values it can take. An index representing an edge with one loose end is called an open index. The tensor represented by the entire TN, whose order is equal to the number of open indices, can be calculated by summing over all of the contracted indices in the network. In FIG0, a TN corresponding to the operation of multiplying a vector v ∈ R^{r_1} by a matrix M ∈ R^{r_2×r_1} is depicted. The computation is performed by summing over the only contracted index, k. Since there is only one open index, d, the result of contracting the network is an order-1 tensor (a vector) u ∈ R^{r_2} which upholds u = Mv. Though we use below the contraction of indices in a more elaborate TN, this operation can essentially be viewed as a generalization of matrix multiplication. When describing the quantum mechanical properties of a system composed of many interacting particles, referred to as a many-body quantum system, physicists are required to employ functions which are able to express an elaborate relation between the different particles. Similarly, machine learning tasks require functions with the ability to express a complex relation between many input elements, e.g. many pixels in an image. In this section, we formulate this analogy. Our construction will be based on the ConvAC architecture introduced by BID7, illustrated in FIG17. The ConvAC is a deep convolutional network that operates similarly to a regular convolutional network, only with linear activations and product pooling layers (which introduce the non-linearity) instead of the more common non-linear activations (e.g. ReLU) and average/max pooling. ConvACs are closely related to SimNets (BID3; BID6), and their underlying operations lend themselves to mathematical analyses based on measure theory and tensor analysis.
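These tensor-analysis notions can be illustrated concretely. The sketch below (ours, in NumPy) builds a matricization of an order-4 tensor w.r.t. a partition of its modes (the exact row/column ordering convention is an assumption), and contracts the matrix-times-vector TN of FIG0 with einsum.

```python
import numpy as np

def matricize(A, row_modes):
    """Arrange an order-N tensor as a matrix whose rows correspond to the
    modes in `row_modes` and whose columns correspond to the rest."""
    col_modes = [m for m in range(A.ndim) if m not in row_modes]
    perm = list(row_modes) + col_modes
    rows = int(np.prod([A.shape[m] for m in row_modes]))
    return np.transpose(A, perm).reshape(rows, -1)

A = np.arange(2 * 3 * 2 * 3).reshape(2, 3, 2, 3)
print(matricize(A, row_modes=[0, 2]).shape)   # (4, 9)

# The TN of FIG0: contracting the single shared index k of M and v
# generalizes matrix multiplication, u_d = sum_k M_{dk} v_k.
M = np.random.rand(5, 4)
v = np.random.rand(4)
u = np.einsum('dk,k->d', M, v)
print(np.allclose(u, M @ v))                  # True
```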
From an empirical perspective, ConvACs work well in many practical settings, e.g. for optimal classification with missing data (Sharir et al.) and for compressed networks (BID6). Importantly, through the concept of generalized tensor decompositions, a ConvAC can be transformed to a standard convolutional network with ReLU activation and average/max pooling, which laid the foundation for extending its proof methodologies to such ConvNets (BID4). This deep learning architecture was chosen for our analysis below due to its underlying tensorial structure, which resembles the quantum many-body wave function, as will soon be shown. The input space of the network, denoted by X = (x_1, ..., x_N), can be thought of as an image, where each x_j corresponds to a local patch from that image. The Y network outputs, denoted by h_y(x_1, ..., x_N) for y ∈ [Y], are shown in BID7 to have the following form:

h_y(x_1, ..., x_N) = ⟨A^y, A^{rank-1}⟩ = \sum_{d_1,...,d_N=1}^{M} A^y_{d_1...d_N} \prod_{j=1}^{N} f_{\theta_{d_j}}(x_j),     (1)

where A^y and A^{rank-1} are tensors of order N and dimension M in each mode. The entries of the conv-weights tensor A^y are given by polynomials in the network's convolutional weights, a^{l,j,γ} (see FIG17 and BID7). The entries of A^{rank-1} are given by the application of the M linearly independent representation functions {f_{θ_d}}_{d=1}^{M} on the input patches, which are an initial mapping of the inputs to an M-dimensional feature space. We now turn to a brief presentation of the methods with which physicists describe the quantum mechanical properties of a many-body system (see appendix B for a more detailed introduction). A state of a system, which is a complete description of a physical system, is given in quantum mechanics as a wave function, denoted by |ψ⟩. We limit our discussion to states which reside in finite dimensional Hilbert spaces, as these are at the heart of our analogy to convolutional networks. We discuss the case of N particles, each corresponding to a local Hilbert space H_j for j ∈ [N] such that ∀j: dim(H_j) = M. Denoting an orthonormal basis of the local Hilbert space by {|ψ_d⟩}_{d=1}^{M}, the many-body wave function |ψ⟩ ∈ H = ⊗_{j=1}^{N} H_j can be written as:

|ψ⟩ = \sum_{d_1,...,d_N=1}^{M} A_{d_1...d_N} |ψ_{d_1}⟩ ⊗ ··· ⊗ |ψ_{d_N}⟩,     (2)

where |ψ_{d_1}⟩ ⊗ ··· ⊗ |ψ_{d_N}⟩ is a basis vector of the M^N dimensional Hilbert space H, and the coefficients tensor A_{d_1...d_N} is the tensor holding the corresponding coefficients. We will tie between the function realized by a ConvAC given in eq. 1 and the many-body quantum wave function given in eq. 2. First, we consider a special case of N particles which exhibit no quantum correlations (to be formulated in section 4 below). The state of such a system is called a product state, and can be written down as a single tensor product of local states |φ^j⟩ ∈ H_j: |ψ^{ps}⟩ = |φ^1⟩ ⊗ ··· ⊗ |φ^N⟩. By expanding each local state in the respective basis, |φ^j⟩ = \sum_{d_j=1}^{M} v^{(j)}_{d_j} |ψ_{d_j}⟩, the product state assumes a form similar to eq. 2:

|ψ^{ps}⟩ = \sum_{d_1,...,d_N=1}^{M} A^{ps}_{d_1...d_N} |ψ_{d_1}⟩ ⊗ ··· ⊗ |ψ_{d_N}⟩,

with the entries of its coefficients tensor given by: A^{ps}_{d_1...d_N} = \prod_{j=1}^{N} v^{(j)}_{d_j}. If we compose each local state |φ^j⟩ such that its projection on the local basis vector equals v^{(j)}_{d_j} = f_{θ_{d_j}}(x_j), then the inner product between the many-body quantum state |ψ⟩ and the tailored product state |ψ^{ps}⟩ is equal to:

⟨ψ^{ps}|ψ⟩ = \sum_{d_1,...,d_N=1}^{M} A_{d_1...d_N} \prod_{j=1}^{N} f_{θ_{d_j}}(x_j),

reproducing eq. 1 for a single class y, as A^{ps} = A^{rank-1} by construction. This ties between the function realized by a convolutional network and that which a many-body wave function models. Specifically, the conv-weights tensor is analogous to the coefficients tensor of the many-body wave function, while the input to the convolutional network is analogous to the constructed product state.
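The structural equivalence can be verified numerically on a tiny example. Below (a sketch under the conventions above, with randomly drawn stand-ins for the representation values f_{θ_d}(x_j)) we compute h_y once by directly contracting A^y as in eq. 1, and once as the overlap with the tailored product-state coefficients tensor.

```python
import numpy as np

N, M = 4, 3
rng = np.random.default_rng(0)
A_y = rng.normal(size=(M,) * N)           # conv-weights tensor for one class

# Hypothetical representation values: f[j, d] stands for f_{theta_d}(x_j).
f = rng.normal(size=(N, M))

# Eq. 1: h_y = sum_{d1..dN} A^y_{d1..dN} prod_j f_{theta_dj}(x_j).
h_y = np.einsum('abcd,a,b,c,d->', A_y, f[0], f[1], f[2], f[3])

# The same value as an overlap with the product-state coefficients tensor
# A^{ps}_{d1..dN} = prod_j v^{(j)}_{dj}, with v^{(j)} = f[j].
A_ps = np.einsum('a,b,c,d->abcd', f[0], f[1], f[2], f[3])
print(np.allclose(h_y, np.sum(A_y * A_ps)))   # True
```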
In the following sections, we will use this analogy to acquire means of analyzing the expressiveness of a convolutional network via the properties of its underlying tensor. The structural connection between the many-body wave function and the function realized by a ConvAC, presented in the previous section, creates an opportunity to employ well-established physical insights and tools for analyzing the inductive bias of convolutional networks. We present in this section the concept of quantum entanglement measures, and use it to motivate and extend previously suggested means for quantifying correlations of a deep convolutional network. In BID5, the algebraic notion of separation-rank is used as a tool for measuring correlations modeled by a function between two disjoint parts of its input. Let f(·) be a function over x_1, ..., x_N, and let (A, B) be a partition of [N]. The separation-rank of f(·) w.r.t. (A, B) measures the strength of correlation that f(·) models between input elements corresponding to A ({x_i}_{i∈A}) and those corresponding to B ({x_j}_{j∈B}). If f(·) is separable w.r.t. (A, B), meaning there exist functions g(·) and h(·) such that f(x_1, ..., x_N) = g((x_i)_{i∈A}) · h((x_j)_{j∈B}), then under f(·) there is absolutely no correlation between the inputs of A and those of B. In this case, the separation-rank is equal to 1 by definition. In general, the separation rank of f(·) w.r.t. (A, B) is defined to be the minimal number of summands that together give f(·), where each summand is separable w.r.t. (A, B). A higher separation rank indicates a larger deviation from separability, i.e. a stronger interaction (correlation) modeled between the two sides of the partition. The analysis of separation ranks allows control over the inductive bias when designing a deep network architecture - the network can be designed such that characteristic correlations in the input are modeled, i.e. partitions that split correlated regions have high separation ranks. In the physics domain, special attention is given to the inter-particle correlation structure characterizing a many-body wave function, as it has broad implications regarding the physical properties of the examined system. We present below the concept of quantum entanglement measures, which is widely used by physicists as a quantifier of correlations in a many-body quantum system. Remarkably, this approach for quantifying correlations is very similar to the above presented tool of the separation-rank, which in fact corresponds to a particular quantum entanglement measure. Consider a partition of N particles labeled by integers [N], which splits them into two disjoint subsystems A and B. Let H^A and H^B be the Hilbert spaces corresponding to the particles in subsystems A and B, respectively. In what is referred to as a 'Schmidt decomposition', the many-body quantum wave function in eq. 2 can be written as (see appendix B.1 for the derivation):

|ψ⟩ = \sum_{α=1}^{r} λ_α |φ^A_α⟩ ⊗ |φ^B_α⟩,     (5)

where λ_1 ≥ ··· ≥ λ_r are the singular values of the matricization ⟦A⟧_{A,B}, and {|φ^A_α⟩}_{α=1}^{r}, {|φ^B_α⟩}_{α=1}^{r} are r vectors in the bases of H^A and H^B, respectively, obtained by a singular value decomposition. Eq. 5 represents the N-particle wave function in terms of a sum of tensor products between two disjoint parts of it. Each summand in eq. 5 is a separable state w.r.t. the partition (A, B), which is analogous to the separable function in the above discussion. Intuitively, as above, the more correlated two sides of a partition are, the more 'complicated' the function describing their relation should be.
Essentially, a measure of entanglement w.r.t. the partition (A, B) is a quantity that represents the difference between the state in question and a state that is separable w.r.t. this partition. There are several different measures, such as the entanglement entropy. This method for quantifying quantum correlations can now be readily transferred into the machine learning domain. Utilizing the structural analogy that was established in section 3, the measures of entanglement constitute an instrument for quantifying the correlations that a convolutional network can model. Specifically, we have shown the conv-weights tensor to be analogous to the coefficients tensor of the many-body wave function; thus the entanglement measures can be analogously defined using the singular values of a matricization of the conv-weights tensor. Since it was shown by BID5 that the separation-rank is equal to the rank of the matricized conv-weights tensor, it is precisely equal to the Schmidt number. The analogy to physics suggests that correlation measures more sensitive than the separation rank may be borrowed, providing a more sensitive algebraic perspective on the hypothesis space of convolutional networks, which takes into account the relative magnitudes of ⟦A⟧_{A,B}'s non-zero singular values and not merely their number. Physicists have a rich tool-set for exploiting knowledge regarding quantum entanglement measures for the design of computational representations of quantum wave functions. We are now in a position to borrow such tools, and use them for the design of convolutional networks. In particular, we will establish a relation between the correlations modeled by a ConvAC and the widths of its hidden layers, and make use of these relations for controlling the inductive bias of the network. In the previous section, we have seen that the coefficients or conv-weights tensor A_{d_1...d_N}, which has M^N entries, encapsulates the information regarding the correlations of the many-body quantum wave function or of the function realized by a ConvAC. The curse of dimensionality manifests itself in the exponential dependence on the number of particles or image patches. In a quantum many-body setting, this renders impractical the ability to investigate or even store a wave function of more than a few dozen interacting particles. A common tool used to tackle this problem in the physics community is a Tensor Network, which allows utilizing prior knowledge regarding correlations when attempting to represent an exponentially complicated wave function with a polynomial amount of resources. In appendix C we provide a thorough introduction to TNs, which were briefly introduced in section 2. In this section, we draw inspiration from the physics approach and present a construction of a ConvAC as a TN. This construction will allow us to demonstrate how adequately tailoring the number of channels in each layer of the deep network can enhance its expressivity, by fitting the form of the function realized by it to given correlations of the input. In effect, we show how the parameters of the ConvAC can be most efficiently distributed given prior knowledge on the nature of the input, which amounts to matching the inductive bias to the task at hand.
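Under this analogy, the entanglement measures just discussed can be computed directly from a conv-weights tensor (our sketch): matricize w.r.t. the partition, take the singular values, and read off the Schmidt number (the separation rank) and the entanglement entropy.

```python
import numpy as np

def entanglement(A, part_A):
    """Schmidt number and entanglement entropy of A w.r.t. a mode partition."""
    part_B = [m for m in range(A.ndim) if m not in part_A]
    mat = np.transpose(A, list(part_A) + part_B).reshape(
        int(np.prod([A.shape[m] for m in part_A])), -1)
    s = np.linalg.svd(mat, compute_uv=False)
    p = s**2 / np.sum(s**2)                    # normalized Schmidt spectrum
    p = p[p > 1e-12]
    return np.count_nonzero(s > 1e-12), -np.sum(p * np.log2(p))

# A product (separable) tensor has Schmidt number 1 and zero entropy ...
u, v = np.random.rand(4), np.random.rand(4)
prod = np.einsum('i,j,k,l->ijkl', u, v, u, v)
print(entanglement(prod, part_A=[0, 1]))       # (1, ~0.0)

# ... while a random tensor is (generically) maximally Schmidt-ranked.
print(entanglement(np.random.rand(4, 4, 4, 4), part_A=[0, 1])[0])   # 16
```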
The TN corresponding to the deep ConvAC of FIG17, with pooling windows of size 2 and N = 8, is depicted in the accompanying figure (see appendix D for full details of the construction). The round (order-2) nodes in each layer represent matrices holding the convolutional weights of that layer, and the triangle nodes correspond to a special tensor that hosts 1's on its super-diagonal and 0's elsewhere, effectively enforcing the same-channel pooling attribute of the network. The tensor A^y_{d_1...d_N} is obtained upon summation over all the indices which correspond to internal edges, leaving open the external edges which correspond to d_1, ..., d_N and y. As mentioned above, a TN is a weighted graph, and the weights marked next to each edge in this TN are equal by construction to the number of channels in the corresponding ConvAC layer l, denoted r_l. This last equivalence will allow us to draw a direct relation between the number of channels in each layer of a deep ConvAC and the functions it is able to model. Accordingly, it will allow us to provide prescriptions regarding the layer widths for the design of a network that is meant to support known input correlations. Our main result, presented in theorem 1, relies on one of the most recent advances in the study of the quantitative connection between quantum entanglement and TNs, namely 'quantum min-cut max-flow' (BID8). The key accomplishment that the TNs tool brings forth is the ability to apply graph-theoretic tools to a deep convolutional network. Specifically, we tie the network's ability to model correlations between two disjoint input regions A and B, as measured by the Schmidt entanglement measure, to the minimal value, over all cuts separating A from B, of the multiplication of the cut edges' weights (the multiplicative minimal cut).

Theorem 1. Consider the conv-weights tensor A^y of the deep ConvAC depicted in FIG17, with pooling windows of size 2. Assume that the channel numbers across the layers are all powers of the same integer, and suppose we randomize the network weights by some continuous distribution. Then, with probability 1, the rank of the matricization ⟦A^y⟧_{A,B} (the Schmidt measure w.r.t. (A, B)) is equal to the multiplicative minimal cut separating A from B in the respective TN.

Theorem 1 leads to practical implications regarding the construction of a deep network architecture when there is prior knowledge on the task at hand. If one wishes to construct a deep ConvAC that is expressive enough to model an intricate correlation structure according to some partition, it is advisable to choose the channel numbers such that the network is able to support such correlations, by ensuring that all the cuts separating these two parts in the corresponding TN have high weights. For example, consider the left-right partition, in which A and B hold the left and right input patches, respectively. The multiplicative minimal cut weight in this case equals min(r_{L-1}, r_{L-2}^2, ..., r_{L-k}^{2^{k-1}}, ..., M^{N/2}), where L := log_2 N (in the example given in FIG2, L = 3). We see that choosing a small number of channels for the deeper layers can create an undesired 'shortcut' which harms the expressiveness of the network, in a way that prevents it from modeling the long-ranged correlations which correspond to this partition, present for example in symmetric face images. Alternatively, considering the interleaved partition, where A and B hold the odd and even input patches, respectively, the multiplicative minimal cut weight equals min(r_0^{N/4}, M^{N/2}) - dependent only on the first layers' channel numbers, and exponential in N. The partitions mentioned above represent two extreme cases that correspond to the shortest- and longest-ranged correlations. However, the min-cut result applies to any partition of the inputs, so that results regarding the layer widths can be established for any intermediate length-scale of correlations.
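Theorem 1 can be explored numerically. The sketch below (assuming the networkx library; the tree here is a simplified stand-in for the full ConvAC TN of appendix D) computes the multiplicative minimal cut by running an ordinary min-cut on log-transformed bond dimensions, since minimizing a product of edge weights is equivalent to minimizing the sum of their logarithms.

```python
import math
import networkx as nx

def mult_min_cut(G, A, B):
    """Multiplicative min-cut between leaf sets A and B."""
    H = nx.DiGraph()
    for u, v, d in G.edges(data=True):
        c = math.log(d['dim'])
        H.add_edge(u, v, capacity=c)        # model the undirected edge as
        H.add_edge(v, u, capacity=c)        # two arcs of equal capacity
    for a in A:
        H.add_edge('s', a)                  # missing capacity = infinite
    for b in B:
        H.add_edge(b, 't')
    cut_value, _ = nx.minimum_cut(H, 's', 't')
    return round(math.exp(cut_value))

# A binary-tree TN with N = 8 leaves; bond dims: inputs M, then r0, r1.
M, r0, r1 = 3, 4, 5
G = nx.Graph()
for j in range(8):                          # leaf -> level-0 node
    G.add_edge(('leaf', j), ('n0', j // 2), dim=M)
for j in range(4):                          # level-0 -> level-1
    G.add_edge(('n0', j), ('n1', j // 2), dim=r0)
for j in range(2):                          # level-1 -> root
    G.add_edge(('n1', j), ('n2', 0), dim=r1)

left = [('leaf', j) for j in range(4)]
right = [('leaf', j) for j in range(4, 8)]
print(mult_min_cut(G, left, right))         # min(r1, r0**2, M**4) = 5
```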
For example, the relevant factors that contribute to the min-cut between partitions (A, B) in which both A and B have contiguous segments of a certain length ξ are M, r_0, ..., r_{log_2 ξ}. This is in fact a generalization of the treatment above, with ξ = 1 for the interleaved partition and ξ = N/2 for the left-right partition, and can be understood by flow considerations in the graph underlying the TN: a cut that is located above a certain sub-branch cannot assist in cutting the flow between A and B vertices that reside within that sub-branch. Thus, the addition of more parameters to layers l such that l > log_2 ξ would result in an increase of the capacity of edges in the TN which will not belong to the min-cut. The observation presented in the previous paragraph has practical implications. For a data-set with features of a characteristic size D (e.g. in a two-dimensional digit classification task, D could be the size of the digits that are to be classified), partitions of length scales ξ < D are guaranteed to separate between different parts of a feature placed in any input location. In order to classify a feature correctly, an elaborate function modeling a strong dependence between its different parts must be realized by the network. As discussed above, this means that a high measure of entanglement w.r.t. partitions that separate the feature must be supported by the network, and theorem 1 allows us to describe this measure of entanglement in terms of a min-cut in the TN graph. The following 'rule of thumb' is thus implied - the channel numbers up to layer l = log_2 D are more important than those of deeper layers, and it is therefore advisable to concentrate more parameters (in the form of more channels) in these levels. Additionally, an analysis of the min-cut in the ConvAC TN shows that among the more important layers l = 1, ..., log_2 D, deeper ones need to be wider, as is apparent for example in the above expression of the minimal cut weight for the high-low (left-right) partition. In a more general task it may be hard to point out a single most important length scale D; however, the results presented in this section can be viewed as an incentive to develop adequate means of characterizing the most relevant data correlations for different tasks. The min-cut analysis on the TN representing a deep ConvAC translates prior knowledge on how correlations among input variables (e.g. image patches) are modeled into the architectural design of the number of channels per layer in a ConvAC. In this section, we demonstrate empirically that the theoretical findings established above for the deep ConvAC apply to a regular convolutional network architecture which involves the more common ReLU activations and average or max pooling. Two tasks were designed, one with a short characteristic length, to be referred to as the 'local task', and the other with a long characteristic length, to be referred to as the 'global task'. Both tasks are based on the MNIST data-set and consist of 64 × 64 black images on top of which resized binary MNIST images were placed in random positions, to make sure we account for correlation distances without biasing towards a particular location in the image. For the local task, the digits were shrunk to 8 × 8 images, while for the global task they were enlarged to size 32 × 32. Note that both tasks are more challenging than the standard MNIST task, and that the local task is even more challenging than the global one.
We designed two network architectures to tackle these two tasks, differing in their channel ordering scheme. Each architecture was designed to better match the correlation structure of one of the above tasks, in accordance with the analysis presented in the previous section. In both networks, the first layer is a representation layer - a 3 × 3 (stride 1) shared convolutional layer. Following it are 6 hidden layers, each with 1 × 1 shared convolution kernels followed by ReLU activations and 2 × 2 max pooling (stride 2). Classification in both networks was performed through Y = 10 outputs, with the prediction following the strongest activation. The difference between the two networks is in the channel ordering - in the wide-base (WB) network the layers are wider in the beginning and narrow down towards the deeper layers, while in the wide-tip (WT) network they follow the opposite trend. Specifically, we set a parameter r to determine each pair of such networks according to WB: [10; 4r; 4r; 2r; 2r; r; r; 10] and WT: [10; r; r; 2r; 2r; 4r; 4r; 10] (the channel numbers from left to right go from shallow to deep). According to the above results, the choice of increasing widths towards the deeper layers makes the WT network fit the global task, in which all layers are important. Similarly, the results dictate that the WB network fits the local task, in which the shallower layers are more important. The specific channel arrangement ensures that the amount of learned parameters is equal for both configurations.

(Figure 4: Applying ReLU networks with max pooling to the global and local classification tasks.)

Fig. 4 shows the results of applying both the WB and WT networks to the local and global tasks. Each task consisted of 60000 training images and 10000 test images, in correspondence with the MNIST database. Indeed, the WB network significantly outperforms the WT network on the local task, whereas a clear opposite trend can be seen for the global task. This complies with our theoretical analysis, according to which the WB network, which holds more parameters in the shallow layers, should be able to support short correlation lengths of the input, whereas the WT network, in which the deeper layers are wider, is predicted to put its focus on longer correlation lengths. The fact that the global task gets higher accuracies for all choices of r is unsurprising, as it is clearly an easier task. Overall, these experiments demonstrate how prior knowledge regarding a task at hand may be used to tailor the inductive bias of a deep convolutional network by appropriately designing the layer widths. We have shown how phenomena that were indicated by the theoretical analysis presented in this paper in the context of ConvACs manifest themselves on the most prevalent and successful convolutional network architectures (ReLU activation, max pooling). The construction of a deep ConvAC in terms of a TN brought forth the main theoretical achievements of this paper. This method enabled us to carry out a graph-theoretic analysis of a convolutional network and to tie its expressiveness to a minimal cut in the graph characterizing it. Our construction began with a structural equivalence between the function realized by a ConvAC and a quantum many-body wave function.
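For concreteness, the two experimental architectures described above can be sketched in PyTorch as follows (our reconstruction from the description; the value of r, the padding, and the read-out layer are our own choices, not the authors' code).

```python
import torch
import torch.nn as nn

def make_net(widths):
    """widths: channel list [representation, 6 hidden widths, outputs]."""
    layers = [nn.Conv2d(1, widths[0], kernel_size=3, stride=1, padding=1)]
    in_ch = widths[0]
    for out_ch in widths[1:7]:               # six hidden layers
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=1),
                   nn.ReLU(),
                   nn.MaxPool2d(kernel_size=2, stride=2)]
        in_ch = out_ch
    layers += [nn.Conv2d(in_ch, widths[7], kernel_size=1),
               nn.AdaptiveAvgPool2d(1), nn.Flatten()]
    return nn.Sequential(*layers)

r = 32                                       # an assumed width parameter
wide_base = make_net([10, 4*r, 4*r, 2*r, 2*r, r, r, 10])
wide_tip = make_net([10, r, r, 2*r, 2*r, 4*r, 4*r, 10])

x = torch.randn(2, 1, 64, 64)                # 64 x 64 task images
print(wide_base(x).shape)                    # torch.Size([2, 10])
```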
This facilitated the transfer of mathematical and conceptual tools employed by physicists, such as the tool of TNs and the concept of 'entanglement measures', providing well-defined quantifiers for a deep network's expressive ability to model correlations between regions of its input. By employing these tools, we were able to present theoretical observations regarding the role that the number of channels in each layer fulfills in the overall expressiveness of a deep convolutional network, and how the channel numbers affect its ability to model given input correlations. Furthermore, practical implications were presented for the construction of a deep network architecture when there is prior knowledge regarding the input correlations. Apart from the direct results discussed above, two important interdisciplinary bridges emerge from this work. The connection we drew between the min-cut in the graph representation of a ConvAC and network expressivity measures may constitute an initial example of employing the connection to TNs for the application of graph-theoretic measures and tools to the analysis of the function realized by a deep convolutional network. The second bridge is the mathematical connection between the two fields of quantum physics and deep learning. The field of quantum TNs is a rapidly evolving one, and the established construction of a successful deep learning architecture in the language of TNs may allow applications and insights to be transferred between the two domains. For example, the tree-shaped TN that was shown in this work to be equivalent to a deep convolutional network has been known in the physics community for nearly a decade to be inferior to another deep TN architecture by the name of MERA, in its expressiveness and in its ability to model correlations. The MERA TN constitutes an exemplar case of how the TNs/deep-learning connection established in this work allows a bi-directional flow of tools and intuition. The MERA architecture introduces overlaps by adding 'disentangling' operations prior to the pooling operations, which, translated to terms of deep learning, effectively mix activations that are intended to be pooled in different pooling windows. Physicists have a good grasp of how these specific overlapping operations allow a most efficient representation of functions that exhibit high correlations at all length scales. Accordingly, a new view of the role of overlaps in the high expressivity of deep networks, as effectively 'disentangling' intricate correlations in the data, can be established. In the other direction, as deep convolutional networks are the most empirically successful machine learning architectures to date, physicists may benefit from trading their current 'overlaps by disentangling' scheme for the use of overlapping convolutional windows (proven by Sharir and Shashua to contribute exponentially to the expressive capacity of neural networks), in their search for expressive representations of quantum wave functions. Overall, we view this work as an exciting bridge for the transfer of tools and ideas between fields, and hope it will reinforce a fruitful interdisciplinary discourse. We provide below a short introduction to the notation used by physicists when describing the quantum mechanical properties of a many-body system. We follow the relevant derivations in Hall and related references, referring the interested reader to these sources for a more comprehensive mathematical introduction to quantum mechanics.
A state of a system, which is a complete description of a physical system, is given in quantum mechanics as a ray in a Hilbert space (to be defined below). The relevant Hilbert spaces in quantum mechanics are vector spaces over the complex numbers. We restrict our discussion to vector spaces over R, as the properties related to complex numbers are not required for our analysis and do not affect it. Physicists denote such vectors in the 'ket' notation, in which a vector ψ is denoted by |ψ⟩ ∈ H. The Hilbert space H has an inner product, denoted by ⟨φ|ψ⟩, that maps a pair of two vectors in H to a scalar. This inner product operation is also referred to as 'projecting |ψ⟩ onto |φ⟩'. A ray is an equivalence class of vectors that differ by multiplication by a nonzero scalar. For any nonzero ray, a representative of the class, |ψ⟩, is conventionally chosen to have unit norm: ⟨ψ|ψ⟩ = 1. The 'bra' notation ⟨φ| is used for the 'dual vector', which formally is a linear mapping from vectors to scalars, defined as |ψ⟩ ↦ ⟨φ|ψ⟩. We can intuitively think of a 'ket' as a column vector and of a 'bra' as a row vector. Relevant Hilbert spaces can be infinite dimensional or finite dimensional. We limit our discussion to quantum states which reside in finite dimensional Hilbert spaces, as these lie at the heart of our analogy to convolutional networks. Besides being of interest to us, these spaces are extensively investigated in the physics community as well. For example, the spin component of a spinful particle's wave function resides in a finite dimensional Hilbert space. One can represent a general single particle state |ψ⟩ ∈ H1, where dim(H1) = M, as a linear combination of some orthonormal basis vectors:

|ψ⟩ = \sum_{d=1}^{M} v_d |ψ_d⟩,

where v ∈ R^M is the vector of coefficients compatible with the basis {|ψ_d⟩}_{d=1}^{M} of H1, each entry of which can be calculated by the projection: v_d = ⟨ψ_d|ψ⟩. The extension to the case of N particles, each with a wave function residing in a local finite dimensional Hilbert space H_j for j ∈ [N] (e.g. N spinful particles), is readily available through the tool of a tensor product. In order to define a Hilbert space which is the tensor product of the local Hilbert spaces, H := ⊗_{j=1}^{N} H_j, we will specify its scalar product. Denoting the scalar product in each H_j by ⟨·|·⟩_j, the scalar product in the tensor product of the finite dimensional Hilbert spaces is given by

⟨φ^1 ⊗ ··· ⊗ φ^N | ψ^1 ⊗ ··· ⊗ ψ^N⟩ = \prod_{j=1}^{N} ⟨φ^j|ψ^j⟩_j,

extended by linearity. For simplicity, we set the dimensions of the local Hilbert spaces H_j to be equal for all j, i.e. ∀j: dim(H_j) = M. In the spin example, this means that the particles have the same spin, e.g. for N electrons (spin 1/2), M = 2. Denoting as above the orthonormal basis of the local Hilbert space by {|ψ_d⟩}_{d=1}^{M}, the many-body quantum wave function |ψ⟩ ∈ H = ⊗_{j=1}^{N} H_j can be written as

|ψ⟩ = \sum_{d_1,...,d_N=1}^{M} A_{d_1...d_N} |ψ_{d_1}⟩ ⊗ ··· ⊗ |ψ_{d_N}⟩,

reproducing eq. 2. Consider a partition of the above described system of N particles labeled by integers [N] := {1, ..., N}, which splits it into two disjoint subsystems A and B with A ∪ B = [N] and corresponding Hilbert spaces H^A and H^B (note that H = H^A ⊗ H^B, with equality obtained upon a permutation of the local spaces that is compliant with the partition (A, B)). The wave function of eq. 2 can then be written as

|ψ⟩ = \sum_{α,β} (⟦A⟧_{A,B})_{α,β} |ψ^A_α⟩ ⊗ |ψ^B_β⟩,

where {|ψ^A_α⟩} and {|ψ^B_β⟩} are bases for H^A and H^B, respectively, and ⟦A⟧_{A,B} is the matricization of A w.r.t. the partition (A, B). Let us denote the maximal rank of ⟦A⟧_{A,B} by r := min(dim(H^A), dim(H^B)). A singular value decomposition of ⟦A⟧_{A,B} results in the following form (also referred to as the Schmidt decomposition):

|ψ⟩ = \sum_{α=1}^{r} λ_α |φ^A_α⟩ ⊗ |φ^B_α⟩,

where λ_1 ≥ ··· ≥ λ_r are the singular values of ⟦A⟧_{A,B}, and {|φ^A_α⟩}_{α=1}^{r}, {|φ^B_α⟩}_{α=1}^{r} are r vectors in new bases for H^A and H^B, respectively, obtained by the decomposition.
It is noteworthy that since |ψ⟩ is conventionally chosen to be normalized, the singular values uphold \sum_α |λ_α|^2 = 1; however, this constraint can be relaxed for our needs. A Tensor Network (TN) is formally represented by an underlying undirected graph that has some special attributes; we elaborate on this formal definition in appendix E.1. In the following, we give a more intuitive description of a TN, which is nonetheless exact and required for our construction of the ConvAC TN. The basic building blocks of a TN are tensors, which are represented by nodes in the network. The order of a tensor represented by a node is equal to its degree - the number of edges incident to it, also referred to as its legs. FIG6(a) shows three examples: 1) a vector, which is a tensor of order 1, is represented by a node with one leg; 2) a matrix, which is a tensor of order 2, is represented by a node with two legs; 3) accordingly, a tensor of order N is represented in the TN as a node with N legs. In a TN, each edge is associated with a number called the bond dimension. The bond dimension assigned to a specific leg of a node is simply the dimension of the corresponding mode of the tensor represented by this node (see the definitions of a mode and its dimension in section 2). A TN is a collection of such tensors represented by nodes, with edges that can either be connected to a node on one end and loose on the other end, or connect between two nodes. Each edge in a TN is represented by an index that runs between 1 and its bond dimension. An index representing an edge which connects between two tensors is called a contracted index, while an index representing an edge with one loose end is called an open index. The set of contracted indices will be denoted by K = {k_1, ..., k_P} and the set of open indices will be denoted by D = {d_1, ..., d_N}. The operation of contracting the network is defined by summation over all of the P contracted indices. An example of the contraction of a simple TN is depicted in FIG6(b): there, a TN corresponding to the operation of multiplying a vector v ∈ R^{r_1} by a matrix M ∈ R^{r_2×r_1} is contracted by summing over the only contracted index, k. As there is only one open index, d, the result of contracting the network is an order-1 tensor (a vector) u ∈ R^{r_2} which upholds u = Mv. In FIG6(c) a somewhat more elaborate example is illustrated, where a TN composed of order-2 and order-3 tensors represents a tensor of order 5. This network represents a decomposition known as a tensor train in the tensor analysis community, or a matrix product state (MPS) (see the overview in, e.g., Orús) in the condensed matter physics community, which arranges order-3 tensors in such a 'train' architecture and allows the representation of an order-N tensor with a linear (in N) amount of parameters. The MPS exemplifies a typical desired quality of TNs: the decomposition of a higher-order tensor into a set of sparsely interconnected lower-order tensors was shown (BID0) to greatly diminish effects related to the curse of dimensionality discussed above.

(Figure 6: The original Convolutional Arithmetic Circuits network as presented by BID7.)

We begin by reviewing tensor decompositions of the conv-weights tensor, shown in BID7 to be equivalent to the shallow and deep versions of the ConvAC network given in the main text and reproduced for convenience in fig. 6.
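The MPS example can be made concrete (our sketch): an order-4 tensor is represented by four cores whose chained bond indices are contracted with einsum, using a number of parameters linear in the order instead of exponential.

```python
import numpy as np

# MPS / tensor-train cores for an order-4 tensor with open dims d, bond r.
d, r = 3, 2
rng = np.random.default_rng(0)
cores = [rng.normal(size=(d, r)),       # first core:  (d1, k1)
         rng.normal(size=(r, d, r)),    # middle core: (k1, d2, k2)
         rng.normal(size=(r, d, r)),    # middle core: (k2, d3, k3)
         rng.normal(size=(r, d))]       # last core:   (k3, d4)

# Contracting the chain of bond indices recovers the full order-4 tensor,
# with 2*d*r + 2*d*r*r parameters instead of d**4 entries.
A = np.einsum('ai,ibj,jck,kd->abcd', *cores)
print(A.shape)   # (3, 3, 3, 3)
```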
The CP decomposition of the conv-weights tensor corresponds to a ConvAC depicted in fig. 6 with one hidden layer, which collapses the entire spatial structure through global pooling - a shallow ConvAC. Explicitly, the CP decomposition of the order-N conv-weights tensor of a specific class y is a sum of rank-1 tensors, each of which is attained by a tensor product of N weight vectors:

A^y = \sum_{k=1}^{K} v^y_k \; a^{k,1} ⊗ a^{k,2} ⊗ ··· ⊗ a^{k,N},     (10)

where v^y ∈ R^K and a^{k,j} ∈ R^M. The deep version of fig. 6, where the pooling windows between convolutional layers are of minimal size, corresponds to a specific tensor decomposition of A^y which is a restricted version of a hierarchical Tucker decomposition, referred to in short as the HT decomposition. The restriction is related to the fact that the pooling scheme of the ConvAC architecture presented in fig. 6 involves only entries from the same channel, while in the general HT decomposition pooling operations would involve entries from different channels. For brevity of notation, we present the expressions for a scenario where the input patches are aligned along a one-dimensional line (this can also correspond to a one-dimensional signal, e.g. sound or text), and the pooling windows are of size 2. The extension to the two-dimensional case follows quite trivially, and was presented in BID5. Under the above conditions, the decomposition corresponding to a deep ConvAC can be defined recursively by (BID7):

φ^{1,j,γ} = \sum_{α=1}^{r_0} a^{1,j,γ}_α \; a^{0,2j-1,α} ⊗ a^{0,2j,α},
...
φ^{l,j,γ} = \sum_{α=1}^{r_{l-1}} a^{l,j,γ}_α \; φ^{l-1,2j-1,α} ⊗ φ^{l-1,2j,α},
...
A^y = \sum_{α=1}^{r_{L-1}} a^{L,1,y}_α \; φ^{L-1,1,α} ⊗ φ^{L-1,2,α}.     (11)

The decomposition in eq. 11 recursively constructs the conv-weights tensors {A^y}_{y∈[Y]} by assembling the vectors {a^{l,j,γ}} in an incremental fashion. This is done in the form of tensor products, which are the natural form for tensor decompositions. The index l stands for the level in the decomposition, corresponding to the l-th layer of the ConvAC network given in fig. 6. j represents the 'location' in the feature map of level l, and γ corresponds to the individual tensor in level l and location j. The index r_l is referred to as the level-l rank, and is defined to be the number of tensors in each location of level l (we denote for completeness r_L := Y). In the ConvAC network given in fig. 6, r_l is equal to the number of channels in the l-th layer - this is important in our analysis of the role played by the channel numbers. The tensor φ^{l,j,γ} is of order 2^l, and we assume for simplicity that N - the order of A^y - is a power of 2 (this is merely a technical assumption, also made in prior work; it does not limit the generality of the analysis). The parameters of the decomposition are the final level weights {a^{L,1,y} ∈ R^{r_{L-1}}}_{y∈[Y]}, the intermediate levels' weights {a^{l,j,γ} ∈ R^{r_{l-1}}}, and the first level weights {a^{0,j,γ} ∈ R^M}.

(FIG7: The TN equivalent of the ConvAC of fig. 6 in its shallow form, i.e. with one hidden layer followed by a global pooling operation which collapses the feature maps into Y different class scores. The matrices A^{(j)} hold the convolutional weights of the hidden layer and the matrix G holds the weights of the final dense layer. The central δ tensor effectively enforces the same-channel pooling, as can be seen from its form in eq. 12 and its role in the calculation of this TN given in eq. 13.)

In order to construct the TN equivalent of the shallow ConvAC, we define the order-(N+1) tensor δ ∈ R^{K×···×K}, referred to as the δ tensor, as follows:

δ_{k_1...k_{N+1}} = 1 if k_1 = k_2 = ··· = k_{N+1}, and 0 otherwise,     (12)

with k_j ∈ [K] ∀j ∈ [N+1], i.e. its entries are equal to 1 only on the super-diagonal and are zero otherwise. We shall draw nodes which correspond to such δ tensors as triangles in the TN, to remind the reader of the restriction given in eq. 12.
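A small numerical sketch (ours) of eq. 10, building the conv-weights tensor of a shallow ConvAC as a sum of K rank-1 terms:

```python
import numpy as np

N, M, K, Y = 4, 3, 5, 2
rng = np.random.default_rng(1)
G = rng.normal(size=(Y, K))        # final-layer weights, v^y = G[y]
A_j = rng.normal(size=(N, K, M))   # A_j[j, k] is the row vector a^{k,j}

# Eq. 10:  A^y = sum_k v^y_k  a^{k,1} (x) a^{k,2} (x) ... (x) a^{k,N}
A_y = np.einsum('k,ka,kb,kc,kd->abcd',
                G[0], A_j[0], A_j[1], A_j[2], A_j[3])
print(A_y.shape)   # (3, 3, 3, 3): the order-N conv-weights tensor of class 0
```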
Let $G \in \mathbb{R}^{Y \times K}$ be a matrix holding the convolutional weight vector of the final layer, $v^y \in \mathbb{R}^K$, in its $y$-th row, and let $A^{(j)} \in \mathbb{R}^{K \times M}$ be a matrix holding the convolutional weight vectors $\mathbf{a}^{k,j} \in \mathbb{R}^M$ in its $k$-th row. One can observe that, per class $y$, the $k$-th summand in eq. 10 is equal to the tensor product of the $N$ vectors residing in the $k$-th rows of all the matrices $A^{(j)}$, $j \in [N]$, multiplied by a final weight associated with class $y$. Tensors represented by nodes in the TN will have parentheses in the superscript, which denote labels such as the position $j$ in the above, to differentiate them from 'real' indices that must be taken into consideration when contracting the TN. Per convention, such 'real' indices will be written in the subscript. Having defined the above, the TN equivalent of the CP decomposition is illustrated in FIG7. Indeed, though they represent the same exact quantity, the form of eq. 10 isn't apparent at a first glance of the network portrayed in FIG7. Essentially, the TN equivalent of the CP decomposition involves contractions between the matrices $A^{(j)}$, $G$, and the $\delta$ tensor, as can be seen in the expression representing it:

$$\mathcal{A}^y_{d_1 \ldots d_N} = \sum_{k_1,\ldots,k_{N+1}} \delta_{k_1 \ldots k_{N+1}}\, G_{y k_{N+1}} \prod_{j=1}^{N} A^{(j)}_{k_j d_j}. \tag{13}$$

The role of the $\delta$ tensor in eq. 13 can be observed as 'forcing' elements of the $k$-th row of any matrix $A^{(j)}$ to be multiplied only by elements of the $k$-th rows of the other matrices, which in effect enforces same channel pooling. If one were to switch the $\delta$ tensor in eq. 13 with a general tensor $G_{k_1 \ldots k_N} \in \mathbb{R}^{K\times\cdots\times K}$, a TN equivalent of an additional acclaimed decomposition would be attained, namely the Tucker decomposition. Similar to other tensor decompositions, the Tucker decomposition is more commonly given in an outer product form:

$$\mathcal{A}^y = \sum_{k_1=1}^{K} \cdots \sum_{k_N=1}^{K} G_{k_1 \ldots k_N}\; \mathbf{a}^{k_1,1} \otimes \cdots \otimes \mathbf{a}^{k_N,N}.$$
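The same-channel pooling role of the $\delta$ tensor in eq. 13 can be checked numerically: the continuation below reuses delta_tensor, A_mats, G and A_y from the previous sketch, contracts the TN of eq. 13 with einsum, and confirms that it reproduces the CP sum of eq. 10.

```python
import numpy as np

# Contract the TN of eq. 13 and check it reproduces the CP sum of eq. 10.
# Reuses delta_tensor, A_mats, G, A_y and the sizes N=4, M, K, Y defined above.
delta = delta_tensor(N + 1, K)   # order N+1 = 5 super-diagonal tensor

# k1..k4 contract with A^(1)..A^(4), k5 contracts with G; d1..d4 stay open.
A_tn = np.einsum('abcde,ye,af,bg,ch,di->yfghi',
                 delta, G, A_mats[0], A_mats[1], A_mats[2], A_mats[3])
assert np.allclose(A_tn[0], A_y)

# Swapping `delta` for a dense core here would instead give a Tucker
# decomposition: the delta tensor is exactly what forces the k-th row of every
# A^(j) to multiply only k-th rows of the others (same channel pooling).
```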
We describe below a TN corresponding to the deep ConvAC calculation, given by eq. 1. The ConvAC calculation is constructed as an inner product between two tensors: the conv-weights tensor $\mathcal{A}^y$, which is given in eq. 11 in terms of a tensor decomposition, and the rank-1 data tensor holding the products of the representation entries. Considering the upper block of FIG8, it is worth noting that it is not a sketch of a TN but the actual full description, compliant with the graph notations described in appendix C. Accordingly, the two-legged nodes represent matrices, where each matrix $A^{(l,j)} \in \mathbb{R}^{r_l \times r_{l-1}}$ (with $r_{-1} := M$) is constructed such that it holds the convolutional weight vector $\mathbf{a}^{l,j,\gamma} \in \mathbb{R}^{r_{l-1}}$, $\gamma \in [r_l]$, in its $\gamma$-th row. The triangle node appearing between levels $l-1$ and $l$ represents an order 3 tensor $\delta \in \mathbb{R}^{r_{l-1}\times r_{l-1}\times r_{l-1}}$, obeying eq. 12. The $\delta$ tensor is the element which dictates the same channel pooling in this TN construction. As mentioned above, the lower block in FIG8 is the TN representing the data tensor, which assumes exactly the form that the coefficients tensor of a product state assumes when represented as a TN. As can be seen in FIG8, a final contraction of the indices $d_1, \ldots, d_8$ results in the class scores vector calculated by the ConvAC, $h_y(x_1, \ldots, x_8)$. The calculation performed by a one-dimensional ConvAC for a general $N$ (s.t. $\log_2 N \in \mathbb{N}$) is given by the recursively defined TN representation shown in FIG12. $v^{(l,j)} \in \mathbb{R}^{r_l}$ is a vector of the actual activations generated during a computation in the $l$-th level of the network shown in fig. 6. Recall that $r_{-1} := M$, and that $v^{(0,j)}$ is obtained from the vector in the representation layer at location $j$ (see FIG8). To demonstrate that this TN indeed defines the calculations performed by a ConvAC, we show that the equality in FIG12 holds, namely that for $d \in [r_l]$:

$$v^{(l,j)}_d = \sum_{k_1,k_2,k_3=1}^{r_{l-1}} \delta_{k_1 k_2 k_3}\, A^{(l,j)}_{d k_3}\, v^{(l-1,2j-1)}_{k_1}\, v^{(l-1,2j)}_{k_2} = \sum_{k_1,k_2=1}^{r_{l-1}} \Big(\sum_{k_3=1}^{r_{l-1}} \delta_{k_1 k_2 k_3}\, A^{(l,j)}_{d k_3}\Big)\, v^{(l-1,2j-1)}_{k_1}\, v^{(l-1,2j)}_{k_2}. \tag{14}$$

In the first equality of eq. 14 we simply followed the TN prescription given in appendix C and wrote a summation over all of the contracted indices on the left hand side of FIG12, and in the second equality we used the definition of matrix multiplication. According to the construction of $A^{(l,j)}$ given in appendix D.2, $A^{(l,j)}_{\gamma\alpha} = a^{l,j,\gamma}_{\alpha}$, where the weight vector $\mathbf{a}^{l,j,\gamma} \in \mathbb{R}^{r_{l-1}}$ was introduced in eq. 11. Thus, eq. 14 is reduced to:

$$v^{(l,j)}_d = \sum_{k_1,k_2,k_3=1}^{r_{l-1}} \delta_{k_1 k_2 k_3}\, a^{l,j,d}_{k_3}\, v^{(l-1,2j-1)}_{k_1}\, v^{(l-1,2j)}_{k_2}. \tag{15}$$

Finally, by the definition of the $\delta$ tensor, all summands with non-identical indices vanish and we obtain the required expression for the operation of the ConvAC:

$$v^{(l,j)}_d = \sum_{k=1}^{r_{l-1}} a^{l,j,d}_{k}\, v^{(l-1,2j-1)}_{k}\, v^{(l-1,2j)}_{k} = \Big\langle \mathbf{a}^{l,j,d},\; v^{(l-1,2j-1)} \odot v^{(l-1,2j)} \Big\rangle, \tag{16}$$

where an activation in the $d$-th feature map of the $l$-th level holds the multiplicative pooling of the results of two activation vectors from the previous layer, convolved with the $d$-th convolutional weight vector for that layer. Applying this procedure recursively is exactly the conv→pool→...→conv→pool scheme that lies at the heart of the ConvAC operation (fig. 6). Recalling that $r_L := Y$, the output of the network is given by:

$$h_y(x_1, \ldots, x_N) = v^{(L,1)}_y = \Big\langle \mathbf{a}^{L,1,y},\; v^{(L-1,1)} \odot v^{(L-1,2)} \Big\rangle. \tag{17}$$

To conclude this section, we have presented a translation of the computation performed by a ConvAC to a TN. The convolutional weights are arranged as matrices (two-legged nodes) placed along the network, and the same channel pooling characteristic is made available due to three-legged $\delta$ tensors in a deep network, and an $N+1$-legged $\delta$ tensor in a shallow network. Finally, and most importantly for our analysis, the bond dimension of each level in the TN representing the ConvAC is equal to $r_l$, which is the number of feature maps (i.e. the number of channels) comprising that level in the corresponding ConvAC architecture.
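The recursion of eqs. 14-17 can be run directly as code. The sketch below reflects our reading of it (with the representation-layer 1x1 convolutions absorbed into the input vectors): each level applies product pooling to pairs of child activations and then the matrix $A^{(l,j)}$.

```python
import numpy as np

def convac_forward(x_reps, weights):
    """One-dimensional deep ConvAC with size-2 product pooling (eqs. 14-17):
    v^(l,j) = A^(l,j) (v^(l-1,2j-1) * v^(l-1,2j)). A sketch under our own
    naming; weights[l][j] is the matrix A^(l,j) for level l (0-indexed)."""
    v = list(x_reps)
    for level in weights:
        v = [A @ (v[2 * j] * v[2 * j + 1]) for j, A in enumerate(level)]
    assert len(v) == 1
    return v[0]   # the Y class scores h_y(x_1, ..., x_N)

# Toy instance: N = 4 patches, r0-dim inputs, one hidden level, Y classes.
rng = np.random.default_rng(0)
r0, r1, Y = 3, 5, 2
x = [rng.standard_normal(r0) for _ in range(4)]
weights = [[rng.standard_normal((r1, r0)) for _ in range(2)],
           [rng.standard_normal((Y, r1))]]
scores = convac_forward(x, weights)
```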
Below we provide upper and lower bounds on the ability of a deep ConvAC to model correlations of its inputs, as measured by the Schmidt entanglement measure (see section 4 for its definition). We address a general setting of the number of channels in a deep ConvAC. The result stated in theorem 1, which applies when all of the channel numbers in a deep ConvAC architecture are powers of some integer, is implied (specifically, by the equality of the upper bound in claim 1 and the lower bound in lemma 2 below). We begin by presenting a description of the TN as a 'legal' graph in section E.1 and move on to prove the bounds in section E.2.

FIG0: The components comprising a 'ConvAC-weights TN' $\varphi$ that describes the weights tensor $\mathcal{A}^y$ of a ConvAC are an undirected graph $G(V,E)$ and a bond dimensions function $c$. The bond dimension is specified next to each edge $e \in E$, and is given by the function $c(e)$. As shown in appendix D.2, the bond dimension of the edges in each layer of this TN is equal to the number of channels in the corresponding layer in the ConvAC. The node set in the graph $G(V,E)$ decomposes to $V = V^{tn} \,\dot\cup\, V^{inputs}$, where $V^{tn}$ (grey) are vertices which correspond to tensors in the ConvAC TN and $V^{inputs}$ (blue) are degree-1 vertices which correspond to the $N$ open edges in the ConvAC TN. The vertices in $V^{inputs}$ are 'virtual': they were added for completeness, so that $G$ can be viewed as a legal graph. The open edge emanating from the top-most tensor (marked by a dashed line) is omitted from the graph, as it does not affect our analysis below; no flow between any two input groups can pass through it.

The ability to represent a deep convolutional network (ConvAC) as a 'legal' graph is a key accomplishment that the Tensor Networks tool brings forth. Our main results rely on this graph-theoretic description and tie the expressiveness of a ConvAC to a minimal cut in the graph characterizing it, via the connection to quantum entanglement measures. This is in fact a utilization of the 'quantum min-cut max-flow' concept presented by BID8. Essentially, the quantum max-flow between $A$ and $B$ is a measure of the ability of the TN to model correlations between $A$ and $B$, and the quantum min-cut is a quantity that bounds this ability and can be directly inferred from the graph defining it, namely that of the corresponding TN. We focus on the TN that describes $\mathcal{A}^y$. To turn $\varphi$ into a graph we do the following. First, we remove the open edge associated with the output: as our analysis is going to be based on flow between groups of input vertices, no flow can pass through that open edge, and therefore removing it does not influence our analysis. Second, we add $N$ virtual vertices incident to the open edges associated with the input. Those virtual vertices are the only vertices whose degree is equal to 1 (see FIG0). The TN $\varphi$ is now described below using graph terminology:

• An undirected graph $G(V,E)$, with a set of vertices $V$ and a set of edges $E$. The set of nodes is divided into two subsets $V = V^{tn} \,\dot\cup\, V^{inputs}$, where $V^{inputs}$ are the $N$ degree-1 virtual vertices and $V^{tn}$ corresponds to tensors of the TN.
• A function $c: E \to \mathbb{N}$, associating with each edge in the graph a number $r \in \mathbb{N}$ that equals the bond dimension of the corresponding edge in the TN.

Having described the object representing the ConvAC-weights TN $\varphi$, let us define an edge-cut set with respect to a partition of the $N$ nodes of $V^{inputs}$, and then introduce a cut weight associated with such a set. An edge-cut set with respect to the partition $V_A \,\dot\cup\, V_B = V^{inputs}$ is a set of edges $C$ s.t. there exists a partition $\tilde V_A \,\dot\cup\, \tilde V_B = V$ with $V_A \subseteq \tilde V_A$, $V_B \subseteq \tilde V_B$, and $C = \{(u,v) \in E : u \in \tilde V_A,\ v \in \tilde V_B\}$. We note that this is the regular definition of an edge-cut set in a graph $G$ with respect to the partition of vertices $(V_A, V_B)$. Let $C = \{e_1, \ldots, e_{|C|}\}$ be such a set; we define its multiplicative cut weight as:

$$W_C = \prod_{i=1}^{|C|} c(e_i). \tag{18}$$

The weight definition given in eq. 18 is simply a multiplication of the bond dimensions of all the edges in a cut. FIG0 shows a pictorial demonstration of this weight definition, which is at the center of our results to come. In the following section, we use a max-flow / min-cut analysis on $\varphi$ to obtain new results on the expressivity of the corresponding deep ConvAC via measures of entanglement w.r.t. a bi-partition of its input patches, which are related to the number of channels in each layer of the ConvAC. In claim 1 below, we provide an upper bound on the ability of a deep ConvAC to model correlations of its inputs, as measured by the Schmidt entanglement measure (see section 4). This claim is closely related to attributes of TNs that are known in different forms in the literature.

Claim 1. Let $\mathcal{A}^y$ be a conv-weights tensor represented by a ConvAC-weights TN $\varphi$ with graph $G(V,E)$ and bond dimensions function $c$, and let $(V_A, V_B)$ be the partition of $V^{inputs}$ corresponding to a partition $(A,B)$ of the inputs. Then, the rank of the matricization $[\mathcal{A}^y]_{A,B}$ is no greater than $\min_C W_C$, where $C$ is a cut w.r.t. $(V_A, V_B)$ and $W_C$ is the multiplicative weight defined by eq. 18.
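Since the logarithm turns the multiplicative weight of eq. 18 into an additive capacity, $\min_C W_C$ can be computed with an off-the-shelf max-flow routine; the helper below does so with networkx (function, node and variable names are ours).

```python
import math
import networkx as nx

def min_multiplicative_cut(edges, sources, sinks):
    """Minimal multiplicative cut weight min_C W_C (eq. 18) between two input
    groups, computed by taking the log of each bond dimension as an additive
    capacity and running classical max-flow.

    edges: iterable of (u, v, bond_dim) triples describing the TN graph.
    """
    G = nx.DiGraph()
    for u, v, r in edges:
        c = math.log(r)
        # model an undirected edge as two opposing directed edges
        G.add_edge(u, v, capacity=c)
        G.add_edge(v, u, capacity=c)
    for s in sources:                       # super-source and super-sink
        G.add_edge('SRC', s, capacity=float('inf'))
    for t in sinks:
        G.add_edge(t, 'SNK', capacity=float('inf'))
    flow = nx.maximum_flow_value(G, 'SRC', 'SNK')
    return round(math.exp(flow))

# A 4-leaf tree TN with leaf bonds 4 and internal bonds 2: a cut separating
# leaves {0,1} from {2,3} can take the single internal edge, so the answer is 2.
edges = [('x0', 'a', 4), ('x1', 'a', 4), ('x2', 'b', 4), ('x3', 'b', 4),
         ('a', 'root', 2), ('b', 'root', 2)]
print(min_multiplicative_cut(edges, ['x0', 'x1'], ['x2', 'x3']))   # -> 2
```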
Proof. We will use the example shown in FIG0(a) of a general TN with arbitrary connectivity. The edges of the TN $\varphi$ are marked by the index associated with them. Any index $p \in \{d, k\}$ runs between 1 and its bond dimension, marked by $c_p$, which upholds $c_p := c(e_p)$, where $e_p \in E$ is the edge associated with the index $p$. For the given partition $(A,B)$, denote $A = \{a_1, \ldots, a_{|A|}\}$, $B = \{b_1, \ldots, b_{|B|}\}$, and let $I_A \,\dot\cup\, I_B = \{d_1, \ldots, d_N\}$ be the corresponding partition of external indices, where $I_A = \{d_{a_1}, \ldots, d_{a_{|A|}}\}$ and $I_B = \{d_{b_1}, \ldots, d_{b_{|B|}}\}$. Let $\mathcal{H}_A$ and $\mathcal{H}_B$ be the spaces corresponding to the different configurations of the indices in $I_A$ and $I_B$, respectively, their dimensions given by:

$$\dim(\mathcal{H}_A) = \prod_{d \in I_A} c_d, \qquad \dim(\mathcal{H}_B) = \prod_{d \in I_B} c_d. \tag{19}$$

In the example shown in FIG0, the graph is arranged s.t. $A$ is on the left and $B$ is on the right. The marked cut $C$ that separates $A$ from $B$ is arbitrarily chosen as a representative cut, and we denote the indices of the cut edges by $I_C = \{k_1, \ldots, k_{|C|}\}$. It is noteworthy that any index $k_i$ in the cut is allowed to be an external index, i.e. the cut is allowed to contain any number of external edges. Now, two contractions can be performed, separately contracting all the tensors to the left of the cut and all those to the right of it. We are left with two higher order tensors, $T^A$ and $T^B$ (FIG0). According to the TN in FIG0, the matricization $[\mathcal{A}]_{A,B}$ can be written as a multiplication of two matrices. Coalescing the cut indices $I_C$ into a single index $m \in [W_C]$, this can be written component-wise as:

$$[\mathcal{A}]_{(d_{a_1} \ldots d_{a_{|A|}}),\,(d_{b_1} \ldots d_{b_{|B|}})} = \sum_{m=1}^{W_C} [T^A]_{(d_{a_1} \ldots d_{a_{|A|}}),\,m}\; [T^B]_{m,\,(d_{b_1} \ldots d_{b_{|B|}})},$$

where any number of cut indices that are also external indices translate into blocks of the identity matrix on the diagonal. Finally, since this construction is true for any cut $C$ w.r.t. $(A,B)$, the rank of $[\mathcal{A}]_{A,B}$ upholds $\mathrm{rank}([\mathcal{A}]_{A,B}) \le \min_C W_C$, satisfying the claim for any general TN, and specifically for the ConvAC TN. ∎
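Claim 1 can also be probed numerically. The sketch below contracts a random assignment of the small tree TN from the previous example, matricizes it w.r.t. a balanced partition, and verifies that the rank does not exceed the multiplicative min-cut of 2.

```python
import numpy as np

# Numerical check of claim 1 on the tree TN above: contract a random
# assignment, matricize w.r.t. A = {d1, d2}, B = {d3, d4}, and compare the
# rank to the multiplicative min-cut (2 here).
rng = np.random.default_rng(1)
Ta = rng.standard_normal((4, 4, 2))    # node a: legs d1, d2 and internal leg k1
Tb = rng.standard_normal((4, 4, 2))    # node b: legs d3, d4 and internal leg k2
Tr = rng.standard_normal((2, 2))       # root: legs k1, k2

T = np.einsum('abi,cdj,ij->abcd', Ta, Tb, Tr)
rank = np.linalg.matrix_rank(T.reshape(16, 16))
print(rank)   # <= 2; with random weights the bound is typically attained
```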
The upper bound provided above alerts us when a deep ConvAC is too weak to model a desired correlation structure, according to the number of channels in each layer. Below, we provide a lower bound similar in spirit to a bound shown in BID8. Their claim is applicable to a TN with general tensors (no $\delta$ tensors), and we adapt it to the ConvAC-weights TN (which has $\delta$ tensors); this in effect ensures us that the entanglement measure cannot fall below a certain value for any specific arrangement of channels per layer.

Theorem 2. Let $\mathcal{A}^y$ be a conv-weights tensor represented by a ConvAC-weights TN $\varphi$, let $p \in \mathbb{N}$, and let $\varphi^p$ be a TN with the same connectivity as $\varphi$ in which every bond dimension is modified to equal the closest power of $p$ to its value in $\varphi$ from below. Then, for all configurations of the weights of $\varphi$ but a set of Lebesgue measure zero, the rank of the matricization $[\mathcal{A}^y]_{A,B}$ is at least $\min_C W_C$ computed on $\varphi^p$.

Theorem 2 above implies that the upper bound given in claim 1 is saturated when all of the channel numbers in a deep ConvAC architecture are powers of some integer $p$. For a general arrangement of channel numbers the upper bound is not tight, and theorem 2 guarantees that the rank will not be lower than that of any ConvAC architecture with channel numbers which are powers of some integer $p$ yet are not higher than the original ConvAC channel numbers. Even though this is the lower bound we prove, we have reason to believe the actual lower bound is much tighter. In section G, we show simulations which indicate that deviations from the upper bound are actually quite rare and unsubstantial in value. In the following we prove theorem 2. Our proof strategy is similar to the one taken in Cui et al. (BID8); however, we must deal with the restricted $\delta$ tensors present in the network corresponding to a ConvAC (the triangle nodes in FIG8). We first quote and show a few results that will be of use to us. We begin by quoting a claim regarding the prevalence of the maximal matrix rank for matrices whose entries are polynomial functions (claim 2). Next, we quote a famous graph theory result known as the undirected Menger's theorem (BID10; BID12), which relates the number of edge disjoint paths in an undirected graph to the cardinality of the minimal cut (theorem 3). After this, we show that the rank of the matricization of the tensor represented by the TN $\varphi^p$ defined in theorem 2 is a lower bound on the rank of the matricization of the tensor represented by $\varphi$ (lemma 1). Then, we prove that the upper bound in claim 1 is tight when all of the channel numbers are powers of the same integer $p \in \mathbb{N}$ (lemma 2, effectively showing theorem 1). Finally, when all the preliminaries are in place, we show how the result in theorem 2 is implied.

Claim 2. Let $A(x) \in \mathbb{R}^{M_1 \times M_2}$ be a matrix whose entries are polynomial functions of a variable vector $x \in \mathbb{R}^K$. If there exists a point $x \in \mathbb{R}^K$ s.t. $\mathrm{rank}(A(x)) \ge r$, then the set $\{x \in \mathbb{R}^K : \mathrm{rank}(A(x)) < r\}$ has zero measure (w.r.t. the Lebesgue measure over $\mathbb{R}^K$).

Proof. See Sharir et al.

Claim 2 implies that it suffices to show an assignment of the ConvAC network weights achieving a certain rank of the matricization of the conv-weights tensor in order to show that this is the rank for all configurations of the network weights but a set of Lebesgue measure zero. Essentially, this means that it is enough to provide a specific assignment that achieves the required bound in theorem 2 in order to prove the theorem. Next, we present the following well-known graph theory result:

Theorem 3 (Undirected Menger's Theorem; BID10, BID12). Let $G = (V,E)$ be an undirected graph with a specified partition $(A,B)$ of the set of degree-1 vertices. Let $MF(G)$ be the maximum number of edge disjoint paths (paths which are allowed to share vertices but not edges) in $G$ connecting a vertex in $A$ to a vertex in $B$. Let $MC(G)$ be the minimum cardinality of all edge-cut sets between $A$ and $B$. Then, $MF(G) = MC(G)$.

Proof. See e.g. BID8.

Theorem 3 will assist us in the proof of lemma 2. We will use it in order to assert the existence of edge disjoint paths in an auxiliary graph (FIG0), which we eventually utilize in order to provide the required weight assignment in lemma 2. Next, we show lemma 1, which roughly states that a tensor which 'contains' another tensor in some sense will not have a lower matricization rank than that of the 'contained' tensor.

Lemma 1. Let $\mathcal{A}^y$ be the conv-weights tensor of the deep ConvAC given in fig. 6. Let $\varphi$ be the TN corresponding to this ConvAC network, and let $\varphi^p$ be a TN with the same connectivity as $\varphi$, where all of the bond dimensions are modified to equal the closest power of $p$ to their value in $\varphi$ from below. Let $(\mathcal{A}^p)^y$ be the tensor represented by $\varphi^p$, and let there exist an assignment of all of the tensors in the network $\varphi^p$ for which $\mathrm{rank}([(\mathcal{A}^p)^y]_{A,B}) = R$. Then, $\mathrm{rank}([\mathcal{A}^y]_{A,B})$ is at least $R$ almost always, i.e. for all configurations of the weights of $\varphi$ but a set of Lebesgue measure zero.

Proof. Consider the specific assignment of all of the tensors in the network $\varphi^p$ which achieves $\mathrm{rank}([(\mathcal{A}^p)^y]_{A,B}) = R$ and leads to the resultant tensor $(\mathcal{A}^p)^y$ upon contraction of the network. Observing the form of the deep ConvAC TN presented in appendix D.2, we see that it is composed of $\delta$ tensors and of weight matrices $A^{(l,j)} \in \mathbb{R}^{r_l \times r_{l-1}}$. Recalling that the entries of the former are dictated by construction and obey eq. 12, the assignment of all of the tensors in the network $\varphi^p$ amounts to an assignment of all entries of its weight matrices. We denote the bond dimension at level $l \in [L] \cup \{-1, 0\}$ of $\varphi^p$ by $r^p_l$ (recall that we defined $r_{-1} = M$). By the definition of $\varphi^p$, this bond dimension cannot be higher than the bond dimension in the corresponding level in $\varphi$: $\forall l:\ r^p_l \le r_l$. Accordingly, the matrices in $\varphi$ do not have lower dimensions (rows or columns) than the corresponding matrices in $\varphi^p$. Thus, one can choose an assignment of the weights of all the matrices in $\varphi$ such that each matrix equals the corresponding matrix of $\varphi^p$ when observing only its first $r^p_l$ entries in each dimension, and holds zeros elsewhere; the $\delta$ tensors of $\varphi$ similarly reduce to those of $\varphi^p$ when observing only their first $r^p_l$ entries in each dimension, as is clear from the $\delta$ tensor definition in eq. 12.
Finally, we observe that this construction leads to $\mathcal{A}^y$ containing the tensor $(\mathcal{A}^p)^y$ as a hypercube in its entirety and holding zeros elsewhere, leading to $\mathrm{rank}([\mathcal{A}^y]_{A,B}) \ge R$ for this assignment, and hence, by claim 2, for all weight configurations but a set of Lebesgue measure zero. ∎

Lemma 1 basically implies that showing that the upper bound on the rank of the matricization of the deep ConvAC conv-weights tensor presented in claim 1 is tight when all of the channel numbers are powers of some integer $p$ (which we show below in lemma 2) is enough in order to prove the lower bound stated in theorem 2. It is noteworthy that lemma 2 is stated similarly to claim 1, with two differences: 1) $\min_C W_C$ appears as a lower bound on the rank of the matricization of the conv-weights tensor rather than an upper bound, and 2) all of the channel numbers are restricted to powers of the same integer $p$. That is to say, by proving this lemma we in fact show that the upper bound proven in claim 1 is tight for this quite general setting of channel numbers. In the following, we provide an assignment of indices for the tensors in $\varphi$ for which the rank of the matricization $[\mathcal{A}^y]_{A,B}$ is at least $\min_C W_C$. In accordance with claim 2, this will satisfy the lemma, as it implies this rank is achieved for all configurations of the ConvAC network weights but a set of Lebesgue measure zero.

The proof of lemma 2 is organized as follows. We begin with the construction of the TN $\varphi^*$ presented in FIG0 from the original network $\varphi$, and then show that it suffices to analyze $\varphi^*$ for our purposes. Next, we elaborate on the form that the $\delta$ tensors in $\varphi$ assume when constructed in $\varphi^*$. We then use this form to define the concept of $\delta$ restricted edge disjoint paths, which morally are paths from $A$ to $B$ that are guaranteed to be compliant with the form of a $\delta$ tensor when passing through it. Finally, we use such paths in order to provide an assignment of the indices for the tensors in $\varphi^*$ which upholds the required $\delta$ condition. Let $\mathcal{H}_A$ and $\mathcal{H}_B$, with dimensions obeying eq. 19, be the spaces corresponding to the different configurations of the indices in $I_A$ and $I_B$, respectively. We construct a TN $\varphi^*$ with a graph $G^*(V^*, E^*)$ and a bond dimensions function $c^*: E^* \to \mathbb{N}$ for which there is a one-to-one correspondence between the tensor assignments in $\varphi$ and the tensor assignments in $\varphi^*$, such that the resulting linear maps between $\mathcal{H}_A$ and $\mathcal{H}_B$ have the same rank. For each edge $e \in E$, denote $n_e := \log_p c(e)$. By the conditions of the lemma, $\forall e: n_e \in \mathbb{N}$, as $c(e)$ is an integer power of $p$ for all edges in $E$. The graph $G^*$ of the network $\varphi^*$ is constructed as follows. Starting with $G^* = (V, \emptyset)$, for each edge $e = (u,v) \in E$ we insert $n_e$ parallel edges connecting $u$ to $v$ in $G^*$, to form the edge set $E^*$. Additionally, we define the bond dimensions function of the network $\varphi^*$ to assign the value $p$ to all of the added edges, i.e. $\forall e^* \in E^*: c^*(e^*) = p$. In FIG0 an example of such a construction of $\varphi^*$ is shown for some $N = 8$ ConvAC TN. In the paragraphs below, we derive eq. 25, which shows that an analysis of $\varphi^*$ suffices for our purposes. This is intuitive in some sense, as the construction of $\varphi^*$ keeps the architecture of the network and, to some extent, the distribution of the degrees of freedom intact. As it is the key to our proof, we formulate this argument hereinafter. As each edge $e \in E$ was translated into $n_e$ edges in $E^*$, the partition $(I_A, I_B)$ of the external indices induces a corresponding partition $(I_{A^*}, I_{B^*})$ of the external indices of $\varphi^*$: an index in $\varphi^*$ corresponding to an edge $e^* \in E^*$ is in $A^*$ (resp. $B^*$) if the edge $e \in E$ from which it originated corresponded to an index that was in $A$ (resp. $B$).
This is easily understood pictorially; see FIG0. Accordingly, denote the corresponding partition of the degree-1 vertices in $G^*$ by $(V_{A^*}, V_{B^*})$. We will now show that the rank of the matricization of $\mathcal{A}$ w.r.t. the partition $(A,B)$ is equal to the rank of the matricization of $\mathcal{A}^*$ w.r.t. the partition $(A^*, B^*)$. We denote by $\tau_v$ the tensor corresponding to a vertex $v \in V$ in the network $\varphi$, and by $\tau^*_v$ the tensor corresponding to the same vertex $v$ in the network $\varphi^*$. Let $z$ be the order of $\tau_v$, and denote the set of edges in $E$ incident to $v$ by $\{e_{k_1}, \ldots, e_{k_z}\}$, where $k_1, \ldots, k_z$ are the corresponding indices. For every index $k_j$, $j \in [z]$, let $K^*_j = \{k^{*j}_1, \ldots, k^{*j}_{n_{e_{k_j}}}\}$ be the indices corresponding to the edges which were added to $\varphi^*$ in the location of $e_{k_j}$ in $\varphi$. According to the construction above, there is a one-to-one correspondence between the elements of $K^*_j$ and $k_j$, which can be written as the base-$p$ relation:

$$k_j = 1 + \sum_{t=1}^{n_{e_{k_j}}} \big(k^{*j}_t - 1\big)\, p^{\,t-1}, \tag{23}$$

where $k^{*j}_t \in [p]$ for all $t$. Thus, if one has the entries of the tensors in $\varphi^*$, the following assignment to the entries of the tensors in $\varphi$:

$$(\tau_v)_{k_1 \ldots k_z} = (\tau^*_v)_{k^{*1}_1 \ldots\, k^{*1}_{n_{e_{k_1}}} \ldots\; k^{*z}_1 \ldots\, k^{*z}_{n_{e_{k_z}}}} \tag{24}$$

would ensure:

$$\mathrm{rank}\big([\mathcal{A}]_{A,B}\big) = \mathrm{rank}\big([\mathcal{A}^*]_{A^*,B^*}\big). \tag{25}$$

Effectively, we have shown that the claim to be proved regarding $\mathrm{rank}([\mathcal{A}]_{A,B})$ can be equivalently proved for $\mathrm{rank}([\mathcal{A}^*]_{A^*,B^*})$.

The form of the $\delta$ tensor in $\varphi^*$: It is worthwhile to elaborate on the form of a tensor in $\varphi^*$ which corresponds to an order 3 $\delta$ tensor in $\varphi$. We denote by $\tau_{v_\delta} = \delta$ a $\delta$ tensor in $\varphi$, and by $\tau^*_{v_\delta}$ the corresponding tensor in $\varphi^*$. FIG0 shows an example of a transformed tensor in $\varphi^*$ that originated in an order 3 $\delta$ tensor in $\varphi$, all edges of which uphold $n_e = 2$. From eqs. 23 and 24, and from the form of the $\delta$ tensor given in eq. 12, it is evident that in this case an entry of $\tau^*_{v_\delta}$ is non-zero only when $k^{*1}_1 = k^{*2}_1 = k^{*3}_1$ and $k^{*1}_2 = k^{*2}_2 = k^{*3}_2$. In the general case, the condition for an entry of 1 in $\tau^*_{v_\delta}$ is:

$$k^{*1}_t = k^{*2}_t = k^{*3}_t \quad \forall t \in [n_e], \tag{26}$$

where $n_e = \log_p c(e)$ for any edge $e$ incident to $v$ in $G$. Hence, a tensor $\tau^*_{v_\delta}$ in $\varphi^*$ which corresponds to a $\delta$ tensor in $\varphi$ can be written as a tensor product of $n_e$ order 3, dimension $p$ $\delta$ tensors, one per index triple:

$$\tau^*_{v_\delta} = \bigotimes_{t=1}^{n_e} \delta^{(t)}, \qquad \big(\delta^{(t)}\big)_{k^{*1}_t k^{*2}_t k^{*3}_t} = \begin{cases} 1, & k^{*1}_t = k^{*2}_t = k^{*3}_t \\ 0, & \text{otherwise.} \end{cases} \tag{27}$$

$\delta$ restricted edge disjoint paths: Consider an edge-cut set in $G$ that achieves the minimal multiplicative weight over all cuts w.r.t. the partition $(V_A, V_B)$ in the graph $G$, $C_{min} \in \mathrm{argmin}_C W_C$, and consider the corresponding edge-cut set $C^*_{min}$ in $G^*$, s.t. for each edge $e \in C_{min}$ the $n_e$ edges constructed from it are in $C^*_{min}$. By the construction of $G^*$, there are exactly $L := \log_p(\min_C W_C)$ edges in $C^*_{min}$, and their multiplicative weight upholds:

$$W_{C^*_{min}} = p^{\,|C^*_{min}|} = p^{\,L} = \min_C W_C.$$

A search for a minimal multiplicative cut can generally be viewed as a classical min-cut problem when defining the maximum capacity of each edge to be the logarithm of its bond dimension. Then, a min-cut/max-flow value can be obtained classically in a graph with additive capacities, and a final exponentiation of the result provides the minimal multiplicative value of the min-cut. Since all of the bond dimensions in $\varphi^*$ are equal to $p$, such a process results in a network with all of its edges assigned capacity 1. From the application of theorem 3 on such a graph, it follows that the maximal number of edge disjoint paths between $V_{A^*}$ and $V_{B^*}$ in the graph $G^*$, i.e. paths between $V_{A^*}$ and $V_{B^*}$ that are allowed to share vertices but are not allowed to share edges, is equal to the cardinality of the minimum edge-cut set $C^*_{min}$. In our case, this number is $L$, as argued above. Denote these edge disjoint paths by $q_1, \ldots, q_L$.
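On the tensor side, the $\varphi \to \varphi^*$ construction and the correspondence of eq. 23 amount to a reshape, as the following numpy sketch demonstrates (the digit ordering convention is ours; any fixed base-$p$ ordering works).

```python
import numpy as np

# The phi -> phi* construction replaces an edge of bond dimension p**n_e by
# n_e edges of bond dimension p. On the tensor side this is just a reshape:
# a leg of dimension 8 = 2**3 becomes three legs of dimension 2, and eq. 23
# is the usual base-p correspondence between the index and the index tuple.
p, n_e = 2, 3
tau = np.arange(8 * 5).reshape(8, 5)         # order-2 tensor, first leg dim p**n_e
tau_star = tau.reshape(p, p, p, 5)           # same data, first leg split into n_e legs

k = 6                                        # a value of the original index
k_star = np.unravel_index(k, (p,) * n_e)     # the corresponding index tuple
assert tau[k, 0] == tau_star[k_star + (0,)]  # entries agree under the correspondence
```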
In accordance with the form that tensors in $\varphi^*$ corresponding to $\delta$ tensors in $\varphi$ assume (eq. 27), we introduce the concept of $\delta$ restricted edge disjoint paths between $V_{A^*}$ and $V_{B^*}$ in the graph $G^*$, which besides being allowed to share vertices but not edges, uphold the following restriction. For every $\delta$ tensor $\tau_{v_\delta}$ of order 3 in the graph $G$, with $e \in E$ a representative edge incident to $v$ in $G$, a maximum of $n_e$ such paths can pass through $v$ in $G^*$, each assigned a different number $t \in [n_e]$. The paths uphold that when passing through $v$ in $G^*$, a path assigned the number $t$ enters through an edge with index $k^{*j_{in}}_{t_{in}}$ and leaves through an edge with index $k^{*j_{out}}_{t_{out}}$ only if $t_{in} = t_{out} = t$, where no two paths can have the same $t$. This restriction imposed on the indices of $\tau^*_{v_\delta}$ in $\varphi^*$, to be called hereinafter the $\delta$ restriction, is easily understood pictorially: in FIG0, for example, the paths crossing the $\tau^*_{v_\delta}$ tensor must only contain edges of the same color in order to uphold the $\delta$ restriction. We set out to show that for the network in question, one can choose the $L$ edge disjoint paths to uphold the $\delta$ restriction. Then, a weight assignment compliant with the $\delta$ tensors in the network can be guaranteed to uphold the requirements of the lemma, despite the fact that most of the entries in the $\delta$ tensors are equal to zero. Denote the set of $n_e$ edges in $G^*$ that originated from a certain edge $e$ in $G$ by $X^*_e \subset E^*$. We first show that one can choose the $L$ edge disjoint paths s.t., in a flow directed from $V_{A^*}$ to $V_{B^*}$ w.l.o.g., there is no set of edges $X^*_e$ corresponding to any $e \in E$ for which two edges $e^*_i, e^*_j \in X^*_e \subset E^*$ belong to paths $q_i, q_j$ which flow in opposite directions. FIG0 clarifies this claim. We observe the classical max-flow in the graph $G$: when assigning to each edge $e$ a maximum capacity equal to $n_e := \log_p c(e)$, a maximum flow of $L$ is possible between $V_A$ and $V_B$ in $G$. Observing the paths in $G$ that flow w.l.o.g. from $V_A$ to $V_B$, together they can transfer a maximum capacity of $L$. Note that in $G$, these paths most certainly do not need to be edge disjoint. We argue that one can choose such paths from $V_A$ to $V_B$ in $G$ such that an integer capacity is transferred on each edge $e$. The existence of such paths in $G$ follows directly from the integral flow theorem (BID9), which states that if each edge has integral capacity, then there exists an integral maximal flow. Note that these paths must also uphold the basic rule that the sum of capacities transferred on a certain edge $e \in E$, even if this is performed via several paths, is no more than the edge's maximum capacity $n_e$.

FIG0: (a) Each edge is split into $n_e = 2$ edges of bond dimension $p$. The $\delta$ tensor structure in $\varphi$ translates into this $\tau^*_{v_\delta}$ tensor holding a non-zero entry only when the indices corresponding to all of the edges that are marked by the same color are equal to each other (eq. 27). Additionally, paths crossing this $\tau^*_{v_\delta}$ tensor must only contain edges of the same color in order to be called $\delta$ restricted edge disjoint paths. (b) There are $L$ guaranteed edge disjoint paths between $V_{A^*}$ and $V_{B^*}$. In a flow directed from $V_{A^*}$ to $V_{B^*}$ (w.l.o.g.), we argue that one can choose these paths such that they have the same flow direction in all edges in $\varphi^*$ that originate from a certain edge in $\varphi$.

One can now construct $L$ paths in $G^*$ in a recursive manner, if a maximum additive capacity for each edge $e^* \in E^*$ is similarly defined to be $\log_p c^*(e^*) = \log_p p := 1$.
Starting with a single quantum of flow along some path in $G$, construct a single path in the corresponding position in $G^*$. Each edge that is part of this path in $G^*$ will transfer exactly one quantum of flow, as that is its maximum capacity, chosen to be saturated in order to transfer the same amount of capacity that is transferred in $G$. Now, remove the full edges in $G^*$ and reduce the capacities of all edges along the original path in $G$ by one. Repeat this process until a capacity of $L$ is transferred in both graphs: since $n_e$ is the number of new edges added to $G^*$ in the place of each edge $e$, and it is also an upper bound on the integer capacity each path transfers in $G$, it follows that in $G^*$ one finds $L$ paths between $V_{A^*}$ and $V_{B^*}$ that correspond exactly to the paths transferring integer capacity in $G$, guaranteed by the integral flow theorem. These paths in $G^*$ are edge disjoint, since the edges of each path were removed from the graph when constructed. Choosing precisely these edge disjoint paths in $G^*$, one is guaranteed that the flow from $V_{A^*}$ to $V_{B^*}$ in all of the edges in $X^*_e$ that belong to these paths is in the same direction, as they originated in the same edge $e$ in $G$ that had a flow in that single specific direction from $A$ to $B$. Pictorially, since the different edges in $X^*_e$ all originate from one single edge that obviously cannot have two opposite directions of net flow, they can all be chosen to transfer flow in the same direction. Observing an order 3 $\delta$ tensor $\tau_{v_\delta}$ in $\varphi$, denote the three edges incident to $v$ in $G$ by $e_1, e_2, e_3 \in E$, and denote $n_e := n_{e_1} = n_{e_2} = n_{e_3}$. Now that we have asserted that all of the $L$ edge disjoint paths in $G^*$ may uphold the above condition, we choose the paths as such, i.e. under this choice all of the edges in each respective set (namely $X^*_{e_1}$, $X^*_{e_2}$ or $X^*_{e_3}$) pass flow from $V_{A^*}$ to $V_{B^*}$ in the same direction. In this case, a maximum of $n_e$ paths can pass through the $\delta$ tensor. This can be easily understood by the following argument. Denote a set $X^*_{e_i}$ by 'I' if the paths passing through its edges are incoming to the $\delta$ tensor in a flow from $V_{A^*}$ to $V_{B^*}$, and by 'O' if they are outgoing from the $\delta$ tensor in such a flow. W.l.o.g. we assume that $X^*_{e_1}, X^*_{e_2}$ are denoted by 'I' and $X^*_{e_3}$ is denoted by 'O'. In this case, only $n_e$ such edge disjoint paths can flow out of the $\delta$ tensor. In the opposite case, where two groups of edges out of the three are denoted by 'O' and only one group is denoted by 'I', only $n_e$ such edge disjoint paths can flow into the $\delta$ tensor. The contrary, i.e. if more than $n_e$ such paths were to cross the $\delta$ tensor, would imply a cross flow of edge disjoint paths in at least one of the sets $X^*_{e_1}, X^*_{e_2}, X^*_{e_3}$, in contradiction to this choice of paths. This provides us with the ability to distribute the paths in the following manner, which upholds the $\delta$ restriction described above. Assume w.l.o.g. that $X^*_{e_1}$ is the set for which the most edges are in the chosen edge disjoint paths. Denote by $q_1, \ldots, q_{N_2}$ the paths that include edges in $X^*_{e_1}$ and $X^*_{e_2}$, and by $q_{N_2+1}, \ldots, q_{N_2+N_3}$ the paths that include edges in $X^*_{e_1}$ and $X^*_{e_3}$. Finally, assign the index $t$ to the path $q_t$. From the statement above, it is guaranteed that $N_2 + N_3 \le n_e$. Therefore, this choice of paths is guaranteed to uphold the $\delta$ restriction defined above, which states that each path must receive a different value $t \in [n_e]$.
Specifically, this implies that the maximal number of $\delta$ restricted edge disjoint paths between $V_{A^*}$ and $V_{B^*}$ in the graph $G^*$ is $L$. With all the preliminaries in place, the proof of theorem 2 readily follows. For a specific $p$, consider the network $\varphi^p$ such as defined in theorem 2, i.e. a TN with the same connectivity as $\varphi$, where all of the bond dimensions are modified to equal the closest power of $p$ to their value in $\varphi$ from below. Let $(\mathcal{A}^p)^y$ be the tensor represented by $\varphi^p$: by lemma 2 there exists a weight assignment for which $\mathrm{rank}([(\mathcal{A}^p)^y]_{A,B})$ attains $\min_C W_C$ computed on $\varphi^p$, and by lemma 1 this value lower bounds $\mathrm{rank}([\mathcal{A}^y]_{A,B})$ for all configurations of the weights of $\varphi$ but a set of Lebesgue measure zero. ∎

The exponential depth efficiency shown in BID7 can be straightforwardly reproduced by similar graph-theoretic considerations. We show below an upper bound on the rank of the matricization of the conv-weights tensor for the case of a general pooling window. The bound implies that any number of edges in a cut that are connected to the same $\delta$ tensor will contribute their bond dimension only once to the multiplicative weight of this cut, thus effectively reducing the upper bound when many cut edges belong to the same $\delta$ tensor. This does not affect our analysis of the deep network above, as the $\delta$ tensors in that network are only three-legged (see FIG8). Therefore, in the above analyzed deep ConvAC, a cut containing more than one $\delta$ tensor leg can be replaced by an equivalent cut containing only one leg of that $\delta$ tensor, and the value of $\min_C W_C$ is unchanged. Formally, in order to apply similar considerations to the ConvAC with general sized pooling windows, such as the one presented in fig. 6, one must consider more closely the restrictions imposed by the $\delta$ tensors. To this end, we define the object underlying a ConvAC-weights TN with general sized pooling windows $\varphi$ to be composed of the following three:

• An undirected graph $G(V,E)$, with a set of vertices $V$ and a set of edges $E$. The set of nodes is divided into two subsets $V = V^{tn} \,\dot\cup\, V^{inputs}$, where $V^{inputs}$ are the $N$ degree-1 virtual vertices and $V^{tn}$ corresponds to tensors of the TN.
• A function $f: E \to [b+N]$, where $b$ is the number of $\delta$ tensors in the network. If we label each $\delta$ tensor in the network by a number $i \in [b]$, this function upholds $f(e) = i$ for every $e \in E$ that is incident to a vertex which represents the $i$-th $\delta$ tensor in the ConvAC TN. For each edge $e \in E$ incident to a degree-1 vertex, this function assigns a different number $f(e) = i$ for $i \in \{b+1, \ldots, b+N\}$. Such an edge is an external edge in the ConvAC TN, which according to the construction presented in appendix D is the only type of edge not incident to any $\delta$ tensor. In words, the function $f$ divides all the edges in $E$ into $b+N$ groups, where edges are in the same group if they are incident to the same vertex representing a certain $\delta$ tensor in the network.
• A function $c: [b+N] \to \mathbb{N}$, associating a bond dimension $r \in \mathbb{N}$ with each different group of edges defined by the set $E_i = \{e \in E: f(e) = i\}$.

Observe an edge-cut set $C$ with respect to the partition $(A,B)$ and the corresponding set $G_C = \{f(e): e \in C\}$. We denote the elements of $G_C$ by $g^C_1, \ldots, g^C_{|G_C|}$. These elements represent the different groups that the edges in $C$ belong to (by the definition of $f$, edges incident to the same $\delta$ tensor belong to the same group). We define the modified weight of such an edge-cut set $C$ as:

$$\widetilde{W}_C = \prod_{i=1}^{|G_C|} c\big(g^C_i\big). \tag{32}$$

The weight definition given in eq. 32 can be intuitively viewed as a simple multiplication of the bond dimensions of all the edges in a cut, with a single restriction: the bond dimension of edges in the cut which are connected to a certain $\delta$ tensor will only be multiplied once (such edges have equal bond dimensions by definition, see eq. 12).
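A small helper makes eq. 32 concrete; the grouping function f and the bond dimension function c are passed as plain dictionaries, our own encoding of the triple (G, f, c).

```python
import math

def modified_cut_weight(cut_edges, f, c):
    """Modified multiplicative weight W~_C of eq. 32: bond dimensions of cut
    edges that touch the same delta tensor are multiplied only once.

    cut_edges : the edges in the cut C
    f         : dict mapping each edge to its group (delta tensor) id
    c         : dict mapping each group id to its bond dimension
    """
    groups = {f[e] for e in cut_edges}
    return math.prod(c[g] for g in groups)

# Example: three cut edges, two of which hang off the same delta tensor of
# bond dimension 4; the third belongs to a group of bond dimension 3.
f = {'e1': 0, 'e2': 0, 'e3': 1}
c = {0: 4, 1: 3}
print(modified_cut_weight(['e1', 'e2', 'e3'], f, c))  # 4 * 3 = 12, not 4*4*3
```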
An example of this modified weight can be seen in FIG0, where the replacement of a general tensor by a $\delta$ tensor results in a reduction in the minimal cut, due to the rules defined above. In the following claim, we provide an upper bound on the ability of a ConvAC with a general pooling window to model correlations of its inputs, as measured by the Schmidt entanglement measure (see section 4).

Claim 3. Let $\mathcal{A}^y$ be a conv-weights tensor of a ConvAC as given in fig. 6 with a general pooling window. Let $G(V, E, f, c)$ represent the TN $\varphi$ corresponding to the ConvAC network, and let $(V_A, V_B)$ be the vertices partition of $V^{inputs}$ in $G$ corresponding to $(A,B)$. Then, the rank of the matricization $[\mathcal{A}^y]_{A,B}$ is no greater than $\min_C \widetilde{W}_C$, where $C$ represents a cut w.r.t. $(V_A, V_B)$ and $\widetilde{W}_C$ is the modified multiplicative weight defined by eq. 32.

FIG0: An example of the effect that a $\delta$ tensor has on the upper bound on the rank of the matricization of the overall tensor represented by a TN. $\min_C \widetilde{W}_C$ is defined in eq. 32 and shown in claim 3 to be the upper bound on the rank of the matricization of the conv-weights tensor of a ConvAC represented by a TN. In this example, the upper bound is reduced upon changing a single general tensor in the TN shown in (a) (identical to FIG0), whose entries are free to equal any value, with a $\delta$ tensor in the TN shown in (b), which obeys the constraint given in eq. 12. The centrality of the $\delta$ tensor in the TN compliant with a shallow ConvAC (depicted in FIG7) is in effect the element which limits the expressiveness of the shallow network, as is discussed in appendix F.

Having seen the proof of claim 1 above and its accompanying graphics, the proof of the upper bound presented in claim 3 can be readily attained. The only difference between the two lies in the introduction of the $\delta$ tensors to the network, which allows us to derive the tighter upper bound shown in claim 3. The modification to the above proof of claim 1 focuses on the coalescence of the cut indices $I_C$ into a single index $m \in [\widetilde{W}_C]$. Assume that any two indices of multiplicands in this product, denoted by $k_i$ and $k_j$, are connected to the same $\delta$ tensor that has some bond dimension $q := c_{k_i} = c_{k_j}$. Upon contraction of the TN in FIG0, the cut indices are internal indices that are summed upon. However, whenever $k_i \in [q]$ and $k_j \in [q]$ are different, by the constraint imposed in the $\delta$ tensor definition (eq. 12), the entire term vanishes and there is no contribution to the final value of $\mathcal{A}_{d_1 \ldots d_N}$ calculated by this contraction. Thus, $k_i$, $k_j$ and any other index connected to the same $\delta$ tensor can be replaced by a representative index $k_\alpha \in [q]$ wherever they appear in the summation; $\alpha \in G_C$, upholding $c(\alpha) = q$, is the group index of this $\delta$ tensor, given by $\alpha = f(e_{k_i}) = f(e_{k_j})$, with $e_{k_i}$ and $e_{k_j}$ the edges corresponding to the indices $k_i$ and $k_j$ in the network. Thus, the single index $m$ achieved by coalescing all of the cut indices can be defined in the range $m \in [\widetilde{W}_C]$, with $\widetilde{W}_C$ defined by eq. 32 upholding $\widetilde{W}_C \le \prod_{i=1}^{|C|} c_{k_i}$, where equality is attained when no two edges in the cut are incident to the same $\delta$ tensor. Finally, the matricization $[\mathcal{A}]_{A,B}$ can be written as a multiplication of two matrices, as portrayed in FIG0:

$$[\mathcal{A}]_{(d_{a_1} \ldots d_{a_{|A|}}),\,(d_{b_1} \ldots d_{b_{|B|}})} = \sum_{m=1}^{\widetilde{W}_C} [T^A]_{(d_{a_1} \ldots d_{a_{|A|}}),\,m}\; [T^B]_{m,\,(d_{b_1} \ldots d_{b_{|B|}})}.$$

Recalling that, as in the proof of claim 1, the edge-cut set may include the external edges, we attain: $\mathrm{rank}([\mathcal{A}]_{A,B}) \le \min_C \widetilde{W}_C$. ∎

Observing FIG7, which shows the TN corresponding to the shallow ConvAC architecture, the central positioning of a single $\delta$ tensor implies that under any partition of the inputs $(A,B)$ s.t.
$|A| = |B| = N/2$, the minimal cut will obey $\min_C \widetilde{W}_C = \min(M^{N/2}, k)$. Thus, in order to reach the exponential (in $N$) measure of entanglement w.r.t. the interleaved partition that was obtained in section 5 for the deep network, the number of channels in the single hidden layer of the shallow network, $k$, must grow exponentially with $N$. Therefore, one must exponentially enlarge the size of the shallow network in order to achieve the expressiveness that a polynomially sized deep network achieves, and an exponential depth efficiency is demonstrated.

In this section, we describe simulations performed on an $N = 16$ deep ConvAC TN (with pooling windows of size 2), which are aimed at quantifying the prevalence of deviations from the upper bound on the ranks of the matricization of the conv-weights tensor presented in claim 1. In section E.2 we proved theorem 2, showing in effect that this upper bound is tight when all of the channel numbers are powers of some integer $p$, and guaranteeing a positive result in all cases. However, for a general setting of channel numbers there is no theoretical guarantee that the upper bound is tight. Indeed, BID8 show a counter example where the matricization rank is effectively lower than the minimal multiplicative cut for a general TN (that has no $\delta$ tensors such as in the ConvAC TN). There is no reason to believe that the upper bound is tight for the TN representing a ConvAC for a general setting of channel numbers, and indeed our simulations below show deviations from it. However, as indicated below, such deviations are negligible in prevalence and low in value. A theoretical formulation of this is left for future work. The experiments were performed in matlab, and tensor contractions were computed using a tensor network contraction function from the literature. An $N = 16$, $M = 2$ ConvAC TN was constructed (see figs. 8 and 9), with the entries of the weight matrices randomized according to a normal distribution. The bond dimensions of layers 0 through 3 were drawn without repetition from the set of the first 6 prime numbers, $\{2, 3, 5, 7, 11, 13\}$, for a total of 360 different arrangements of bond dimensions. This was done in order to resemble a situation as distinct as possible from the case where all of the bond dimensions are powers of the same integer $p$, for which the tightness of the upper bound is guaranteed by theorem 2. Per bond dimension arrangement, all of the $\binom{16}{8}/2 = 6435$ different balanced partitions ($|A| = |B| = 8$) were checked, for a total of $360 \cdot 6435 = 2.3166 \times 10^6$ different configurations. As argued in section E.2, the logarithm of the upper bound on the rank of the conv-weights tensor matricization that is shown in claim 1 is actually the max-flow in a network with the same connectivity that has edge capacities equal to the logarithms of the respective bond dimensions. Therefore, a configuration for which the rank of the matricization is equal to the exponentiation of the max-flow through such a corresponding network effectively reaches the upper bound. We calculated the max-flow independently for each configuration using the Ford-Fulkerson algorithm (BID12), and set out to search for deviations from such an equivalence. The results of the above described simulations are as follows. Only 1300 configurations, which constitute a negligible fraction of the 2.3166 million configurations that were checked, failed to reach the upper bound and uphold the min-cut max-flow equivalence described above. Moreover, in those rare occasions where a deviation occurred, the deviation from the upper bound did not exceed 10% of the value of the upper bound.
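For reference, a Python skeleton of this experiment for a single bond-dimension arrangement might look as follows. It reuses min_multiplicative_cut from the earlier sketch on a simplified binary-tree skeleton of the ConvAC TN, and omits the rank computation itself (which the text performs in matlab on the randomized TN).

```python
from itertools import combinations

# Enumerate the 6435 balanced input partitions of an N = 16 binary-tree
# skeleton and collect the exponentiated max-flow bound for each one.
N, M = 16, 2
bond = {0: 2, 1: 3, 2: 5, 3: 7}                  # one arrangement of layer bonds

edges = [('x%d' % j, 'n0_%d' % j, M) for j in range(N)]     # input legs
nodes = ['n0_%d' % j for j in range(N)]
for l in range(4):                                # four levels: 16 -> 8 -> ... -> 1
    parents = ['n%d_%d' % (l + 1, j) for j in range(len(nodes) // 2)]
    for j, parent in enumerate(parents):
        edges.append((nodes[2 * j], parent, bond[l]))
        edges.append((nodes[2 * j + 1], parent, bond[l]))
    nodes = parents

bounds = {}
for A in combinations(range(N), N // 2):
    if 0 not in A:                                # fix input 0 in A: each split once
        continue
    B = [j for j in range(N) if j not in A]
    bounds[A] = min_multiplicative_cut(edges, ['x%d' % j for j in A],
                                       ['x%d' % j for j in B])
print(len(bounds))                                # 6435 balanced partitions
```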
This check was performed on a bond setting that is furthest away from all channel numbers being powers of the same integer, yet the tightness of the upper bound emerges as quite robust, justifying experimentally our general view of the minimal weight over all cuts in the network, $\min_C W_C$, as the effective indication for the matricization rank of the conv-weights tensor w.r.t. the partition of interest. A caveat to be stated with this is that we checked only up to $N = 16$, and the discrepancies that were revealed here might become more substantial for larger networks. As mentioned above, this is left for future theoretical analysis; however, the lower bound shown in theorem 2 guarantees a positive result regarding the rank of the matricization of the conv-weights tensor in all cases.
Employing quantum entanglement measures for quantifying correlations in deep learning, and using the connection to fit the deep network's architecture to correlations in the data.
1,167
scitldr
Deep learning algorithms are increasingly used in modeling chemical processes. However, black box predictions without rationales have limited use in practical applications, such as drug design. To this end, we learn to identify molecular substructures (rationales) that are associated with the target chemical property (e.g., toxicity). The rationales are learned in an unsupervised fashion, requiring no additional information beyond the end-to-end task. We formulate this problem as a reinforcement learning problem over the molecular graph, parametrized by two convolution networks corresponding to the rationale selection and the prediction based on it, where the latter induces the reward function. We evaluate the approach on two benchmark toxicity datasets. We demonstrate that our model sustains high performance under the additional constraint that predictions strictly follow the rationales. Additionally, we validate the extracted rationales through comparison against those described in the chemical literature and through synthetic experiments. Recently, deep learning has been successfully applied to the development of predictive models relating chemical structures to physical or biological properties, outperforming existing methods (BID8; BID14). However, these gains in accuracy have come at the cost of interpretability. Often, complex neural models operate as black boxes, offering little transparency concerning their inner workings. Interpretability plays a critical role in many areas, including cheminformatics. Consider, for example, the problem of toxicity prediction. Over 90% of small molecule drug candidates entering Phase I trials fail due to lack of efficacy or due to adverse side effects. In order to propose a modified compound with improved properties, medicinal chemists must know which regions of the molecule are responsible for toxicity, not only the overall level of toxicity (BID1). We call the key molecular substructures relating to the outcome rationales. In traditional cheminformatics approaches such as pharmacophore mapping, obtaining such a rationale behind the prediction is an intrinsic part of the model (BID24; BID7; BID12). In this paper, we propose a novel approach to incorporate rationale identification as an integral part of the overall property prediction problem. We assume access to the same training data as in the original prediction task, without requiring annotated rationales. At first glance, the problem seems solvable using existing tools. For instance, attention-based models offer the means to highlight the importance of individual atoms for the target prediction. However, it is challenging to control how soft selections are exploited by later processing steps towards the prediction. In this sense, the soft weighting can be misleading. In contrast, hard selection confers the guarantee that the excluded atoms are not relied upon for prediction. The hard selection of substructures in a molecule is, however, a hard combinatorial problem. Prior approaches circumvent this challenge by considering a limited set of predefined substructures (typically of 1-6 atoms), like the ones encoded in some molecular fingerprints (BID7). Ideally, we would like the model to derive these structures adaptively based on their utility for the target prediction task. We formulate the problem of selecting important regions of the molecule as a reinforcement learning problem.
The model is parametrized by a convolutional network over a molecular graph, in which the atoms and bonds are the nodes and edges of the graph, respectively. Different from traditional reinforcement learning methods that have a reward function provided by the environment, our model seeks to learn such a reward function alongside the reinforcement learning algorithm. More generally, our model works as a search mechanism for combinatorial sets, which readily expands to applications beyond chemistry or graphs. Our iterative construction of rationales provides several advantages over standard architectures. First, sequential selection enables us to incorporate contextual features associated with past selections, as well as global properties of the whole molecule. Second, we can explicitly enforce desirable rationale properties (e.g., number of substructures) by including appropriate regularization terms in the reward function. We test our model on two toxicity datasets: the Tox21 challenge dataset, which is a series of 12 toxicity tests, and the human ether-a-go-go-related gene (hERG) channel blocking. The reinforcement learning model identifies the structural components of the molecule that are relevant to these toxicity prediction tasks, while simultaneously highlighting opportunities for molecular modification at these sites. We show that by only selecting about 40-50% of the atoms in the molecules, we can create models that nearly match the performance of models that use the entire molecule. By comparing selected regions with rationales described in the chemical literature, we further validate the rationales extracted by the model.

Deep Learning for Computing Chemical Properties. One of the major shifts in chemical property prediction is towards the use of deep learning. The existing models fall into one of two classes. The first class of models is based on an expert-constructed molecular representation, such as fingerprints that encapsulate substructures thought to be important, and a range of molecular properties (BID33; BID26). These models are not well suited for extracting rationales because desired structures may not be part of the fingerprint. Moreover, it may be challenging to attribute properties recorded in fingerprints to specific substructures in the molecule. One would have to restrict the feature space of the fingerprint, which can harm the performance of the model. The second class of models moves beyond traditional molecular fingerprints, instead learning task-tailored representations. Specifically, they employ convolutional networks to learn a continuous representation of the molecule (BID8; BID14). The work of BID10 takes this a step further, and uses a Weisfeiler-Lehman kernel inspired neural model as a way to generate better local representations. Following this direction, our work is also based on learned molecular representations. Our focus, however, is on augmenting these models with rationales. As articulated in the introduction, the task is challenging due to the number of candidate substructures and the need to attribute properties aggregated in convolutions to individual atoms.

Reinforcement Learning on Graphs. Our work utilizes a similar framework as the reinforcement learning model over graphs described by BID5. However, their work focuses on solving computational problems where the reward is provided by the environment (i.e., a deterministically computable property of the graph, such as max-cut or the traveling salesman problem).
In contrast, in our formulation, the rewards are also learned by the system. Another related work in this area (BID18) utilizes a policy gradient method to search for substructures starting from random points in the graph. Since their model is designed for large graphs that do not fit into memory, it does not consider the global context of the molecule as a whole. This design is limiting in the context of molecular prediction, where such information is valuable for prediction, and where it is also feasible to compute since the graphs are small. Moving away from the convolutional network approach, their model imposes an artificial ordering of the nodes through a sequence network as the reinforcement learning agent traverses the graph. Both of the above models focus on the prediction task, whereas the emphasis of our work is on interpretability.

Learning Rationales. The topic of interpretability has recently gained significant attention (BID20; BID15). The proposed approaches cover a wide spectrum in terms of the desired rationales and the underlying methods. Work in this area includes visualization of activations in the network (BID11; BID9) and examination of the most influential data examples to provide interpretability (BID16). Attention-based models have also been widely used to extract interpretability (Ba et al., 2014; BID4; BID25; BID31; BID32). Our work is most closely related to approaches focused on extractive rationales (BID19; BID27). BID19 present a model to extract parts of text as a rationale, but their model does not readily generalize to graphs, and the sequential nature of our model can place a meaningful ordinal ranking over the atom selections.

Figure 1: Our model makes sequential selections of atoms (light blue) in the molecule and is specified by two networks, the Q-Network and the P-Network. The former constitutes the reinforcement learning agent that assigns a Q-value to each atom, and the latter takes the atom selections of the Q-Network and trains a classifier to predict based solely on those atoms. This prediction is used as a reward that is fed back to the Q-Network.

Our approach uses reinforcement learning as a method to iteratively select important atoms from the graph. We use a Q-learning approach similar to that of BID5. The state of the system at time $t$ corresponds to the atoms selected thus far. The agent takes an action $a_t$ at each time step $t$, where the action is the selection of an atom that has not already been selected. After the agent takes an action, the state $s_t$ is updated to include the newly selected atom. Unlike traditional reinforcement learning algorithms in which the agent receives a reward from the environment, we use a separate model (the P-Network) to generate the reward $r_t$ for the agent. The P-Network learns to predict molecular properties such as toxicity based on the selected atoms, and rewards the agent according to the accuracy of its predictions. The agent itself learns a Q-Network to represent the action-value function $Q(s,a)$ needed to maximize the reward throughout the process. In this case, maximizing the reward is equivalent to selecting atoms that help the P-Network make accurate predictions based on the partial selections of atoms. The overall approach is illustrated in Figure 1. In the following sections, we will describe the two networks: the Q-Network that guides the agent's actions, and the P-Network that maps the agent's selections to predicted molecular properties.
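A minimal sketch of this selection process as an MDP is given below; class and argument names are ours, and the reward callback stands in for the learned P-Network rather than a fixed environment reward.

```python
import numpy as np

# State: binary vector over atoms; action: the index of an unselected atom.
class RationaleEnv:
    def __init__(self, n_atoms, reward_fn, max_atoms):
        self.n_atoms, self.reward_fn, self.max_atoms = n_atoms, reward_fn, max_atoms

    def reset(self):
        self.state = np.zeros(self.n_atoms, dtype=np.int8)   # s_0: nothing selected
        return self.state.copy()

    def step(self, atom):
        assert self.state[atom] == 0, "action must select an unselected atom"
        self.state[atom] = 1                                 # s_t includes a_t
        reward = self.reward_fn(self.state)                  # r_t from the P-Network
        done = int(self.state.sum()) >= self.max_atoms
        return self.state.copy(), reward, done
```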
The two networks are trained iteratively, in a feedback loop, so that the Q-Network can provide good selections of atoms that, when fed to the P-Network, result in good overall predictions of molecular properties. Both networks are parametrized by the graph convolutional network that we describe separately below. In a convolutional network, the model updates the feature representation of individual atoms iteratively, based on local contexts. The nature of the operations is parameterized and can be learned to support the end-to-end task. We prefer convolutional networks due to their expressiveness and adaptability as compared to traditional molecular fingerprint methods. Define a molecular graph as $G = (A, B)$, in which $A$ is the set of nodes denoting the atoms, and $B$ is the set of edges denoting the bonds between atom pairs. In each successive layer of the network, an atom's representation is updated to incorporate the currently available information from its neighbors, while the bond features remain unchanged. These updates propagate information across the molecule, and allow the network to generalize beyond standard local substructures. Let $h^l_i$ be a vector-valued atom representation for atom $i$ at layer $l$, let $A_i$ be the input feature vector for atom $i$, and let $B_{i,j}$ be the input feature vector for the bond between atoms $i$ and $j$. In this notation, we use $h^0_i = A_i$. (The input atom features include one-hot encodings of the atomic number, degree, valence, hybridization, aromaticity, and whether or not the atom is in a ring. Bond features include one-hot encodings of the bond order, aromaticity, and whether or not the bond is in a ring.) This initialization differs slightly for the Q-Network, so as to incorporate the current state of selections into the convolutional architecture. The update step for atom feature vectors involves a gated unit that receives separate contributions from the atom itself and its neighbors:

$$h^{l+1}_i = \sigma\big(W^l_{self}\, h^l_i\big) \odot \sigma\Big(W^l_{nei} \sum_{j \in N(i)} \big[h^l_j;\, B_{i,j}\big]\Big),$$

where $N(i)$ is the set of neighbors of atom $i$, the $W^l$'s are the specific weight matrices that vary across layers (bias weights omitted for brevity), and $\sigma$ is the sigmoid activation function, applied coordinate-wise. After $N$ iterations, we arrive at final atom representations $h^N_i$. The Q-Network is parametrized by the convolutional network described in section 3.1, in which the size of the atom representation at the final layer is set to 1, so that $h^N_i$ is a scalar interpreted as the Q-value for selecting atom $i$ next. The initial atom features in the Q-Network include a binary indicator of whether the atom has already been selected. In other words, if we define the state $s$ as a binary vector encoding the atoms that have already been selected, then we use an augmented $h^0_i = [A_i;\, s_i]$. The convolutional network is rerun with these initializations, using the same parameters, after each selection. Thus, despite the fact that the selections are greedy with respect to the Q-values, the model will choose atoms in a manner that is aware of the global context of the molecule as well as the state $s$ representing already selected atoms. The P-Network is a prediction network that takes the partial selection of atoms from the Q-Network, and makes a prediction about the label of the molecule using only those atoms selected by the Q-Network. Like the Q-Network, the P-Network is separately parametrized by the convolutional network.
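The sketch below shows one possible implementation of a single convolutional update and of the Q-Network's augmented initialization (helper names are ours, and the exact gating used in the paper may differ from this reading).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_layer(h, adj, bond_feats, W_self, W_nei):
    """One gated graph-convolution update in the spirit of the equation above.
    h[i] is h^l_i; adj[i] lists the neighbors N(i); bond_feats[(i, j)] is the
    bond feature vector B_{i,j}."""
    out_dim = W_self.shape[0]
    h_next = np.zeros((len(h), out_dim))
    for i in range(len(h)):
        self_msg = W_self @ h[i]
        nei_msg = np.zeros(out_dim)
        for j in adj[i]:
            nei_msg += W_nei @ np.concatenate([h[j], bond_feats[(i, j)]])
        h_next[i] = sigmoid(self_msg) * sigmoid(nei_msg)
    return h_next

def q_network_inputs(atom_feats, state):
    """Augmented h^0_i = [A_i; s_i] for the Q-Network: the binary selected-atom
    indicator is appended to each atom's input feature vector."""
    return np.concatenate([atom_feats, state[:, None].astype(float)], axis=1)
```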
To incorporate the selected atoms, we zero out all the initial atom features that are not in the current selection before running the convolutional model. It is important to note that we do not zero out the updated hidden states of these atoms. This allows interaction between disjoint selections on the molecular graph and preserves information related to graph distances. The reasoning behind this is that there are often several substructures of interest on the molecule, and their interactions might prove important. We want to allow the network to learn these interactions by facilitating the propagation of information throughout the whole molecule. The P-Network is geared towards predicting molecular properties rather than atomic properties, and therefore requires an additional aggregation of the atom vectors $h^N_i$. In our model, these atom vectors are first renormalized via an attention mechanism:

$$\alpha_i = \frac{\exp\big(w_a^\top h^N_i\big)}{\sum_{i'} \exp\big(w_a^\top h^N_{i'}\big)}, \qquad \hat{h}^N_i = \alpha_i\, h^N_i,$$

and turned into a neural (adaptable) fingerprint by summing the resulting renormalized feature vectors: $f = \sum_i \hat{h}^N_i$. This fingerprint is then passed through a sigmoid function to generate the class prediction, $\hat{y} = \sigma(W_f \cdot f)$. The prediction loss is measured through the standard cross-entropy loss $L_P = -y \log(\hat{y}) - (1-y)\log(1-\hat{y})$, where $y$ is the actual label and $\hat{y}$ is the predicted probability of $y = 1$. The reward for the Q-Network is induced by the P-Network and defined as:

$$r_t = -L_P\big(y,\; \hat{y}(s_{t,\cdot};\, \theta)\big), \tag{3}$$

where $\theta$ refers to the parameters of the P-Network and $s_{t,\cdot}$ is the binary state vector updated from $s_{t-1,\cdot}$ in response to action $a_t$. Because we are interested in selecting the important substructures of a molecule, which consist of at least several atoms, we found it useful to train the Q-Network after $n$ selections rather than after a single selection. The Q-Network is therefore trained using an $n$-step Q-learning algorithm, in which the network receives a signal for each sequence of $n$ actions that it makes. The loss function for the Q-Network is then:

$$L_Q = \Big(\sum_{k=0}^{n-1} \gamma^k\, r_{t+k} \;+\; \gamma^n\, Q^{target}_{t+n} \;-\; Q_t\Big)^2, \tag{4}$$

where $\gamma$ is a decay constant, $Q_t$ specifies the Q-value of the current state and action, and $Q^{target}_{t+n}$ is the max Q-value of the state $n$ steps from $t$, induced by a separate but jointly trained target Q-Network. During the training process, we use an $\epsilon$-greedy search policy, where the agent will choose a random atom with probability $\epsilon$ instead of the one with the highest Q-value. Keeping the idea of molecular substructures in mind, we find that it is more helpful to select a random neighbor of an already selected atom than a completely random atom. We also employ action replay using a target Q-Network that utilizes soft parameter updates in order to increase training stability (BID23).
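The training mechanics described above can be sketched as follows, under our own naming: the P-Network's input masking, a squared n-step TD loss matching our reading of eq. 4, and the neighbor-biased ε-greedy policy.

```python
import numpy as np

def p_network_inputs(atom_feats, state):
    """Zero out the *initial* features of unselected atoms (hidden states are
    still updated everywhere, so disjoint fragments can interact)."""
    return atom_feats * state[:, None]

def n_step_q_loss(rewards, q_t, q_target_tn, gamma):
    """Squared n-step TD error, our reading of eq. 4:
    (sum_k gamma^k r_{t+k} + gamma^n max_a' Q_target(s_{t+n}, a') - Q_t)^2."""
    n = len(rewards)
    ret = sum(gamma ** k * r for k, r in enumerate(rewards))
    return (ret + gamma ** n * q_target_tn - q_t) ** 2

def epsilon_greedy_action(q_values, state, adj, eps, rng):
    """Greedy atom by Q-value; with probability eps, a random unselected
    *neighbor* of the current selection (any unselected atom if none yet)."""
    unselected = np.flatnonzero(state == 0)
    if rng.random() < eps:
        frontier = [j for i in np.flatnonzero(state == 1)
                    for j in adj[i] if state[j] == 0]
        pool = frontier if frontier else list(unselected)
        return int(rng.choice(pool))
    return int(unselected[np.argmax(q_values[unselected])])
```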
That is, we define a variable C g t = # {disjoint groups of atoms at step t}. We then modify the reward equation 3 as follows: DISPLAYFORM2 Datasets We evaluate our model on two toxicity datasets. The first dataset that we explore is the Tox21 challenge dataset which contains a series of 12 toxicity tests categorized by nuclear response signals and stress response pathways 1. We parse the data using RDKit BID17, removing duplicate entries and cleaning the molecules by removing salts. Because this dataset has data coming from multiple sources, there are conflicting labels for some molecules, which we remove from the dataset entirely. TAB0 contains the information about each of the 12 datasets, highlighting the small number of positive examples in many of the datasets. The second toxicity dataset that we evaluate our model on is the inhibition of the hERG channel 2. Because this protein channel is well-studied, we explore this dataset to see if we can create a predictive model that can generate rationales that match the information in chemical literature. This dataset, taken from BID22, consists of a training set with 3792 molecules, and a test set with 1090 molecules, with 25% positive labels. Since the data was already cleaned, we do no further preprocessing of this dataset. Evaluation Measures: Predictions For each dataset, we compare our model against the top reported systems BID26; BID22. These approaches utilize extensive feature engineering through molecular fingerprints and other computed molecular descriptors. In addition, they use additional training sources for data augmentation. Specifically, BID26 utilize a data augmentation method called kernel-based structural and pharmacological analoging (KSPA), which uses public databases containing similar toxicity tests. We measure the predictive performance of the convolutional model (Mol-CNN, which utilizes the full molecule) to demonstrate that is comparable to the state-of-the-art . Next, we evaluate the performance of our reinforcement learning method (RL-CNN) that makes predictions on a fraction of atoms in the molecule. We compare these different models using the AUC metric, since the datasets contain an unbalanced set of labels. Evaluation Measures: Rationales Because the RL-CNN model makes predictions using only atoms selected as rationales, its quantitative performance indirectly measures the quality of these rationales. However, we are also interested in directly evaluating their quality relative to rationales described in the chemical literature by domain experts. In the ideal case, we would identify rationales that are characteristic to a single class of examples -either positive or negative. Unfortunately, many known toxic substructures are prevalent in both positively and negatively labeled compounds. In fact, BID26 show that adding features representing structural similarity to 2,500 common toxicophores (toxic substructures) to their model does not improve performance on the Tox21 challenge dataset. This shows that the expert-derived "important" regions are not actually sufficient nor necessary for the prediction task. Rationales extracted for the hERG dataset are directly compared with rationales described in the literature. Multiple studies have shown that the presence of a protonable nitrogen atom and 3 hydrophobic cores has a high affinity for binding to this particular protein BID2 BID30. Usually, this nitrogen is secondary or tertiary so that it is more basic. 
When protonated, the positive charge exhibits cation-pi interactions with certain residues of the hERG channel, which is the crux of the binding mechanism. We show that our model can identify these basic nitrogen atoms within the dataset. For a baseline comparison, we also evaluate rationales obtained by selecting atoms with the strongest influence on the logistic regression model prediction using Morgan fingerprints of radius 3 and length 2048. Morgan fingerprints are boolean vectors constructed by enumerating atom-centered neighborhoods up to a certain radius, assigning integer values to each neighborhood by hashing a categorical feature vector of that neighborhood, and reducing those integers to vector indeces BID29. The importance of an atom can be approximated by the absolute difference between a prediction made with the full molecular fingerprint and a prediction made when substructures containing that atom are excluded from the fingerprint BID28. We restrict the baseline rationales to select the same number of atoms as in the RL-CNN model. Evaluation Measures: Rationales (Synthetic Experiment) Since we do not find well-defined substructures in literature for the tests offered in the Tox21 dataset, we also construct a synthetic experiment. For this experiment, we select specific substructures and set the labels of all molecules containing those substructures to be positive; all other molecules' labels are left unchanged. We specifically focus on 3 toxic substructures: the aromatic diazo group, polyhalogenation, and the aromatic nitro group, two of which are from BID13's work on toxicophores common to multiple toxicity assays. We demonstrate that our model can capture important substructures if the data provides a clear enough signal. Quantitative Evaluation of Convolutional Representation We first demonstrate that our convolutional model performs competitively compared to neural models using molecular fingerprints. Columns four and five in TAB0 compare our for Mol-CNN with the highest performing model DeepTox BID26, both run in the multitask setting. The DeepTox model performs better than our convolutional model by 2.3% on average across the twelve tasks. This is not surprising, as their method uses substantial amounts of additional training data which is not available to our model. The on the hERG dataset are summarized in Table 2. Our model outperforms the top performing model BID22, which uses molecular fingerprints, on the external test set. Quantitative Evaluation of Reinforcement Learning Algorithm For the Tox21 dataset, we use the multi-task instance to compare the of our base convolutional model (Mol-CNN). How- Table 2: Results of different models on the hERG dataset using AUC as the metric. The first 4 models are baselines from BID22 and use molecular fingerprints as input to random forest (RF), support vector machine (SVM), k nearest neighbors (KNN) and associative neural networks (ASNN). For each of their models, we take the average performance for the same model run with different input features. ever, we turn to the single-task instance to evaluate the performance of our rationale model. This is due to the fact that different toxicity tests warrant different important substructures. Therefore, we run individual models for each of the toxicity tests using the base convolution model as well as the reinforcement learning model. We observe a small decrease in performance, ing in a 0.7% decrease in AUC on average, using around 50% of the atoms in the dataset as seen in TAB0. 
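As a brief aside before the remaining results, the fingerprint-baseline importance score described earlier in this section can be sketched with RDKit. One simplification is assumed: we zero out bits whose Morgan environments are centered on the atom, which only approximates excluding every substructure containing it; predict_proba stands in for the trained logistic-regression model.

```python
import numpy as np
from rdkit.Chem import AllChem

def atom_importance(mol, predict_proba, radius=3, n_bits=2048):
    """Approximate baseline rationale: importance of an atom is the absolute
    change in the model prediction when fingerprint bits whose environments
    are centered on that atom are zeroed out (a simplification of the
    exclusion described in the text)."""
    info = {}
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits, bitInfo=info)
    full = np.array(fp, dtype=float)
    base = predict_proba(full)
    scores = np.zeros(mol.GetNumAtoms())
    for atom in range(mol.GetNumAtoms()):
        masked = full.copy()
        for bit, envs in info.items():
            # envs is a tuple of (center_atom, env_radius) pairs for this bit
            if any(center == atom for center, _ in envs):
                masked[bit] = 0.0
        scores[atom] = abs(base - predict_proba(masked))
    return scores
```

Returning to the quantitative evaluation: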
On the hERG dataset, we selected 45% of the atoms, and also observe that the reinforcement learning algorithm performs similarly to the convolution network as seen in Table 2, with a 3.4% decrease in AUC. We see a smaller decrease in performance for the Tox21 datasets on average, likely because many of the datasets have comparatively few number of positive examples, so predicting on fewer atoms allows the model to generalize better. Evaluation of Rationales using Human Rationales In the absence of ground truth rationales, we turn to a specific structural motif-a tertiary nitrogen atom-that is known to exhibit cation-pi interactions with residues of the hERG protein when copresent with certain hydrophobic side chains in the correct 3-D conformation BID6 BID2. In the dataset we used, these tertiary nitrogen substructure occurs more often in positive examples compared to negative examples (78.4 % vs 44.9 %). This suggests that while this substructure is important in positive examples, it is not sufficient to indicate that a molecule is positive. We observe that our model captures this important substructure frequently, and more often in positive examples than negative Figure 2: Two examples of rationales selected by the reinforcement learning model. The selected atoms are highlighted in large light blue circles. In both cases, we see that the model selects the tertiary nitrogen motif, highlighted in small green circles, which is implicated in many inhibitors of the hERG channel. examples (63.6 % vs 46.1 %). Here, we require the model to have selected the nitrogen and at least two of its three carbon neighbors. Figure 2 shows two example selections made by the model. The similar statistical bias in this prediction demonstrates that the model can provide insights at least consistent with prior domain expertise. In contrast, when the fingerprint baseline approach is used, the baseline model matches this substructure less frequently and with no discriminating power between positive and negative examples (19.1 % vs 23.9 %).Evaluation of Rationales using Synthetic Experiments Here, we evaluate how often our model would capture target substructures if those substructures were a sufficient indicator of our target task, toxicity. TAB2 summarizes the , and shows that our model can reliably identify them. Examples of the generated rationales can be seen in Figure 3.The baseline approach matches fewer instances of the aromatic diazo and polyhalogenation motifs, but does identify more of the aromic nitro groups. Substructures symmetrically centered around a single atom -as in the nitro group -directly correspond to a fingerprint index and are well-described by the baseline model. Correlation between adjacent atoms can cause the RL-CNN model to make an incomplete selection; that is, some of the initial atom features implicitly contain information about its neighbors, which leads to these neighbors appearing less important as rationales. Simplifying the initial atom featurization to decrease this correlation causes the model to successfully select the aromatic nitro group in 17/17 cases. Figure 3: From left to right, example rationales generated for the dataset altered based on the presence of aromatic diazo group, polyhalogenation, and aromatic nitro group. The selected atoms are highlighted in large light blue circles; the predefined toxicophores are highlighted in small green circles. 
This confirms that while fingerprint models can do well when the relevant features happen to coincide with a fingerprint index, our rationale model is superior when the relevant features are less well-captured by the exact features of the fingerprint. We present a model that treats the problem of selecting rationales from molecules as a reinforcement learning problem. By creating an auxiliary prediction network, we use a learned reward structure to facilitate the selection of atoms in the molecule that are relevant to the prediction task, without significant loss in predictive performance. In this work, we explore the applicability of rationales in the chemistry domain. Through various experiments on the Tox21 and hERG datasets, we demonstrate that our model successfully learns to select important substructures in an unsupervised manner, requiring the same data as an end-to-end prediction task, which is relevant to many applications including drug design and discovery. Molecules are far more complicated to reason about as compared to images or text due to complex chemical theories and a lack of definitive ground truth rationale labels. As deep learning algorithms continue to permeate the chemistry domain, it will be ever more important to consider the interpretability of such models.
We use reinforcement learning over molecular graphs to generate rationales for interpretable molecular property prediction.
In recent years, substantial progress has been made on graph convolutional networks (GCN). In this paper, for the first time, we theoretically analyze the connections between GCN and matrix factorization (MF), and unify GCN as matrix factorization with co-training and unitization. Moreover, under the guidance of this theoretical analysis, we propose an alternative model to GCN named Co-training and Unitized Matrix Factorization (CUMF). The correctness of our analysis is verified by thorough experiments. The experimental show that CUMF achieves similar or superior performances compared to GCN. In addition, CUMF inherits the benefits of MF-based methods to naturally support constructing mini-batches, and is more friendly to distributed computing comparing with GCN. The distributed CUMF on semi-supervised node classification significantly outperforms distributed GCN methods. Thus, CUMF greatly benefits large scale and complex real-world applications. In recent years, works on graph convolutional networks (GCN) have achieved great success in many graph-based tasks, e.g., semi-supervised node classification , link prediction and recommendation systems . GCN defines a graph convolution operation, which generates the embedding of each node by aggregating the representations of its neighbors. Given a graph, GCN performs the graph convolution operation layer by layer to obtain the final node representations, which will be passed to neural networks to support various tasks. To perform GCN on large scale graphs in constrained memory or distributed computing environments, different sampling methods have been proposed, such as neighbor sampling and importance sampling (b). Instead of sampling, Cluster-GCN proposes an approach to convert computation on a huge matrix to computing on a set of small matrices. However, these methods still suffer from performance loss when conducting distributed computing. To take use of various contextual information on edges in a graph, Relational GCN (RGCN) extends neighbor aggregation by using edge types in link prediction. Besides the edge types, Edge-enhanced Graph Neural Networks (EGNNs) takes more contextual features into consideration. However, in general, GCN still has the efficiency problem when facing complex forms of contextual information. Besides GCN, graph embedding methods (; b; a;) are also widely applied. In general, these methods rely on first-order and secondorder proximity to embed very large information networks into low-dimensional vector spaces. The first-order proximity in a graph is the local pairwise proximity between two vertices, and the secondorder proximity between a pair of vertices in a graph is the similarity between their neighborhood structures. As for GCN, previous work shows that the graph convolution operation is actually a special form of Laplacian smoothing. Thus, as the converging of the model, the smoothing process can keep the final representation of a node more and more similar to those of its neighbors. Therefore, GCN is consistent with graph embedding methods in capturing the structural information. According to previous work , graph embedding methods have been successfully unified as matrix factorization (MF). Thus, we believe that there might be some connections between GCN and MF. Meanwhile, comparing with GCN, MF-based methods are extremely flexible and suitable for distributed computing (; ;). 
MF-based methods are also easy and efficient to be extended to tasks with complex forms of contextual information on graph edges (; ; ; ;). Thus, if we can unify the GCN model as a special form of MF, large scale and complex real-world applications will benefit from this. In this paper, we theoretically reveal the connections between GCN and MF, and unify GCN as matrix factorization with co-training and unitization in section 2. Here, the co-training process means co-training with the classification task of labeled nodes as in , and the unitization indicates conducting vector unitization on node representations. Then, under the guidance of our theoretical analysis, we formally propose an alternative model to GCN named Co-training and Unitized Matrix Factorization (CUMF) in section 3. Extensive experiments are conducted on several real-world graphs, and show co-training and unitization are two essential components of CUMF. Under centralized computing settings, CUMF achieves similar or superior performances comparing with GCN. These observations strongly verify the correctness of our theoretical analysis. Moreover, GCN performs poor on dense graphs, while CUMF has great performances. This is may caused by the over-smoothing of graph convolution on dense graphs, while CUMF can balance the smoothing of neighbours and the classification of labeled nodes through the co-training process. Experiments under distributed computing settings are also conducted, and distributed CUMF significantly outperforms the state-of-the-art distributed GCN method, i.e., cluster-GCN . Thus, CUMF is extremely friendly to large scale real-world graphs. Meanwhile, lots of works have been done to model contextual information in MF-based methods (; ; ; ;), which have shown great effectiveness, efficiency and flexibility. In this section, we plan to theoretically unify GCN as a specific form of matrix factorization. First, we need to start from the analysis of how node representations are learned in GCN. According to the definition in previous work , we can formulate each layer of GCN as where ∼ A = A + IN is the adjacency matrix of the graph G with added self-connections, IN is the identity matrix for N nodes in graph G, ∼ D is a diagonal degree matrix with the representation of each node at layer l, W (l) is a layer-specific trainable weight matrix, and σ (·) denotes an activation function (such as ReLU (·) = max (0, ·)). For a node classification task, we can obtain a classification loss where Y is the ground truth labels for the classification task, H (−1) is the representation of each node at the final layer of GCN. Via optimizing Eq., the cross-entropy error of the node classification task can be minimized, and the GCN model can be learned. As in the implementation in previous work (; Veličković et al., 2017;), there is no activation function on the last layer of GCN, and the final representations can be formulated as In, GCN has been proven to be a special form of Laplacian smoothing. The final representations in Eq. tends to converge to which is an approximate solution of the final representations in GCN. Specifically, for each node i in graph G, the approximate solution of the corresponding final representation is from which we have Then, the representation of node i can be obtained as where di is the degree of node i. 
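The layer equation and the per-node approximation above survive only as placeholders, but the propagation rule is fully determined by the definitions in the text, H(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H(l) W(l)) with Ã = A + I_N, and can be sketched with dense matrices as follows (function and variable names are ours).

```python
import torch

def gcn_layer(A, H, W, act=torch.relu):
    """One GCN propagation step on a dense adjacency matrix:
    H' = act(D_tilde^{-1/2} (A + I) D_tilde^{-1/2} H W).
    Dense matrices are used for clarity; real implementations are sparse."""
    A_tilde = A + torch.eye(A.size(0))              # add self-connections
    d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)       # D_tilde^{-1/2} as a vector
    A_norm = d_inv_sqrt.unsqueeze(1) * A_tilde * d_inv_sqrt.unsqueeze(0)
    return act(A_norm @ H @ W)
```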
According to above analysis, to train an approximate GCN model with one loss, which simultaneously models the structure of graph convolution and the node classification task, we can minimize the following loss function where α a hyper-parameter to control the balance between two losses, and the structure loss lstructure refers to where I denotes the set of all the nodes in graph G, and dis (·, ·) is a distance measurement. Here, we apply the commonly used cosine similarity and obtain which is equivalent to To verify whether cosine similarity is consistent with the convergence of GCN during the training procedure, we conduct empirical experiments and train GCN models on the Cora dataset and the Pubmed dataset. Figure 1 demonstrates the average cosine similarity between nodes in the graph and their neighbors, as well as the convergence curves estimated by accuracy during the training of GCN on the two datasets. It is obvious that, the curves on the same dataset share similar tendency. That is to say, as we train a GCN model, the cosine similarity between nodes in the graph and their neighbors is being optimized. This strongly proves that cosine similarity is consistent with the convergence of the GCN model. Then, to simplify the form of Eq., we conduct vector unitization in the learned representations H (−1), and thus each representation h (−1) i has similar l2-norm. Accordingly, through unitization, Eq. is equivalent to which leads to where Ci denotes all the nodes that node i is connected to, and for simplicity. Moreover, for better optimization, we can incorporate negative log likelihood and minimize the following loss function equivalently to Eq. where λ (·) = sigmoid (·). So far, the structure loss lstructure is equivalent to factorizing all the positive edges in the graph G weighted by Usually, in graph embedding methods (; b; a;), negative edges sampling is used, for better convergence. Thus, we can randomly sample negative edges for each edge in graph G. Following previous work in unifying word embedding and graph embedding as implicit matrix factorization, we can rewrite Eq. as and where k is the number of negative samples for each edge, and PG denotes the distribution that generates negative samples in graph G. For each node j, PG (j) = dj / |G|, where |G| is the number of edges in graph G. Then, we can explicitly express the expectation term as from which we have Via combining Eq. and Eq., we can obtain the local structure loss lstructure for a specific edges (i, j) To optimize the objective, we need to calculate the partial derivative of lstructure with respect to After setting Eq. to zero, we can obtain which has two solutions, e v i v j = −1 and which leads to Accordingly, the GCN model can be unified as the following matrix factorization co-training with the classification loss lclass, where node representations in V are unitized. D is a diagonal degree matrix with D i,i = d i. Moreover, according to the in , the unified matrix of GCN in Eq. is as the same as the unified matrix of LINE with the second order proximity (b), except there are two representation matrices in LINE. More specifically, the matrix factorization in Eq. is as the same as LINE with the first order proximity, which is implicit matrix factorization. 
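Putting the pieces of this derivation together, the structure loss with unitization and negative sampling can be sketched directly; negative edges are assumed pre-sampled from P_G(j) = d_j / |G|, and λ is the sigmoid as above.

```python
import torch
import torch.nn.functional as F

def structure_loss(V, pos_edges, neg_edges):
    """Negative-log-likelihood structure loss over unitized node vectors:
    maximize sigmoid(v_i . v_j) on observed edges and sigmoid(-v_i . v_j')
    on sampled negatives. V: [n_nodes, dim] raw node representations."""
    V = F.normalize(V, dim=1)  # unitization: every v_i has unit l2-norm
    pos = torch.stack([(V[i] * V[j]).sum() for i, j in pos_edges])
    neg = torch.stack([(V[i] * V[j]).sum() for i, j in neg_edges])
    return -(F.logsigmoid(pos).sum() + F.logsigmoid(-neg).sum())
```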
Our theoretical analysis on GCN is summarized as: Conclusion 2.1 Given a connected graph G with adjacency matrix A, graph convolutional networks can be unified as implicit matrix factorization with co-training with labeled node classification; unitization of node representations. In this section, based on above analysis, we propose an alternative model to GCN named Co-training and Unitized Matrix Factorization (CUMF). Figure 2 provides an overview of the proposed CUMF model, which will be described in detail below. Let x i ∈ R d denote the feature vector of node i and f 1 denote the first MLP in Figure 2. According to our theoretical analysis, given x i, we conduct vector unitization in f 1 (x i) to obtain v i (the representation of node i) as According to the theoretical analysis in section 2, the structural part in our proposed CUMF model should be implicit matrix factorization with negative sampling. Thus, lstructure can be formulated as Furthermore, as in GCN, the classification loss lclass can be obtained as where I L is the set of labeled nodes, y i is the classification label of node i and f 2 is the second MLP in Figure 2. Combining Eq., Eq. and Eq., we obtain the loss function of the proposed CUMF model. That is to say, the proposed CUMF model needs to co-train the classification loss lclass with the structural loss lstructure. In practice, like many semi-supervised models , when we do co-training, the structural loss lstructure and the classification loss lclass are alternately optimized. To be more specific, we frist pick b batches of positive and negative edges, where we sample k additional negative edges for each positive edge according to Eq.. For each batch, we take a gradient step for lstructure. Then we pick a batch of labeled instances and take a gradient step for lclass. We repeat this process until convergence. In summary, the proposed CUMF model is based entirely on our theoretical analysis in section 2. Comparing with previous graph modeling methods (; ; b; a; ;), the unique features of our proposed CUMF method are mainly illustrated in the unitization of node representations, which is derived exactly from our theoretical analysis. Therefore, under the effective guidance of our theoretical analysis on GCN, the proposed CUMF model is extremely clear, concise and reasonable. 4 RELATED WORK GCN updates node representations with the aggregation of its neighbors. Based on GCN, Graph Attention Network (GAT) (Veličković et al., 2017) proposes to aggregate neighbors with taking the diversity of nodes into account. The GCN model needs the whole graph structure. However, in large scale and complex real-world applications, we have millions of nodes and billions of graph edges. Thus, GCN is both time and space consuming, and is hard to perform in constrained memory or distributed computing. Random sampling and importance sampling (b) propose sampling approaches to reduce the computation of aggregation. Instead of approximating the node representations, variance controlled GCN (a) uses sampled node to estimate the change of node representations in every updating step. Cluster-GCN uses graph partition method to split the whole graph into a set of small sub-graphs, where aggregation happens within each small sub-graph. Thus, Cluster-GCN supports constructing mini-batches and distributed computing. The original GCN only considers node features, while the features on edges are also important in a graph. 
To take contextual features on edges into account, RGCN uses edge types to identify nodes' transition matrix. It achieves good performances in link prediction and entity classification. EGNN treats edge features as a tensor and proposes the doubly stochastic normalization to avoid the element values of output features too big. Based on GCN and GAT, EGNN proposes EGNN(C) and EGNN(A) to handle edge tensor respectively. • Q1 What are the roles of different components in the CUMF model (co-training & unitization)? • Q2 How does the performance of our CUMF model compare to that of GCN on different datasets? • Q3 How the two hyper-parameters effect the performance of our method? • Q4 Comparing with GCN, is our proposed CUMF model more friendly to distributed computing? We evaluate our proposed method based on seven datasets. The statistic of the datasets are shown in Table 1. Cora, Citeseer and Pubmed are three standard citation network benchmark datasets. BlogCatalog and Flickr are social networks. The posted keywords or tags in BlogCatalog and Flickr networks are used as node features. Therefore, these five datasets all have node features. By removing their node features and preserving only the node itself, we made Cora-F, Citeseer-F, Pubmed-F, BlogCatalog-F and Flickr-F. In addition, USA and Europe are two air-traffic networks , where each node corresponds to an airport and edge indicates the existence of flights between the airports. So these two datasets do not contain node features. In summary, in terms of whether or not node features are included, we test our model on two types of datasets: • Structural Datasets: Cora-F, Citeseer-F, Pubmed-F, BlogCatalog-F, Flickr-F, USA and Europe. • Attributed Datasets: Cora, Citeseer, Pubmed, BlogCatalog and Flickr. Firstly, to verify the roles of co-training and unitization, we include CUMF-C (Our method without co-training), CUMF-U (Our method without unitization) and CUMF-C-U (Our method without cotraining and unitization) in our comparisons. Specifically, minus C in name means that the model training is conducted in two stages. That is to say, we first optimize the structure loss independently to obtain the final representations of nodes. Then we keep these representations fixed and only update the parameters of classification model when optimizing the classification loss. Accordingly, minus U in name means that the node representation will not be unitized. Secondly, we compare against the strong baseline (Planetoid*) and the state-of-the-art approach (GCN) as in . Lastly, to test the performace of distributed CUMF, we also include the baseline (Random-GCN) and the state-of-the-art method (Cluster-GCN) as in in our comparisons. We implement our methods on Tensorflow. For the other methods, we use the original papers' code from their github pages. We use the Adam optimizer for all methods with weight decay as zero. And for all methods, we report the mean accuracy of 10 runs with random weight initializations. For our methods: mini-batch size is 256, the dimension of embedding is 0.1 * the dimension of feature, dropout rate is 0.5, learning rate ∈ [0.001, 0.01]. As mentioned in section 3, there are two other important hyper-parameters: the number of negative edges (k) and the balance between structure loss and classification loss (b). We will analyze the effects of k and b in next section. Results of our experiments are summarized in Table 2, Table 3, Table 4 and Figure 3, which will be described in detail below. 
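Before turning to the results, the alternating co-training schedule from Section 3, instantiated with the hyper-parameters above (b structure batches per classification batch, k negatives per positive edge), can be sketched as follows; all the callables are assumed interfaces rather than the authors' API.

```python
def cotrain_round(model, opt, edge_batches, label_batch, struct_loss, class_loss):
    """One round of co-training: b gradient steps on the structure loss
    (each batch holds positive edges plus k sampled negatives per edge),
    then one gradient step on the classification loss over labeled nodes."""
    for pos, neg in edge_batches:          # b batches of (positive, negative) edges
        loss = struct_loss(model, pos, neg)
        opt.zero_grad(); loss.backward(); opt.step()
    x, y = label_batch                     # one mini-batch of labeled nodes
    loss = class_loss(model, x, y)
    opt.zero_grad(); loss.backward(); opt.step()
```

Repeating such rounds until convergence reproduces the schedule described in Section 3.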
Firstly, we mainly examine the roles of different components of our proposed method (co-training & unitization). Specifically, the impact of different components on the capturing of structural information needs to be verified. Thus we conduct experiments on structural datasets. According to Table 2, CUMF for semi-supervised node classification outperforms other versions of CUMF by a significant margin. This fully verifies that unitization and co-training are two essential components of CUMF. Therefore, the correctness of our theoretical analysis is also verified. Meanwhile, to be noted that, CUMF is superior to GCN on all structural datasets. This shows that CUMF is most likely to make better use of structural information than GCN. After verifying that our proposed CUMF model is not inferior to GCN in capturing structural information, we directly compare CUMF, GCN and Planetoid* on attributed datasets. Based on the shown in Table 3, the performance of CUMF is consistent to that of GCN on Cora, Citeseer and Pubmed, and it has significant improvement on BlogCatalog and Flickr. It is important to note that the average node degree of BlogCatalog or Flickr is much higher than that of the former three datasets. That is to say, BlogCatalog and Flickr are denser graphs. Therefore, CUMF is consistent with GCN on sparse graphs and significantly outperforms GCN on dense graphs. Previous work shows that the graph convolution operation is actually a special form of Laplacian smoothing, and starcking many convolution layers may lead to over-smoothing. From this perspective, the reason might be that dense graphs are easier to make GCN becoming over-smoothing than sparse gaphs. As mentioned in section 3, we optimize the structure loss by mini-batch gradient descent. Comparing with the graph convolution operation in GCN, the capturing of structural information in our approach is much more flexible. Thus, our proposed CUMF model is less likely converge to over-smoothing. The following experimental phenomena in stability analysis also support this view. We also formally analyze the effects of two important hyper-parameters (b & k) on model performance. As mentioned in section 3, b controls the balance between classification loss and structure loss, and k is the number of negative samples for each positive edge. We conduct empirical experiments on attributed datasets, and the are shown in Figure 3. The flat lines in Figure 3 (a) demonstrates that k has little effect on the testing performances of our proposed CUMF model. As shown in Figure 3 (b), for sparse datasets, b has little impact on the testing performances of CUMF. For dense datasets, i.e., BlogCatalog and Flickr, the performances of CUMF model decrease with the increasing of b. The larger value of b indicates the CUMF model pays more attention on capturing structural information. This may give a reason of the poor performances of GCN on dense datasets: the graph convolution operation is easily to be extremely over-smoothing for capturing structural information on dense datasets. On the other hand, CUMF is flexible with the sparsity of graphs via adjusting the parameter b, which is relatively stable. In the end, we verify the capacity of CUMF (our proposed method) and GCN on distributed computing in our experiments. Specifically, as the performances of centralized GCN is close to those of centralized CUMF on Cora, Citeseer and Pubmed, we only conduct empirical experiments on these datasets for convenience of comparisons. 
And we include Random-GCN and Cluster-GCN as baselines. Besides, our distributed experiments are conducted in synchronous mode. Experimental are shown in Table 4. On these datasets, distributed CUMF does not suffer performance loss, but distributed GCN methods greatly do, comparing with in Table 3. The reason is that CUMF is MF-based, and MF-based methods naturally support constructing mini-batches . However, for GCN, constructing mini-batches is equivalent to solving graph partitioning problem , which is actually a quite challenging work . Moreover, it is most likely that only a small portion of each mini-batch is labeled, which may greatly affect the convergence of GCN. To the best of our knowledge, CUMF is the first work that connects GCN to MF. We theoretically unify GCN as co-training and unitized matrix factorization, and a CUMF model is therefore proposed. We conduct thorough and empirical experiments, which strongly verify the correctness of our theoretical analysis. The experimental show that CUMF achieve similar or superior performances compared to GCN. We also observe that GCN performs poor on dense graphs, while CUMF has great performances. This is may caused by the over-smoothing of graph convolution on dense graphs, while CUMF can balance the smoothing of neighbours and the classification of labeled nodes via co-training. Moreover, due to the MF-based architecture, CUMF is extremely flexible and easy to be applied to distributed computing for large scale real-world applications, and significantly outperforms state-of-the-art distributed GCN methods.
We unify graph convolutional networks as co-training and unitized matrix factorization.
Molecular graph generation is a fundamental problem for drug discovery and has been attracting growing attention. The problem is challenging since it requires not only generating chemically valid molecular structures but also optimizing their chemical properties in the meantime. Inspired by the recent progress in deep generative models, in this paper we propose a flow-based autoregressive model for graph generation called GraphAF. GraphAF combines the advantages of both autoregressive and flow-based approaches and enjoys: high model flexibility for data density estimation; efficient parallel computation for training; an iterative sampling process, which allows leveraging chemical domain knowledge for valency checking. Experimental results show that GraphAF is able to generate 68% chemically valid molecules even without chemical knowledge rules and 100% valid molecules with chemical rules. The training process of GraphAF is two times faster than the existing state-of-the-art approach GCPN. After fine-tuning the model for goal-directed property optimization with reinforcement learning, GraphAF achieves state-of-the-art performance on both chemical property optimization and constrained property optimization. Designing novel molecular structures with desired properties is a fundamental problem in a variety of applications such as drug discovery and material science. The problem is very challenging, since the chemical space is discrete by nature, and the entire search space is huge, which is believed to be as large as 10^33. Machine learning techniques have seen a big opportunity in molecular design thanks to the large amount of data in these domains. Recently, there have been increasing efforts to develop machine learning algorithms that can automatically generate chemically valid molecular structures and meanwhile optimize their properties. Specifically, significant progress has been achieved by representing molecular structures as graphs and generating graph structures with deep generative models, e.g., Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs) and autoregressive models. For example, one line of work proposed a Junction Tree VAE (JT-VAE) for molecular structure encoding and decoding, and another studied how to use GANs for molecular graph generation. You et al. (2018a) proposed an approach called Graph Convolutional Policy Network (GCPN), which formulated molecular graph generation as a sequential decision process and dynamically generates the nodes and edges based on the existing graph substructures. They used reinforcement learning to optimize the properties of generated graph structures. Recently, another closely related work called MolecularRNN (MRNN) proposed to use an autoregressive model for molecular graph generation. The autoregressive approaches, including both GCPN and MRNN, have demonstrated very competitive performance in a variety of tasks on molecular graph generation. Besides the aforementioned three types of generative models, normalizing flows have recently made significant progress and have been successfully applied to a variety of tasks including density estimation, variational inference, and image generation. Flow-based approaches define invertible transformations between a latent base distribution (e.g. a Gaussian distribution) and real-world high-dimensional data (e.g. images and speech). [Table 1: comparison of JT-VAE, RVAE, GCPN, MRNN, GraphNVP, and GraphAF.] Such an invertible mapping allows the calculation of the exact data likelihood.
Meanwhile, by using multiple layers of non-linear transformation between the hidden space and observation space, flows have a high capacity to model the data density. Moreover, different architectures can be designed to promote fast training or fast sampling depending on the requirement of different applications. Inspired by existing work on autoregressive models and recent progress of deep generative models with normalizing flows, we propose a flow-based autoregressive model called GraphAF for molecular graph generation. GraphAF effectively combines the advantages of autoregressive and flow-based approaches. It has a high model capacity and hence is capable of modeling the density of real-world molecule data. The sampling process of GraphAF is designed as an autoregressive model, which dynamically generates the nodes and edges based on existing sub-graph structures. Similar to existing models such as GCPN and MRNN, such a sequential generation process allows leveraging chemical domain knowledge and valency checking in each generation step, which guarantees the validity of generated molecular structures. Meanwhile, different from GCPN and MRNN as an autoregressive model during training, GraphAF defines a feedforward neural network from molecular graph structures to the base distribution and is therefore able to compute the exact data likelihood in parallel. As a , the training process of GraphAF is very efficient. We conduct extensive experiments on the standard ZINC dataset. Results show that the training of GraphAF is significantly efficient, which is two times faster than the state-of-theart model GCPN. The generated molecules are 100% valid by incorporating the chemical rules during generation. We are also surprised to find that even without using the chemical rules for valency checking during generation, the percentage of valid molecules generated by GraphAF can be still as high as 68%, which is significantly higher than existing state-of-the-art GCPN. This shows that GraphAF indeed has the high model capability to learn the data distribution of molecule structures. We further fine-tune the generation process with reinforcement learning to optimize the chemical properties of generated molecules. Results show that GraphAF significantly outperforms previous state-of-the-art GCPN on both property optimization and constrained property optimization tasks. A variety of deep generative models have been proposed for molecular graph generation recently (; ; ;). The RVAE model used a variational autoencoder for molecule generation, and proposed a novel regularization framework to ensure semantic validity. proposed to represent a molecule as a junction tree of chemical scaffolds and proposed the JT-VAE model for molecule generation. For the VAE-based approaches, the optimization of chemical properties is usually done by searching in the latent space with Bayesian Optimization . used Generative Adversarial Networks for molecule generation. The state-of-the-art models are built on autoregressive based approaches (b;). (b) formulated the problem as a sequential decision process by dynamically adding new nodes and edges based on current sub-graph structures, and the generation policy network is trained by a reinforcement learning framework. proposed an autoregressive model called MolecularRNN to generate new nodes and edges based on the generated nodes and edge sequences. 
The iterative nature of autoregressive model allows effectively leveraging chemical rules for valency checking during generation and hence the proportion of valid molecules generated by these models is very high. However, due to the sequential generation nature, the training process is usually slow. Our GraphAF approach enjoys the advantage of iterative generation process like autoregressive models (the mapping from latent space to observation space) and meanwhile calculates the exact likelihood corresponding to a feedforward neural network (the mapping from observation space to latent space), which can be implemented efficiently through parallel computation. Two recent work-Graph Normalizing Flows (GNF) and GraphNVP -are also flow-based approaches for graph generation. However, our work is fundamentally different from their work. GNF defines a normalizing flow from a base distribution to the hidden node representations of a pretrained Graph Autoencoders. The generation scheme is done through two separate stages by first generating the node embeddings with the normalizing flow and then generate the graphs based on the generated node embeddings in the first stage. By contrast, in GraphAF, we define an autoregressive flow from a base distribution directly to the molecular graph structures, which can be trained end-to-end. GraphNVP also defines a normalizing flow from a base distribution to the molecular graph structures. However, the generation process of GraphNVP is one-shot, which cannot effectively capture graph structures and also cannot guarantee the validity of generated molecules. In our GraphAF, we formulate the generation process as a sequential decision process and effectively capture the sub-graph structures via graph neural networks, based on which we define a policy function to generate the nodes and edges. The sequential generation process also allows incorporating the chemical rules. As a , the validity of the generated molecules can be guaranteed. We summarize existing approaches in Table 1. A normalizing flow defines a parameterized invertible deterministic transformation from a base distribution E (the latent space, e.g., Gaussian distribution) to real-world observational space Z (e.g. images and speech). Let f: E → Z be an invertible transformation where ∼ p E is the base distribution, then we can compute the density function of real-world data z, i.e., p Z (z), via the change-of-variables formula: Now considering two key processes of normalizing flows as a generative model: Calculating Data Likelihood: given a datapoint z, the exact density p Z (z) can be calculated by inverting the transformation f, = f −1 θ (z); Sampling: z can be sampled from the distribution p Z (z) by first sample ∼ p E and then perform the feedforward transformation z = f θ . To efficiently perform the above mentioned operations, f θ is required to be invertible with an easily computable Jacobian determinant. Autoregressive flows (AF), originally proposed in , is a variant that satisfies these criteria, which holds a triangular Jacobian matrix, and the determinant can be computed linearly. Formally, given z ∈ R D (D is the dimension of observation data), the autoregressive conditional probabilities can be parameterized as Gaussian distributions: where g µ and g α are unconstrained and positive scalar functions of z 1:d−1 respectively to compute the mean and deviation. In practice, these functions can be implemented as neural networks. 
The affine transformation of AF can be written as: The Jacobian matrix in AF is triangular, since ∂zi ∂ j is non-zero only for j ≤ i. Therefore, the determinant can be efficiently computed through D d=1 α d. Specifically, to perform density estimation, we can apply all individual scalar affine transformations in parallel to compute the base density, each of which depends on previous variables z 1:d−1; to sample z, we can first sample ∈ R D and compute z 1 through the affine transformation, and then each subsequent z d can be computed sequentially based on previously observed Following existing work, we also represent a molecule as a graph G = (A, X), where A is the adjacency tensor and X is the node feature matrix. Assuming there are n nodes in the graph, d and b are the number of different types of nodes and edges respectively, then A ∈ {0, 1} n×n×b and X ∈ {0, 1} n×d. A ijk = 1 if there exists a bond with type k between i th and j th nodes. Graph Convolutional Networks (GCN) (; ; ; ; Schütt et al., 2017) are a family of neural network architectures for learning representations of graphs. In this paper, we use a variant of Relational GCN (R-GCN) to learn the node representations (i.e., atoms) of graphs with categorical edge types. Let k denote the embedding dimension. We compute the node embeddings H l ∈ R n×k at the l th layer of R-GCN by aggregating messages from different edge types: where E i = A [:,:,i] denotes the i th slice of edge-conditioned adjacency tensor,Ẽ i = E i + I, and is a trainable weight matrix for the i th edge type. Agg(·) denotes an aggregation function such as mean pooling or summation. The initial hidden node representation H 0 is set as the original node feature matrix X. After L message passing layers, we use the the final hidden representation H L as the node representations. Meanwhile, the whole graph representations can be defined by aggregating the whole node representations using a readout function , e.g., summation. Similar to existing works like GCPN (a) and MolecularRNN , we formalize the problem of molecular graph generation as a sequential decision process. Let G = (A, X) denote a molecular graph structure. Starting from an empty graph G 1, in each step a new node X i is generated based on the current sub-graph structure G i, i.e., p(X i |G i). Afterwards, the edges between this new node and existing nodes are sequentially generated according to the current graph structure, i.e., p(A ij |G i, X i, A i,1:j−1). This process is repeated until all the nodes and edges are generated. An illustrative example is given in Fig. 1 GraphAF is aimed at defining an invertible transformation from a base distribution (e.g. multivariate Gaussian) to a molecular graph structure G = (A, X). Note that we add one additional type of edge between two nodes, which corresponds to no edge between two nodes, i.e., A ∈ {0, 1} n×n×(b+1). Since both the node type X i and the edge type A ij are discrete, which do not fit into a flow-based model, a standard approach is to use Dequantization technique to convert discrete data into continuous data by adding real-valued noise. We follow this approach to preprocess a discrete graph G = (A, X) into continuous data z = (z A, z X): We present further discussions on dequantization techniques in Appendix A. Formally, we define the conditional distributions for the generation as: where where, where g µ X, g µ A and g α X, g α A are parameterized neural networks for defining the mean and standard deviation of a Gaussian distribution. 
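Abstracting away the graph structure for a moment, the affine autoregressive transformation introduced above, z_d = mu_d + alpha_d * eps_d, and its inverse can be sketched for a plain vector as below; mu_fn and alpha_fn stand in for the conditioner networks g_mu and g_alpha and are assumed to handle the empty prefix at d = 0.

```python
import torch

def af_sample(eps, mu_fn, alpha_fn):
    """Sampling direction of an affine autoregressive flow, computed
    sequentially: z_d = mu(z_{1:d-1}) + alpha(z_{1:d-1}) * eps_d."""
    z = torch.zeros_like(eps)
    for d in range(eps.numel()):
        z[d] = mu_fn(z[:d]) + alpha_fn(z[:d]) * eps[d]
    return z

def af_density(z, mu_fn, alpha_fn):
    """Density direction: every eps_d depends only on observed z_{1:d-1},
    so all terms can run in parallel. Returns eps and the log-determinant
    term -sum_d log(alpha_d) for the change-of-variables formula."""
    alphas = torch.stack([alpha_fn(z[:d]) for d in range(z.numel())])
    mus = torch.stack([mu_fn(z[:d]) for d in range(z.numel())])
    eps = (z - mus) / alphas
    return eps, -torch.log(alphas).sum()
```

GraphAF specializes exactly this pattern to node and edge variables.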
More specifically, given the current sub-graph structure G i, we use a L-layer of Relational GCN (defined in Section 3.2) to learn the node embeddings H L i ∈ R n×k, and the embedding of entire sub-graphh i ∈ R k, based on which we define the mean and standard deviations of Gaussian distributions to generate the nodes and edges respectively: where sum denotes the sum-pooling operation, and H To generate a new node X i and its edges connected to existing nodes, we just sample random variables i and ij from the base Gaussian distribution and convert it to discrete features. More specifically, z where is the element-wise multiplication. In practice, a real molecular graph is generated by taking the argmax of generated continuous vectors, i.e.,, where v p q denotes a p dimensional one-hot vector with q th dimension equal to 1..., n,n−1 }, where n is the number of atoms in the given molecule, GraphAF defines an invertible mapping between the base Gaussian distribution and the molecule structures z = (z A, z X). According to Eq. 9, the inverse process from z = (z A, z X) to can be easily calculated as: In GraphAF, since f: E → Z is autoregressive, the Jacobian matrix of the inverse process f −1: Z → E is a triangular matrix, and its determinant can be calculated very efficiently. Given a minibatch of training data G, the exact density of each molecule under a given order can be efficiently computed by the change-of-variables formula in Eq. 1. Our objective is to maximize the likelihood of training data. During training, we are able to perform parallel computation by defining a feedforward neural network between the input molecule graph G and the output latent variable by using masking. The mask drops out some connections from inputs to ensure that R-GCN is only connected to the sub-graph G i when inferring the hidden variable of node i, i.e., i, and connected to sub-graph G i, X i, A i,1:j−1 when inferring the hidden variable of edge A ij, i.e., ij. This is similar to the approaches used in MADE and MAF . With the masking technique, GraphAF satisfies the autoregressive property, and at the same time p(G) can be efficiently calculated in just one forward pass by computing all the conditionals in parallel. To further accelerate the training process, the nodes and edges of a training graph are re-ordered according to the breadth-first search (BFS) order, which is widely adopted by existing approaches for graph generation (b;). Due to the nature of BFS, bonds can only be present between nodes within the same or consecutive BFS depths. Therefore, the maximum dependency distance between nodes is bounded by the largest number of nodes in a single BFS depth. In our data sets, any single BFS depth contains no more than 12 nodes, which means we only need to model the edges between current atom and the latest generated 12 atoms. Due to space limitation, we summarize the detailed training algorithm into Appendix B. In chemistry, there exist many chemical rules, which can help to generate valid molecules. Thanks to the sequential generation process, GraphAF can leverage these rules in each generation step. Specifically, we can explicitly apply a valency constraint during sampling to check whether current bonds have exceeded the allowed valency, which has been widely adopted in previous models (a;). Let |A ij | denote the order of the chemical bond A ij. 
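Collecting the sampling equations above into a single generation step, and anticipating the valency constraint on |A_ij| that is formalized next, a sketch of sampling one atom and its bonds looks like this; every interface here is an assumption, not the released implementation.

```python
import torch

def sample_step(rgcn, subgraph, num_prev, node_mlps, edge_mlps, valency_ok):
    """Sample one new atom and its bonds to previously generated atoms:
    draw eps ~ N(0, I), apply the affine transform z = eps * alpha + mu,
    discretize by argmax, and resample any bond that breaks valency."""
    h, g = rgcn(subgraph)                        # node embeddings, graph embedding
    mu, alpha = node_mlps(g)                     # g_mu^X and g_alpha^X
    z_x = torch.randn_like(mu) * alpha + mu
    atom_type = z_x.argmax().item()              # one-hot via argmax
    bonds = []
    for j in range(num_prev):
        while True:
            mu_e, alpha_e = edge_mlps(g, h[j])   # condition on sub-graph and atom j
            z_e = torch.randn_like(mu_e) * alpha_e + mu_e
            bond_type = z_e.argmax().item()      # b+1 classes; the last is "no edge"
            if valency_ok(atom_type, j, bond_type):
                break                            # otherwise reject and resample
        bonds.append(bond_type)
    return atom_type, bonds
```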
In each edge generation step of A ij, we check the following valency constraint for the i th and j th atoms: If the newly added bond breaks the valency constraint, we just reject the bond A ij, sample a new ij in the latent space and generate another new bond type. The generation process will terminate if one of the following conditions is satisfied: 1) the graph size reaches the max-size n, 2) no bond is generated between the newly generated atom and previous sub-graph. Finally, hydrogens are added to the atoms that have not filled up their valencies. So far, we have introduced how to use GraphAF to model the data density of molecular graph structures and generate valid molecules. Nonetheless, for drug discovery, we also need to optimize the chemical properties of generated molecules. In this part, we introduce how to fine-tune our generation process with reinforcement learning to optimize the properties of generated molecules. State and Policy Network. The state is the current sub-graph, and the initial state is an empty graph. The policy network is the same as the autoregressive model defined in Section 4.1, which includes the process of generating a new atom based on the current sub-graph and generating the edges between the new atom and existing atoms, i.e., p (X i |G i) and p (A ij |G i, X i, A i,1:j−1). The policy network itself defines a distribution p θ of molecular graphs G. If there are no edges between the newly generated atom and current sub-graph, the generation process terminates. For the state transition dynamics, we also incorporate the valency check constraint. Reward design. Similar to GCPN You et al. (2018a), we also incorporate both intermediate and final rewards for training the policy network. A small penalization will be introduced as the intermediate reward if the edge predictions violate the valency check. The final rewards include both the score of targeted-properties of generated molecules such as octanol-water partition coefficient (logP) or drug-likeness (QED) and the chemical validity reward such as penalties for molecules with excessive steric strain and or functional groups that violate ZINC functional group filters . The final reward is distributed to all intermediate steps with a discounting factor to stabilize the training. In practice, we adopt Proximal Policy Optimization (PPO) , an advanced policy gradient algorithm to train GraphAF in the above defined environment. Let G ij be the shorthand notation of sub-graph G i ∪ X i ∪ A i,1:j−1. Formally, in the RL process of training GraphAF, the loss function of PPO is written as: where r i (θ) = p θ (Xi|Gi) p θ old (Xi|Gi) and r ij (θ) = p θ (Aij |Gij) p θ old (Aij |Gij) are ratios of probabilities output by old and new policies, and V (state, action) is the estimated advantage function with a moving average baseline to reduce the variance of estimation. More specifically, we treat generating a node and all its edges with existing nodes as one step and maintain a moving average baseline for each step. The clipped surrogate objective prevents the policy network from being updated to collapse for some extreme rewards. Evaluation Tasks. Following existing works on molecule generation (; a;), we conduct experiments by comparing with the state-of-the-art approaches on three standard tasks. Density Modeling and Generation evaluates the model's capacity to learn the data distribution and generate realistic and diverse molecules. 
Property Optimization concentrates on generating novel molecules with optimized chemical properties. For this task, we fine-tune our network pretrained from the density modeling task to maximize the desired properties. Constrained Property Optimization is first proposed in , which is aimed at modifying the given molecule to improve desired properties while satisfying a similarity constraint. Data. We use the ZINC250k molecular dataset for training. The dataset contains 250, 000 drug-like molecules with a maximum atom number of 38. It has 9 atom types and 3 edge types. We use the open-source chemical software RDkit to preprocess molecules. All molecules are presented in kekulized form with hydrogen removed. Baselines. We compare GraphAF with the following state-of-the-art approaches for molecule generation. JT-VAE is a VAE-based model which generates molecules by first decoding a tree structure of scaffolds and then assembling them into molecules. JT-VAE has been shown to outperform other previous VAE-based models (; Gómez-;). GCPN is a state-of-the-art approach which combines reinforcement learning and graph representation learning methods to explore the vast chemical space. MolecularRNN (MRNN), another autoregressive model, uses RNN to generate molecules in a sequential manner. We also compare our model with GraphNVP , a recently proposed flow-based model. Results of baselines are taken from original papers unless stated. Implementation Details. GraphAF is implemented in PyTorch . The R-GCN is implemented with 3 layers, and the embedding dimension is set as 128. The max graph size is set as 48 empirically. For density modeling, we train our model for 10 epochs with a batch size of 32 and a learning rate of 0.001. For property optimization, we perform a grid search on the hyperparameters and select the best setting according to the chemical scoring performance. We use Adam to optimize our model. Full training details can be found in Appendix C. Density Modeling and Generation. We evaluate the ability of the proposed method to model real molecules by utilizing the widely-used metrics: Validity is the percentage of valid molecules among all the generated graphs. Uniqueness is the percentage of unique molecules among all the generated molecules. Novelty is the percentage of generated molecules not appearing in training set. Reconstruction is the percentage of the molecules that can be reconstructed from latent vectors. We calculate the above metrics from 10,000 randomly generated molecules. Table 2 shows that GraphAF achieves competitive on all four metrics. As a flow-based model, GraphAF holds perfect reconstruction ability compared with VAE approaches. Our model also achieves a 100% validity rate since we can leverage the valency check during sequential generation. By contrast, the validity rate of another flow-based approach GraphNVP is only 42.60% due to its one-shot sampling process. An interesting is that even without the valency check during generation, GraphAF can still achieve a validity rate as high as 68%, while previous state-of-the-art approach GCPN only achieves 20%. This indicates the strong flexibility of GraphAF to model the data density and capture the domain knowledge from unsupervised training on the large chemical dataset. We also compare the efficiency of different methods on the same computation environment, a machine with 1 Tesla V100 GPU and 32 CPU cores. 
We also compare the efficiency of different methods in the same computation environment, a machine with 1 Tesla V100 GPU and 32 CPU cores: to achieve the results in Table 2, JT-VAE and GCPN take around 24 and 8 hours, respectively, while GraphAF only takes 4 hours. To show that GraphAF is not overfitted to the specific dataset ZINC250k, we also conduct experiments on two other molecule datasets, QM9 and MOSES. QM9 contains 134k molecules with up to 9 heavy atoms, and MOSES is much larger and more challenging, containing 1.9M molecules with up to 30 heavy atoms. Table 3 shows that GraphAF can always generate valid, unique and novel molecules, even on the more complicated dataset MOSES. Furthermore, though GraphAF is originally designed for molecular graph generation, it is actually very general and can be used to model different types of graphs by simply modifying the node and edge generating functions, Edge-MLPs and Node-MLPs (Eq. 8). Following the experimental setup of Graph Normalizing Flows (GNF), we test GraphAF on two generic graph datasets: COMMUNITY-SMALL, a synthetic dataset containing 100 2-community graphs, and EGO-SMALL, a set of graphs extracted from the Citeseer dataset (Table 4 compares different graph generative models on general graphs with MMD metrics; we follow the evaluation scheme of GNF). In practice, we use one-hot indicator vectors as node features for the R-GCN. We borrow open-source scripts from GraphRNN (b) to generate the datasets and evaluate the different models. For evaluation, we report the Maximum Mean Discrepancy (MMD) between generated and training graphs using the graph-specific metrics proposed by You et al. (2018b). The results in Table 4 demonstrate that, when applied to generic graphs, GraphAF still consistently yields comparable or better results than GraphRNN and GNF. We give visualizations of generated generic graphs in Appendix D. Property Optimization. In this task, we aim at generating molecules with desired properties. Specifically, we choose penalized logP and QED as our target properties. The former is the logP score penalized by ring size and synthetic accessibility, while the latter measures the drug-likeness of the molecules. Note that both scores are calculated using empirical prediction models, and we adopt the script used in (a) to make the results comparable. To perform this task, we pretrain the GraphAF network for 300 epochs of likelihood modeling, and then apply the RL process described in Section 4.4 to fine-tune the network towards the desired chemical properties. Detailed reward design and hyperparameter settings can be found in Appendix C. Following existing works, we report the top-3 scores found by each model. As shown in Table 5, GraphAF outperforms all baselines by a large margin for penalized logP and achieves comparable results for QED. This indicates that, combined with the RL process, GraphAF successfully captures the distribution of desired molecules. Note that we re-evaluate the properties of the top-3 molecules found by MolecularRNN, which turn out to be lower than the results reported in the original paper. Figures 2(a) and 2(b) show the molecules with the highest scores discovered by our model. More realistic molecules generated by GraphAF, with penalized logP scores ranging from 5 to 10, are presented in Figure 6 in Appendix E. One should note that, as defined in Section 4.4, our RL process is close to the one used in the previous work GCPN (a). Therefore, the good property optimization performance is believed to come from the flexibility of flow.
Compared with the GAN model used in GCPN, which is known to suffer from the mode collapse problem, flow is flexible at modeling complex distributions and generating diverse data (as shown in Table 2 and Table 3). This allows GraphAF to explore a variety of molecule structures in the RL process for molecular property optimization. Constrained Property Optimization. The goal of the last task is to modify a given molecule to improve a specified property under the constraint that the similarity between the original and modified molecule is above a threshold δ. Following Jin et al. (2018) and You et al. (2018a), we choose to optimize penalized logP for the 800 molecules in ZINC250k with the lowest scores, and adopt Tanimoto similarity with Morgan fingerprints as the similarity metric. Similar to the property optimization task, we pretrain GraphAF via density modeling and then fine-tune the model with RL. During generation, we set the initial states as sub-graphs randomly sampled from the 800 molecules to be optimized. For evaluation, we report the mean and standard deviation of the highest improvement and the corresponding similarity between the original and modified molecules in Table 6. Experimental results show that GraphAF significantly outperforms all previous approaches and almost always succeeds in improving the target property. Figure 2(c) visualizes two optimization examples, showing that our model is able to improve the penalized logP score by a large margin while maintaining a high similarity between the original and modified molecule. We proposed GraphAF, the first flow-based autoregressive model for generating realistic and diverse molecular graphs. GraphAF is capable of modeling the complex molecular distribution thanks to the flexibility of normalizing flows, and generates novel and 100% valid molecules in empirical experiments. Moreover, the training of GraphAF is very efficient. To optimize the properties of generated molecules, we fine-tuned the generative process with reinforcement learning. Experimental results show that GraphAF outperforms all previous state-of-the-art baselines on the standard tasks. In the future, we plan to train GraphAF on larger datasets and also extend it to generate other types of graph structures (e.g., social networks). In this section, we elaborate the network architecture and the implementation details of the three tasks. Network architecture. The network architecture is fixed among all three tasks. More specifically, the R-GCN is implemented with 3 layers and the embedding dimension is set to 128. We use batch normalization before graph pooling to accelerate convergence, and choose sum-pooling as the readout function for graph representations. Both node MLPs and edge MLPs have two fully-connected layers equipped with tanh non-linearity. Density Modeling and Generation. To achieve the results in Table 2, we train GraphAF on ZINC250k with a batch size of 32 on 1 Tesla V100 GPU and 32 CPU cores for 10 epochs. We optimize our model with Adam with a fixed learning rate of 0.001. Property Optimization. For both property optimization and constrained property optimization, we first pretrain a GraphAF network via the density modeling task for 300 epochs, and then fine-tune the network towards the desired molecular distribution through the RL process. Following are the details of the reward design for property optimization. The reward of each step consists of step-wise validity rewards and the final rewards discounted by a fixed factor γ. The step-wise validity penalty is fixed at -1.
The final reward of a molecule m includes both a property-targeted reward and a chemical validation reward. We adopt the same chemical validation rewards as GCPN, and define the property-targeted reward as an increasing function of the final property score. γ is set to 0.97 for QED optimization and 0.9 for penalized logP optimization, respectively. We fine-tune the pretrained model for 200 iterations with a fixed batch size of 64 using the Adam optimizer. We also adopt a linear learning rate warm-up to stabilize training. We perform a grid search to determine the optimal hyperparameters according to the chemical scoring performance; the search space is summarized in Table 7. Constrained Property Optimization. We first introduce the way we sample sub-graphs from the 800 ZINC molecules. Given a molecule, we first randomly sample a BFS order, and then drop the last m nodes in the BFS order as well as the edges induced by these nodes, where m is chosen uniformly at random from {0, 1, 2, 3, 4, 5} each time. Finally, we reconstruct the sub-graph from the remaining nodes in the BFS sequence. Note that the final sub-graph is connected due to the nature of the BFS order. For the reward design, we set it as the improvement of the target score. We fine-tune the pretrained model for 200 iterations with a batch size of 64. We also use Adam with a learning rate of 0.0001 to optimize the model. Finally, each molecule is optimized 200 times by the tuned model. We present visualizations of graphs generated by GraphAF, as well as the training graphs, in Figure 3 and Figure 4. The visualizations demonstrate that GraphAF has a strong ability to learn different graph structures from generic graph datasets. We present more molecule samples generated by GraphAF in the following pages. Figure 5 presents 50 molecules randomly sampled from the multivariate Gaussian, which demonstrate the ability of our model to generate novel, realistic and unique molecules. Figure 6 shows that our model is able to generate molecules with high and diverse penalized logP scores ranging from 5 to 10. For constrained optimization of the penalized logP score, as shown by Figure 7, our model can either reduce the ring size, remove a big ring, or grow carbon chains from the original molecule, improving the penalized logP score by a large margin. Figure 3: Visualizations of training graphs (a) and generated graphs (b) of EGO-SMALL. Figure 4: Visualizations of training graphs (a) and generated graphs (b) of COMMUNITY-SMALL.
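A minimal sketch of the BFS-based sub-graph sampling described above, using networkx as a hypothetical stand-in for the molecular graph representation; treat it as illustrative rather than the authors' code.

```python
import random
import networkx as nx

def sample_bfs_subgraph(graph, max_drop=5):
    """Drop the last m nodes of a random BFS order; keep the induced sub-graph.

    `graph` is an undirected networkx.Graph standing in for the molecular
    graph; the prefix of a BFS order always induces a connected sub-graph.
    """
    start = random.choice(list(graph.nodes))
    order = [start] + [v for _, v in nx.bfs_edges(graph, start)]
    m = random.randint(0, max_drop)
    kept = order[:max(len(order) - m, 1)]  # always keep at least the start node
    return graph.subgraph(kept).copy()
```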
A flow-based autoregressive model for molecular graph generation. Reaching state-of-the-art results on molecule generation and property optimization.
1,170
scitldr
We study two types of preconditioners and preconditioned stochastic gradient descent (SGD) methods in a unified framework. We call the first one the Newton type due to its close relationship to the Newton method, and the second one the Fisher type as its preconditioner is closely related to the inverse of the Fisher information matrix. Both preconditioners can be derived from one framework, and efficiently estimated on any matrix Lie group designated by the user, using natural or relative gradient descent minimizing certain preconditioner estimation criteria. Many existing preconditioners and methods, e.g., RMSProp, Adam, KFAC, equilibrated SGD, batch normalization, etc., are special cases of, or closely related to, either the Newton type or the Fisher type ones. Experimental results on relatively large scale machine learning problems are reported for performance study. This paper investigates the use of preconditioners for accelerating gradient descent, especially in large scale machine learning problems. Stochastic gradient descent (SGD) and its variations, e.g., momentum BID11 BID9, RMSProp and Adagrad BID5, Adam BID6, etc., are popular choices due to their simplicity and wide applicability. These simple methods do not use well normalized step sizes, can converge slowly, and might involve more controlling parameters requiring fine tweaking. Convex optimization is a well studied field BID2. Many off-the-shelf methods there, e.g., (nonlinear) conjugate gradient descent, quasi-Newton methods, Hessian-free optimization, etc., can be applied to small and middle scale machine learning problems without much modification. However, these convex optimization methods may have difficulty handling gradient noise and scaling up to problems with hundreds of millions of free parameters. For a large family of machine learning problems, natural gradient with the Fisher information metric is equivalent to a preconditioned gradient using the inverse of the Fisher information matrix as the preconditioner BID1. Natural gradient and its variations, e.g., Kronecker-factored approximate curvature (KFAC) BID8 and the one in BID10, all use such preconditioners. Other less popular choices are the equilibrated preconditioner BID4 and the one proposed in BID7. Momentum, or the heavy-ball method, provides another independent way to accelerate convergence BID9 BID11. Furthermore, momentum and preconditioner can be combined to further accelerate convergence, as shown in Adam BID6. This paper groups the above mentioned preconditioners and preconditioned SGD methods into two classes, the Newton type and the Fisher type. The Newton type is closely related to the Newton method, and is suitable for general purpose optimization. The Fisher type preconditioner relates to the inverse of the Fisher information matrix, and is limited to a large subclass of stochastic optimization problems where the Fisher information metric can be well defined. Both preconditioners can be derived from one framework, and estimated on any matrix Lie group designated by the user with almost the same natural or relative gradient descent methods minimizing specific preconditioner estimation criteria. We consider the minimization of the cost function f(θ) = E_z[ℓ(θ, z)], where E_z takes expectation over the random variable z, ℓ is a loss function, and θ is the model parameter vector to be optimized.
For example, in a classification problem, ℓ could be the cross entropy loss, z is a pair of input feature vector and class label, the vector θ consists of all the trainable parameters in the classification model, and E_z takes the average over all samples in the training dataset. By assuming a second order differentiable model and loss, we can approximate ℓ(θ, z) as a quadratic function of θ within a trust region around θ, i.e., ℓ(θ, z) = b_z^T θ + 0.5 θ^T H_z θ + a_z, where a_z is the sum of the higher order approximation errors and a constant term independent of θ, H_z is a symmetric matrix, and the subscript z in b_z, H_z and a_z reminds us that these three terms depend on z. Clearly, these three terms depend on θ as well, although we do not explicitly show this dependence to simplify our notation, since we only consider parameter updates within the same trust region. Now, we may rewrite f(θ) = b^T θ + 0.5 θ^T H θ + a, where b, H and a are the expectations of b_z, H_z and a_z over z. We do not impose any assumption, e.g., positive definiteness, on H except for being symmetric. Thus, the quadratic surface in the trust region could be non-convex. To simplify our notation, we no longer consider the higher order approximation errors included in a, and simply assume that f(θ) is a quadratic function of θ in the trust region. Let us consider a certain iteration. Preconditioned SGD updates θ as θ ← θ − μ P ∂f̂(θ)/∂θ, where μ > 0 is the step size, f̂(θ) is an estimate of f(θ) obtained by replacing the expectation with a sample average, and the positive definite matrix P could be a fixed or adaptive preconditioner. By letting θ′ = P^{−0.5} θ, we can rewrite the update as θ′ ← θ′ − μ ∂f̂(θ)/∂θ′, where P^{−0.5} denotes the principal square root of P. Hence, preconditioned SGD is equivalent to SGD in a transformed parameter domain. Within the considered trust region, let us write the stochastic gradient ∂f̂(θ)/∂θ explicitly as ∂f̂(θ)/∂θ = Ĥ θ + b̂, where Ĥ and b̂ are estimates of H and b, respectively. Combining the above relations gives the following linear system, θ ← (I − μ P Ĥ) θ − μ P b̂, for updating θ within the assumed trust region, where I is the identity matrix. A properly determined P could significantly accelerate the convergence of this locally linear system. We review a few facts shown in BID7 before introducing our main contributions. Let δθ be a random perturbation of θ, small enough such that θ + δθ still resides in the same trust region. Then, the above relation suggests the resultant perturbation of the stochastic gradient, δĝ = Ĥ δθ = H δθ + ε, where ε accounts for the error due to replacing Ĥ with H. Note that, by definition, δĝ is a random vector dependent on both z and δθ. The preconditioner in BID7 is pursued by minimizing the criterion c(P) = E_{z,δθ}[δĝ^T P δĝ + δθ^T P^{−1} δθ], where the subscript δθ in E_{z,δθ} denotes taking expectation over δθ. Under mild conditions, this criterion determines a unique positive definite P, which is optimal in the sense that it preconditions the stochastic gradient such that P E_{z,δθ}[δĝ δĝ^T] P = E_δθ[δθ δθ^T], which is comparable to the relationship H^{−1} δg δg^T H^{−1} = δθ δθ^T, where δg = H δθ is the perturbation of the noiseless gradient, and we assume that H is invertible, but not necessarily positive definite. Clearly, this preconditioner is comparable to H^{−1}.
It perfectly preconditions the gradient such that the amplitudes of the parameter perturbations match those of the associated preconditioned gradient perturbations, regardless of the amount of gradient noise. Naturally, preconditioned SGD with this preconditioner inherits the scale-invariance property of the Newton method. Note that, in the presence of gradient noise, the optimal P and P^{−1} given by this criterion are not unbiased estimates of H^{−1} and H, respectively. Actually, even if H is positive definite and available, H^{−1} may not always be a good preconditioner when H is ill-conditioned, since it could significantly amplify the gradient noise along the directions of the eigenvectors of H associated with small eigenvalues and lead to divergence. More specifically, it is shown in BID7 that this optimal preconditioner damps the gradient noise along such directions instead of amplifying it. The preconditioner estimation criterion requires δθ to be small enough such that θ and θ + δθ reside in the same trust region. In practice, numerical error might be an issue when handling small numbers with floating point arithmetic. This concern becomes more grave with the popularity of single and even half precision math in large scale neural network training. Luckily, the relation δĝ = Ĥ δθ ties δĝ to the Hessian-vector product, which can be efficiently evaluated with automatic differentiation software tools. Let v be a random vector with the same dimension as θ. Then, the Hessian-vector product can be evaluated as Ĥ v = ∂(v^T ∂f̂(θ)/∂θ)/∂θ. Now, replacing (δθ, δĝ) in the criterion with (v, Ĥv) leads to our new preconditioner estimation criterion, c_n(P) = E_{z,v}[(Ĥv)^T P (Ĥv) + v^T P^{−1} v], where the subscript v in E_{z,v} indicates taking expectation over v. We no longer need to assume v to be an arbitrarily small vector. It is important to note that this criterion only requires the Hessian-vector product; the Hessian itself is not of interest. We call this the Newton type preconditioner estimation criterion, as the resultant preconditioned SGD method is closely related to the Newton method.
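The Hessian-vector product above can be computed with two backward passes in any autodiff framework; a minimal PyTorch sketch (function name illustrative):

```python
import torch

def hessian_vector_product(loss, theta, v):
    """Exact Hessian-vector product H v via double backprop.

    `loss` is a scalar built from `theta` (requires_grad=True), and `v` is a
    vector of the same shape; create_graph=True keeps the graph of the first
    backward pass so it can be differentiated again.
    """
    g = torch.autograd.grad(loss, theta, create_graph=True)[0]
    # d(v^T g)/d(theta) = H v, since v does not depend on theta.
    hv = torch.autograd.grad(torch.sum(g * v), theta)[0]
    return g, hv
```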
We consider the machine learning problems where the empirical Fisher information matrix can be well defined by F̂ = E_z[ĝ ĝ^T] + λ² I, where ĝ = ∂f̂(θ)/∂θ is shorthand for the stochastic gradient, and λ ≥ 0 is a damping factor. The corresponding criterion replaces Ĥv in c_n with ĝ + λv, i.e., c_f(P) = E_{z,v}[(ĝ + λv)^T P (ĝ + λv) + v^T P^{−1} v]. Clearly, v is independent of ĝ. Let us further assume that v is drawn from the standard multivariate normal distribution N(0, I). Then, we can simplify c_f(P) as c_f(P) = E_z[ĝ^T P ĝ] + λ² tr(P) + tr(P^{−1}). By setting the derivative of c_f(P) with respect to P to zero, the optimal positive definite solution for c_f(P) is readily shown to be P = (E_z[ĝ ĝ^T] + λ² I)^{−1/2}. When ĝ is a gradient estimate obtained by averaging over B independent samples, E_z[ĝ ĝ^T] is related to the Fisher information matrix F by E_z[ĝ ĝ^T] = [F + (B − 1) g g^T]/B, where g is the exact gradient. We call this preconditioner the Fisher type one due to its close relationship to the Fisher information matrix. One can easily modify this preconditioner to obtain an unbiased estimate of F^{−1}. Let s be an exponential moving average of ĝ. Then, after replacing the ĝ in the closed-form solution with ĝ − s + s/√B and setting λ = 0, P²/B will be an unbiased estimate of F^{−1}. Generally, it might be acceptable to keep the bias term, (B − 1) g g^T / B, for two reasons: it is nonnegative definite and regularizes the inversion, and it vanishes when the parameters approach a stationary point. Actually, the Fisher information matrix could be singular for many commonly used models, e.g., finite mixture models, neural networks, and hidden Markov models. We might not be able to invert F for these singular statistical models without using regularization or damping. A Fisher type preconditioner with λ > 0 loses the scale-invariance property of a Newton type preconditioner. Both P and P² can be useful preconditioners when the step size μ and damping factor λ are set properly. Following the ideas in BID7, we can show that the Newton type criterion determines a unique positive definite preconditioner if and only if the second moment matrix E_{z,v}[(Ĥv)(Ĥv)^T] has distinct eigenvalues. Other minimum solutions of the criterion are either indefinite or negative definite, and are not of interest for our purpose. The proof itself has limited novelty, so we omit it here. Instead, let us consider the simplest case, where θ is a scalar parameter, to gain some intuitive understanding of the criterion. For a scalar parameter, it is trivial to show that the optimal solutions minimizing the criterion are p = ±1/√(h² + E_z[(h − ĥ)²]), where Ĥ, H, P, and v are replaced with their plain lowercase letters, and we have used the fact that h − ĥ and v are independent. For gradient descent, we choose the positive solution, although the negative one gives the global minimum of the criterion. With the positive preconditioner, the eigenvalue of the locally linear system is 1 − μph = 1 − μh/√(h² + E_z[(h − ĥ)²]). Now, it is clear that this optimal preconditioner damps the gradient noise when E_z[(h − ĥ)²] is large, and preconditions the locally linear system such that its eigenvalue has unitary amplitude when the gradient noise vanishes. Convergence is ensured when a normalized step size, i.e., 0 < μ < 1, is used. For θ with higher dimensions, the eigenvalues of the locally linear system are normalized into the range [−1, 1] as well, in a way similar to the scalar case. Let us take the Newton type preconditioner as an example to derive its updating rule; the updating rule for the Fisher type preconditioner is the same, except for replacing the Hessian-vector product with the stochastic gradient. Here, Lie group always refers to a matrix Lie group. It is inconvenient to optimize P directly, as it must be a symmetric and positive definite matrix. Instead, we represent the preconditioner as P = Q^T Q, and estimate Q. Now, Q must be a nonsingular matrix, as both c_n(P) and c_f(P) diverge when P is singular. Invertible matrices of the same dimension form a Lie group. In practice, we are more interested in Lie groups with sparse representations; examples of such groups are given in the next section. Let us consider a proper small perturbation δQ of Q, such that Q + δQ still lies on the same Lie group. The distance between Q and Q + δQ can be naturally defined as dist(Q, Q + δQ) = tr(δQ Q^{−1} Q^{−T} δQ^T) BID1. Intuitively, this distance is larger for the same amount of perturbation when Q is closer to a singular matrix. With the above tensor metric definition, the natural gradient for optimizing Q takes the form R Q, where the matrix R is determined by the criterion and the group. For example, when Q lives on the group of invertible upper triangular matrices, R is given by R = triu{E_{z,v}[(Q Ĥv)(Q Ĥv)^T − (Q^{−T} v)(Q^{−T} v)^T]}, where triu takes the upper triangular part of a matrix. Another way to derive this gradient is to let δQ = E Q and consider the derivative with respect to E, where E is a proper small matrix such that Q + E Q still lives on the same Lie group.
The gradient derived in this way is known as the relative gradient BID3. For our preconditioner learning problem, the relative gradient and the natural gradient have the same form. Now, Q can be updated using natural or relative gradient descent as Q ← Q − μ R Q. In practice, it is convenient to use the following updating rule with a normalized step size, Q ← Q − (μ₀/‖R̂‖) R̂ Q, where 0 < μ₀ < 1, ‖·‖ takes the norm of a matrix, and R̂ is a stochastic estimate of R. One simple choice of matrix norm is the maximum absolute value of the matrix. Note that natural gradients can take different forms; one should not confuse the natural gradient on the Lie group, derived from a tensor metric, with the natural gradient for model parameter learning, derived from a Fisher information metric. One iteration of the Newton type preconditioned SGD consists of the following steps: 1) evaluate the stochastic gradient ĝ; 2) draw v and evaluate the Hessian-vector product Ĥv; 3) update Q with the normalized natural/relative gradient step above; 4) update the parameters as θ ← θ − μ Q^T Q ĝ. The two step sizes, μ and μ₀, are normalized. They should take values in the range (0, 1), with typical value 0.01. We usually initialize Q to a scaled identity matrix of proper dimension. The specific form of R̂ depends on the Lie group to be considered. For example, for upper triangular Q, we have R̂ = triu[(Q Ĥv)(Q Ĥv)^T − (Q^{−T} v)(Q^{−T} v)^T], where Q^{−T} v can be efficiently calculated with back substitution. We only need to replace Ĥv in the Newton type preconditioned SGD with ĝ + λv to obtain the Fisher type one, so we do not list its main steps here. Note that only its step size for the preconditioner update is normalized; there is no simple way to jointly determine the proper ranges for the step size μ and the damping factor λ. Again, R̂ may take different forms on different Lie groups. For upper triangular Q, we have R̂ = triu[(Q(ĝ + λv))(Q(ĝ + λv))^T − (Q^{−T} v)(Q^{−T} v)^T], where v ∼ N(0, I). Here, it is important to note that the natural or relative gradient for c_f(P) with the closed-form solution given above involves explicit matrix inversion; however, matrix inversion can be avoided by using the form of c_f(P) that includes v as an auxiliary variable. It is highly recommended to avoid explicit matrix inversion for large Q. There are many ways to modify the above preconditioned SGD methods. Since curvature typically evolves more slowly than the gradient, one can update the preconditioner less frequently to save computation per iteration. With parallel computing available, one might update the preconditioner and model parameters simultaneously and asynchronously to save wall time per iteration. Combining preconditioner and momentum may further accelerate convergence. For recurrent neural network learning, we may need to clip the norm of the preconditioned gradients to avoid excessively large parameter updates. In general, preconditioned gradient clipping relates to the trust region method by ‖θ[new] − θ‖ = μ ‖P ĝ‖ / max(1, ‖P ĝ‖/Ω) ≤ μΩ, where Ω > 0 is a clipping threshold, comparable to the size of the trust region. One may set Ω to a positive number proportional to the square root of the number of model parameters. Most importantly, we can choose different Lie groups for estimating our preconditioners to achieve a good trade-off between performance and complexity. In practice, we seldom consider the Lie group consisting of dense invertible matrices for preconditioner estimation when the problem size is large. Lie groups with sparse structures are of more interest.
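A PyTorch sketch of the normalized update for upper-triangular Q, assuming the triu(a a^T − b b^T) form of R̂ given above with a = Q(Ĥv) and b = Q^{−T}v; this is a sketch under those assumptions, not the authors' exact code.

```python
import torch

def update_triangular_Q(Q, hv, v, mu0=0.01):
    """One normalized natural/relative gradient step for upper-triangular Q.

    `hv` is the Hessian-vector product H v (replace it with g + lambda*v for
    the Fisher type).
    """
    a = Q @ hv.reshape(-1, 1)
    # Solve Q^T b = v by back substitution; Q^T is lower triangular.
    b = torch.linalg.solve_triangular(Q.T, v.reshape(-1, 1), upper=False)
    R = torch.triu(a @ a.T - b @ b.T)
    norm = torch.max(torch.abs(R)) + 1e-12   # max-abs matrix norm
    return Q - (mu0 / norm) * (R @ Q)
```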
To begin with, let us recall a few facts about Lie groups. If A and B are two Lie groups, then A^T, A ⊗ B, and A ⊕ B are all Lie groups, where ⊗ and ⊕ denote the Kronecker product and direct sum, respectively. Furthermore, for any matrix C with compatible dimensions, the block matrix [[A, C], [0, B]] still forms a Lie group. We do not show proofs of the above statements here, as they require no more than a few lines of algebraic manipulation. These simple rules can be used to design many useful Lie groups for constructing sparse preconditioners. We already know that invertible upper triangular matrices form a Lie group. Here, we list a few useful groups with sparse representations. Diagonal matrices of the same dimension with positive diagonal entries form a Lie group with a reducible representation; preconditioners learned on this group are called diagonal preconditioners. For a matrix parameter Θ, we can flatten Θ into a vector and precondition its gradient using a Kronecker product preconditioner with Q having the form Q = Q₂ ⊗ Q₁. Clearly, such Q form a Lie group as long as Q₁ and Q₂ belong to two Lie groups. Let us check its role in learning the affine transformation y = Θ x, where x is the input feature vector augmented with 1, and y is the output feature vector. After reverting the flattened Θ back to its matrix form, the preconditioned SGD learning rule for Θ is Θ ← Θ − μ Q₁^T Q₁ (∂f̂/∂Θ) Q₂^T Q₂. Similarly to before, we introduce the coordinate transformation Θ′ = Q₁^{−T} Θ Q₂^{−1}, and rewrite the update as Θ′ ← Θ′ − μ ∂f̂/∂Θ′. Correspondingly, the affine transformation is rewritten as y′ = Θ′ x′, where y′ = Q₁^{−T} y and x′ = Q₂ x are the transformed output and input feature vectors, respectively. Hence, the preconditioned SGD update for Θ is equivalent to plain SGD with the transformed feature vectors x′ and y′. We know that feature whitening and normalization can significantly accelerate convergence; a Kronecker product preconditioner plays a similar role in learning the affine transformation. The scaling-and-normalization preconditioner is a special Kronecker product preconditioner obtained by constraining Q₁ to be a diagonal matrix, and Q₂ to be a sparse matrix where only its diagonal and last column can have nonzero values. Note that such Q₂ with nonzero diagonal entries form a Lie group; hence, Q = Q₂ ⊗ Q₁ is a Lie group as well. We call it a scaling-and-normalization preconditioner, as it resembles a preconditioner that scales the output features and normalizes the input features. Let us check the transformed features y′ = Q₁^{−T} y and x′ = Q₂ x. It is clear that y′ is an element-wise scaled version of y, as Q₁ is a diagonal matrix. To make x′ a "normalized" feature vector, x needs to be an input feature vector augmented with 1. Let us check a simple example to verify this point. We consider an input vector with two features, and write down its normalized features explicitly as [x′₁; x′₂; 1] = [[1/σ₁, 0, −m₁/σ₁], [0, 1/σ₂, −m₂/σ₂], [0, 0, 1]] [x₁; x₂; 1], where m_i and σ_i are the mean and standard deviation of x_i, respectively. It is straightforward to show that this feature normalization operation forms a Lie group with four degrees of freedom. For the scaling-and-normalization preconditioner, there is no need to force the last diagonal entry of Q₂ to be 1; hence, the group of feature normalization operations is a subgroup of the group of Q₂.
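The Kronecker product preconditioner never needs to be formed explicitly when applied to a matrix gradient; a sketch, with the vec convention assumed as stated in the comment:

```python
import torch

def kron_preconditioned_step(Theta, grad, Q1, Q2, mu=0.01):
    """Apply the Kronecker product preconditioner P = Q^T Q, Q = Q2 (x) Q1,
    to a matrix-shaped gradient without forming the large Kronecker matrix.

    Uses the identity (A (x) B) vec(X) = vec(B X A^T), which gives the
    matrix-form update Theta <- Theta - mu * Q1^T Q1 grad Q2^T Q2 (up to the
    flattening convention, which should be checked against one's own code).
    """
    pre_grad = Q1.T @ Q1 @ grad @ Q2.T @ Q2
    return Theta - mu * pre_grad
```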
The scaling-and-whitening preconditioner is another special Kronecker product preconditioner, obtained by constraining Q₁ to be a diagonal matrix and Q₂ to be an upper triangular matrix with positive diagonal entries. We call it a scaling-and-whitening preconditioner, since it resembles a preconditioner that scales the output features and whitens the input features. Again, the input feature vector x must be augmented with 1, such that the whitening operation forms a Lie group represented by upper triangular matrices with 1 as the last diagonal entry; this is a subgroup of the group of Q₂, as we have no need to fix Q₂'s last diagonal entry to 1. It is not possible to enumerate all the kinds of Lie groups suitable for constructing preconditioners. For example, a Kronecker product preconditioner of the form Q = Q₃ ⊗ Q₂ ⊗ Q₁ could be used for preconditioning the gradient of a third order tensor. The normalization and whitening groups are just two special cases of the groups with the block form shown above, and there are numerous more choices with sparsities between those of these two. Regardless of the detailed form of Q, all such preconditioners share the same form of learning rule, and they all can be efficiently learned using natural or relative gradient descent without much tuning effort. Adagrad, RMSProp and Adam all use the Fisher type preconditioner living on the group of diagonal matrices with positive diagonal entries. This is a simple group: the optimal solution for c_f(P) has the closed form P = diag(1 ⊘ √(E_z[ĝ ⊙ ĝ] + λ²)), where ⊙ and ⊘ denote element-wise multiplication and division, respectively. In practice, a simple exponential moving average is used to replace the expectation when using this preconditioner. For the diagonal preconditioner, the optimal solution minimizing c_n(P) also has a closed form, P = diag(√(E_v[v ⊙ v] ⊘ E_{z,v}[(Ĥv) ⊙ (Ĥv)])); when E_v[v ⊙ v] reduces to a vector with unit entries, this optimal solution gives the equilibration preconditioner in BID4.
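The diagonal Fisher type closed form above is exactly the RMSProp/Adam-style denominator; a minimal sketch, with the moving average standing in for the expectation:

```python
import torch

def diagonal_fisher_preconditioner(grad_sq_ema, lam=1e-4):
    """Closed-form diagonal Fisher type preconditioner.

    `grad_sq_ema` is an exponential moving average of g*g, standing in for
    E_z[g (.) g]; the result is the familiar element-wise 1/sqrt(v + lam^2)
    scaling applied to the gradient.
    """
    return 1.0 / torch.sqrt(grad_sq_ema + lam ** 2)
```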
The preconditioners considered in BID10 and BID8 are closely related to the Fisher type Kronecker product preconditioners. While KFAC approximates the Fisher metric of a matrix parameter as a Kronecker product to obtain its approximate inverse in closed form, our method turns to an iterative solution that approximates this same inverse. Theoretically, our method's accuracy is only limited by the expressive power of the Lie group, since no intermediate approximation is made. In practice, one distinct advantage of our method over KFAC is that explicit matrix inversion is avoided by introducing the auxiliary vector v and using back substitution, while KFAC typically requires inversion of symmetric matrices. On graphics processing units (GPUs) and with large matrices, parallel back substitution is as computationally cheap as matrix multiplication, and can be several orders of magnitude faster than inversion of a symmetric matrix. Another advantage is that our method is derived from a unified framework: there is no need to invent different preconditioner learning rules when we switch Lie group representations. Batch normalization can be viewed as preconditioned SGD using a specific scaling-and-normalization preconditioner with the constraint Q₁ = I and Q₂ from the feature normalization Lie group. However, we should be aware that explicit input feature normalization is only empirically shown to accelerate convergence, and has little meaning in certain scenarios, e.g., recurrent neural network learning, where features may not have any stationary first or second order statistics. Both the Newton and Fisher type preconditioned SGD methods provide a more general and principled approach to finding the optimal preconditioner, and apply to a broader range of applications. Generally, a scaling-and-normalization preconditioner does not necessarily "normalize" the input features in the sense of mean removal and variance normalization. We use the square root Fisher type preconditioners in the following experiments, since they are less picky about the damping factor and seem to be more numerically robust on large scale problems. Still, as shown in our PyTorch implementation package, the original Fisher type preconditioners can perform better on small scale problems like the MNIST handwritten digit recognition task. Let us consider the minimization of the Rosenbrock function, f(θ) = 100(θ₂ − θ₁²)² + (1 − θ₁)², starting from the initial guess θ = [−1, 1]. This is a well known benchmark problem for mathematical optimization. The compared methods use fixed step sizes. For each method, the best step size is selected from the sequence {..., 1, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01, ...}. For gradient descent, the best step size is 0.002. For the momentum method, the moving average factor is 0.9, and the best step size is 0.002. For Nesterov momentum, the best step size is 0.001. For preconditioned SGD, Q is initialized to 0.1I and lives on the group of triangular matrices. For the Fisher type method, we set λ = 0.1, and step sizes 0.01 and 0.001 for preconditioner and parameter updates, respectively. For the Newton type method, we set step sizes 0.2 and 0.5 for preconditioner and parameter updates, respectively. FIG1 summarizes the results. The Newton type method performs the best, converging to the optimal solution in about 200 iterations. The Fisher type method does not fit this problem, and performs poorly as expected. Mathematical optimization is not our focus; still, this example shows that the Newton type preconditioned SGD works well for mathematical optimization.
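For reference, the plain gradient descent baseline on this benchmark is only a few lines; a PyTorch sketch using the reported best step size (the iteration count is illustrative):

```python
import torch

def rosenbrock(theta):
    return 100.0 * (theta[1] - theta[0] ** 2) ** 2 + (1.0 - theta[0]) ** 2

theta = torch.tensor([-1.0, 1.0], requires_grad=True)  # reported initial guess
optimizer = torch.optim.SGD([theta], lr=0.002)         # best plain-GD step size
for _ in range(10000):
    optimizer.zero_grad()
    rosenbrock(theta).backward()
    optimizer.step()
```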
We consider the ImageNet ILSVRC2012 database for the image classification task. The well known AlexNet is considered, and we follow the descriptions in BID0 as closely as possible to set up our experiment. One main difference is that we do not augment the training data. Another big difference is that we use a modified local response normalization (LRN): the LRN function from the TensorFlow implementation is not second order differentiable, so we approximate the local energy used for LRN with a properly scaled global energy to facilitate Hessian-vector product evaluation. Note that convolution can be rewritten as correlation between the flattened input image patches and the filter coefficients. In this way, we find that there are eight matrices to be optimized in AlexNet, and a sparse preconditioner is learned for each matrix. Denser preconditioners, e.g., the Kronecker product one, require hundreds of millions of parameters for their representations, and are expensive to run on our platform. Each compared method is trained for 40 epochs with mini-batch size 128, step size μ for the first 20 epochs, and 0.1μ for the last 20 epochs. We have compared several methods with multiple settings, and only report the ones with reasonably good results here. For Adam, the initial step size is set to 0.00005. For batch normalization, the initial step size is 0.002, and its moving average factors for momentum and the statistics used for feature normalization are 0.9 and 0.99, respectively. The momentum method uses initial step size 0.002 and moving average factor 0.9 for momentum. Preconditioned SGD performs better with the scaling-and-normalization preconditioner; its Q is initialized to 0.1I and updated with normalized step size 0.01. For the Fisher type preconditioner, we set λ = 0.001 and initial step size 0.00005. For the Newton type preconditioner, the initial step size is 0.01. FIG0 summarizes the results. The training loss for batch normalization is shown only for reference purposes, as normalization alters the L2-regularization term. Batch normalization does not perform well under this setup, maybe due to its conflict with certain settings like the LRN and L2-regularization. We see that the scaling-and-normalization preconditioner does accelerate convergence, although it is super sparse. The Newton type preconditioned SGD performs the best, achieving a top-1 validation accuracy of about 56% when using only one crop for testing, while the momentum method may require 90 epochs to achieve similar performance. We consider the word level language modeling problem with the reference implementation available from https://github.com/pytorch/examples. The Wikitext-2 database with 33278 tokens is considered. The task is to predict the next token from history observations. Our tested network consists of six layers, i.e., encoding layer, LSTM layer, dropout layer, LSTM layer, dropout layer, and decoding layer. For each LSTM layer, we put all its coefficients into a single matrix Θ by stacking its outputs and its augmented inputs over a discrete time index t, where x is the input, h is the hidden state, and c is the cell state. The encoding layer's weight matrix is the transpose of that of the decoding layer. Thus, we in total get three matrices to be optimized, with shapes determined by the hidden layer size of 200 and the vocabulary size. For all methods, the step size is reduced to one fourth of its current value whenever the current perplexity on the validation set is larger than the best one ever found. For SGD, the initial step size is 20, and the gradient is clipped with threshold 0.25. The momentum method diverges easily without clipping; we set momentum 0.9, initial step size 1, and clipping threshold 0.25. We set initial step size 0.005 and damping factor λ² = 10⁻¹² for Adam and sparse Adam. Sparse Adam updates its moments and model parameters only when the corresponding stochastic gradients are not zeros. We have tried diagonal, scaling-and-normalization and scaling-and-whitening preconditioners for each matrix. The encoding (decoding) matrix is too large to consider a KFAC-like preconditioner. The diagonal preconditioner performs the worst, and the other two have comparable performance. For both types of preconditioned SGD, the clipping threshold for the preconditioned gradient is 100, the initial step size is 0.1, and Q is initialized to I. We set λ = 0 for the Fisher type preconditioner. Figure 3 summarizes the results when the dropout rate is 0.35. Methods involving momentum, including Adam and sparse Adam, perform poorly. Note that our preconditioners preserve the sparsity property of the gradients from the encoding and decoding layers (Appendix A); this saves considerable computation by avoiding updates to parameters with zero gradients. Again, both preconditioners accelerate convergence significantly despite their high sparsity. Compared with SGD, the Fisher type preconditioned SGD adds limited computational complexity when sparse preconditioners are adopted. The Newton type preconditioned SGD requires the Hessian-vector product, which typically has complexity comparable to that of gradient evaluation.
Thus, using SGD as the baseline, the Newton type preconditioned SGD approximately doubles the computational complexity per iteration, while the Fisher type has similar complexity. The wall time per iteration of preconditioned SGD highly depends on the implementation. Ideally, the preconditioners and parameters could be updated in a parallel and asynchronous way, such that SGD and preconditioned SGD have comparable wall time per iteration. We have put our TensorFlow and PyTorch implementations on https://github.com/lixilinx. More experimental results comparing different preconditioners and optimization methods on diverse benchmark problems can be found there. For the ImageNet experiment, all compared methods are implemented in TensorFlow, and require two days and a few hours to finish 40 epochs on a GeForce GTX 1080 Ti GPU. The word level language modeling experiment is implemented in PyTorch; we have rewritten the word embedding function to enable second order derivatives. For this task, SGD and the Fisher type preconditioned SGD have similar wall time per iteration, while the Newton type method requires about 80% more wall time per iteration than SGD when running on the same GPU. Two types of preconditioners and preconditioned SGD methods are studied. The one requiring the Hessian-vector product for preconditioner estimation is suitable for general purpose optimization; we call it the Newton type preconditioned SGD due to its close relationship to the Newton method. The other one only requires the gradient for preconditioner estimation; we call it the Fisher type preconditioned SGD, as its preconditioner is closely related to the inverse of the Fisher information matrix. Both preconditioners can be efficiently learned using natural or relative gradient descent on any matrix Lie group designated by the user. The Fisher type preconditioned SGD has lower computational complexity per iteration, but may require more tuning effort in selecting its step size and damping factor. The Newton type preconditioned SGD has higher computational complexity per iteration, but is more user friendly due to its use of normalized step sizes and built-in gradient noise damping ability. Both preconditioners, even with very sparse representations, are shown to considerably accelerate convergence on relatively large scale problems.
We propose a new framework for preconditioner learning, derive new forms of preconditioners and learning methods, and reveal the relationship to methods like RMSProp, Adam, Adagrad, ESGD, KFAC, batch normalization, etc.
1,171
scitldr
We present EDA: easy data augmentation techniques for boosting performance on text classification tasks. EDA consists of four simple but powerful operations: synonym replacement, random insertion, random swap, and random deletion. On five text classification tasks, we show that EDA improves performance for both convolutional and recurrent neural networks. EDA demonstrates particularly strong results for smaller datasets; on average, across five datasets, training with EDA while using only 50% of the available training set achieved the same accuracy as normal training with all available data. We also performed extensive ablation studies and suggest parameters for practical use. Text classification is a fundamental task in natural language processing (NLP). Machine learning and deep learning have achieved high accuracy on tasks ranging from sentiment analysis to topic classification BID24, but high performance is often dependent on the size and quality of training data, which is often tedious to collect. Automatic data augmentation is commonly used in vision (BID20; BID22; BID10) and speech (BID7) and can help train more robust models, particularly when using smaller datasets. However, because it is difficult to come up with generalized rules for language transformation, universal data augmentation techniques in NLP have not been explored. Previous work has proposed techniques for data augmentation in NLP. One popular study generated new data by translating sentences into French and back into English BID28. Other works have used predictive language models for synonym replacement BID8 and data noising as smoothing BID27. Although these techniques are valid, they are not often used in practice because they have a high cost of implementation relative to the performance gain. In this paper, we present a simple set of universal data augmentation techniques for NLP called EDA (easy data augmentation). To the best of our knowledge, we are the first to comprehensively explore text editing techniques for data augmentation. We systematically evaluate EDA on five benchmark classification tasks, and show that EDA provides substantial improvements on all five tasks and is particularly helpful for smaller datasets. Code will be made publicly available. TAB0 shows an example sentence under each operation. None: A sad, superior human comedy played out on the back roads of life. SR: A lamentable, superior human comedy played out on the backward road of life. RI: A sad, superior human comedy played out on funniness the back roads of life. RS: A sad, superior human comedy played out on roads back the of life. RD: A sad, superior human out on the roads of life. Frustrated by the measly performance of text classifiers trained on small datasets, we tested a number of augmentation operations loosely inspired by those used in vision and found that they helped train more robust models. Here, we present the full details of EDA. For a given sentence in the training set, we perform the following operations: 1. Synonym Replacement (SR): Randomly choose n words from the sentence that are not stop words. Replace each of these words with one of its synonyms chosen at random. 2. Random Insertion (RI): Find a random synonym of a random word in the sentence that is not a stop word. Insert that synonym into a random position in the sentence. Do this n times. 3. Random Swap (RS): Randomly choose two words in the sentence and swap their positions. Do this n times. 4. Random Deletion (RD): Randomly remove each word in the sentence with probability p.
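Two of these operations are shown below as minimal Python sketches over a tokenized sentence; function names and edge-case handling are illustrative rather than the authors' released code.

```python
import random

def random_swap(words, n=1):
    """RS: randomly swap the positions of two words, n times (needs >= 2 words)."""
    words = words.copy()
    for _ in range(n):
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    """RD: remove each word with probability p, keeping at least one word."""
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]
```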
Since long sentences have more words than short ones, they can absorb more noise while maintaining their original class label. To compensate, we vary the number of words changed, n, for SR, RI, and RS based on the sentence length l with the formula n = αl, where α is a parameter indicating the percentage of the words in a sentence that are changed (we use p = α for RD). Furthermore, for each original sentence, we generate n_aug augmented sentences. Examples of augmented sentences are shown in TAB0. We note that synonym replacement has been used previously (BID9; BID26), but to our knowledge, random insertions, swaps, and deletions have not been studied. We conduct experiments on five benchmark text classification tasks: SST-2: Stanford Sentiment Treebank BID21; CR: customer reviews (BID3; BID13); SUBJ: subjectivity/objectivity dataset BID15; TREC: question type dataset BID11; and PC: Pro-Con dataset BID2. Summary statistics are shown in TAB3 in the Appendix. Furthermore, we hypothesize that EDA is more helpful for smaller datasets, so we derive smaller datasets by selecting random subsets of the full training set with N_train = {500, 2,000, 5,000, all available data}. We run experiments on two state-of-the-art models for text classification. Recurrent neural networks (RNNs) are suitable for sequential data; we use an LSTM-RNN BID12. Convolutional neural networks (CNNs) have also achieved high performance for text classification; we implement them as described in BID6. Details are in Section 6.1 in the Appendix. We run both CNN and RNN models with and without EDA across all five datasets for varying training set sizes. Average performances (%) are shown in Table 2 (average performances across the five text classification tasks for models with and without EDA on different training set sizes). Of note, the average improvement was 0.8% for full datasets and 3.0% for N_train = 500. Overfitting tends to be more severe when training on smaller datasets. By conducting experiments using a restricted fraction of the available training data, we show that EDA yields more significant improvements for smaller training sets. We run both normal training and EDA training for the following training set fractions (%): {1, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100}. FIG0 shows the average performance across all datasets. The best average accuracy without augmentation, 88.3%, was achieved using 100% of the training data. Models trained using EDA surpassed this number by achieving an average accuracy of 88.6% while only using 50% of the available training data. Results for individual datasets are displayed in FIG3 (Appendix). In data augmentation, input data is altered while class labels are maintained. However, if sentences are significantly changed, then the original class labels may no longer be valid. We take a visualization approach to examine whether EDA operations significantly change the meanings of augmented sentences. First, we train an RNN on the pro-con classification task (PC) without augmentation. Then, we apply EDA to the test set by generating nine augmented sentences per original sentence. These are fed into the RNN along with the original sentences, and we extract the outputs from the last dense layer. We apply t-SNE to these vectors and plot their 2-D representations (FIG1). We found that the resulting latent space representations for augmented sentences closely surrounded those of the original sentences. 3.4 ABLATION STUDIES. So far, we have shown encouraging empirical results.
In this section, we perform ablation studies to explore the effects of each component of EDA. Synonym replacement has been used previously (BID9; BID26), but the other three EDA operations have not yet been explored. One could hypothesize that the bulk of EDA's performance gain comes from synonym replacement, so we isolate each of the EDA operations to determine their individual ability to boost performance. For all four operations, we ran models using a single operation while varying the augmentation parameter α = {0.05, 0.1, 0.2, 0.3, 0.4, 0.5} (FIG2). It turns out that all four EDA operations contribute to the performance gain. For SR, improvement was good for small α, but high α hurt performance, likely because replacing too many words in a sentence changed the identity of the sentence. For RI, performance gains were more stable across different α values, possibly because the original words in the sentence and their relative order were maintained in this operation. RS yielded high performance gains at α ≤ 0.2, but declined at α ≥ 0.3, since performing too many swaps is equivalent to shuffling the entire order of the sentence. RD had the highest gains for low α but severely hurt performance at high α, as sentences are likely unintelligible if up to half the words are removed. Improvements were more substantial on smaller datasets for all operations, and α = 0.1 appeared to be a "sweet spot" across the board. The natural next step is to determine how the number of generated augmented sentences per original sentence, n_aug, affects performance. We calculate average performances over all datasets for n_aug = {1, 2, 4, 8, 16, 32}, as shown in FIG2 (middle). On smaller training sets, overfitting was more likely, so generating many augmented sentences yielded large performance boosts. For larger training sets, adding more than four augmented sentences per original sentence was unhelpful, since models tend to generalize properly when large quantities of real data are available. Based on these results, we recommend parameters for practical use in FIG2 (right). Related work is creative but often complex. Back-translation BID18, translational data augmentation BID1, and noising BID27 have shown improvements in BLEU measure for machine translation. For other tasks, previous approaches include task-specific heuristics BID5 and back-translation (BID19; BID28). Regarding synonym replacement (SR), one study showed a 1.4% F1-score boost for tweet classification by finding synonyms with k-nearest neighbors using word embeddings BID26. Another study found no improvement in temporal analysis when replacing headwords with synonyms BID9, and mixed results were reported for using SR in character-level text classification; however, neither work conducted extensive ablation studies. Most studies explore data augmentation as a complementary result for translation or in a task-specific context, so it is hard to directly compare EDA to previous literature. But there are two studies similar to ours that evaluate augmentation techniques on multiple datasets. BID4 proposed a generative model that combines a variational autoencoder (VAE) and an attribute discriminator to generate fake data, demonstrating a 3% gain in accuracy on two datasets. BID8 showed that replacing words with other words predicted from the sentence context using a bi-directional language model yielded a 0.5% gain on five datasets. However, training a variational autoencoder or a bidirectional LSTM language model is a lot of work.
EDA yields similar results but is much easier to use, because it does not require training a language model and does not use external datasets. In TAB5 (Appendix), we show EDA's ease of use compared to other techniques. We have shown that simple data augmentation operations can boost performance on text classification tasks. Although the improvement is at times marginal, EDA substantially boosts performance and reduces overfitting when training on smaller datasets. Continued work on this topic could include exploring the theoretical underpinnings of the EDA operations. We hope that EDA's simplicity makes a compelling case for its widespread use in NLP. This section contains implementation details, dataset statistics, and detailed results not included in the main text. All code for EDA and the experiments in this paper will be made available. The following implementation details were omitted from the main text: Synonym thesaurus. All synonyms for synonym replacements and random insertions were generated using WordNet BID14. We suspect that EDA will work with any thesaurus. Word embeddings. We use 300-dimensional Common-Crawl word embeddings trained using GloVe BID16. We suspect that EDA will work with any pre-trained word embeddings. CNN. We use the following architecture: input layer, 1-D convolutional layer with 128 filters of size 5, global 1-D max pool layer, dense layer of 20 hidden units with ReLU activation, softmax output layer. We initialize this network with random normal weights and train against the categorical cross-entropy loss function with the Adam optimizer. We use early stopping with a patience of 3 epochs. RNN. The architecture used in this paper is as follows: input layer, bi-directional hidden layer with 64 LSTM cells, dropout layer with p = 0.5, bi-directional layer with 32 LSTM cells, dropout layer with p = 0.5, dense layer of 20 hidden units with ReLU activation, softmax output layer. We initialize this network with random normal weights and train against the categorical cross-entropy loss function with the Adam optimizer. We use early stopping with a patience of 3 epochs. Summary statistics for the five datasets used are shown in TAB3. In FIG3, we show performance on individual text classification tasks for both normal training and training with EDA, with respect to the percent of the dataset used for training. In TAB5, we compare EDA's ease of use to that of related work. How does using EDA improve text classification performance? While it is hard to identify exactly how EDA improves the performance of classifiers, we believe there are two main reasons. The first is that generating augmented data similar to the original data introduces some degree of noise that helps prevent overfitting. The second is that using EDA can introduce new vocabulary through the synonym replacement and random insertion operations, allowing models to generalize to words in the test set that were not in the training set. Both of these effects are more pronounced for smaller datasets. Why should I use EDA instead of other techniques such as contextual augmentation, noising, GANs, or back-translation? All of the above are valid techniques for data augmentation, and we encourage you to try them, as they may actually work better than EDA, depending on the dataset. But because these techniques require the use of a deep learning model in itself to generate augmented sentences, there is often a high cost of implementing these techniques relative to the expected performance gain.
With EDA, we aim to provide a set of simple techniques that are generalizable to a range of NLP tasks. Is there a chance that using EDA will actually hurt my performance? Considering our results across five classification tasks, it is unlikely, but there is always a chance. It is possible that one of the EDA operations changes the class of some augmented sentences and creates mislabeled data. But even so, "deep learning is robust to massive label noise" BID17. For random insertions, why do you only insert words that are synonyms, as opposed to inserting any random words? Data augmentation operations should not change the true label of a sentence, as that would introduce unnecessary noise into the data. Inserting a synonym of a word in the sentence, as opposed to a random word, is more likely to be relevant to the context and to retain the original label of the sentence.
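For completeness, a sketch of the two classifiers described in the implementation details above, written in Keras. The sequence length, embedding dimensionality and number of classes are our assumptions, since the text fixes only the layer sizes; the ReLU activation on the convolution is likewise assumed.

```python
import tensorflow as tf

MAX_LEN, EMB_DIM, NUM_CLASSES = 50, 300, 2  # assumed; the text does not specify these


def build_cnn():
    # input -> Conv1D(128 filters, size 5) -> global 1-D max pool -> Dense(20, ReLU) -> softmax
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(MAX_LEN, EMB_DIM)),  # pre-computed GloVe vectors
        tf.keras.layers.Conv1D(128, 5, activation='relu'),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(20, activation='relu'),
        tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
    ])


def build_rnn():
    # input -> Bi-LSTM(64) -> dropout 0.5 -> Bi-LSTM(32) -> dropout 0.5 -> Dense(20, ReLU) -> softmax
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(MAX_LEN, EMB_DIM)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(20, activation='relu'),
        tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
    ])


model = build_cnn()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
early_stop = tf.keras.callbacks.EarlyStopping(patience=3)  # early stopping, patience 3
# model.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=[early_stop])
```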
Simple text augmentation techniques can significantly boost performance on text classification tasks, especially for small datasets.
1,172
scitldr
We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available. Existing physical scene understanding methods require either object state supervision, or do not integrate with differentiable physics to learn interpretable system parameters and states. We address this problem through a physics-as-inverse-graphics approach that brings together vision-as-inverse-graphics and differentiable physics engines, where objects and explicit state and velocity representations are discovered by the model. This framework allows us to perform long term extrapolative video prediction, as well as vision-based model-predictive control. Our approach significantly outperforms related unsupervised methods in long-term future frame prediction of systems with interacting objects (such as ball-spring or 3-body gravitational systems), due to its ability to build dynamics into the model as an inductive bias. We further show the value of this tight vision-physics integration by demonstrating data-efficient learning of vision-actuated model-based control for a pendulum system. We also show that the controller's interpretability provides unique capabilities in goal-driven control and physical reasoning for zero-data adaptation. System identification or physical parameter estimation is commonly required for control or state estimation in physical modelling, and typically relies on dedicated sensing equipment and carefully constructed experiments. Current machine learning approaches to physical modeling from video either require training by supervised regression from video to object coordinates before estimating explicit physics, or are able to discover and segment objects from video in an unsupervised manner, but do not naturally integrate with a physics engine for long-term predictions or generation of interpretable locations and physical parameters for physical reasoning. In this work, we bridge the gap between unsupervised discovery of objects from video and learning the physical dynamics of a system, by learning unknown physical parameters and explicit trajectory coordinates. Our approach, called physics-as-inverse-graphics, solves the physical modeling problem via a novel vision-as-inverse-graphics encoder-decoder system that can render and de-render image components using Spatial Transformers (ST) in a way that makes it possible for the latent representation to generate disentangled interpretable states (position/velocity). These can be used directly by a differentiable physics engine to learn the parameters of a scene where the family of differential equations governing the system is known (e.g. objects connected by a spring), but the corresponding parameters are not (e.g. the spring constant). This allows us to identify physical parameters and learn the vision components of the model jointly in an end-to-end fashion. Our contribution is a solution to unsupervised learning of physical parameters from video, without access to ground-truth appearance, positions or velocities of the objects, a task that had so far remained unsolved.
In addition to showing that our model can learn physical parameters without object or state supervision (a task with intrinsic scientific interest in and of itself), we show that incorporating dynamics priors in the form of known physical equations of motion with learnable parameters, together with learnable vision and graphics, can improve model performance on two challenging tasks: long term video prediction and visual model predictive control. We first evaluate physical parameter estimation accuracy and future video frame prediction on 4 datasets with different non-linear interactions and visual difficulty. We then demonstrate the value of our method by applying it to data-efficient learning of vision-based control of an under-actuated pendulum. Notably, our unique ability to extract interpretable states and parameters from pixels without supervision enables end-to-end vision-based control to exploit goal-parameterized policies and physical reasoning for zero-shot adaptation. The ability to build inductive bias into models through structure is a key factor behind the success of modern neural architectures. Convolutional operations capture spatial correlations in images, recurrency allows for temporal reasoning, and spatial transformers provide spatial invariance in learning. However, many aspects of common data generation processes are not yet considered by these simple inductive biases. Importantly, they typically ignore the physical interactions underpinning data generation. For example, it is often the case that the underlying physics of a dynamic visual scene is known, even if the specific parameters and objects are not. Incorporation of this information would be beneficial for learning, predicting the future of the visual scene, or control. Physics-as-inverse-graphics introduces a framework that allows such high-level physical interaction knowledge to be incorporated into learning, even when ground-truth object appearance, positions and velocities are not available. In recent years there has been increased interest in physical scene understanding from video. In order to learn explicit physical dynamics from video, our system must discover and model the objects in a scene, with position as an explicit latent variable. Here we build on the long literature of neural vision-as-inverse-graphics, particularly on the use of spatial transformers (ST) for rendering. There are several models that assume knowledge of the family of equations governing system dynamics, but where the individual objects are either pre-segmented or their ground-truth positions/velocities are known. In terms of learning physical parameters, our work is directly inspired by the Galileo model and the Physics 101 dataset, which fits the dynamics equations to a scene with interacting objects. However, the Galileo model makes use of custom trackers which estimate the position and velocity of each object of interest, and is incapable of end-to-end learning from video; it thus bypasses the difficulty of recognizing and tracking objects from video with a neural system. To the best of our knowledge, our model is the first to offer end-to-end unsupervised physical parameter and state estimation. Within the differentiable physics literature, prior work observed that a multi-layer perceptron (MLP) encoder-decoder architecture with a physics engine was not able to learn without supervising the physics engine's output with position/velocity labels (cf. Fig. 4 of that work).
While in their case 2% labeled data is enough to allow learning, the transition to no labels causes the model to not learn at all. The key contribution of our work is the incorporation of vision-as-inverse-graphics with physics, which makes this transition possible. Another related area of increasing interest is unsupervised discovery of objects and/or dynamics from video. Though powerful, such models do not typically use interpretable latent representations that can be directly used by a physics engine, reasoned about for physical problem solving, or that are of explicit interest to model users. For example, some models use STs to locate and place objects in a scene and predict their motion, but this work differs from ours in that our coordinate-consistent design obtains explicit Cartesian, angular or scale coordinates, allowing us to feed state vectors directly into a differentiable physics engine. Under a similar motivation as our work, but without an inverse-graphics approach, prior work developed an unsupervised model to obtain consistent object locations. However, this only applies to Cartesian coordinates, not angles or scale.
Figure 1: High-level view of our architecture. The encoder (top-right) estimates the position of N objects in each input frame. These are passed to the velocity estimator which estimates the objects' velocities at the last input frame. The positions and velocities of the last input frame are passed as initial conditions to the physics engine. At every time-step, the physics engine outputs a set of positions, which are used by the decoder (bottom-right) to output a predicted image. If the system is actuated, an input action is passed to the physics engine at every time-step. See Section 3 for detailed descriptions of the encoder and decoder architectures.
Despite recent interest in model-free reinforcement learning, model-based control systems have repeatedly been shown to be more robust and sample efficient. PlaNet learns a latent dynamics model that allows for planning from pixels, which is significantly more sample efficient than the model-free learning strategies A3C and D4PG. However, when used for control, there is often a desire for visually grounded controllers operating under known dynamics, as these are verifiable and interpretable, and provide transferability and generality. However, system identification is challenging in vision-based control settings. Some approaches use supervised learning to segment objects, controlling these using known rigid body dynamics. Others learn feedforward models with REINFORCE to predict the physical states used by a known controller and dynamical model, but this is extremely sample inefficient. In contrast, we learn parameter and state estimation modules jointly to perform unsupervised system identification from pixels, enabling data-efficient vision-actuated model-based control. In order to learn explicit physics from video, several components have to be in place. First, the model must be able to learn to identify and represent the objects in an image. In order to perform dynamics prediction with a physics engine, the position and velocity of each object must be represented as explicit latent states (whereas appearance can be represented through some latent vector or, in our case, as a set of learned object templates). Our sequence-to-sequence video prediction architecture consists of 4 modules trained jointly: an encoder, a velocity estimator, a differentiable physics engine, and a graphics decoder. The architecture is shown in Figure 1.
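Before detailing each module, a schematic sketch of the data flow in Figure 1; this is a sketch under the assumption of duck-typed module objects, and the function and argument names are ours.

```python
def predict_video(frames, encoder, velocity_net, physics_engine, decoder,
                  L, T_pred, actions=None):
    """Roll the model forward: encode L input frames, estimate velocities,
    integrate the learned physics, and render predicted frames."""
    # 1) per-frame object positions for the L input frames: each (N, D)
    positions = [encoder(frame) for frame in frames[:L]]
    # 2) velocities of all N objects at the last input frame
    v_L = velocity_net(positions)
    # 3) roll out T_pred steps of explicit physics from (p_L, v_L),
    #    optionally feeding an action at every time-step
    trajectory = physics_engine.rollout(positions[-1], v_L, steps=T_pred,
                                        actions=actions)
    # 4) render each predicted state back into an image
    return [decoder(p_t) for p_t in trajectory]
```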
Encoder The encoder net takes a single frame I_t as input and outputs a vector p_t ∈ R^{N×D} corresponding to the D-dimensional coordinates of each of the N objects in the scene. To extract each object's coordinates we use a 2-stage localization approach (though any other architecture capable of effectively extracting object locations from images would work). First, the input frame is passed through a U-Net to produce N unnormalized masks. These masks (plus a learnable background mask) are stacked and passed through a softmax to produce N + 1 masks, where each input pixel is softly assigned to a mask. The input image is then multiplied by each mask, and a 2-layer location network produces coordinate outputs from each masked input component. For a 2D system where the coordinates of each object are its (x, y) position (the polar coordinates case is analogous) and the images have dimensions H × H, the encoder output represents (x, y) coordinates with values in [0, H]. To achieve this, the activation of the encoder's output layer is a saturating non-linearity H/2 · tanh(·) + H/2. Velocity estimator The velocity estimator computes the velocity vector of each object at the L-th input frame given the coordinates produced by the encoder for this object at the first L input frames, v^n_L = f(p^n_1, ..., p^n_L). We implement this as an MLP with 3 hidden layers of 100 tanh-activated units. Differentiable physics engine The physics engine contains the differential equations governing the system, with unknown physical parameters to be learned, such as spring constants, gravity, or mass. Given the initial positions and velocities produced by the encoder and velocity estimator, the physics engine rolls out the objects' trajectories. In this work we use a simple physics engine with Euler integration, where p_t, v_t is computed from p_{t−1}, v_{t−1} by repeating, for i ∈ [1..M], the updates v ← v + ∆t · F(p; θ) and p ← p + ∆t · v, where ∆t is the integration step, θ are the physical parameters and F is the force applied to each object, according to the equations in Appendix A. We use M = 5 in all experiments. In principle, more complex physics engines could be used.
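A NumPy sketch of this Euler integrator follows, assuming unit masses as in the experiments and a force function F(p, θ); the update order within each sub-step is one common convention and is our assumption.

```python
import numpy as np


def euler_step(p, v, force_fn, theta, dt, M=5):
    """Advance positions/velocities by one model time-step using M Euler sub-steps.
    p, v: (N, D) float arrays of object positions and velocities.
    force_fn(p, theta) returns the (N, D) net force on each object."""
    for _ in range(M):
        f = force_fn(p, theta)   # net force per object
        v = v + dt * f           # m = 1, so acceleration equals force
        p = p + dt * v           # integrate positions with the updated velocity
    return p, v
```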
Coordinate-Consistent Decoder The decoder takes as input the positions given by the encoder or physics engine, and outputs a predicted image Ĩ_t. The decoder is the most critical part of this system, and is what allows the encoder, velocity estimator and physics engine to train correctly in a fully unsupervised manner. We therefore describe its design and motivation in greater detail. While an encoder with outputs in the range [0, H] can represent coordinates in pixel space, it does not mean that the decoder will learn to correctly associate an input vector (x, y) with an object located at pixel (x, y). If the decoder is unconstrained, like a standard MLP, it can very easily learn erroneous, non-linear representations of this Cartesian space. For example, given two different inputs, (x_1, y_1) and (x_1, y_2), with y_1 ≠ y_2, the decoder may render those two objects at different horizontal positions in the image. While having a correct Cartesian coordinate representation is not strictly necessary to allow the physical parameters of the physics engine to be learned from video, it is critical to ensure correct future predictions. This is because the relationship between the position vector and the pixel-space position must be fixed: if the position vector changes by (∆x, ∆y), the object's position in the output image must change by (∆x, ∆y). This is the key concept that allows us to improve on prior work in order to learn an encoder, decoder and physics engine without state labels. In order to impose a correct latent-coordinate to pixel-coordinate correspondence, we use spatial transformers (ST) with inverse parameters as the decoder's writing attention mechanism. We want the transformer parameters ω to be such that a decoder input of p^n_t = [x, y]^n_t places the center of the writing attention window at (x, y) in the image, or that a decoder input of p^n_t = θ^n_t rotates the attention window by θ. In the original ST formulation, the matrix ω represents the affine transformation applied to the output image to obtain the source image. This means that the elements of ω in Eq. 1 of the original ST paper do not directly represent the translation, scale or angle of the writing attention window. To achieve this representation, we use a ST with inverse transformation parameters. For a general affine transformation with translation (x, y), angle θ and scale s, we want to modify the source image coordinates according to [x_t, y_t]^T = s R(θ) [x_s, y_s]^T + [x, y]^T, where R(θ) is a 2D rotation matrix. This transformation can be obtained with a ST by inverting it: [x_s, y_s]^T = (1/s) R(−θ) ([x_t, y_t]^T − [x, y]^T). Therefore, to obtain a decoder with coordinate-consistent outputs, we simply use a ST with the parameters ω given by this inverted transformation. Each object is represented by a learnable content tensor c^n ∈ R^{H×H×C} and mask tensor m^n ∈ R^{H×H×1}, n = 1..N. Additionally, we learn a background content c^bkg ∈ R^{H×H×C} and mask m^bkg ∈ R^{H×H×1} that do not undergo spatial transformation. One may think of the content as an RGB image containing the texture of an object and the mask as a grayscale image containing the shape and z-order of the object. In order to produce an output image, the content and mask of each object are transformed according to [ĉ^n_t, m̂^n_t] = ST([c^n, m^n], ω^n_t). The decoder architecture is shown in Fig. 1, bottom-right. The combined use of STs and masks provides a natural way to model depth ordering, allowing us to capture occlusions between objects.
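A small NumPy sketch of building this inverse-parameter matrix from (x, y, θ, s), matching the derivation above; note that real ST implementations typically work in normalized [−1, 1] coordinates, so pixel coordinates would need rescaling, and the function name is ours.

```python
import numpy as np


def inverse_st_params(x, y, theta, s):
    """2x3 affine matrix omega for a spatial transformer whose inputs are the
    write position (x, y), rotation theta and scale s of the attention window.
    The ST maps output coordinates to source coordinates, so we invert the
    forward transform  target = s * R(theta) @ source + [x, y]."""
    c, si = np.cos(theta), np.sin(theta)
    R_inv = np.array([[c, si], [-si, c]]) / s   # (1/s) * R(-theta)
    t = -R_inv @ np.array([x, y])               # translation in the source frame
    return np.concatenate([R_inv, t[:, None]], axis=1)
```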
Auxiliary autoencoder loss Using a constrained decoder ensures that the encoder and decoder produce objects in consistent locations. However, it is hard to learn the full model from future frame prediction alone, since the encoder's training signal comes exclusively from the physics engine. To alleviate this and quickly build a good encoder/decoder representation, we add a static per-frame autoencoder loss. Training During training we use L input frames and predict the next T_pred frames. Defining the frames produced by the decoder via the physics engine as Ĩ^pred_t and the frames produced by the decoder using the output of the encoder directly as Ĩ^ae_t, the total loss is L = Σ_t ||I_t − Ĩ^pred_t||² + α Σ_t ||I_t − Ĩ^ae_t||², where α is a hyper-parameter. We use the mean-squared error loss throughout. During testing we predict an additional T_ext frames in order to evaluate long term prediction beyond the length seen for training.
Figure 3: Frame prediction accuracy (SSI, higher is better) for the balls datasets. Left of the green dashed line corresponds to the training range, T_pred; right corresponds to extrapolation, T_ext. We outperform Interaction Networks (IN), DDPAE and VideoLSTM in extrapolation due to incorporating explicit physics.
Table 1: Estimated physical parameters for the 2-balls spring, 2-digits spring and 3-balls gravity datasets.
Setup To explore learning physical parameters and evaluate long-term prediction we train our model on scenes with 5 different settings: two colored balls bouncing off the image edges; two colored balls connected by a spring; three colored balls with gravitational pull, all on a black background; and, to test greater visual complexity, 2 MNIST digits connected by a spring on a CIFAR background. We train using values of (L, T_pred, T_ext) set to,,, and, respectively. For the spring systems the physical parameters to be learned are the spring constant k and the equilibrium distance l, and for the gravitational system it is the gravity constant g or the mass of the objects m (when learning gravity the mass is fixed, and vice versa). In all cases we use objects with mass m = 1. We provide the exact equations of motion used in these systems and other training details in Appendices A and B, respectively. All datasets consist of 5000 sequences for training, 500 for validation, and 500 for testing. We use a learnable ST scale parameter initialized at s = 2 in the balls datasets and s = 1 in the digits dataset. In these datasets we set θ = 0. Baselines We compare our model to 3 strong baselines: DDPAE, a generative model that uses an inverse-graphics model with black-box dynamics; VideoLSTM, which uses black-box encoding, decoding and dynamics; and Interaction Network + Inverse-Graphics, which uses the same encoder and decoder as our Physics-as-Inverse-Graphics model, but where the dynamics module is an Interaction Network. The latter model allows us to compare explicit physics with relational dynamics networks in terms of their ability to correctly capture object interactions. Results Table 1 shows that our model finds physical parameters close to the ground-truth values used to generate the datasets, and Figure 4 shows the contents and masks learned by the decoder. This highlights the fact that the proposed model can successfully perform unsupervised system identification from pixels. Future frame predictions for two of the systems are shown in Figure 2, and the per-step Structural Similarity Index (SSI) of the models on the prediction and extrapolation ranges is shown in Figure 3. While all models obtain low error in the prediction range (left of the green dashed line), our model is significantly better in the extrapolation range. Even many steps into the future, our model's predictions are still highly accurate, unlike those of other black-box models (Figure 2). This shows the value of using an explicit physics model in systems where the dynamics are non-linear yet well defined. Further rollouts are shown in Appendix C, and we encourage the reader to watch the videos for all the datasets at https://sites.google.com/view/physicsasinversegraphics. This difference in performance is explained in part by the fact that in some of these systems the harder-to-predict parts of the dynamics do not appear during training. For example, in the gravitational system, whiplash from objects coming into close contact is seldom present in the first K + T_pred steps given in the training set, but it happens frequently in the T_ext extrapolation steps evaluated during testing. We do not consider this to be a failure of the black-box models, but rather a consequence of the generality vs. specificity tradeoff: a model without a sufficiently strong inductive bias on the dynamics is simply not able to correctly infer close-distance dynamics from long-distance dynamics.
Table 2: Test loss under different training conditions. Separate gradients: train the encoder/decoder on L_rec, and the velocity estimator and physics engine on L_pred. Black-box decoder, joint: joint training using a standard MLP network as the decoder. Only joint training using our coordinate-consistent decoder succeeds.
Ablation studies Since the encoder and decoder must discover the objects present in the image and the corresponding locations, one might assume that the velocity estimator and physics engine could be learned using only the prediction loss, and the encoder/decoder using only the static autoencoder loss, i.e., without joint training. In Table 2 we compare the performance of four variants on the 3-ball gravity dataset: joint training using only the prediction loss; joint training using the prediction and autoencoder losses; training the encoder/decoder on the autoencoder loss and the velocity estimator and physics engine on the prediction loss; and joint training, but using an MLP black-box decoder. We can see that only joint training with the prediction and autoencoder losses obtains satisfactory performance, and that the use of the proposed coordinate-consistent decoder is critical. The prediction loss is essential in order for the model to learn encoders/decoders whose contents and masks can be correctly used by the physics engine. This can be understood by considering how object interaction influences the decoder. In the gravitational system, the forces between objects depend on their distances, so if the objects swap locations, the forces must be the same. If the content/mask learned for each object is centered differently relative to its template center, rendering the objects at positions [x, y] and [w, z], or [w, z] and [x, y], will produce different distances between these two objects in image space. This violates the permutation invariance property of the system. Learning the encoder/decoder along with the velocity estimator and physics engine on the prediction loss allows the encoder and decoder to learn locations and contents/masks that satisfy the characteristics of the system and allows the physics to be learned correctly. In Appendix D we perform further ablations on the decoder architecture and its ability to correctly render objects in regions of the image not seen during training.
Figure 5: Comparison in terms of learning sample efficiency (left). Explicit physics allows reasoning for zero-shot adaptation to domain-shift in gravity (center) and goal-driven control to balance the pendulum in any position (right). DDPG (VAE) corresponds to a DDPG agent trained on the latent space of an autoencoder (trained with 320k images) after 80k steps. DDPG (proprio) corresponds to an agent trained from proprioception after 30k steps. Bottom: the first 3 rows show a zero-shot counterfactual episode with a gravity multiplier of 1.4 for an oracle, our model and PlaNet, with vertical as the target position (as trained). The last row shows an episode using a goal image to infer the non-vertical goal state.
Tasks One of the main applications of our method is to identify the (actuated) dynamical parameters and states of a physical system from video, which enables vision-based planning and control. Here we apply it to the pendulum from OpenAI Gym, one typically solved from proprioceptive state, not pixels. For training we collect 5000 sequences of 14 frames with random initialization (θ_0 ∼ Unif(−6, 6)) and actions (u_t ∼ Unif(−2, 2)). The physical parameters to learn are the gravity g = 10.0 and the actuation coefficient a = 1.0. We use K = 4 and T_pred = 10. We use the trained MPC model as follows. At every step, the previous 4 frames are passed to the encoder and velocity nets to estimate [θ_t, θ̇_t]. This is passed to the physics engine with the learned parameters g and a.
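The planning step that follows uses this learned model inside a sampling-based planner; a minimal sketch, assuming a rollout function built from the learned physics engine, a quadratic cost, and illustrative population sizes (only the horizon and the action bounds are fixed by the text).

```python
import numpy as np


def cost_to_goal(trajectory, goal):
    # illustrative cost: squared deviation of [theta, theta_dot] from the goal over time
    return float(((np.asarray(trajectory) - np.asarray(goal)) ** 2).sum())


def cem_plan(state, physics_rollout, goal, horizon=100, pop=200, elites=20, iters=5):
    """Cross-entropy method: sample action sequences, keep the elites,
    refit a Gaussian, and return the first action of the refined mean."""
    mu, sigma = np.zeros(horizon), np.ones(horizon)
    for _ in range(iters):
        actions = np.clip(mu + sigma * np.random.randn(pop, horizon), -2.0, 2.0)
        costs = np.array([cost_to_goal(physics_rollout(state, a), goal)
                          for a in actions])
        elite = actions[np.argsort(costs)[:elites]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)
    return mu[0]

# control loop sketch: estimate [theta, theta_dot] from the last 4 frames, then plan
# state = velocity_net([encoder(f) for f in last_4_frames])
# u = cem_plan(state, rollout_fn, goal=[np.pi, 0.0])
```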
We perform 100-step model-predictive control using the cross-entropy method, exactly as described in prior work, setting the vertical position and zero velocity as the goal. Baselines We compare our model to an oracle model, which has the true physical parameters and access to the true pendulum position and velocity (not vision-based), as well as a concurrent state-of-the-art model-based RL method (PlaNet), and a model-free deep deterministic policy gradient (DDPG) agent. To provide an equivalent comparison to our model, we train PlaNet on random episodes. Results In terms of system identification, our model recovers the correct gravity (g = 9.95) and force coefficient (a = 0.99) values from vision alone, which is a prerequisite for performing correct planning and control. Figure 5 (top-left) highlights the data efficiency of our method, which is comparable to PlaNet, while being dramatically faster to train than DDPG from pixels. Importantly, the interpretability of the explicit physics in our model provides some unique capabilities. We can perform simple counterfactual physical reasoning such as 'How should I adapt my control policy if gravity were increased?', which enables zero-shot adaptation to new environmental parameters. Figure 5 (top-middle) shows that our model can exploit such reasoning to succeed immediately over a wide range of gravities with no re-training. Similarly, while the typical inverted pendulum goal is to balance the pendulum upright, interpretable physics means that this is only one point in a space of potential goals. Figure 5 (top-right) evaluates the goal-parameterized control enabled by our model. Any feasible target angle specified can be directly reached by the controller. There is extrapolative generalisation across the space of goals even though only one goal (vertical) was seen during training. Importantly, these last two capabilities are provided automatically by our model due to its disentangled interpretable representation, but cannot be achieved without further adaptive learning by alternatives that are reward-based or rely on implicit physics. Although the approach presented here shows promising results in terms of physical parameter estimation, long-term video prediction and MPC, a number of limitations need to be overcome for real-world application. Templates as object representation Though the assumption that every scene in a dataset is a combination of learnable templates is a common one in the literature, it is insufficient to model real-world scenes. For example, applying physics-as-inverse-graphics to the Physics 101 dataset would require representing objects using a latent appearance representation that could be used by the decoder. This would introduce new modelling challenges, requiring object tracking to keep correct object identity associations. In this work we simplify this problem by assuming that objects are visually distinct throughout the dataset, though this does not detract from the essential contributions of the paper. Rigid sequence-to-sequence architecture In this work we used a sequence-to-sequence architecture with a fixed number of input steps. This architectural choice prevents the model from updating its state beliefs if given additional input frames later in the sequence. Formulating the current model in a probabilistic manner that would allow for state/parameter filtering and smoothing at inference time is a promising direction of future work.
Static background assumption Many scenes of interest do not follow the assumption that the only moving objects in the scene are the objects of interest (even though this assumption is widely used). Adapting our model to varying scene backgrounds would require additional components to discern which parts of the scene follow the dynamics assumed by the physics engine, in order to correctly perform object discovery. This is a challenging problem, but we believe it would greatly increase the range of applications of the ideas presented here. Physics-as-inverse-graphics provides a valuable mechanism to integrate inductive bias about physical data generating processes into learning. This allows unsupervised object tracking and system identification, in addition to sample efficient, generalisable and flexible control. However, incorporating this structure into lightly supervised deep learning models has proven challenging to date. We introduced a model that accomplishes this, relying on a coordinate-consistent decoder that enables image reconstruction from physics. We have shown that our model is able to perform accurate long term prediction and that it can be used to learn the dynamics of an actuated system, allowing us to perform vision-based model-predictive control. In this section we describe the equations of motion used for each system. 2-balls and 2-digits spring The force applied on object i by object j follows Hooke's law: F_ij = −k (‖p_i − p_j‖ − l) (p_i − p_j)/‖p_i − p_j‖. Each step corresponds to an interval ∆t = 0.3. 3-balls gravity The force applied on object i by object j follows Newton's law of gravity: F_ij = g m_i m_j (p_j − p_i)/‖p_j − p_i‖³, where the masses are set to 1. Each step corresponds to an interval ∆t = 0.5. Pendulum The pendulum follows the equations used by the OpenAI Gym environment: θ̈ = −(3g/(2l)) sin(θ + π) + (3/(m l²)) u, where u is the action. Each step corresponds to an interval ∆t = 0.05. In the physics engine used by the model we introduce an extra actuation coefficient a to be learned along with g: θ̈ = −(3g/(2l)) sin(θ + π) + a (3/(m l²)) u.
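A NumPy sketch of these three force laws follows; the explicit pairwise loops and the small eps guard in the gravity term are our simplifications for clarity rather than the authors' implementation.

```python
import numpy as np


def spring_force(p, k, l):
    """Hooke's law: F_i = sum_j -k * (|d| - l) * d / |d|, with d = p_i - p_j."""
    F = np.zeros_like(p, dtype=float)
    for i in range(len(p)):
        for j in range(len(p)):
            if i == j:
                continue
            d = p[i] - p[j]
            r = np.linalg.norm(d)
            F[i] += -k * (r - l) * d / r
    return F


def gravity_force(p, g, m=1.0, eps=1e-6):
    """Newtonian gravity: F_i = sum_j g * m_i * m_j * (p_j - p_i) / |p_j - p_i|^3."""
    F = np.zeros_like(p, dtype=float)
    for i in range(len(p)):
        for j in range(len(p)):
            if i != j:
                d = p[j] - p[i]
                F[i] += g * m * m * d / (np.linalg.norm(d) ** 3 + eps)
    return F


def pendulum_accel(theta, u, g, a, m=1.0, l=1.0):
    """OpenAI Gym pendulum dynamics with the learnable actuation coefficient a."""
    return -3.0 * g / (2.0 * l) * np.sin(theta + np.pi) + a * 3.0 / (m * l ** 2) * u
```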
For all datasets we use the Adam optimizer with an initial learning rate of 3 × 10⁻⁴. For the balls and digits datasets we train for 500 epochs with α = 2, and divide the learning rate by 5 after 375 epochs. For the pendulum data we train for 1000 epochs using α = 3, but divide the learning rate by 5 after 500 epochs. The image sizes are 32 × 32 for 2-balls bouncing and spring, 36 × 36 for 3-balls gravity, 64 × 64 for 2-digits spring, and 64 × 64 grayscale for the pendulum. The content and mask variables are the output of a neural network with a constant array of 1s as input and 1 hidden layer with 200 tanh-activated units. We found this easier to train than having the contents and masks as trainable variables themselves. One limitation of standard fully-connected or deconvolutional decoders is the inability to decode states corresponding to object poses or locations not seen during training. For example, if in the training set no objects appear in the bottom half of the image, a fully-connected decoder will simply learn to output zeros in that region. If in the test set objects move into the bottom half of the image, the decoder lacks the inductive bias necessary to correctly extrapolate in image space. To test this hypothesis, we replaced our model's decoder with a Deconv and a Spatial Broadcast decoder, and compared them in a spatial extrapolation experiment. In this experiment, objects never enter the bottom half of the image in the input and prediction ranges, though in the extrapolation range of the test set objects do move to this region of the scene. In the rollouts shown in Fig. 6, Broadcast performs better than Deconv, but both fail to maintain object integrity when the balls move to the bottom half of the image in the extrapolation steps, validating our hypothesis that a black-box decoder has insufficient extrapolation ability. In contrast, our rendering decoder is able to correctly decode states not seen during training. In the limit that our renderer corresponds to a full-blown graphics engine, any pose, location, color, etc. not seen during training can still be rendered correctly. This property gives models using rendering decoders, such as ours, an important advantage in terms of data-efficiency. We note, however, that in general this advantage does not apply to correctly inferring the states from images whose objects are located in regions not seen during training. This is because the encoders used are typically composed simply of convolutional and fully-connected layers, with limited de-rendering inductive biases.
Figure 6: Comparison between a graphics decoder and two black-box decoders, trained on data where objects only appear in the top half of the scene. Only the graphics decoder is able to correctly render the objects in the bottom half of the scene at test time. Broadcast: spatial broadcast decoder; Deconv: standard deconvolutional network.
The model proposed assumes we know the number of objects present in the scene. Here we briefly explore how the model behaves when we use an incorrect number of slots N. We use the gravitational system, since interaction forces between objects are easy to generalize for any N. Fig. 7, left, shows that when using only 2 object slots, two of the objects are found, since the model does not have the capacity to find more. Fig. 7, right, shows that when using more slots than the number of objects in the scene, all objects are discovered, and extra slots are left empty. However, in both cases we found predictive performance to be subpar, since in one case there are objects missing to correctly infer interactions, and in the other there are interactions between object slots and empty slots, confusing the dynamics.
Figure 7: Results for an incorrect number of object slots in the physics engine for the 3-body gravitational system. Left: contents and masks learned for 2 object slots. Right: contents and masks learned for 4 object slots.
We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available.
1,173
scitldr
This paper proposes ASAL, a new pool based active learning method that generates high entropy samples. Instead of directly annotating the synthetic samples, ASAL searches for similar samples from the pool and includes them for training. Hence, the quality of new samples is high and annotations are reliable. ASAL is particularly suitable for large data sets because it achieves a better run-time complexity (sub-linear) for sample selection than traditional uncertainty sampling (linear). We present a comprehensive set of experiments on two data sets and show that ASAL outperforms similar methods and clearly exceeds the established baseline (random sampling). In the discussion section we analyze in which situations ASAL performs best and why it is sometimes hard to outperform random sample selection. To the best of our knowledge this is the first adversarial active learning technique that is applied to multiple class problems using deep convolutional classifiers and demonstrates superior performance to random sample selection. The goal of active learning (AL) algorithms is to train a model most efficiently, i.e. achieving the best performance with as few labelled samples as possible. Typical AL algorithms operate in an iterative fashion, where in each AL cycle a query strategy selects samples that the oracle should annotate. These samples are expected to improve the model most effectively when added to the training set. This procedure continues until a predefined stopping criterion is met. In this paper we mainly focus on pool based active learning, because a pool of unlabelled samples is often available beforehand or can easily be built. Furthermore, annotating all pool samples serves as an ideal evaluation environment for active learning algorithms. It makes it possible to train a fully-supervised model that establishes a performance upper bound on the data set. Similarly, randomly selecting instead of actively choosing samples establishes a lower bound. Then, the goal of an active learning algorithm is to approximate the performance of the fully supervised model with as few labelled samples as possible, while exceeding the performance of random sampling. Uncertainty sampling is an effective query strategy that identifies samples that are more informative than random ones. The heuristic is that samples for which the model is most uncertain contain new information and improve the model. However, identifying such samples requires an exhaustive search over the full pool, and the uncertainty score needs to be recomputed as soon as the model is updated (each AL cycle). Thus, uncertainty sampling has a linear run-time complexity, such that scanning very large unlabelled data sets is impractical even for inexpensive score functions. Our contributions are as follows:
• We propose Adversarial Sampling for Active Learning (ASAL), which approximates the performance of uncertainty sampling with a sub-linear run-time complexity.
• We conduct an extensive set of experiments using four different benchmarks (two and ten classes) and discuss the limitations of ASAL and how to overcome them.
• We demonstrate ASAL with different CNN based classifiers and three different feature sets to compare samples: raw pixel values, compressed representations of an auto-encoder, and the features used to discriminate between real and fake samples in GANs.
We review related work on active learning, especially pool based uncertainty sampling and methods attempting to improve the run-time complexity of these active learning methods. Pool-based active learning methods select new training samples from a predefined unlabelled data set (BID6). BID2 and others use minimum-distance sampling to train Support Vector Machines (SVMs). Minimum distance sampling is a well known uncertainty sampling strategy; it assumes that the classifier is uncertain about samples in the vicinity of the separating hyper-plane. This strategy is mainly used for two-class problems but can be extended to multiple classes by using the SVM in one-vs-one or one-vs-all settings (Jain et al.). Other methods use information entropy to measure the uncertainty of the classifier for a particular sample. Computing uncertainty with information entropy is equally suitable for two or multiple classes. Jain et al. propose two hashing based methods to accelerate minimum distance sampling by selecting new samples in sub-linear time. These methods are designed to (approximately) select the closest point to a hyper-plane in a k-dimensional feature space, where the positions of the data points are fixed but the hyper-plane is allowed to move. Thus, these methods are limited to SVMs with fixed feature maps, because, if the feature map changes, the positions of the samples become obsolete and need to be recomputed. Hence, the run-time complexity is sub-linear for constant feature maps and linear otherwise. Unfortunately, CNN based methods update their feature maps during training. Thus, these methods are no more efficient than exhaustive uncertainty sampling if CNNs are involved. The authors of Generative Adversarial Active Learning (GAAL) use a Generative Adversarial Network (GAN), trained on the pool samples, to generate synthetic samples in each AL cycle. Generating instead of selecting uncertain samples leads to a constant run-time complexity, because producing a new sample is independent of the pool size. They use the traditional minimal distance optimization problem (see Eq. 1) but replace the variable x (denoting a pool sample) with the trained generator. Then, they use gradient descent to minimize the objective. The latent variable minimizing the objective results in a synthetic image close to the separating hyper-plane. They annotate the synthetic sample and use it for training. They demonstrate GAAL on subsets of MNIST and CIFAR-10 (two classes) using linear SVMs and DCGANs (BID3). However, GAAL performs worse than random sampling on both data sets, because it suffers from sampling bias and annotating is arbitrarily hard, caused by the sometimes poor quality of the synthetic uncertain samples. Note that GAAL requires visually distinct classes (horse & automobile) to enable manual annotation at all. We propose ASAL, which reuses the sample generation idea of GAAL, but we use information entropy as the uncertainty score and directly extend it to multiple classes. Additionally, ASAL uses CNN based classifiers instead of linear SVMs. For the generator we train Wasserstein GANs beforehand. We avoid annotating synthetic images by selecting the most similar ones from the pool with a newly developed sample matching method. We propose three different feature maps that we compute for each pool sample to fit a fast nearest neighbour model beforehand.
During active learning, we compute the feature map of the synthetic sample and retrieve the most similar one from the pool in sub-linear time. In this section we introduce the two uncertainty query strategies: minimum distance and maximum entropy sampling. We use the following notation: the set describing the pool is denoted by P, and the classifier at AL cycle k is denoted by θ_k. For uncertainty sampling where the model is an SVM, the query strategy is based on the assumption that the model is least certain for samples in the vicinity of the separating hyper-plane. Thus, newly selected samples are close to the decision boundary and are ideally support vectors that improve the decision boundary. Minimal distance sampling using an SVM reads as

x* = argmin_{x ∈ P} |⟨w, φ(x)⟩ + b|,    (1)

where w and b define the separating hyper-plane and φ(·) is a feature map, e.g. induced by an SVM kernel or a neural network. Instead of considering the distance to the separating hyper-plane, information entropy computes the information content of each sample for the current classifier. Thus, the classifier is uncertain for samples with a high entropy, and these samples have a high information content for the task of improving the classifier. Maximum entropy sampling reads as follows:

x* = argmax_{x ∈ P} H(x),    (2)

where H(x) = − Σ_{i=1}^{m} p(y_i | x, θ) log p(y_i | x, θ) and m is the number of categories. Solving these optimization problems requires an exhaustive search over the whole pool P, computing the uncertainty score of each sample. Furthermore, we need to recompute the uncertainty score in each AL cycle, because updating the classifier invalidates the previous scores. Thus, classical uncertainty sampling has a linear run-time complexity O(|P|) with respect to the pool size |P|. ASAL adapts the sample generation idea of GAAL to pool based active learning using multiple classes and information entropy to measure uncertainty.
Figure 1: Main components of ASAL, with (x, y) the training set at cycle k, θ the classifier, z the latent variable, G the generator, x̃ the synthetic samples, F the feature extractor, f the features, P the pool and NN the nearest neighbour method.
Fig. 1 shows the main components of the proposed ASAL. We use a labelled data set (X_k, Y_k) to train the classifier θ_k. Then, we use the trained classifier θ_k and the generator G to produce uncertain samples x̃. The feature extractor F computes features that the nearest neighbour model uses to retrieve the most similar real samples from the pool. Finally, an oracle annotates the new samples and adds them to the training set. Then, a new AL cycle starts. In the remainder of this section, we introduce the adversarial sample generation and the sample matching method.
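For reference, the exhaustive maximum entropy baseline of Eq. 2 that ASAL approximates, as a short NumPy sketch; the batch size and the epsilon guard are illustrative.

```python
import numpy as np


def max_entropy_sample(pool_probs, n_new=10):
    """Exhaustive uncertainty sampling. pool_probs is a (|P|, m) array of
    class posteriors p(y_i | x, theta); returns the indices of the n_new
    highest-entropy pool samples. Cost is linear in the pool size."""
    eps = 1e-12  # numerical guard for log(0)
    H = -(pool_probs * np.log(pool_probs + eps)).sum(axis=1)
    return np.argsort(-H)[:n_new]
```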
We solve the optimization problem in two steps: (i) we use the chain rule and gradient descent to minimize the objective with respect to z and (ii) we use G to recover a synthetic sample x from z. Thus, solving problem equation 3 has a constant run-time complexity O because it is independent of the pool size. The goal of the sample matching method is retrieving the most similar sample from the pool for a given synthetic sample. Thus, we need (i) representative features for comparison, (ii) a distance measure and (iii) a fast nearest neighbour method. The ideal features would group the samples with similar entropy in features space close together. This guarantees that the nearest real neighbour of a synthetic sample with high entropy has a high entropy as well. However, updating the model, changes the entropy of each sample in the pool and destroys the previous structure in feature space. Thus, keeping a similar grouping in feature space, requires updating the features and recomputing the position of each sample. This leads to a linear run-time complexity. Hence, for a more efficient method we require fixed features for sample matching. To design such features, we use the fact that they are not required to structure the samples according to their entropy. Indeed it is sufficient that the features identify one sample in the pool that is very similar to the synthetic sample. Then, the two samples will not only share properties the classifier is comfortable with, but also the features that lead to high entropy. Thus, the features should be representative for the data set, be diverse and allow to discriminate the main properties of different samples. The raw pixel values are a simple representation that allows to differentiate between different samples but is close for images with similar scene. Auto-encoders extract more representative features for a specific data set than the raw pixel values and lead to a compressed set of core features representing the images. Additionally, we study the features extracted from the discriminator that was used in the training of the GAN. We expect that the features used to differentiate between real and synthetic samples allow to compute representative sample properties. We use the Euclidean distance measure to compute the similarity between two samples in feature space. Furthermore, we use a multidimensional binary search tree (k-d tree) BID1 ) for efficient nearest neighbour selection. The run-time complexity to search a nearest neighbour is sub-linear O(log(|P|) with respect to the pool size |P|. Additionally, we use Principal Component Analysis (PCA) to reduce the number of dimensions of the feature space to achieve a small absolute run-time and to ensure similar run-times when using different features set with different number of dimensions. For the experiments we use two different dataset: MNIST and CIFAR-10 (Krizhevsky FORMULA2). The MNIST data set contains ten different digits 0 to 9 unevenly distributed. Each image has a resolution of 28 × 28 gray-scale pixels. The data set consists of 50k training, 10k validation and 10k testing samples. The CIFAR-10 consists of 50k training and 10k validation 32 × 32 color images with uniformly distributed label categories. We use the validation set for testing. For close comparison we follow and construct two class data sets, consisting of the MNIST digits 5 & 7 and the CIFAR-10 classes automobile & horse. First, we produce different references to assess the performance of ASAL. 
The classification accuracy for the fully supervised model establishes a performance upper bound that any active learning strategy attempt to approximate with as few training samples as possible. Furthermore, random sampling establishes the baseline that we want to exceed or at least perform equally. Additionally, we report the performance of traditional pool-based maximum entropy sampling that ASAL tries to approximate with sub-linear run-time complexity. We examine three different versions of ASAL using the previously introduced set of features: ASALGray/RGB, ASAL-Autoencoder, and ASAL-Discriminator. We reduce the dimension of the feature space to 50 using PCA. We experimentally verified that larger dimensions only increase the runtime but do not lead to better classification accuracy. To synthesize new samples we use the Adam optimizer and apply 100 gradient steps to minimize the negative entropy with respect to the latent space variable (see Eq. equation 3). Note, that we directly optimize for multiple latent space variables at the same time, embedding them in one batch with random initialization. We always draw samples from the pool without replacement. We do not use data augmentation for any experiment and train all models from scratch in each AL cycle. We run all experiments for five different runs with different random seeds except the computationally demanding experiments on CIFAR-10 with ten classes that we run for three random seeds. We report the training iterations for the GANs on each data sets in Tab. 3 in the appendix. We use the default values for all other parameters given by BID5 and Wei et al. (2018a) in the papers and code BID4 Wei et al. (2018b) ). We describe the different architectures and training settings of the auto-encoders in Sec. B in the appendix. Additionally, we report further insights such as label distribution, entropy of newly added samples and additional experiments using other GANs or uncertainty scores in the appendix. For binary digit classification we train a linear model with cross entropy loss. We train the model for 10 epochs using the Adam optimizer with a batch size of 10 and learning rate of 0.001. We train the Wasserstein GAN BID5 ) with gradient penalty to synthesize only the digits 5 & 7. FIG0 shows that for a budget of 500 samples only the aggressive active learner reaches the performance of the fully supervised model. However, ASAL performs clearly superior to random sampling and converges faster to the accuracy of the fully supervised model. We want to emphasize, that all ASAL strategies outperform random sampling. Furthermore, FIG0 verifies that the entropy of newly added samples to the training set is higher for ASAL than for random sampling. On average, all versions of ASAL select samples with 63% higher entropy than randomly selected samples. report worse performance than random sampling when training and testing on MNIST -two classes. For comparison we re-implement their GAAL with DCGAN. For a fairer comparison, we use an improved version of GAAL using the same Wasserstein GAN as ASAL and additionally test ASAL with the DCGAN. FIG0 shows that our implementation of GAAL performs similarly as random sampling and suffers less from sampling bias than reported by. Possible reasons are the different human annotators or slightly different design choices. Furthermore, we observe that both methods outperform random sampling when using Wasserstein GANs and perform worse when using DCGAN. 
However, ASAL exceeds the performance of GAAL especially when using the Wasserstein GAN. Furthermore, using Wasserstein GAN leads to less stable performance of GAAL than ASAL(higher variance between 250 and 300 training samples for GAAL, note the red spikes in negative direction). FIG0 shows only the for ASAL-Gray, for additional , see Fig. 10 in the appendix. For binary classification on CIFAR-10, we reuse the experimental setup presented in Sec. 5.3 but change the batch size to 50. We run the active learning strategies until the budget of 1000 samples is reached. Again we use a Wasserstein GAN BID5 ) with gradient penalty that synthesizes only two classes (automobile & horse). FIG1 shows that especially ASAL-Autoencoder exceeds the performance of random sampling and achieves similar or slightly better than exhaustive uncertainty sampling, for a training set containing more than 500 samples. For ten digit classification we use LeNet with cross entropy. We train the model for 10 epochs using the Adam optimizer with a batch size of 50 and learning rate of 0.001. For active learning, we start with an initial data set containing 10 samples for each class and add 50 samples each AL cycle. We run our experiments until the training set contains 10k samples. We synthesize samples for all ten classes using a Wasserstein GAN with gradient penalty BID5. FIG1 shows that the proposed ASAL strategies also tackle multiple class problems and exceed the quality of random sampling. Using all classes of CIFAR-10 complicates the classification task and we require a deep model to achieve close to state-of-the-art . Therefore, we use the All-CNN model proposed by with a reported classification error of 9.08%. We use the proposed architectures and training strategies and use stochastic gradient descent with constant momentum of 0.9 and a learning rate of 0.01 that we decay by a factor of 10 at the 130th and the 140th epoch. We train the model for 150 epochs with a batch size of 128 without data augmentation and report a classification Figure 5: The rows show either generated or matched samples using different feature sets for CIFAR-10 -ten classes. The brackets denote (label id / sample id).error of 11.8%. The All-CNN contains ∼1.4 million different parameters. Hence, we require larger initial training sets than for the previous models. Thus we include 100 randomly selected images per class. We add 1000 samples to the data set every AL cycle until the budget of 30k samples is reached. We generate ten times a batch containing 100 samples because optimizing for all samples at the same time is unfeasible. In contrast to the previous experiments we use a residual Wasserstein GAN (Wei et al. (2018a) ) with gradient penalty and soft consistency term. We observed an Inception score of 7.8 without and 8.3 with adding the soft consistency term. We use the publicly available TensorFlow implementation of Wei et al. (2018b). FIG1 shows the for different ASALs using the promising residual GAN, that achieves the highest Inception score. Unfortunately, FIG1 shows that the performance of ASAL follows random sampling or is slightly worse, whereas maximum entropy sampling converges to the quality of the fully supervised model but uses 60% of all pool samples. Figs. 26 and 27 in the appendix show the label distribution for each AL cycle. It reveals that maximum entropy sampling selects most frequently cat exactly one of the classes that are least frequent in most of the training set of ASAL. 
Furthermore, FIG1 in the appendix reports the same experiments using different GANs but none of them leads to superior performance than random sampling. Our experiments and show that ASAL clearly outperforms random sampling and approximates exhaustive uncertainty sampling on three out of four benchmarks. Compared to GAAL, ASAL outperforms random sampling, enables annotating real samples, handles multiple class problems and uses CNN based classifiers. ASAL allows to update the feature maps of a classifier in each AL cycle and still achieves sub-linear run-time complexity whereas the hashing based methods of Jain et al. FORMULA1 has a linear run-time complexity if the feature maps are updated. Updating the classifier and keeping the features for matching fixed, leads to sub-linear run-times but without guaranteeing that newly added samples have the highest entropy of all samples available in the pool. To achieve a sub-linear run-time complexity, ASAL requires to train a GAN and potentially an autoencoder beforehand. Nonetheless, this initial cost pays off for extremely large data sets. Although, it might be impractical to consider each sample during training of the GAN, it can generate representative samples and ASAL allows to select samples from the pool that were not used to train the GAN. Thus, ASAL favours large data sets with similar samples, where it is only possible to train the GAN for a fixed number of iterations but contains a close match for any synthetic sample. Conversely, small data sets with diverse samples allow to train the GANs for many epochs such that it is align to the data distribution. However, real samples are sparsely distributed in feature space such that even the closest matches of a synthetic sample are significantly different. We observed in FIG1 that ASAL performs similar as random sampling. Although ASAL enables to generate uncertain samples, it fails to select similar samples from the pool that have high entropy. One explanation is the aforementioned situation, where the images are diverse but the data set is comparatively small. Note, that CIFAR-10 is clearly more diverse than MNIST but has the same amount of samples. Furthermore, the top row in Fig. 5 shows that synthetic images still look unrealistic and identifying a similar real sample is a challenging problem. Another reason for poor performance is using low level features to compare different samples. To achieve state-of-the-art on CIFAR-10, we had to use a much deeper network than for all other experiments but kept the architectures of the feature extractors almost identical. This can lead to a mismatch where the entropy of a sample mainly depends on high-level features but the matching method uses only low-level features to compare samples. FIG2 in the appendix shows for example that exhaustive uncertainty sampling selects most frequently images with the category cat exactly a class that ASAL selects least frequently. This is a sign that ASAL considers low-level features to find similar samples instead of more complex properties that characterize class information. Fig. 5 provides again such an indication. The last column shows a synthetic image with a white horse on a gray and ASAL proposes matches with white object on a gray but contain either a ship or an airplane. This means, that the classifier requires samples of a specific class it is uncertain about, ASAL generates these samples but fails to retrieve matches showing theses categories. 
On CIFAR-10 - two classes we reported for ASAL-Autoencoder similar or slightly higher accuracy than for exhaustive uncertainty sampling. Although we consider uncertainty sampling a performance reference that we try to approximate, it is always possible to exceed its performance. Note that entropy is one particular property that can identify informative samples. Nonetheless, it is possible that samples with lower entropy are more effective for training the classifier. We proposed and evaluated a new pool-based active learning method that uses sample generation and matching. However, the sub-linear run-time complexity requires relaxing the guarantee that selected samples have the highest entropy of all pool samples. We showed that the success of ASAL depends on different factors: the structure of the data set, the quality of the trained GAN and the relevance of the features used to compare samples. A poor GAN can generate high-entropy samples, but poor-quality samples are impractical to match. Small data sets that contain very different samples complicate both training GANs and finding similar matches. Less representative features might not contain the properties needed to find similar samples where both have high entropy. Nonetheless, we demonstrated that ASAL outperforms random sample selection and approximates exhaustive uncertainty sampling in three out of four cases. Furthermore, the sub-linear run-time complexity makes ASAL suitable for large data sets. We pointed out that ASAL uses low-level features, but there are signs that high-level features might be more suitable to match samples. Thus, one particular direction of future research is identifying such high-level features. Possible candidates are VGG (Simonyan & Zisserman) or AlexNet (Krizhevsky et al.) features. Training the model on the unlabelled pool and the small initial data set might lead to features covering the needed properties. In addition, sample generation allows adding other scores beside information entropy. Thus, an interesting direction of future research is designing other scores to be used during sample generation, e.g. measuring sample diversity (Zhu et al.).

We keep the suggested splitting into training, validation and testing for each benchmark. We train a Wasserstein GAN with gradient penalty and an auto-encoder for ASAL beforehand (see Tabs. 1 and 2 for the architectures). We use a Nvidia GeForce GTX TITAN X GPU to train the models. The 100k training iterations for the GAN take roughly 25 h and the 50k iterations of the auto-encoder 1.6 h. We use for both the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. The number of compressed features of the auto-encoder is 128. For sample matching we decrease the number of features to 50 using PCA. For classification we use the CNN presented in Tab. 1. We use the Adam optimizer with a learning rate of 0.001 and a batch size of 50 and train for 30 epochs. We start active learning with 100 labelled samples, where the number of samples per class corresponds to the data distribution. We select and label ten new samples in each active learning cycle until we exhaust the budget of 2000 samples. We run all experiments for three different random seeds. For sample generation we apply 100 gradient descent steps using the Adam optimizer with a step size of 0.01. We optimize for all ten samples at the same time.
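The sample-generation step just described can be sketched as follows. This is a minimal PyTorch version under stated assumptions, not the authors' implementation: generator and classifier are placeholder names for pretrained modules, and the maximum-entropy variant of the uncertainty score is used.

import torch
import torch.nn.functional as F

def generate_uncertain_samples(generator, classifier, n_samples=10,
                               z_dim=128, steps=100, lr=0.01):
    # Start from random latent codes and run gradient ascent on the
    # classifier's predictive entropy; only the latent codes are updated,
    # the network weights stay fixed.
    z = torch.randn(n_samples, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = generator(z)                      # synthesize images
        p = F.softmax(classifier(x), dim=1)   # class posterior
        entropy = -(p * torch.log(p + 1e-8)).sum(dim=1)
        (-entropy.mean()).backward()          # maximize entropy
        opt.step()
    with torch.no_grad():
        return generator(z)

The returned images are never labelled directly; they only serve as queries for the nearest-neighbour matching sketched above.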
A.3 RESULTS AND TIMINGS

FIG2 shows the test accuracy of random sampling, maximum entropy sampling and ASAL for three different benchmarks on CelebA. We conclude that ASAL clearly outperforms random sampling and approaches the accuracy of maximum entropy sampling, but requires less time for sample selection, see FIG4. Thus, the proposed idea of sample generation and matching is simple but effective. FIG4 reports the time required to select ten new samples in each active learning cycle with respect to different data set sizes. We randomly augmented the data set containing 160k samples to report the timings of maximum entropy sampling and ASAL (only the nearest-neighbor search depends on the data set size) for data sets with up to 16M samples. Whereas it is possible to keep all images in memory for 160k samples (1.98GB), this is hardly possible for 16M images (198GB). Nonetheless, we did not add additional I/O time for reading the images from disk to the maximum entropy timings in each AL cycle. The sample matching proposed in ASAL reduces the memory consumption because it requires keeping only 50 features per image (32MB for 160k and 3.2GB for 16M images). This saving allows keeping the features in memory and building the nearest-neighbor model even for huge data sets. We use a Nvidia GeForce GTX TITAN X GPU to report all timings. ASAL has a sub-linear run-time complexity for selecting new samples. However, it requires several pre-processing steps such as training the GAN (∼ 25h), training the auto-encoder (∼ 1.6h), extracting the features (∼ 32s per 160k samples) and fitting the nearest-neighbor model (∼ 5min for 16M samples). Note that the number of iterations depends more on the difficulty of the data set than on its size. As the sample selection time of ASAL is almost negligible (44s for 16M samples per AL cycle) compared to the pre-processing time, the transition point is approximately where the cumulative time of uncertainty sampling exceeds the time for preparing ASAL. Thus, maximum uncertainty sampling is more efficient when using small data sets or running active learning only for a few samples. Nonetheless, for all settings and data set sizes there exists a transition point, see Fig. 8. For example, for the given setup ASAL is already more efficient after only 30 cycles (300 added samples) than maximum entropy sampling for a data set containing 16 million samples.

Figure 8: ASAL is more efficient for selecting new samples than maximum entropy sampling. However, it requires pre-processing time. These diagrams show the transition point (the number of AL cycles after which ASAL becomes more efficient than maximum entropy sampling) with respect to the data set size.
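The break-even reasoning behind Fig. 8 is simple arithmetic; a minimal sketch, in which the 55 min/cycle cost of the exhaustive scan is a hypothetical placeholder (the section above reports only the ASAL selection time of 44 s and the pre-processing times):

def transition_cycles(pre_s, t_exhaustive_s, t_asal_s):
    # Number of AL cycles after which ASAL's one-off pre-processing has
    # paid off, assuming constant per-cycle selection costs.
    return pre_s / (t_exhaustive_s - t_asal_s)

# ~27h of pre-processing (GAN + auto-encoder + features + NN index) vs. a
# hypothetical 55 min/cycle exhaustive entropy scan over 16M samples:
print(transition_cycles(27 * 3600, 55 * 60, 44))  # ~30 cycles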
For MNIST - two classes we use the following auto-encoder settings: the encoder consists of three convolution layers with a stride of two, each followed by an activation, leading to 64 compressed features. The decoder uses three deconvolution layers, each with an activation. We train the auto-encoder for 40 epochs with a batch size of 100 using the Adam optimizer with a learning rate of 0.001. For MNIST - ten classes we reuse the same settings but with three times more channels, resulting in 192 compressed features.

Tabs. 1 and 2 list the architectures, one column per network; the input is a 64 × 64 × 3 image for the classifier and discriminator, and a 128-dimensional latent code for the generator:

Classifier            | Generator           | Discriminator
3 × 3 conv: 16        | linear: 128 × 4096  | 5 × 5 conv: 128, stride=2
ReLU, Maxpool 2 × 2   | Batch norm, ReLU    | leakyReLU
3 × 3 conv: 32        | 5 × 5 deconv: 256   | 5 × 5 conv: 256, stride=2
ReLU, Maxpool 2 × 2   | Batch norm, ReLU    | leakyReLU
3 × 3 conv: 64        | 5 × 5 deconv: 128   | 5 × 5 conv: 512, stride=2
ReLU, Maxpool 2 × 2   | Batch norm, ReLU    | leakyReLU
linear: 4096 × 1024   | 5 × 5 deconv: 64    | 5 × 5 conv: 512, stride=2
ReLU, Dropout 0.5     | Batch norm, ReLU    | leakyReLU
linear: 1024 × 1      | 5 × 5 deconv: 3     | linear: 8192 × 1
                      | Tanh                |

The encoder for CIFAR-10 consists of three layers, each with a convolution followed by batch normalization BID7, activation and max pooling (stride of two and window size 2 × 2). The number of compressed features is 256. The decoder uses first a layer consisting of a convolution, batch normalization and activation, followed by three deconvolution layers, each with batch normalization and activation. We train the auto-encoder for 100 epochs with a batch size of 128 using the Adam optimizer with a learning rate of 0.001. We use the same settings for CIFAR-10 - two classes and CIFAR-10 - ten classes.

Figure 9: Label distribution for uncertainty sampling using maximum entropy and random sampling for MNIST - two classes using different uncertainty measures and loss functions. The ticks on the right show the true label distribution in the pool. The label distribution of the training set assembled with random sampling (top) converges to the true label distribution of the pool. Conversely, uncertainty sampling leads to a training set that contains the label 5 more frequently than 7, whereas the pool contains 7 more frequently. Apparently, images with the digit 5 lead to higher uncertainty of the used classifier.

Figure 10: Test accuracy on MNIST - two classes of a fully supervised model, for random sampling, uncertainty sampling and different ASAL variants using different GANs, uncertainty measures and loss functions. ASAL with WGAN-GP (bottom) clearly exceeds the performance of ASAL using DCGAN (top). Maximum entropy sampling with the cross-entropy loss leads to the setup (10d) that approaches the fully supervised model with the fewest samples and reaches the smallest gap of all ASAL variants using 500 labelled samples.

Figure 11: Label distribution for active learning using different matching strategies, uncertainty measures and GANs for MNIST - two classes. The ticks on the right show the true label distribution in the pool. ASAL using WGAN-GP (third and fourth row) reaches a label distribution of the training data that is similar to the true label distribution in the pool. Conversely, ASAL using DCGAN (first and second row) leads to a training set that contains almost three times as many images of the digit 7 as of the digit 5. Most likely, the DCGAN is responsible for this behaviour, because we already observed that it produces the digit 7 more frequently than the digit 5, see FIG0.

Figure 12: Average entropy of images that are selected and added to the training set for MNIST - two classes using different GANs, uncertainty measures and loss functions. All figures show that ASAL selects samples from the pool that have a higher entropy than randomly sampled images. However, maximum entropy sampling and WGAN-GP (12d) lead to the largest entropy gap between selected and randomly sampled images.
Maximum entropy sampling (right column) results in a smaller average entropy of the classifier than minimum distance sampling (left column), because we use the cross-entropy loss that directly optimizes for small entropy, as opposed to the hinge loss that minimizes the distance to the separating hyper-plane. Instead of manually annotating generated images, we propose to select similar images from the pool and ask for labels of these images. Similar images might show an object of the same class, have similar surroundings, colors or size, or share other features. Thus, we compare the agreement of the manual class annotations of the generated images with those of the matched images, using the three different strategies. We use 1300 generated samples for each GAN, annotate the images manually and retrieve the closest match with the corresponding label from the pool. We assume that the final model will be measured on an almost evenly distributed test set, similar to MNIST and USPS. However, the test set for this experiment contains the generated samples with manual annotations, and the GAN may generate samples with an unevenly distributed label frequency. Thus, we compute the accuracy for each class independently and average these values subsequently to obtain the final score. FIG0 shows that the agreement is higher for ASAL strategies using WGAN-GP than DCGAN. Furthermore, we observe that the matching based on gray values achieves the highest agreement. Similarly, Figs. 10a and 10b show the best performance for ASAL-Gray. The matching strategies employed in ASAL allow selecting similar images from the pool and comparing their labels to the manual annotations. For MNIST - two classes the agreement for WGAN-GP is higher than for DCGAN.

Figure 15: Average entropy of images that are selected and added to the training set for MNIST - ten classes using different GANs. Both figures show that at the beginning ASAL selects images with higher entropy than random sampling. On average, WGAN-GP leads to a larger gap than DCGAN. However, this gap rapidly shrinks when the training set grows.

Figure 16: Label distribution for uncertainty sampling using maximum entropy, random sampling and active learning using different matching strategies and GANs for MNIST - ten classes. The ticks on the right show the true label distribution in the pool. Note the different scaling of the y-axis. Random sampling converges to the true label distribution in the pool, and maximum entropy sampling leads to a training set with a higher or lower ratio of certain digits than the pool. Similarly, ASAL using WGAN-GP (bottom row) selects certain digits more frequently than others. Conversely, ASAL using DCGAN (top row) leads to a training set in which 30% of the images show the digit 1. Most likely, the DCGAN is responsible for this behaviour, because we already observed that it produces the digit 1 more frequently than any other digit, see FIG0.

The authors of GAAL report its accuracy when trained on MNIST and tested on USPS for two classes. They report the best performance on USPS and outperform the fully supervised model. However, it is unclear how they up-sample the 16 × 16 USPS images to test on the 28 × 28 model trained on MNIST. We redo the experiments using ASAL and up-sample the USPS images as follows: padding the images with three pixels at each side, up-sampling the images to 30 × 30 and cropping the images to 28 × 28 to remove boundary artifacts. Following this strategy, we report an average test accuracy of 0.91 for the fully supervised model, compared to the 0.70 reported by the GAAL authors.
FIG4 shows that ASAL outperforms aggressive uncertainty sampling and random sampling. We repeat the experiment for MNIST - ten classes using DCGAN and WGAN-GP. This time, uncertainty sampling clearly outperforms all other strategies and the fully supervised model. Nonetheless, ASAL-Auto and ASAL-Disc lead to a better training performance than passive learning for WGAN-GP.

Figure 17: Test accuracy on USPS - two classes, trained on MNIST - two classes, of a fully supervised model, random sampling, uncertainty sampling and different ASAL variants using different GANs, uncertainty measures and loss functions. Uncertainty sampling performs worse than any other strategy because it aggressively trains the classifier on the samples present in the pool and generalizes less. Random sampling and ASAL tend to generalize better by respecting the true data distribution, either through random sampling or by using a GAN pretrained on the data set to find new samples.

Figure 18: Test accuracy on USPS - ten classes, trained on MNIST - ten classes, of a fully supervised model, random sampling, uncertainty sampling and different ASAL variants using two different GANs. Maximum entropy sampling for ten classes exceeds the quality of any other method, compared to binary classification where it performed worst, see FIG4. The more elaborate LeNet and the use of more classes and samples for training lead to a classifier that generalizes well. The active learning strategies using WGAN-GP exceed the quality of random sampling. ASAL-Disc. even outperforms the fully supervised model. ASAL using DCGAN performs comparably to random sampling.

For CIFAR-10, we do not indicate the true label distribution by a tick because the validation set contains the same number of samples for each class.

Figure 19: Label distribution for uncertainty sampling using maximum entropy and random sampling for CIFAR-10 - two classes using different uncertainty measures and loss functions. The label distribution of the training set of all strategies converges to the true label distribution of the pool. However, on average over all active learning iterations, the training sets of the uncertainty sampling strategies most frequently contained images with the label horse.

Average entropy of images that are selected and added to the training set for CIFAR-10 - two classes using different GANs. The mean entropy of random sampling and the proposed method show hardly any difference. However, for maximum entropy sampling, at least at the beginning, ASAL selects images with higher entropy than random sampling.

For CIFAR-10, we do not indicate the true label distribution by a tick because the validation set contains the same number of samples for each class.

Figure 25: Average entropy of images that are selected and added to the training set for CIFAR-10 - ten classes using different GANs. There is hardly any difference between random sampling and ASAL in the entropy of newly added samples. Only at the beginning, random sampling retrieves samples with slightly higher entropy.

Figure 26: Label distribution for uncertainty sampling using maximum entropy and random sampling for CIFAR-10 - ten classes. Random sampling converges to the true label distribution in the pool. Maximum entropy sampling most frequently selects cat, dog, bird and deer and least frequently automobile, ship and truck, and thereby exceeds the classification quality of random sampling.

Figure 27: Label distribution for active learning using different matching strategies, uncertainty measures and GANs for CIFAR-10 - ten classes.
Exactly the classes cat and dog, which are most common in the training set of uncertainty sampling, are less common in the data sets of most setups. Conversely, frog is for many setups the most common class but is not particularly frequent in the uncertainty sampling data set.

FIG0: The rows show generated and matched images for CIFAR-10 - ten classes using WGAN-GP. Most of the generated images achieve only a moderate quality, and even the closest samples from the pool have a high perceptual visual distance or show non-matching classes; see the last column, where the images have a similar appearance but the appropriate label for the generated image would be horse, whereas the selected samples show an airplane and a ship.

To produce the images displayed in Figs. 32, 33, 34 and 35 we trained the classifier using the initial training set. Then we used maximum entropy sample generation to produce samples with a high entropy.

Figure 32: Comparison of random and uncertain samples for MNIST - two classes. The samples are generated using different GANs. The random samples are visually more appealing, and identifying the label is easier than for the uncertain samples. WGAN-GP generates images of both digits equally likely, whereas DCGAN most frequently generates images showing the digit 7.

Figure 33: Comparison of random and uncertain samples for MNIST - ten classes. The samples are generated using different GANs. The random samples are visually more appealing, and identifying the label is easier than for the uncertain samples. WGAN-GP uniformly generates images of all digits, whereas DCGAN mainly generates images showing the digit 1.

Figure 34: Comparison of random and uncertain samples for CIFAR-10 - two classes using maximum entropy. The samples are generated using different GANs. The residual GANs (bottom row) produce more visually appealing samples than the other GANs. For most of these images it would be possible to identify whether the image shows a horse or an automobile.

Figure 35: Comparison of random and uncertain samples for CIFAR-10 - ten classes using maximum entropy. The samples are generated using different GANs. The residual GANs (bottom row) produce more visually appealing samples than the other GANs. Although the quality of the random images is higher than that of the uncertain images, annotating with high confidence is still very difficult.
ASAL is a pool-based active learning method that generates high-entropy samples and retrieves matching samples from the pool in sub-linear time.
1,174
scitldr
We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws. Each time a model is trained, one gets a different result due to random factors in the training process, which include random parameter initialization and random data shuffling. Reporting the best single model performance does not appropriately address this stochasticity. We propose a normalized expected best-out-of-n performance (Boo_n) as a way to correct these problems.

Replicating results in deep learning research is often hard. This harms their usefulness to industry, leads to a waste of effort by other researchers, and limits the scientific value of such results. One reason is that many papers provide information insufficient for replication. Details of the experimental setup can significantly influence the results BID13 BID10 BID23, so the details should be provided at least in appendices, ideally alongside the source code, as was strongly emphasized e.g. by BID17. However, an important second factor hinders replicability: most deep learning training methods are inherently stochastic. This randomness usually comes from random data ordering in stochastic gradient descent and from random parameter initialization, though there can be additional sources of randomness such as dropout or gradient noise. Consequently, even if we fix the model architecture and the experimental setup (including the hyperparameters), we obtain a different result each time we run an experiment. Statistical techniques are needed to handle this variability. However, in deep learning research, they are heavily underused. What is usually done instead? Most empirical deep learning papers simply report the performance of the best single model (sometimes calling it just "single model" performance). We will later show this is the case at least for some sub-domains. Given the stochasticity, such a method is statistically flawed. The best model performance is not robust under experiment replication, and its expected value improves with an increasing number of experiments performed, among other problems. Since many deep learning publications largely ignore these issues, we dedicate the first part of this article to explaining them in some detail, and we later run experiments to quantify them. Appropriate statistical techniques are hence necessary for evaluating (and comparing) the performance of machine learning (ML) architectures. Some well-developed methods exist for such comparisons (a great introduction is given for instance by BID5). However, most existing methods focus on comparing the mean performance. This may be one of the reasons why statistical methods are being underused, since the mean may be unattractive to researchers in certain situations. There are multiple possible reasons for this. The one that we do consider sound 1 is that when deploying models in practice, it is often natural to train multiple instances of a model and then deploy the best one to production based on a validation set evaluation. 2 Underperforming models can be discarded, so the final deployed model does come from
the higher tier of the model performance population, and the use of the mean may be inappropriate. Hence, rather than completely abandon reporting the performance of the best model, we propose a way to fix its flaws. We do this by estimating the expected best-out-of-n (Boo_n) performance by running more than n experiments, which gives the estimate statistical validity if a sufficient number of experiments are run. We discuss how this measure relates to the performance distribution of the model, and we also give a method to empirically estimate Boo_n.

1 Other reasons why researchers resort to the best performance as opposed to the mean may come from the current highly competitive atmosphere in the field, with a (possibly excessive) focus on performance on standard datasets (see BID4 or BID26 for further discussion), which may motivate researchers to publish only their best results. Also, statistically sound estimation of performance does require repeatedly re-running experiments, which incurs additional cost that researchers may prefer to invest in additional model tuning, especially in the present situation, where reviewers seem not to require statistically sound evaluation of models and on the other hand may favour high-performing models. Of course, these motives should give place to the effort to do good science, as opposed to a race on standard benchmarks.

2 In some applications there is a focus on speed of training and on reducing computational costs - there it does make sense to focus on the performance of the typical model as opposed to the best out of n, so the use of mean or median is appropriate.

The paper proceeds as follows: First, we give a high-level explanation of why reporting the performance of the best single model is problematic. We also give some evidence that it is widely used in the deep learning community, which is why this explanation may be needed. We proceed by presenting Boo_n as a way to fix the above problems. We then give some experimental evidence for the flaws of best-single-model reporting and show that Boo_n does not suffer from them. We wrap up by discussing the place of Boo_n in a ML researcher's toolbox alongside traditional measures such as mean or median.

In articles presenting new deep learning architectures, the performance is often reported as the score of the "best single model" or simply "single model". In practice, this usually means that the researchers train multiple instances of the proposed architecture (often with different sets of hyperparameters), evaluate these instances on some validation set, and select the best-performing model. This best model is evaluated on a test set, and the resulting test score is then reported as the metric characterizing the architecture and used for comparing it to previous models. If the score is better than those reported in previous work, the architecture is presented as superior. This practice results in several issues:

Population variance Since the results of experiments are stochastic, the performance of a single model is just a single instance drawn from a possibly disparate population. If others train the model on their own, they get another sample from the architecture's performance distribution, which may substantially differ from the one listed in the original paper. Such a paper thus gives insufficient information about what to expect from the new architecture, which should be one of the article's main purposes. One may object that the result published in the paper is not chosen from the population at random - it is selected using a validation result. However, the correlation between the validation and test results is generally imperfect; in fact, in some of our experiments, it is almost zero, as we show in Section 4.
Furthermore, if we indeed do have a strong correlation, we get another problem:

Expectation of best increases with the number of experiments Simply put, the more samples from a population we draw, the more extreme the best among them is likely to be. In other words, the expected value of the best depends on the number of experiments that the researchers run. There are three closely related problems with this: Firstly, this makes the number of experiments run an important explanatory variable; however, this variable is usually unreported, which is a severe methodological flaw in itself. It also leads to the second problem: since each research team runs a different number of experiments, the results are not directly comparable. Thirdly, this motivates researchers to run more experiments and gives an advantage to those who are able to do so. This pushes publishing quantitative results towards a race in computational power rather than a fair comparison of architectures themselves.

Best model performance is not a meaningful characteristic of the performance distribution Even if we knew the underlying theoretical performance distribution - that is, if we had perfect information about the architecture's performance - it would not be clear what we would mean by "best model performance" without specifying the size of the pool from which we are choosing the best model. Imagine some architecture having a Gaussian performance distribution. Asking what is the best possible performance does not make sense in such a case, since the support of the distribution is unbounded. Even for capped metrics such as accuracy, where the performance distribution necessarily has bounded support, the best (possible) model 3 may be so unlikely that it would be of no practical importance. Hence, best model performance is not a meaningful characteristic of the performance distribution.

Generality / Falsifiability Finally, there is the question of what the authors are trying to express. Using "best single model performance", they are essentially claiming: "There once existed an instance of our model that once achieved a result X on dataset Y". Such a fact is not of that much interest to the scientific community, which would rather need to know how the architecture behaves generally. Relatedly, a frequently given characteristic of science is the falsifiability of theories BID22. A theory claiming that there are invisible unicorns running among us is not science, since we cannot think of any potential empirical evidence that could prove the theory false. Similarly, any number of replication experiments that produce substantially worse results cannot prove the above performance claim wrong. If, for instance, a confidence interval were given, replications could very quickly show the published result at least extremely implausible, if not false.

We will quantify the former two problems for two concrete architectures in Section 4. Despite all these problems, reporting the performance of the best model is still the main way of reporting results in some areas of ML, especially in empirically oriented deep learning papers, and, alarmingly, such practice seems to be tolerated by reviewers even at prime conferences. For instance, what concerns models published on the popular Children's Book Test dataset for reading comprehension (on which we run experiments later), none of the (more than ten) papers used any form of statistical testing or confidence intervals, and most reported only the performance of the best single model without even mentioning the total number of experiments run.
These include papers published at NIPS BID14, ICLR BID15 BID21, ACL BID3 BID7 BID6, or EMNLP BID28. The same is true for the recently popular SQuAD dataset: for instance, none of the four papers BID32 BID30 BID27 BID31 that published results on this dataset at ICLR 2017 used any statistical testing or confidence intervals, nor published mean (or otherwise aggregated) results across multiple runs. Let us look more generally at the example of ICLR 2017 (chosen as a deep-learning-focused conference featuring many empirical results - as a rough guide, 174 out of 194 ICLR papers have "experiment" in a (sub-)section heading). Only 11 papers mention terms related to hypothesis testing 4, and 11 contain the string "confidence interval". Further details can be found in Appendix B. While this is a rough and limited survey, it does suggest that while deep learning research is to a large extent an empirical science, statistical methods are often underused.

3. Expected Best-out-of-n (Boo_n) Performance

The issues outlined above point to desiderata for a more suitable method of reporting an architecture's performance. It should provide information about the general behaviour of the architecture under specified conditions, well characterizing the associated random performance distribution. It should also be invariant under the number of experiments run and be as robust as possible to random noise. Given these requirements, traditional statistical measures, such as mean or median, probably come to the minds of many readers. They do indeed fix the above issues; still, they express only the performance of a typical member of the population. However, in many ML applications, it may be the best model from a pool that is of interest. When practitioners are choosing a model for deployment, they train multiple models and deploy the best-performing one 5. This gives some justification to reporting the performance of the best model and gives us a reason to attempt to fix its problems rather than completely dismiss it. Such a corrected best-model measure would be more informative than mean or median in the outlined situations. A natural way to improve comparability between models, each evaluated in a different number of experiments, is to normalize the results to the expected result if the number of experiments were the same, say n, which can be easily estimated if we run m experiments, m ≥ n. The greater the number of experiments m, the more robust the estimate of the expected best, which also helps us eliminate the problem of statistical robustness. We are proposing the expected best-out-of-n performance, Boo_n, to be used where the performance of the best model from a pool seems an appropriate measure. Let us first examine how the expected best-out-of-n (Boo_n) performance relates to the (theoretical) performance distribution we are trying to characterize; we will then proceed with empirical estimation, which is of value in practice. The calculations are not particularly innovative from the statistical point of view and are close to many standard results from the field of Order Statistics (see for instance BID1 for more context). Showing how to calculate Boo_n of a known theoretical probability distribution will serve two purposes: Firstly, since we are proposing Boo_n as a way to characterize the performance distribution, this will make the relation between Boo_n and the performance distribution explicit. Secondly, in some cases we may be able to make an assumption about the family to which the theoretical distribution belongs (e.g.
we could assume it is approximately Gaussian). The analytic calculation below will allow us to leverage this information when empirically estimating Boo_n by deducing a parametric estimator, which may be especially useful when our sample size m is small, thanks to its lower variance due to the added prior information.

5 This would usually be the case when a model is trained once and then deployed for longer-term usage, which may be the case for instance for Machine Translation systems. In other cases, when it is practical to train only a single model instance due to hardware constraints (either because training is extremely costly, or because it needs to be done repeatedly, e.g. for individual customers), we may indeed be interested in a typical model and hence in mean or median performance.

Let us first look at the simpler case of validation performance (that is, the case where we are choosing the best model with respect to the metric we are reporting) as it is easier to grasp: How do we calculate an expected best Boo_n(P) 6 of independent identically distributed (i.i.d.) random variables X_1, ..., X_n with probability distribution P (the performance distribution of an architecture) with a probability density function (p.d.f.) f and a cumulative distribution function (c.d.f.) F? In the case where best means maximal (the minimum can be calculated similarly), the maximum max{X_1, ..., X_n} has a c.d.f. equal to

F_max(x) = P[max{X_1, ..., X_n} ≤ x] = P[X_1 ≤ x] · ... · P[X_n ≤ x] = F(x)^n,    (1)

using the independence of the X_i s in the last step. In the case of a continuous distribution, we can obtain the p.d.f. of the maximum by simply differentiating with respect to x:

f_max(x) = n F(x)^(n−1) f(x).    (2)

Using the p.d.f., we can now calculate the expected value of the maximum as

Boo_n(P) = E[max{X_1, ..., X_n}] = ∫ x n F(x)^(n−1) f(x) dx.    (3)

We can get a precise numerical estimate of the above integrals in any major numerical computation package such as numpy. For illustration, for the standard normal distribution we have Boo_5(N) ≈ 1.163 and Boo_10(N) ≈ 1.539. More generally, Boo_n(N(µ, σ²)) can then be expressed as µ + σ Boo_n(N). Thanks to this form, we can get numerical estimates of Boo_n(N(µ, σ²)) just by estimating the two usual parameters of the Gaussian, Boo_n(N) becoming just a constant coefficient once we fix n. The full details of the calculation for the Gaussian distribution can be found in Appendix A. In the case of a discrete performance distribution, which will be useful for empirical estimation below, we get a probability mass function

P[max{X_1, ..., X_n} = x_j] = F(x_j)^n − F(x_{j−1})^n,

so if p_j is the probability weight associated with value x_j, i.e. P[X_i = x_j] = p_j for all i, this gives us

Boo_n(P) = Σ_j x_j [ (Σ_{i≤j} p_i)^n − (Σ_{i<j} p_i)^n ].    (4)

In the previous part, we were choosing the best model with respect to the metric whose expectation we were calculating. Hence, that method can be used to calculate the expected best validation performance of n models. In practice, the best model is usually chosen with respect to the validation performance, while the primary interest is in the corresponding test performance. To calculate the expected test performance of the best-validation model, we need to substitute the direct value of x in Equations 2 and 4 with the expectation of the test performance X_test conditional on the validation performance x_val, E[X_test | X_val = x_val], yielding an expression for the expected test performance of the best-validation model chosen from a pool of size n:

Boo_n(P) = ∫ E[X_test | X_val = x] dP_best-val(x),    (5)

where P_best-val is the marginal probability distribution of the best-out-of-n validation performance. A similar simple substitution can be done in the discrete case.
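As an aside, before moving to estimation: the integral in Equation 3 is straightforward to evaluate numerically. A minimal sketch for the standard normal case (our own illustration; the function name is ours), reproducing the constants quoted above:

import numpy as np
from scipy import integrate, stats

def boo_n_std_normal(n):
    # Expected maximum of n i.i.d. standard normal variables, via Eq. 3.
    f = lambda x: x * n * stats.norm.cdf(x) ** (n - 1) * stats.norm.pdf(x)
    return integrate.quad(f, -np.inf, np.inf)[0]

print(boo_n_std_normal(5))   # ~1.163
print(boo_n_std_normal(10))  # ~1.539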
Expanding the expression for the bivariate Gaussian distribution having marginal test performance with mean µ_test, standard deviation σ_test, and test-validation correlation ρ as in Appendix A.2 yields a convenient expression

Boo_n(P) = µ_test + ρ σ_test Boo_n(N),    (6)

which can again be used for parametric estimation. We usually do not know the exact performance distribution of the model; we only have samples from this distribution, namely the results of our experiments. In such a case, we can estimate the expected maximum empirically, and in fact it is the empirical estimates that are likely to be used in practice to compare models. To get a non-parametric estimator, for which we do not make any assumption about the family of the performance distribution, we take the empirical distribution arising from our sample as an approximation of the architecture's true performance distribution, similarly to Bootstrap methods. The empirical performance distribution P assigns a probability weight of 1/m to each of our m samples. We approximate Boo_n of the true performance distribution by Boo_n of this empirical distribution. For the uniform empirical distribution, all the p_i in Equation 4 are equal to 1/m. Hence, if we rank our samples from worst-validation to best-validation,

Boo_n ≈ Σ_{i=1}^{m} [ (i/m)^n − ((i−1)/m)^n ] x_test^(i).    (7)

This is in fact a weighted average of the test results. In case of a tie in validation results, i.e. if x_val^(i) = x_val^(j) for some i ≠ j, one should assign to each of the tied samples an equal weight, the mean of their weights from Equation 7. This estimator does not make any assumption about the performance distribution from which our observations are drawn. If we do use such an assumption (e.g. we know that the performance distribution of our architecture usually belongs to a certain family, e.g. Gaussian), we can add information to our estimator and possibly get an even better estimate (i.e. one with a lower sampling error). For the Gaussian distribution, we can use the standard estimators of the parameters in Equation 6 to get a parametric estimator

Boo_n ≈ µ̂_test + ρ̂ σ̂_test Boo_n(N),    (8)

where µ̂_test, ρ̂ and σ̂_test are standard estimators of the mean, correlation, and standard deviation respectively. A similar parametric estimator could be calculated for other distributions.

Boo_n eliminates the problem of dependence on the number of experiments run, m. However, we still need to choose n, the number of experiments to which we normalize. This is similar to the choice one is facing when using a quantile: should one use the 75% one, the 95% one, or some other? The choice of n most useful to a reader is when n is the number of candidate models that a practitioner would train 7 before choosing the best one for some target application. Such a number will differ from domain to domain and will heavily depend on the computational cost of training the specific architecture on the specific domain. The n of interest may differ for each reader - ideally, researchers should characterize the architecture's performance distribution as fully as possible, and the readers may be able to deduce the value of Boo_n for whichever n they choose (up to a limit). Leaving an additional degree of freedom in the choice of metric creates a risk of cherry picking. However, in many areas of machine learning, there already are many available metrics. Still, the main reporting metric seems to quickly converge on each task. The first published paper makes a choice; the subsequent ones usually follow suit (else they risk a suspicion that the architecture is not competitive on the previous metric).

7 You can get an optimized Python implementation of the non-parametric estimator via pip install boon.
We believe similar convergence is likely for Boo_n on each task. In our experiments, we decided to use n = 5 - the AS Reader model which we use for our experiments takes about 2 hours to train on a single GPU, so someone replicating the Boo_5 performance could expect to achieve it overnight, which seems to be a reasonable requirement. Even Boo_n is just a single number whose estimate can be noisy. Hence, with Boo_n, as well as with the mean and other ways of aggregating results of a wider population, we should always use appropriate statistical methods when trying to compare the quantitative performance of a new model against a baseline. This can be done using significance testing (such as the t-test), or with the help of confidence intervals, which seems to be the method preferred by a significant part of the scientific community (e.g. BID11 or BID2), since it allows us to disentangle the effect size from the uncertainty associated with noise and sample size. For some theoretical distributions, there exist ways to calculate the hypothesis test or confidence interval analytically (e.g. using the t-test or standard normal quantiles for the Gaussian). However, in cases where the family of the performance distribution or of the estimator is not known, we need to resort to computational methods - usually Monte Carlo (if we do know at least the family of the performance distribution) or the Bootstrap BID8 (if we do not). A brief description of how to calculate confidence intervals using the Bootstrap is provided in the Appendix.

Note: The data and code for the analysis can be found at http://gitlab.com/obajgar/boon, along with Python functions you can use to calculate Boo_n.

We have run several experiments to quantify the scope of the problems outlined in Section 2. We just briefly summarize the main results here for illustration; a more detailed description of the experiments and analysis in the form of an iPython notebook can be found in the Gitlab repository or in Appendix C.

Performance variation To estimate the random variation of results, we repeatedly 8 trained models from two domains of deep learning: the ResNet BID12 on the CIFAR-100 dataset BID20 to represent Image Recognition and the Attention Sum Reader (AS Reader) BID19 on the Children's Book Test Common Nouns (CBT CN) BID15 to represent Reading Comprehension. Each of these trainings generated a pair of validation and test performances. The resulting empirical performance distributions are illustrated in FIG2. If we fix all hyperparameters, the interquartile ranges of the models' accuracies are 0.98(±0.09)% 9 and 1.20(±0.12)% (absolute). This is comparable to the median differences between results published on these datasets: 0.86% and 1.15% respectively 10. Hence, random variation in performance cannot be considered negligible, as is now often done. Furthermore, if we allow the hyperparameters to vary (in our case by random search), the variance further increases, which further amplifies the outlined effects. In the
case of the AS Reader, the interquartile range increased to 2.9(±0.3)% when we randomly picked hyperparameters from a range applicable to training the model on a single GPU. However, note that the problem of incommensurability due to hyperparameter optimization is not the focus of this work. The method that we present here is still applicable to the problem in the case of random hyperparameter sampling, for which we include results; however, we aim to compensate mainly for randomness due to parameter initialization and data shuffling - which is significant in itself, as we have just demonstrated.

8 Specifically, 74 times for Resnet, 370 times for the AS Reader with fixed hyperparameters, and 197 times for the AS Reader with random hyperparameters.

9 ± standard deviation computed using 99999 Bootstrap samples.

10 We looked at results of successive architectures evaluated on the two tasks as listed in BID16 BID21. We sorted the results with respect to the test performance and then calculated the differences between successive models. From these we calculated the median. Full details can be found in the Gitlab repository.

Several other articles confirm significant variation in model performance due to different random seeds: e.g. BID29 in Speech Recognition, BID13 in Deep Reinforcement Learning, or BID24 in Named Entity Recognition. They all agree that reporting performance scores of single models is insufficient to characterize architecture performance. FIG3 shows the 95% confidence intervals of the best single model compared to the Boo_5 performance for a range of result-pool sizes m. This is shown for the cases of both strong and weak test-validation correlation. In both cases Boo_5 is significantly less noisy than the best-single-model result. In fact, in the case of random hyperparameter search, Boo_n shows even smaller variation than the mean (due to the negative skew of the performance distribution).

Best-model performance improves with the number of experiments We also mentioned that if only the performance of the best model is reported, the more experiments are run, the better the expected result. FIG3 illustrates that this effect can indeed be fairly strong if the validation performance is a good predictor of the test performance, as is the case for the AS Reader with random hyperparameter search, where the expectation of the best single model performance increases from 61.3% if we train it once, to 63.3% if we train it 5 times, to 63.5% for 20 times. This effect is nicely explained in more detail e.g. by BID18. It gives a further argument for refraining from using this method and certainly also for publishing the number of experiments run, which is often not done. Boo_n is not subject to this effect.

Validation-test correlation However, note that the assumption that the validation performance is a good predictor of the test performance is sometimes not true. In the two cases with fixed hyperparameters that we looked at, the Spearman correlation between validation and test results was only 0.10 and 0.18 respectively for the two models. The correlation significantly increases if we allow the hyperparameters to vary - to 0.83 for the AS Reader. These results are also illustrated in FIG2. Larger validation sets are also likely to improve this correlation, which can be understood as the degree of generalization from validation to test. Note that the problem of increasing expected performance mentioned above is relevant only in the case of higher correlation between validation and test results. The effect becomes very strong in the case where the performance we are reporting is also used for choosing the best model, which emphasizes the need for an honest separation of validation and test data.

11 While Boo_n and the mean could be sampled using the vanilla Bootstrap, the best-validation result is influenced only by a single value from the sample and hence uses only a few values from the upper tier of our pool, which makes our pool size insufficient. Hence we use Gaussian kernel smoothing BID25 to expand our pool.
Boo_n does fix the main flaws of reporting the best single model performance. However, let us have a look at some of its limitations.

Hyperparameter tuning This work does not fully compensate for improved expected results due to hyperparameter tuning, nor was that its primary aim. Boo_n is appropriate in the case of random hyperparameter sampling, where the performances in different runs are independent. However, this is not the case for more advanced hyperparameter optimization methods. The primary focus of this work was on tackling variability due to random initialization, data shuffling, and similar sources, which we have shown to be significant in itself. Compensation for more advanced hyperparameter tuning (and ensuring the comparability of models in that case) is certainly a worthwhile area for future research.

Mean, median, and other alternatives We do not claim our method to be strictly superior to traditional ways of aggregating results, such as the mean or quantiles. However, we have outlined a case where using Boo_n is justified - situations where a final model to be deployed can be chosen from a pool of trained candidates. In such a case, Boo_n is easily interpretable and more informative than the performance of a typical model, expressed by mean or median. Hence, we think Boo_n is a useful addition to the methodological toolbox alongside existing methods.

Just a single number Boo_n is still just a single number whose ability to characterize the performance distribution is limited by its single dimension. Paper authors should try to characterize the performance distribution as fully as possible, which may involve a histogram, mean, or standard deviation, ideally along with a dataset containing the results of all experiments, from which an interested reader may be able to deduce whichever characteristic she finds interesting. Unfortunately, such characterization is usually lacking. However, alongside this detailed characterization, describing an architecture's performance by a single number still has its appeal, especially for the purpose of comparison among architectures and choosing the best one according to some criterion (in fact, each quantitative score can be understood as a proxy for ordering architectures with respect to some criterion of interest, such as the expected performance of the best model out of n). We have explained why, in some cases, Boo_n may be useful for such a purpose.

Computational cost Some may deem Boo_n impractical due to its requirement to train architectures many times, which may be very expensive in some cases. However, stochasticity needs to be addressed to produce reliable results, and it is hard to imagine a general method to do so without repeated evaluation 12. Researchers should focus on architectures which they can evaluate properly given their resources. However, the main target of our criticism is not projects whose resources are stretched by a single training; it is projects that do have the necessary resources for multiple evaluations but use them to produce better-looking results rather than results that are more informative and robust.

Reporting just the best single model performance is not statistically sound. This practice in machine learning research needs to change if the research is to have lasting value. Reviewers can play an important role in bringing about this change. Still, asking for the performance of a best model out of n can have valid reasons. For the situations where the best-model performance is indeed a good metric, we are suggesting Boo_n as a way to evaluate it properly.
For a Gaussian performance distribution N(µ, σ²), Equation 1 gives the c.d.f. of the maximum of n i.i.d. draws as

F_max(x) = Φ((x − µ)/σ)^n,

where Φ is the c.d.f. of a standard normal random variable. Differentiating and substituting z = (x − µ)/σ in the expectation integral of Equation 3 (with φ the standard normal p.d.f.) yields

E[max{X_1, ..., X_n}] = ∫ (µ + σz) n Φ(z)^(n−1) φ(z) dz = µ ∫ n Φ(z)^(n−1) φ(z) dz + σ ∫ z n Φ(z)^(n−1) φ(z) dz = µ + σ Boo_n(N)

(the first integrand has the form of the p.d.f. found above and hence integrates to one), so the expected maximum is neatly expressed in terms of the expected maximum of a standard normal and is linearly proportional to both the mean and the standard deviation. Once n is fixed for comparison purposes, Boo_n(N) is just a constant, e.g. Boo_5(N) ≈ 1.163, Boo_10(N) ≈ 1.539.

Let us turn to the case of reporting the expected test set performance of a best-validation model. If we model the validation and test performances by a Bivariate Normal Distribution with validation-test correlation ρ, means µ_val, µ_test, and variances σ²_val, σ²_test, then given a validation performance x_val, the test performance is distributed normally with conditional expectation

E[X_test | X_val = x_val] = µ_test + ρ σ_test (x_val − µ_val)/σ_val.

Using the same two tricks as above, this can be simplified to

Boo_n(P) = µ_test + ρ σ_test Boo_n(N),

where Boo_n(N) is the expected maximum of n standard normal variables, as defined above.
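This identity is easy to sanity-check by simulation; a minimal sketch (the parameter values below are arbitrary illustrations, not results from the paper):

import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
mu_v, s_v, mu_t, s_t, rho = 0.60, 0.010, 0.62, 0.012, 0.5

# Draw n correlated (validation, test) pairs per trial.
cov = [[s_v**2, rho * s_v * s_t], [rho * s_v * s_t, s_t**2]]
vt = rng.multivariate_normal([mu_v, mu_t], cov, size=(trials, n))

# Pick the best-validation model in each trial and average its test result.
best = vt[..., 0].argmax(axis=1)
mc = vt[np.arange(trials), best, 1].mean()
print(mc)                        # Monte Carlo estimate
print(mu_t + rho * s_t * 1.163)  # closed form: mu_test + rho*sigma_test*Boo_5(N)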
All training was done on Ubuntu 14.04 on a single GPU per training, either Nvidia Tesla K80 or GTX 1080. Resnet was trained with a single set of hyperparameters, the default ones for the above Open Source implementation. That means 5 residual units ing in a 32-layer Resnet. The model was trained using the 0.9 momentum optimizer, with batch size 128, initial learning rate of 0.1 lowered to 0.01 after 40,000 steps and to 0.001 after 60,000 steps. Data augmentation included padding to 36x36 and then random cropping, horizontal flipping and per-image whitening. L2 regularization weight was set 0.002. The training ran for 300 epochs. Training was done using Tensorflow 1.3. The AS Reader was trained in two different settings. Firstly 370 times with hyperparameters fixed to embedding dimension of 128 and 384 hidden dimensions in the GRU units, with all other hyperparameters as used in the original AS Reader paper BID19.In the second setting, the hyperparameters for each training instance were chosen randomly from the following ranges: The batch size was chosen from the range, and the embedding size and hidden state size were each chosen from the range with the log 2 value of the parameter being distributed uniformly in the interval. The upper bounds of these intervals matched maximum training size feasible on our hardware using this implementation. Training was done using Theano 0.9.0 and Blocks 0.2.0. FIG2 plots the histograms of test performances of the evaluated models. The mean test accuracy for Resnet was 68.41% with standard deviation of 0.67% (absolute), the range was 67.31% − 69.41%. For AS reader with fixed hyperparameters the mean was 63.16% with standard deviation 0.94% and range of 61.52% − 64.60%. In the case of random hyperparameter search the mean was 61.26%, standard deviation 2.48%, and values ranged from 56.61% to 64.01%. In both cases with fixed hyperparameters the collected are consistent with coming from a Gaussian distribution according to the Anderson-Darling test 16 BID0; the histograms also make it appear plausible that the performance distribution is approximately Gaussian. This is not the case for the random hyperparameter search where the distribution has a clear negative skew. To put the above numbers into context, we also examined the margin of improvement of successive architectures published on the corresponding datasets, as listed in BID21 BID16. We sorted the with respect to the test performance and then calculated the differences between successive models. The median difference was for 0.86% for CIFAR-100 and 1.15% for CBT CN.Note that the median differences are smaller than two standard deviations for each model. Two standard deviations from the mean approximately give the 95% confidence interval for a Gaussian distribution -hence we could typically fit three successive published within the width of one such confidence interval. The magnitude of the performance variation due to random initialization and data shuffling is therefore not negligible compared to the improvements in performance, which often hold an important place within articles in which they are presented. We hence think it is 16 That is, despite the relatively large sample sizes, gaussianity cannot be ruled out at 0.05 significance level based on collected evidence.inappropriate to completely ignore this random variation in evaluation protocols, which is currently the usual practice. The best model is usually selected using validation performance 17. 
This practice is based on the assumption that the validation accuracy is a reasonably good predictor of test accuracy. The results of our experiments, illustrated also in Figure 2, suggest that this assumption holds for performance variation due to hyperparameter choice. However, if we fix the hyperparameters, the correlation almost disappears. To some extent, this implies that selecting the best-validation model means we are picking randomly with respect to the test performance. Since we are picking from a random test-performance distribution, this further calls for a better characterization of the distribution than a single instance drawn from it. On the other hand, if the correlation is strong, as seems to be the case if we do perform hyperparameter search, we face the second problem with reporting the best-validation performance: if the validation performance is a good predictor of the test performance, then the more models we train, the better the best-validation model is likely to be even on the test set, since we are able to select models high up the right tail of the performance distribution. This effect has been described in more detail in BID18, though with a focus on induction algorithms; here we present an estimate of its effect in the case of Resnet and the AS Reader. To test this effect, we took the pool of trained models. For each m in the range from 1 to 50 (or 100 for the AS Reader), we randomly drew 100,000 samples of size m from the pool and selected the best-validation model from each sample. The mean test performance across the 100,000 samples for each m is plotted in Figure 3. The results show that when there is suitable correlation between validation and test performances, increasing the number of experiments does increase the expected performance of the best-validation model. This makes the number of experiments an important explanatory variable, which, however, usually goes unreported. Furthermore, it makes results reported by different research teams not directly comparable. Finally, it gives an advantage to those who can run more experiments. We believe that this again makes the practice of reporting the performance of the best single model unsuitable.

If an estimator characterizing a performance distribution, say Boo_n or the average, is calculated from experimental observations, it is subject to random variation, so if another research team tries to reproduce the experiments, they will generally get a different estimate. The more observations are collected, the more precise the estimate generally is. Confidence intervals provide a natural way to express this uncertainty. Their usage also gives a sense of whether the number of performed experiments was sufficient to reduce the uncertainty to a reasonable level, which again is not frequently addressed in machine learning papers. The construction of the confidence interval would be trivial if we knew the distribution from which our estimate was drawn (as opposed to the distribution of the performance!): it is simply the interval between the appropriate quantiles, e.g. the 2.5th and 97.5th quantiles in the case of the 95% confidence interval. Such a distribution has been studied extensively, for instance, in the case of a mean of Gaussian random variables. However, in other cases it is not known.
If we know at least the distribution from which the individual observations were drawn, we can use Monte Carlo methods to estimate the confidence interval precisely; however, if we are not able to make an assumption about the underlying distribution, we need to use only what we have: our samples from the distribution. In such a case, the variability of our estimator can be approximated using the Bootstrap BID8 or similar methods. The Bootstrap consists of repeatedly sampling, with replacement, m random observations from our pool of m observations, say B times. Each such sample is then used to calculate an estimate of our quantity of interest, say Boo_n or the mean. This creates a sample of B values of the estimator. The confidence interval can then be easily estimated by taking the appropriate quantiles of the resulting Bootstrap distribution of the estimator, which approximates the unknown underlying sampling distribution. The Bootstrap distribution has been shown to converge to the true underlying performance distribution. If we know the underlying distribution (up to some parameters), we can estimate its parameters and then generate a simulated Monte Carlo sample from the distribution, which can be used to calculate a sample of the estimator and the corresponding confidence interval in a similar way as above, with the advantage of the distribution being smoother. Besides estimating the confidence interval for the value of Boo_n or the mean itself, either re-sampling method can be used to construct a confidence interval for the relative improvement of a newly proposed architecture compared to a baseline. The improvement can then be considered significant if zero is not included in the confidence interval. More details on constructing Bootstrap confidence intervals can be found in many standard texts on computational statistics, for instance in BID9. For illustration, we calculated the Bootstrap confidence interval for several sample sizes m for Resnet and the AS Reader. Each was constructed using B = 100,000. The results are plotted in Figure 3. Figure 4 shows the comparison of the non-parametric and Gaussian parametric estimators of Boo_n, both introduced in Section 3.2, in terms of their variance for various sample sizes. The parametric estimator shows a somewhat lower variance. This is an advantage if the performance distribution is indeed approximately Gaussian, which is the case for both fixed-hyperparameter settings that we tested in our experiments. However, this can introduce bias if the true performance distribution differs from the theoretical distribution assumed by a parametric estimator, so one should be prudent when using it.
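As an illustration of the procedure described above, the following Python sketch (ours; the estimator argument, e.g. a Boo_n implementation, is assumed to be supplied by the caller) computes a percentile Bootstrap confidence interval:

import numpy as np

def bootstrap_ci(observations, estimator, B=100_000, alpha=0.05, seed=0):
    """Percentile Bootstrap confidence interval for any estimator of interest."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(observations)
    m = len(obs)
    estimates = np.empty(B)
    for b in range(B):
        resample = obs[rng.integers(0, m, size=m)]  # m observations drawn with replacement
        estimates[b] = estimator(resample)
    lo, hi = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# e.g. bootstrap_ci(test_accuracies, np.mean) for the mean, or
# bootstrap_ci(test_accuracies, lambda s: boo_n(s, n=5)) for a Boo_5 estimator.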
We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws.
1,175
scitldr
Unsupervised domain adaptation aims to generalize a hypothesis trained in a source domain to an unlabeled target domain. One popular approach to this problem is to learn domain-invariant embeddings for both domains. In this work, we study, theoretically and empirically, the effect of the embedding complexity on generalization to the target domain. In particular, this complexity affects an upper bound on the target risk; this is reflected in experiments, too. Next, we specialize our theoretical framework to multilayer neural networks. As a result, we develop a strategy that mitigates sensitivity to the embedding complexity and empirically achieves performance on par with or better than the best layer-dependent complexity tradeoff.

Domain adaptation is critical in many applications where collecting large-scale supervised data is prohibitively expensive or intractable, or where conditions at prediction time can change. For instance, self-driving cars must be robust to different weather, changes of landscape, and traffic. In such cases, the model learned from limited source data should ideally generalize to different target domains. Specifically, unsupervised domain adaptation aims to transfer knowledge learned from a labeled source domain to similar but completely unlabeled target domains. One popular approach to unsupervised domain adaptation is to learn domain-invariant representations by minimizing a divergence between the representations of the source and target domains. The prediction function is learned on these "aligned" representations with the aim of making it domain-independent. A series of theoretical works justifies this idea. Despite the empirical success of domain-invariant representations, exactly matching the representations of the source and target distribution can sometimes fail to achieve domain adaptation. For example, it has been shown that exact matching may increase the target error if label distributions differ between the source and target domain, and a new divergence metric has been proposed to overcome this limitation. Other work establishes lower and upper bounds on the risk when label distributions between source and target domains differ, or points out the information lost in non-invertible embeddings and proposes generalization bounds based on the overlap of the supports of the source and target distribution. In contrast to previous analyses that focus on changes in the label distributions or joint support, we study the effect of embedding complexity. In particular, we show a general bound on the target risk that reflects a tradeoff between embedding complexity and the divergence of source and target domains. A too powerful class of embeddings can overfit the source data and the matching of source and target distributions, resulting in arbitrarily high target risk. Hence, a restriction is needed. We observe that indeed, without appropriately constraining the embedding complexity, the performance of state-of-the-art methods such as domain-adversarial neural networks can deteriorate significantly. Next, we tailor the bound to multilayer neural networks. In a realistic scenario, one may have a total depth budget and divide the network into an encoder (embedding) and a predictor by aligning the representations of source and target in a chosen layer, which defines the division. In this case, a more complex encoder necessarily implies a weaker predictor, and vice versa. This tradeoff is reflected in the bound, and we see that, in practice, there is an "optimal" division.
To better optimize the tradeoff between encoder and predictor without having to tune the division, we propose to optimize the tradeoffs in all layers jointly via a simple yet effective objective that can easily be combined with most current approaches for learning domain-invariant representations. Implicitly, this objective restricts the more powerful deeper encoders by encouraging a simultaneous alignment across layers. In practice, the resulting algorithm achieves performance on par with or better than standard domain-invariant representations, without tuning of the division. Empirically, we examine our theory and learning algorithms on sentiment analysis (Amazon review dataset), digit classification (MNIST, MNIST-M, SVHN) and general object classification (Office-31). In short, this work makes the following contributions:
• General upper bounds on target error that capture the effect of embedding complexity when learning domain-invariant representations;
• Fine-grained analysis for multilayer neural networks, and a new objective with implicit regularization that stabilizes and improves performance;
• Empirical validation of the analyzed tradeoffs and proposed algorithm on several datasets.

For simplicity of exposition, we consider binary classification with input space X ⊆ R^n and output space Y = {0, 1}. Define H to be the hypothesis class from X to Y. The learning algorithm obtains two datasets: labeled source data X_S from distribution p_S, and unlabeled target data X_T from distribution p_T. We will use p_S and p_T to denote the joint distribution on data and labels X, Y as well as the marginals, i.e., p_S(X) and p_S(Y). Unsupervised domain adaptation seeks a hypothesis h ∈ H that minimizes the risk in the target domain, measured by a loss function ℓ (here, the zero-one loss):

R_T(h) = E_{(x,y)∼p_T} [ℓ(h(x), y)].

We will not assume common support in the source and target domain, in line with standard benchmarks for domain adaptation such as adapting from MNIST to MNIST-M. A common approach to domain adaptation is to learn a joint embedding of source and target data. The idea is that aligning the source and target distributions in a latent space Z results in domain-invariant representations, and hence a subsequent classifier f from the embedding to Y will generalize from source to target. Formally, this results in the following objective function on the hypothesis h = f∘g, where G is the class of embedding functions g to Z, and we minimize a divergence d between the distributions p_S^g and p_T^g of source and target after mapping to Z (i.e., the distributions of g(x) for x drawn from p_S and p_T, respectively):

min_{f∈F, g∈G} R_S(f∘g) + d(p_S^g, p_T^g).    (1)

The divergence d could be, e.g., the Jensen-Shannon or Wasserstein distance. Ben-David et al. introduced the H∆H-divergence to bound the worst-case loss from extrapolating between domains. Let R_D(h, h′) = E_{x∼D} [ℓ(h(x), h′(x))] be the expected disagreement between two hypotheses. The H∆H-divergence measures whether there is any pair of hypotheses whose disagreement (risk) differs a lot between the source and target distribution.

Definition 1. (H∆H-divergence) Given two domain distributions p_S and p_T over X, and a hypothesis class H, the H∆H-divergence between p_S and p_T is

d_{H∆H}(p_S, p_T) = 2 sup_{h,h′∈H} |R_{p_S}(h, h′) − R_{p_T}(h, h′)|.

The H∆H-divergence is determined by the discrepancy between the source and target distribution and the complexity of the hypothesis class H. For a hypothesis class H: X → {0, 1}, the disagreement between two hypotheses is equivalent to the exclusive-or function. Hence, one can interpret the H∆H-divergence as finding a classifier in the function space H∆H = H ⊕ H which attempts to maximally separate one domain from the other.
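This view of the divergence as a domain-separating classifier suggests a standard empirical proxy, the proxy A-distance: train a classifier to distinguish source from target samples and turn its held-out error into a divergence estimate. A minimal sketch, assuming feature matrices xs and xt (the function and variable names are ours):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def proxy_a_distance(xs, xt, seed=0):
    """Estimate domain divergence from the separability of source vs. target features."""
    x = np.vstack([xs, xt])
    y = np.concatenate([np.zeros(len(xs)), np.ones(len(xt))])  # domain labels
    x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.5, random_state=seed)
    err = 1.0 - LogisticRegression(max_iter=1000).fit(x_tr, y_tr).score(x_te, y_te)
    return 2.0 * (1.0 - 2.0 * err)  # near 0 when the domains are indistinguishable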
A restrictive hypothesis space may result in a small H∆H-divergence even if the source and target domain do not share common support. This divergence allows us to bound the risk on the target domain:

Theorem 2. For all hypotheses h ∈ H, the target risk is bounded as

R_T(h) ≤ R_S(h) + (1/2) d_{H∆H}(p_S, p_T) + λ_H,

where λ_H is the best joint risk, λ_H = min_{h′∈H} [R_S(h′) + R_T(h′)].

Similar results exist for continuous labels. Theorem 2 is an influential theoretical result in unsupervised domain adaptation and motivated work on domain-invariant representations. For example, recent work applied Theorem 2 to the hypothesis space F that maps the representation space Z induced by an encoder g to the output space:

R_T(f∘g) ≤ R_S(f∘g) + (1/2) d_{F∆F}(p_S^g, p_T^g) + λ_F(g),

where λ_F(g) is the best hypothesis risk with fixed g, i.e., λ_F(g) = min_{f′∈F} [R_S(f′∘g) + R_T(f′∘g)]. The F∆F-divergence implicitly depends on the fixed g and can be small if g provides a suitable representation. However, if g induces a wrong alignment, then the best hypothesis risk λ_F(g) is large with any function class F. The following example will illustrate such a situation, motivating us to explicitly take the class of embeddings into account when bounding the target risk.

We begin with an illustrative toy example. Figure 1 shows a binary classification problem in 2D with disjoint support and a slight shift in the label distributions from source to target: p_S(y = 1) = p_T(y = 1) + 2. Assume the representation space Z is one-dimensional, so the embedding g is a function from 2D to 1D. If we allow arbitrary, nonlinear embeddings, then, for instance, the embedding in Figure 1(b), together with an optimal predictor, achieves zero source loss and zero divergence, which is optimal according to objective (1). But the target risk of this combination of embedding and predictor is maximal. If we restrict the class G of embeddings to linear maps g(x) = Wx, where W ∈ R^{1×2}, then the embeddings that are optimal with respect to objective (1) are of the form W = [a, 0]. Together with an optimal source classifier f, they achieve a non-zero value of objective (1) due to the shift in class distributions. However, these embeddings retain label correspondences and can thus minimize the target risk. This example illustrates that a too rich class of embeddings can "overfit" the alignment and hence lead to arbitrarily bad solutions. Hence, the complexity of the encoder class plays an important role in learning domain-invariant representations.

Motivated by the above example, we next expose how the bound on the target risk depends on the complexity of the embedding class. To do so, we apply Theorem 2 to the composite hypothesis h = f∘g:

R_T(f∘g) ≤ R_S(f∘g) + (1/2) d_{FG∆FG}(p_S, p_T) + λ_{FG}.

This bound differs in two ways from the previous bound, which was based only on F: the best in-class joint risk now minimizes over both F and G, i.e., λ_{FG} = min_{f′∈F, g′∈G} [R_S(f′∘g′) + R_T(f′∘g′)], which is smaller than λ_F(g) and reflects the fact that we are learning both f and g. In return, the divergence term d_{FG∆FG}(p_S, p_T) becomes larger than the one in the previous bound. To better understand these tradeoffs, we will reformulate the bound to be more interpretable. To this end, we define a version of the H∆H-divergence that explicitly measures the variation of the embeddings in G:

Definition 3. For two domain distributions p_S and p_T over X, an encoder class G, and a predictor class F, the F_{G∆G}-divergence between p_S and p_T is

d_{F_{G∆G}}(p_S, p_T) = 2 sup_{f∈F; g,g′∈G} |R_{p_S}(f∘g, f∘g′) − R_{p_T}(f∘g, f∘g′)|.

Importantly, the F_{G∆G}-divergence is smaller than the FG∆FG-divergence, since the two hypotheses in the supremum, f∘g and f∘g′, share the same predictor f.

Theorem 4. For all f ∈ F and g ∈ G,

R_T(f∘g) ≤ R_S(f∘g) + (1/2) d_{F∆F}(p_S^g, p_T^g) + (1/2) d_{F_{G∆G}}(p_S, p_T) + λ_{FG}(g),

where λ_{FG}(g) is the best in-class joint risk defined as

λ_{FG}(g) = min_{f′∈F, g′∈G} [R_S(f′∘g′) + R_T(f′∘g′) + 2 R_S(f′∘g)].

We prove all theoretical results in the Appendix.
This target generalization bound is small if (C1) the source risk is small, (C2) the latent divergence is small (because the domains are well aligned and/or F is restricted), (C3) the complexity of G is restricted to avoid overfitting of alignments, and (C4) good source and target risk is in general achievable with F and G.

Comparison to Previous Bounds. The last two terms in Theorem 2 express a similar complexity tradeoff, but with respect to the overall hypothesis class H, which here combines encoder and predictor. Directly applying Theorem 2 to the composition H = FG treats both jointly and does not make the role of the embedding as explicit as Theorem 4. The earlier bound with a fixed embedding g focuses on the predictor class F. As a result, it captures embedding complexity even less explicitly: the first two terms of that bound and of Theorem 4 are the same, while its last term, λ_F(g), contains the target risk with the given g. Hence, that bound replaces (C3) and (C4) above by saying that F and the specific g (which is much harder to control, since in practice it is also optimized) can achieve good source and target risk. In contrast, Theorem 4 states an explicit complexity penalty on the variability of the embeddings and uses the fixed g only in the source risk, which can be better estimated empirically. If F is not too rich, the latent divergence can be empirically minimized by finding a well-aligned embedding. Hence, we can minimize the upper bound in Theorem 4 by minimizing the usual source loss and domain-invariant loss and by choosing F and G appropriately to trade off the complexity penalty d_{F_{G∆G}}, the latent divergence (which increases with the complexity of F and decreases with the complexity of G), and the best in-class joint risk (which decreases with the complexity of F and G).

To empirically verify the embedding complexity tradeoff, we keep the predictor class F fixed, vary the embedding class G, and minimize the source loss and alignment objective. Concretely, we train domain-adversarial neural networks (DANNs) on the Amazon reviews dataset (Book → Kitchen). Our hypothesis class is a multi-layer ReLU network, and the divergence is minimized against a discriminator. For more experimental details and results, please refer to Section 6. We train different models by varying the number of layers in the encoder while fixing the predictor to 4 layers. Figure 3.2(a) shows that, when increasing the number of layers in the encoder, the target error decreases initially and then increases as more layers are added. This supports our theory: the smaller encoders are not rich enough to allow for good alignments and a small λ_{FG}(g), but overly expressive encoders may overfit.

Predictor Complexity. Theoretically, the complexity of the predictor class F also affects the generalization bound in Theorem 4. Empirically, we found that the predictor complexity has a much weaker influence on the target risk (see experiments in Appendix B). Indeed, theoretically, while the complexity of F affects the latent divergence, if the alignment via g is very good, this divergence can still be small. In addition, the F_{G∆G}-divergence is more sensitive to the embedding complexity than to the predictor complexity. This offers a possible explanation for our observations. In the remainder of this paper, we focus on the role of the embedding.

Discussion. The results in this section indicate that, without constraining the embedding complexity, we may overfit the distribution alignment and thereby destroy label consistency as in Figure 1.
The bound suggests choosing the minimal-complexity encoder class G that is still expressive enough to minimize the latent-space divergence. Practically, this can be done by regularizing the encoder, e.g., restricting Lipschitz constants or norms of weight matrices. More explicitly, one may limit the number of layers of a neural network or apply inductive biases via network architectures. For instance, compared to fully connected networks, convolutional neural networks (CNNs) restrict the output representations to be spatially consistent with respect to the input.

Due to their wide empirical success, multilayer neural networks have been adopted for learning domain-invariant representations. Next, we adapt the bound in Theorem 4 to multilayer networks. Specifically, we consider the number of layers as an explicit measurement of complexity. This will lead to a simple yet effective algorithm that mitigates the negative effect of very rich encoders. Assume we have an N-layer feedforward neural network h ∈ H. The model h can be decomposed as h = f_i∘g_i ∈ F_iG_i = H for i ∈ {1, 2, ..., N−1}, where the embedding g_i is formed by the first through i-th layers and the predictor f_i is formed by the (i+1)-th through last layers. We can then rewrite the bound in Theorem 4 in layer-specific form: for each i,

R_T(h) ≤ R_S(h) + (1/2) d_{F_i∆F_i}(p_S^{g_i}, p_T^{g_i}) + (1/2) d_{F_{i,G_i∆G_i}}(p_S, p_T) + λ_{F_iG_i}(g_i),

where the second term is the latent divergence in the i-th layer. This yields N−1 layer-specific upper bounds. Importantly, minimizing the domain-invariant loss in different layers leads to different tradeoffs between fit and complexity penalties. This is reflected by the following inequalities that relate different layer divisions.

Proposition 5. (Monotonicity) In an N-layer feedforward neural network h = f_i∘g_i ∈ F_iG_i = H for i ∈ {1, 2, ..., N−1}, the following inequalities hold for all i ≤ j:

d_{F_j∆F_j}(p_S^{g_j}, p_T^{g_j}) ≤ d_{F_i∆F_i}(p_S^{g_i}, p_T^{g_i})   and   d_{F_{i,G_i∆G_i}}(p_S, p_T) ≤ d_{F_{j,G_j∆G_j}}(p_S, p_T).

Proposition 5 states that the latent divergence is monotonically decreasing and the complexity penalty monotonically increasing with respect to the embedding's depth. This is a tradeoff within the fixed combined hypothesis class H. A deeper embedding allows for better alignments and simultaneously reduces the depth (power) of F; both reduce the latent divergence. At the same time, it incurs a larger F_{G∆G}-divergence. This suggests that there might be an optimal division that minimizes the bound on the target risk. In practice, this translates into the question: in which intermediate layer should we optimize the domain-invariant loss? Figure 3.2(b) shows how the target error changes as a function of the layer division, with a total of n = 8 layers. Indeed, empirically there is an optimal division with minimum target error, suggesting that, for a fixed H, i.e., a fixed total network depth, not all divisions are equal. If the exact layer-specific bounds could be computed, one could simply select the layer division with the lowest bound. But this is in general computationally nontrivial. Instead, we take a different perspective. In fact, the layer-specific bounds all hold simultaneously, independent of the layer we select for distribution alignment.

Corollary 6. Let h be an N-layer feedforward neural network h = f_i∘g_i ∈ F_iG_i = H for i ∈ {1, 2, ..., N−1}. Then we have the layer-agnostic bound

R_T(h) ≤ min_{1≤i≤N−1} [ R_S(h) + (1/2) d_{F_i∆F_i}(p_S^{g_i}, p_T^{g_i}) + (1/2) d_{F_{i,G_i∆G_i}}(p_S, p_T) + λ_{F_iG_i}(g_i) ],

where λ_{F_iG_i}(g_i) is the best in-class joint risk defined in Theorem 4. The corollary implies that at least one of these bounds should be small.
Recall that the bounds depend on how well we can minimize the source risk and align the distributions via a sufficiently powerful embedding, while at the same time limiting the complexity of F and G. Corollary 6 points to various algorithmic ideas: simultaneously optimizing several bounds may result in approximately minimizing at least one of them, without having to select an optimal one; and we may attain a small latent divergence with a deeper encoder if we manage to restrict the complexity of G appropriately. It turns out that these two ideas are related. Optimizing the domain-invariant loss with alignment in one specific layer may result in large bounds for the other layers, due to the monotonicity of the two divergences (Proposition 5) and potentially non-aligned embeddings in lower layers. Hence, we propose to instead solve a multi-objective optimization problem where we jointly align the source and target distributions in multiple layers. Let L ⊆ {1, 2, ..., N−1} be a subset of layers. We minimize the weighted sum of divergences and refer to this objective as Multilayer Divergence Minimization (MDM):

min_{f,g} R_S(f∘g) + Σ_{i∈L} α_i d(p_S^{g_i}, p_T^{g_i}),

where α_i ≥ 0 is the weight of the divergence in layer i. This objective encourages alignment throughout the layer-wise embeddings in the network. First, a good alignment minimizes the latent divergence if F is not too rich. For the lower layers (shallow embeddings), this comes together with a very restricted class of embeddings, and hence limits both the latent divergence and the complexity penalty. Without the optimization across layers, the embeddings in lower layers are not driven towards alignment. Second, enforcing alignment in lower layers implicitly restricts the deeper embeddings in higher layers, since the embeddings are such that alignment happens early on. This effect may be viewed as an implicit regularization. From this perspective, the bounds for higher layers profit from low latent divergences (deeper embeddings and shallow predictors) and a restricted complexity of G. In general, one can simply set L = {1, 2, ..., N−1}. To improve computational efficiency, we can sub-sample layers or exclude the first and the last few layers. MDM is simple and general and can be combined with most algorithms for learning domain-invariant representations. For DANN, for instance, we minimize the divergence in multiple layers by adding discriminators.

Existing approaches for learning domain-invariant representations may be distinguished, e.g., by which divergence they measure between the source and target domain. Examples include domain-adversarial learning approaches, maximum mean discrepancy (MMD), and the Wasserstein distance. Other works improve performance by combining the domain-invariant loss with other objectives: penalizing violations of the cluster assumption; adding, next to the shared feature encoder for source and target domain, private encoders for each domain that capture domain-specific information; conditioning the domain discriminator on the cross-covariance of domain-specific embeddings and classifier predictions to leverage discriminative information; or, besides the usual distribution alignment, further aligning the input space with a generative model that maps the target input distribution to the source distribution. These previous works can be interpreted as adding additional regularization via auxiliary objectives, and thereby potentially reducing the complexity penalty. Some previous works also optimize the domain-invariant loss in multiple layers.
One approach fuses the representations from a bottleneck layer and a classifier layer by a tensor product and minimizes the domain divergence based on the aggregated representations. Joint adaptation networks (JADs) minimize the MMD in the last few layers to make the embeddings more transferable. MDM can be seen as a generalization of JADs that minimizes the domain divergence in nearly every layer, driven by a strong theoretical motivation. Importantly, minimizing the divergence only in the last few layers could still be suboptimal, since the embeddings may not be sufficiently regularized.

We test our theory and algorithm on several standard benchmarks: sentiment analysis (Amazon reviews dataset), digit classification (MNIST, MNIST-M, SVHN), and general object classification (Office-31). In all experiments, we train DANN, which measures the latent divergence via a domain discriminator (Jensen-Shannon divergence). A validation set from the source domain is used as an early-stopping criterion during learning. In all experiments, we use the Adam optimizer and a progressive training strategy for the discriminator. We primarily consider three types of complexity: number of layers, number of hidden neurons, and inductive bias (CNNs). In all experiments, we retrain each model 5 times and plot the mean and standard deviation of the target error. For evaluating MDM, we consider three weighting schemes: uniform weights (α_i = α_0), linearly decreasing weights (α_i = α_0 − c × i), and exponentially decreasing weights (α_i = α_0 exp(−c × i)), where c ≥ 0. The decreasing weights encourage the network to minimize the latent divergence in the first few layers, where the embedding complexity is low. This may also further restrict the deeper embeddings. More experimental details can be found in Appendix C.

Sentiment Classification. We first examine complexity tradeoffs on the Amazon reviews data, which has four domains (books (B), DVD disks (D), electronics (E), and kitchen appliances (K)) with binary labels (positive/negative review). Reviews are encoded into 5000-dimensional feature vectors of unigrams and bigrams. The hypothesis class is a multi-layer ReLU network. We show the results on B→K, K→B, B→D, and D→B in Figure 3. To probe the effect of the embedding complexity by itself, we fix the predictor class to 4 layers and vary the number of layers of the embedding. In agreement with the results in Section 3.2, the target error decreases initially and then increases as more layers are added to the encoder. Next, we probe the tradeoff when the total number of layers is fixed to 8. The bottom row of Figure 3 shows that there exists an optimal setting for all tasks. For MDM, we optimize alignment in all intermediate layers. The results suggest that MDM's performance is comparable to the hypothesis with the optimal division, without tuning the division. The three weighting schemes perform similarly, suggesting that MDM is robust to the weight selection.

Digit Classification. We next verify our findings on standard domain adaptation benchmarks: MNIST→MNIST-M (M→M-M) and SVHN→MNIST (S→M). We use standard CNNs as the hypothesis class; architecture details are in Appendix C. Number of Layers in Encoder: To analyze the effect of the embedding complexity, we augment the original two-layer CNN encoders with 1 to 6 additional CNN layers for M→M-M and 1 to 24 for S→M, leaving other settings unchanged. Figure 4(a) shows the results. Again, the target error decreases initially and increases as the encoder becomes more complex.
Notably, the target error increases by 19.8% in M→M-M and 8.8% in S→M compared to the optimal case when more layers are added to the encoder. Width of Hidden Layers: We also consider the width of hidden layers as a complexity measure, while fixing the depth of both encoder and predictor. The results are shown in Figure 4(b). This time, the effect on the target error is not significant compared to that of increasing the encoder depth. This suggests that depth plays a more important role than width in learning domain-invariant representations. Layer Division: Next, we fix the total number of CNN layers of the neural network to 7 and 26 for M→M-M and S→M, respectively, and optimize the domain-invariant loss in different intermediate layers. The results in Figure 4(c) again show a "U-curve", indicating the existence of an optimal division. Even with a fixed total network size (H), the performance gap between different divisions can still reach 19.5% in M→M-M and 10.4% in S→M. For MDM, L contains all the augmented CNN layers for M→M-M. For S→M, we sub-sample a CNN layer every four layers to form L. We also observe that MDM with all weighting schemes consistently achieves performance comparable to the best division in S→M, and even better performance in M→M-M. Inductive Bias: To investigate the importance of inductive bias in domain-invariant representations, we replace the CNN encoder with an MLP encoder. The results for M→M-M are shown in Figure 5. Compared to CNNs, which encode invariance via pooling and learned filters, MLPs do not have any such inductive bias and lead to worse performance. In fact, the target error with MLP-based domain adaptation is higher than when merely training on the source: without an appropriate inductive bias, learning domain-invariant representations can even worsen the performance.

Object Classification. Office-31, one of the most widely used benchmarks in domain adaptation, contains three domains, Amazon (A), Webcam (W), and DSLR (D), with 4,652 images and 31 categories. We show results for A→W, A→D, W→A, and D→A in Figure 6. To overcome the lack of training data, similar to prior work, we use a ResNet-50 pretrained on ImageNet for feature extraction. With the extracted features, we adopt multi-layer ReLU networks as the hypothesis class. Again, we increase the depth of the encoder while fixing the depth of the predictor to 2 and show the results in Figure 6. Even with a powerful feature extractor, the embedding complexity tradeoff still exists. Second, we fix the total network depth to 14 and optimize MDM, with L containing all even layers of the network. MDM achieves performance comparable to the best division for most of the tasks, albeit slightly worse performance in D→A.

In this paper, we theoretically and empirically analyzed the effect of the embedding complexity on the target risk in domain-invariant representations. We find a complexity tradeoff that has mostly been overlooked by previous work. In fact, without carefully selecting and restricting the encoder class, learning domain-invariant representations might even harm the performance. We further developed a simple yet effective algorithm to approximately optimize the tradeoff, achieving performance across tasks that matches the best network division, i.e., the best complexity tradeoff. Interesting future directions of work include other strategies for model selection and a more refined analysis and exploitation of the effect of inductive bias.

A.1 PROOF OF THEOREM 4

Theorem 4. For all f ∈ F and g ∈ G,

R_T(f∘g) ≤ R_S(f∘g) + (1/2) d_{F∆F}(p_S^g, p_T^g) + (1/2) d_{F_{G∆G}}(p_S, p_T) + λ_{FG}(g),

where λ_{FG}(g) is the best in-class joint risk defined as

λ_{FG}(g) = min_{f′∈F, g′∈G} [R_S(f′∘g′) + R_T(f′∘g′) + 2 R_S(f′∘g)].

Proof.
We first define the optimal composition hypothesis f*∘g* with respect to an encoder g to be the hypothesis which minimizes the error defining the best in-class joint risk, i.e.,

(f*, g*) = argmin_{f′∈F, g′∈G} [R_S(f′∘g′) + R_T(f′∘g′) + 2 R_S(f′∘g)].

By the triangle inequality for classification error,

R_T(f∘g) ≤ R_T(f*∘g*) + R_T(f∘g, f*∘g) + R_T(f*∘g, f*∘g*).   (13)

The second term on the R.H.S. of Eq. 13 can be bounded as

R_T(f∘g, f*∘g) ≤ R_S(f∘g, f*∘g) + (1/2) d_{F∆F}(p_S^g, p_T^g) ≤ R_S(f∘g) + R_S(f*∘g) + (1/2) d_{F∆F}(p_S^g, p_T^g),

since f∘g and f*∘g share the encoder g. The third term on the R.H.S. of Eq. 13 can be bounded as

R_T(f*∘g, f*∘g*) ≤ R_S(f*∘g, f*∘g*) + (1/2) d_{F_{G∆G}}(p_S, p_T) ≤ R_S(f*∘g) + R_S(f*∘g*) + (1/2) d_{F_{G∆G}}(p_S, p_T),

since f*∘g and f*∘g* share the predictor f*. Summing these bounds yields Theorem 4.

B PREDICTOR COMPLEXITY

We investigate the effect of predictor complexity on MNIST→MNIST-M. Following the procedure in Section 6, we augment the original predictor with 1 to 7 additional CNN layers and fix the number of layers in the encoder to 4 or vary the hidden width. The results are shown in Figure 7. The target error decreases only slightly as the number of layers in the predictor increases: even when we add 7 layers to the predictor, the target error decreases by only 0.9%, which is almost negligible. Therefore, in the main paper we focus on the embedding complexity, which is both theoretically and empirically more interesting.

C EXPERIMENTAL DETAILS

C.1 Amazon Reviews. The learning rate of the Adam optimizer is set to 1 × 10^−3 and the models are trained for 50 epochs. We adopt the original progressive training strategy for the discriminator, where the weight α for the domain-invariant loss is initiated at 0 and gradually changed to 1 using the following schedule:

α_p = 2/(1 + exp(−10 · p)) − 1,

where p is the training progress, changing linearly from 0 to 1. The architectures of the hypothesis and discriminator are as follows:

Encoder:
  nn.Linear, nn.ReLU
  [nn.Linear, nn.ReLU] × n (depends on the number of layers)
Predictor:
  [nn.Linear, nn.ReLU] × n (depends on the number of layers)
  nn.Linear, nn.Softmax
Discriminator:
  nn.Linear, nn.ReLU
  [nn.Linear, nn.ReLU] × 5
  nn.Linear, nn.Softmax

C.2 Digit Classification. The learning rate of the Adam optimizer is set to 1 × 10^−3 and the models are trained for 100 epochs. The weight α for the domain-invariant loss is initiated at 0 and gradually changed to 0.1 using the same schedule as in Section C.1. The architectures of the hypothesis and discriminator are as follows:

Encoder:
  nn.Conv2d(3, 64, kernel_size=5), nn.BatchNorm2d, nn.MaxPool2d, nn.ReLU
  nn.Conv2d(64, 128, kernel_size=5), nn.BatchNorm2d, nn.Dropout2d (only added for MNIST→MNIST-M), nn.MaxPool2d, nn.ReLU
  [nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.BatchNorm2d, nn.ReLU] × n (depends on the number of layers)
Predictor:
  [nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.BatchNorm2d, nn.ReLU] × n (depends on the number of layers)
  Flatten
  nn.Linear, nn.BatchNorm1d, nn.ReLU
  nn.Linear, nn.Softmax
Discriminator:
  nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU
  [nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU] × 4
  Flatten
  nn.Linear, nn.ReLU
  nn.Linear, nn.ReLU
  nn.Linear, nn.Softmax

In the hidden-width experiments, we treat the architectures above as the pivot and multiply their hidden widths by the given ratios.

C.3 Office-31. We use the features after the average pooling layer of a ResNet-50 pretrained on ImageNet for feature extraction. The learning rate of the Adam optimizer is set to 3 × 10^−4 and the models are trained for 100 epochs. The weight α for the domain-invariant loss is initiated at 0 and gradually changed to 1 using the same schedule as in Section C.1. The architectures of the hypothesis and discriminator are as follows:

Encoder:
  nn.Linear, nn.ReLU
  [nn.Linear, nn.ReLU] × n (depends on the number of layers)
Predictor:
  [nn.Linear, nn.ReLU] × n (depends on the number of layers)
  nn.Linear, nn.Softmax
Discriminator:
  [nn.Linear, nn.ReLU] × 6
  nn.Linear, nn.Softmax
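For reference, the progressive discriminator weight above can be computed as in the following sketch (ours; the sigmoid ramp-up is the standard DANN schedule, which we assume is what "the original progressive training strategy" refers to, and the constant 10 is the common default):

import math

def domain_loss_weight(p, target=1.0, gamma=10.0):
    """Progressive weight for the domain-invariant loss; p in [0, 1] is training progress."""
    return target * (2.0 / (1.0 + math.exp(-gamma * p)) - 1.0)

# target = 1 for Amazon reviews and Office-31, 0.1 for the digit experiments.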
In all the experiments, we minimize the divergence in multiple layers by adding an additional discriminator for each layer-specific representation, where these discriminators share the same architecture as in the standard setting. For the uniform weighting scheme (α_i = α_0), α_i is set to the normalized value of α from the standard setting. For the linearly decreasing scheme (α_i = α_0 − c × i), α_i decreases linearly from α_0 = α to 0. For the exponentially decreasing scheme (α_i = α_0 exp(−c × i)), α_0 is set to α and c increases linearly from 0 to 2.
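A minimal PyTorch sketch of this multi-discriminator MDM loss (our illustration; module and variable names are ours, and a gradient-reversal layer between encoder and discriminators is assumed but not shown):

import torch
import torch.nn as nn

def mdm_loss(feats_s, feats_t, discriminators, alphas):
    """Weighted sum of per-layer domain-adversarial losses.

    feats_s / feats_t: lists of layer activations for a source / target batch.
    discriminators: one domain classifier per aligned layer in L.
    alphas: per-layer weights alpha_i (uniform, linear, or exponential scheme).
    """
    bce = nn.BCEWithLogitsLoss()
    loss = 0.0
    for h_s, h_t, disc, alpha in zip(feats_s, feats_t, discriminators, alphas):
        logits_s = disc(h_s.flatten(1))
        logits_t = disc(h_t.flatten(1))
        # Domain labels: source = 0, target = 1.
        loss = loss + alpha * (bce(logits_s, torch.zeros_like(logits_s))
                               + bce(logits_t, torch.ones_like(logits_t)))
    return loss

# The total objective adds the source classification loss: task_loss + mdm_loss(...).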
We study the effect of the embedding complexity in learning domain-invariant representations and develop a strategy that mitigates sensitivity to it.
1,176
scitldr
We propose a new architecture termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER). Specifically, two variants of DATNet, i.e., DATNet-F and DATNet-P, are proposed to explore effective feature fusion between high and low resource. To address the noisy and imbalanced training data, we propose a novel Generalized Resource-Adversarial Discriminator (GRAD). Additionally, adversarial training is adopted to boost model generalization. We examine the effects of different components in DATNet across domains and languages and show that significant improvement can be obtained especially for low-resource data. Without augmenting any additional hand-crafted features, we achieve new state-of-the-art results on CoNLL and Twitter NER: 88.16% F1 for Spanish, 53.43% F1 for WNUT-2016, and 42.83% F1 for WNUT-2017.

Named entity recognition (NER) is an important step in most natural language processing (NLP) applications. It detects not only the type of a named entity, but also the entity boundaries, which requires deep understanding of the contextual semantics to disambiguate the different entity types of the same tokens. To tackle this challenging problem, most early studies were based on hand-crafted rules, which suffered from limited performance in practice. Current methods are devoted to developing learning-based algorithms, especially neural-network-based methods, and have been advancing the state of the art consecutively BID7 BID23 BID6 BID33. These end-to-end models generalize well on new entities based on features automatically learned from the data. However, when the annotated corpora are small, especially in the low-resource scenario BID56, the performance of these methods degrades significantly, since the hidden feature representations cannot be learned adequately.

Recently, more and more approaches have been proposed to address low-resource NER. Early works BID5 BID24 primarily assumed a large parallel corpus and focused on exploiting it to project information from the high to the low resource. Unfortunately, such a large parallel corpus may not be available for many low-resource languages. More recently, cross-resource word embeddings BID9 BID0 BID52 were proposed to bridge the low and high resources and enable knowledge transfer. Although the aforementioned transfer-based methods show promising performance in low-resource NER, there are two issues that deserve further investigation: 1) Representation Difference: they did not consider the representation difference across resources and enforced the feature representation to be shared across languages/domains; 2) Resource Data Imbalance: the training size of the high resource is usually much larger than that of the low resource. Existing methods neglect this difference in their models, resulting in poor generalization.

In this work, we present an approach termed Dual Adversarial Transfer Network (DATNet) to address the above issues in a unified framework for low-resource NER. Specifically, to handle the representation difference, we first investigate two architectures for the hidden layers (we use a bidirectional long short-term memory (BiLSTM) model as the hidden layer) for transfer. In the first one, all the units in the hidden layers are common units shared across languages/domains. The second one is composed of both private and common units, where the private part preserves independent language/domain information.
Extensive experiments are conducted to show their advantages over each other in different situations. On top of the common units, an adversarial discriminator (AD) loss is introduced to encourage resource-agnostic representations so that the knowledge from the high resource can be more compatible with the low resource. To handle the resource data imbalance issue, we further propose a variant of the AD loss, termed Generalized Resource-Adversarial Discriminator (GRAD), to impose resource weights during training so that low-resource and hard samples receive more attention. In addition, we create adversarial samples to conduct Adversarial Training (AT), further improving generalization and alleviating the over-fitting problem. We unify the two kinds of adversarial learning, i.e., GRAD and AT, into one transfer learning model, termed Dual Adversarial Transfer Network (DATNet), to achieve end-to-end training, and obtain state-of-the-art performance on a series of NER tasks: 88.16% F1 for CoNLL-2002 Spanish, and 53.43% and 42.83% F1 for WNUT-2016 and WNUT-2017, respectively. Different from prior works, we do not use additional hand-crafted features, and we do not use cross-lingual word embeddings when addressing the cross-language tasks.

NER is typically framed as a sequence labeling task which aims at automatic detection of named entities (e.g., person, organization, location, etc.) from free text BID35. The early works applied CRF, SVM, and perceptron models with hand-crafted features BID44 BID42 BID31. With the advent of deep learning, the research focus has been shifting towards deep neural networks (DNNs), which require little feature engineering and domain knowledge BID23 BID57. BID7 proposed a feed-forward neural network with a fixed-size window for each word, which failed to consider useful relations between long-distance words. To overcome this limitation, BID6 presented a bidirectional LSTM-CNNs architecture that automatically detects word- and character-level features. BID33 further extended it into a bidirectional LSTM-CNNs-CRF architecture, where a CRF module was added to optimize the output label sequence. Later work proposed a task-aware neural language model termed LM-LSTM-CRF, where character-aware neural language models are incorporated to extract character-level embeddings under a multi-task framework.

Transfer Learning for NER. Transfer learning can be a powerful tool for low-resource NER tasks. To bridge the high and low resource, transfer learning methods for NER can be divided into two types: parallel-corpora-based transfer and shared-representation-based transfer. Early works mainly focused on exploiting parallel corpora to project information between the high- and low-resource languages BID53 BID5 BID24 BID10. For example, BID5 and BID10 proposed to jointly identify and align bilingual named entities. On the other hand, shared-representation methods do not require parallel correspondence BID46. For instance, BID9 proposed cross-lingual word embeddings to transfer knowledge across resources. BID52 presented a transfer learning approach based on a deep hierarchical recurrent neural network (RNN), where full/partial hidden features between the source and target tasks are shared. BID38 BID39 utilized Wikipedia entity type mappings to improve low-resource NER. BID2 built massive multilingual annotators with minimal human expertise by using language-agnostic techniques.
BID36 created a cross-language NER system which works well for very minimal resources by translating annotated data from the high-resource language into the low-resource one. BID8 proposed character-level neural CRFs to jointly train on and predict low- and high-resource languages. BID40 propose a large-scale cross-lingual named entity dataset which contains 282 languages for evaluation. In addition, multi-task learning BID51 BID32 BID45 BID1 BID15 BID28 shows that jointly training on multiple tasks/languages helps improve performance. Different from transfer learning methods, multi-task learning aims at improving the performance on all the resources instead of the low resource only.

Adversarial Learning. Adversarial learning originates from Generative Adversarial Nets (GANs) BID12, which show impressive results in computer vision. Recently, many papers have tried to apply adversarial learning to NLP tasks. BID30 presented an adversarial multi-task learning framework for text classification. BID14 applied an adversarial discriminator to POS tagging for Twitter. BID18 proposed a language discriminator to enable language-adversarial training for cross-language POS tagging. Apart from the adversarial discriminator, adversarial training is another concept, originally introduced by BID49 BID13 to improve the robustness of image classification models by injecting malicious perturbations into input images. Recently, BID37 proposed a semi-supervised text classification method applying adversarial training, where for the first time adversarial perturbations were added onto word embeddings. BID54 applied adversarial training to POS tagging. Different from all these adversarial learning methods, our method integrates both the adversarial discriminator and adversarial training in a unified framework to enable end-to-end training.

In this section, we introduce DATNet in more detail. We first describe a base model for NER and then discuss the two proposed transfer architectures for DATNet. We follow state-of-the-art models for the NER task BID23 BID6 BID33, i.e., an LSTM-CNNs-CRF based structure, to build the base model. It consists of the following pieces: character-level embedding, word-level embedding, a BiLSTM for feature representation, and a CRF as the decoder. The character-level embedding takes the sequence of characters in a word as atomic units to derive a word representation that encodes morphological information, such as root, prefix, and suffix. These character features are usually encoded by a character-level CNN or BiLSTM and then concatenated with the word-level embedding to form the final word vectors. On top of them, the network further incorporates contextual information using a BiLSTM to output new feature representations, which are subsequently fed into the CRF layer to predict the label sequence. Figure 1(a) shows the architecture of the base model. Previous works have shown that character features can boost sequence labeling performance by capturing morphological and semantic information BID28. For a low-resource dataset to obtain high-quality word features, character features learned from another language/domain may provide crucial information for labeling, especially for rare and out-of-vocabulary words. Character-level encoders usually use BiLSTM BID23 or CNN BID6 BID33 approaches.
In practice, BID47 observed that the difference between the two approaches is statistically insignificant in sequence labeling tasks, but the character-level CNN is more efficient and has fewer parameters. Thus, we use a character-level CNN and share character features between the high- and low-resource tasks to enhance the representations of the low resource. To learn a better word-level representation, we concatenate the character-level features of each word with a latent word embedding as

w_i = [w_i^char; w_i^emb],

where the latent word embedding w_i^emb is initialized with pre-trained embeddings and fixed during training. One unique characteristic of NER is that both the historical and future input for a given time step could be useful for label inference. To exploit this characteristic, we use a bidirectional LSTM architecture BID16 to extract contextualized word-level features. In this way, we can gather information from the past and the future for a particular time frame t as follows:

→h_t = LSTM(w_t, →h_{t−1}),   ←h_t = LSTM(w_t, ←h_{t+1}).

After the LSTM layer, the representation of a word is obtained by concatenating its left and right context representations,

h_t = [→h_t; ←h_t].

To account for the resource representation difference in word-level features, we introduce two kinds of transferable word-level encoders in our model, namely DATNet-Full Share (DATNet-F) and DATNet-Part Share (DATNet-P). In DATNet-F, all the BiLSTM units are shared by both resources, while the word embeddings for the different resources are separate; an illustration is depicted in Figure 1(b). Different from DATNet-F, DATNet-P decomposes the BiLSTM units into a shared component and a resource-specific one, as shown in Figure 1(c).

In order to make the feature representations extracted from the source domain more compatible with those from the target domain, we encourage the outputs of the shared BiLSTM part to be resource-agnostic by constructing a resource-adversarial discriminator, inspired by the Language-Adversarial Discriminator proposed by BID18. Unfortunately, previous works did not consider the imbalance of training sizes between the two resources. Specifically, the target domain consists of very limited labeled training data, e.g., 10 sentences; in contrast, labeled training data in the source domain are much richer, e.g., 10k sentences. If such imbalance were not considered during training, stochastic gradient descent (SGD) optimization would make the model biased towards the high resource BID27. To address this imbalance problem, we impose a weight α on the two resources to balance their influence. However, in the experiments we also observe that the easily classified samples from the high resource comprise the majority of the loss and dominate the gradient. To overcome this issue, we further propose the Generalized Resource-Adversarial Discriminator (GRAD) to enable adaptive weights for each sample (note that a sample here means a sentence from either resource), which focuses model training on hard samples. To compute the loss of GRAD, the output sequence of the shared BiLSTM is first encoded into a single vector via a self-attention module BID3 and then projected into a scalar r via a linear transformation.
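This discriminator head can be sketched in PyTorch as follows (our illustration; the final sigmoid, which turns the linear projection into a probability, is our assumption):

import torch
import torch.nn as nn

class ResourceDiscriminatorHead(nn.Module):
    """Self-attention pooling of BiLSTM outputs followed by a projection to a scalar r."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.attn = nn.Linear(hidden_dim, 1)
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, h):                                # h: (batch, seq_len, hidden_dim)
        weights = torch.softmax(self.attn(h), dim=1)     # attention over time steps
        pooled = (weights * h).sum(dim=1)                # (batch, hidden_dim)
        r = torch.sigmoid(self.proj(pooled)).squeeze(-1)
        return r  # interpreted as the probability that the sentence is from the source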
The loss function of the resource classifier is formulated as

ℓ_GRAD = −Σ_i [ I_{i∈D_S} α (1 − r_i)^γ log r_i + I_{i∈D_T} (1 − α) r_i^γ log(1 − r_i) ],

where I_{i∈D_S} and I_{i∈D_T} are indicator functions denoting whether sentence i is from the high resource (source) or the low resource (target), respectively, and r_i is interpreted as the predicted probability that sentence i comes from the source; α is a weighting factor to balance the loss contributions from the high and low resource; the factor (1 − r_i)^γ (or r_i^γ) controls the loss contribution from individual samples by measuring the discrepancy between prediction and true label (easy samples have smaller contribution); and γ scales the contrast of the loss contributions from hard and easy samples. In practice, the value of γ does not need much tuning and is usually set to 2 in our experiments. Intuitively, the weighting factors α and (1 − r_i)^γ reduce the loss contributions from the high resource and from easy samples, respectively. Note that though the resource classifier is optimized to minimize the resource classification error, when the gradients originating from the resource classification loss are back-propagated to the model parts other than the resource classifier, they are negated for parameter updates, so that these bottom layers are trained to be resource-agnostic.

The label decoder induces a probability distribution over sequences of labels, conditioned on the word-level encoder features. In this paper, we use a linear-chain model based on a first-order Markov chain structure, termed the chain conditional random field (CRF) BID22, as the decoder. In this decoder, there are two kinds of cliques: local cliques and transition cliques. Specifically, local cliques correspond to the individual elements in the sequence. Transition cliques, on the other hand, reflect the evolution of states between two neighboring elements at times t−1 and t, and we define the transition distribution as θ. Formally, a linear-chain CRF can be written as

p(y|h_{1:T}) = (1/Z(h_{1:T})) exp( Σ_{t=1}^{T} θ_{y_{t−1}, y_t} + Σ_{t=1}^{T} W_{y_t} h_t ),

where Z(h_{1:T}) is a normalization term and y = y_{1:T} is the sequence of predicted labels. Model parameters are optimized to maximize this conditional log-likelihood, which acts as the objective function of the model. We define the loss functions for the source and target resources as ℓ_S = −Σ_i log p(y^(i)|h^(i)_{1:T}) and ℓ_T = −Σ_i log p(y^(i)|h^(i)_{1:T}), computed over source and target training sentences, respectively. So far, our model can be trained end-to-end with standard back-propagation by minimizing the following loss:

L = ℓ_GRAD + ℓ_S + ℓ_T.    (2)

Recent works have demonstrated that deep learning models are fragile to adversarial examples BID13. In computer vision, those adversarial examples can be constructed by changing a very small number of pixels, which are virtually indistinguishable to human perception BID43. Recently, adversarial samples have been widely incorporated into training to improve the generalization and robustness of the model, which is so-called adversarial training (AT) BID37. It emerges as a powerful regularization tool to stabilize training and prevent the model from being stuck in a local minimum. In this paper, we explore AT in the context of NER. To be specific, we prepare an adversarial sample by adding to the original sample a perturbation bounded by a small norm ε so as to maximize the loss function:

η = argmax_{‖η‖ ≤ ε} ℓ(Θ; x + η),

where Θ is the current set of model parameters. However, we cannot calculate the value of η exactly in general, because the exact optimization with respect to η is intractable in neural networks. Following the strategy in BID13, this value can be approximated by linearizing the loss around x,

η = ε g / ‖g‖_2 with g = ∇_x ℓ(Θ; x),

where ε can be determined on the validation set.
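The two components above, the GRAD loss and the linearized adversarial perturbation, can be sketched as follows (our PyTorch illustration; model_loss is a hypothetical closure that returns the training loss as a function of an embedding tensor):

import torch

def grad_loss(r, is_source, alpha=0.5, gamma=2.0, eps=1e-8):
    """GRAD: focal-style resource classification loss.

    r: predicted probability that each sentence comes from the source.
    is_source: boolean tensor marking source sentences.
    """
    src = alpha * (1 - r) ** gamma * torch.log(r + eps)
    tgt = (1 - alpha) * r ** gamma * torch.log(1 - r + eps)
    return -torch.where(is_source, src, tgt).sum()
    # A gradient-reversal step negates this loss's gradients below the classifier.

def adversarial_perturbation(emb, model_loss, epsilon=5.0):
    """FGSM-style perturbation on an embedding tensor: eta = eps * g / ||g||_2."""
    emb = emb.detach().requires_grad_(True)
    loss = model_loss(emb)
    grad, = torch.autograd.grad(loss, emb)
    eta = epsilon * grad / (grad.norm() + 1e-12)
    return (emb + eta).detach()  # adversarial embedding: x_adv = x + eta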
In this way, adversarial examples are generated by adding small perturbations to the inputs in the direction that most significantly increases the loss function of the model. We find such an η against the current model parameterized by Θ at each training step and construct an adversarial example by x_adv = x + η_x. Note that we generate these adversarial examples on the word and character embedding layers, respectively, as shown in Figure 1(b) and 1(c). Then, the classifier is trained on the mixture of original and adversarial examples to improve generalization. To this end, we augment the loss in Eqn. 2 and define the loss function for adversarial training as

ℓ_AT = ℓ(Θ; x) + ℓ(Θ; x_adv),

where ℓ(Θ; x) and ℓ(Θ; x_adv) represent the loss from an original example and from its adversarial counterpart, respectively. Note that we present AT in a general form for convenience of presentation; for different samples, the loss and parameters correspond to their counterparts. For example, for the source data with word embedding w_S, the loss for AT is defined as ℓ_AT = ℓ(Θ; w_S) + ℓ(Θ; w_{S,adv}) with w_{S,adv} = w_S + η_{w_S} and ℓ = ℓ_GRAD + ℓ_S. Similarly, we can compute the perturbations η_c for the character embeddings and η_{w_T} for the target word embeddings.

In order to evaluate the performance of DATNet, we conduct experiments on the following widely used NER datasets: CoNLL-2003 English NER BID20, CoNLL-2002 Spanish & Dutch NER BID19, and WNUT-2016 English Twitter NER. The statistics of these datasets are described in Table 1. We use the official split of training/validation/test sets. Since our goal is to study the effects of transferring knowledge from a high-resource dataset to a low-resource dataset, unlike previous works BID7 BID6 BID52, which append one-hot gazetteer features to the input of the CRF layer, and the works BID41 BID25 BID1, which introduce orthographic features as additional input for learning social media NER in tweets, we do not experiment with hand-crafted features and only consider word and character embeddings as the inputs of our model. Note that we used only the training set for model training for all datasets except the WNUT-2016 NER dataset; on this dataset, all previous studies merged the training and validation sets for training, and we followed the same practice for fair comparison. In addition to the CoNLL and WNUT datasets, we also experiment on the cross-language named entity dataset described in BID40, which contains datasets for 282 languages, to evaluate our methods and investigate the transferability across different linguistic families and branches in both low- and high-resource scenarios. We choose 9 languages in our experiments, where Galician (gl), West Frisian (fy), Ukrainian (uk), and Marathi (mr) are target languages; the corresponding source languages are Spanish (es), Dutch (nl), Russian (ru), and Hindi (hi); and Arabic (ar) is an additional source language from a different linguistic family. Following the setting in BID8, we simulate the low- and high-resource scenarios by creating 100- and 10,000-sentence splits for training the target language datasets, respectively. We then create 1,000-sentence splits for validation and test, respectively. For the source languages, we create a 10,000-sentence split for training only. For the high-resource scenario, we only conduct experiments on Galician (gl-high) and Ukrainian (uk-high). The list of selected datasets is described in Table 2.
For the named entity datasets selected from BID40, we use 300-dimensional pre-trained word embeddings trained with the fastText package on Wikipedia BID4, and 30-dimensional randomly initialized character embeddings are used for all the datasets. We set the filter number to 20 for the char-level CNN and the dimension of the hidden states of the word-level LSTM to 200 for both the base model and DATNet-F. For DATNet-P, we set the source, shared, and target LSTM dimensions to 100 each. Parameter optimization is performed with the Adam optimizer BID21 with gradient clipping at 5.0 and a learning rate decay strategy. We set the initial learning rate to β_0 = 0.001 for all experiments. At each epoch t, the learning rate β_t is updated as β_t = β_0/(1 + ρ × t), where ρ = 0.05 is the decay rate. To reduce over-fitting, we also apply Dropout BID48 to the embedding layer and to the output of the LSTM layer. In this section, we compare our approach with state-of-the-art (SOTA) methods on the CoNLL and WNUT benchmark datasets. In the experiment, we exploit all the source data (i.e., CoNLL-2003 English NER) and target data to improve performance on the target tasks. The averaged results with standard deviations over 10 repeated runs are summarized in TAB3, and we also report the best result on each task for fair comparison with other SOTA methods. From the results, we observe that incorporating the additional resource helps improve performance. TAB4 summarizes the results of our methods under different cross-language transfer settings as well as the comparison with BID8. In this experiment, we study the transferability between languages not only from the same linguistic family and branch, but also from different linguistic families or branches. According to the results, DATNets outperform the transfer method of BID8 for both low- and high-resource scenarios in the same-linguistic-family-and-branch (i.e., in-family in-branch) transfer case. We also observe that: 1) For the low-resource scenario, transfer learning is significantly helpful for improving the performance on target datasets for both same and different linguistic family or branch (i.e., in/cross-family in/cross-branch) transfer cases, while the improvements are more prominent in the in-family in-branch case. 2) For the high-resource scenario, i.e., when the target-language data is sufficient, the improvements from transfer learning are less distinct than in the low-resource scenario under the in-family in-branch case. We also find that transferring knowledge from Arabic to Galician and Ukrainian has no effect. We suspect that this is caused by the great linguistic differences between source and target languages, since, for example, Arabic and Galician are from totally different linguistic families. In this section, we investigate the improvements from transfer learning under multiple low-resource settings with partial target data. To simulate a low-resource setting, we randomly select subsets of target data with data ratios of 0.05, 0.1, 0.2, 0.4, 0.6, and 1.0. For example, 20,748 training tokens are sampled from the training set under a data ratio of r = 0.1 for the CoNLL-2002 Spanish NER dataset (cf. TAB1). The results for cross-language and cross-domain transfer are shown in Figures 2(a) and 2(b), respectively, where we compare the results for each part of DATNet under various data ratios.
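The decay schedule above is simple to reproduce. A minimal sketch of the stated rule (the function name is ours, not from the paper's code):

def learning_rate(epoch, beta_0=0.001, rho=0.05):
    # Decay rule from the text: beta_t = beta_0 / (1 + rho * t).
    return beta_0 / (1.0 + rho * epoch)

# Example: learning_rate(0) == 0.001, learning_rate(10) ~= 0.000667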
From those figures, we make the following observations: 1) both adversarial training and the adversarial discriminator in DATNet consistently contribute to the performance improvement; 2) the transfer learning component in DATNet consistently improves over the base model, and the improvement margin is more substantial when the target data ratio is lower. For example, when the data ratio is 0.05, the DATNet-P model outperforms the base model by more than 4% absolute in F1-score on Spanish NER, and the DATNet-F model improves by around 13% absolute in F1-score over the base model on WNUT-2016 NER. In the second experiment, we further investigate DATNet in extremely low-resource cases, e.g., where the number of target training sentences is 10, 50, 100, 200, 500, or 1,000. This setting is quite challenging, and few previous works have studied it before. The results are summarized in TAB5. We have two interesting observations (for other tasks/languages we made similar observations; we only report CoNLL-2002 Spanish and WNUT-2016 Twitter due to the page limit): 1) DATNet-F outperforms DATNet-P on cross-language transfer when the target resource is extremely low; however, this situation is reversed when the target dataset size is large enough (for this specific dataset, the threshold is 100 sentences); 2) DATNet-F is always superior to DATNet-P on cross-domain transfer. The first observation arises because DATNet-F, with more shared hidden units, transfers knowledge more efficiently than DATNet-P when the data size is extremely small. The second arises because cross-domain transfer stays within the same language, so more knowledge is common between the source and target domains, requiring more shared hidden features to carry this knowledge than in cross-language transfer. Therefore, for cross-language transfer with extremely low resources and for cross-domain transfer, we suggest using the DATNet-F model to achieve better performance; for cross-language transfer with relatively more training data, the DATNet-P model is preferred. In the proposed DATNet, both GRAD and AT play important roles for low-resource NER. In this experiment, we further investigate how GRAD and AT help transfer knowledge across languages/domains. In the first experiment (we used a data ratio of ρ = 0.5 for training the model and randomly selected 10k testing data points for visualization), we used t-SNE BID34 to visualize the feature distributions of the BiLSTM outputs without AD, with normal AD (GRAD without considering data imbalance), and with the proposed GRAD in Figure 3. From this figure, we can see that the GRAD in DATNet makes the distributions of extracted features from the source and target datasets much more similar by taking the data imbalance into account, which indicates that the outputs of the BiLSTM are resource-invariant. To better understand the working mechanism, TAB6 further reports a quantitative performance comparison between models with different components. We observe that GRAD shows a stable superiority over normal AD regardless of the other components. There is no consistent winner between DATNet-P and DATNet-F across settings: the DATNet-P architecture is more suitable for cross-language transfer, while DATNet-F is more suitable for cross-domain transfer. From the previous results, we know that AT helps enhance the overall performance by adding perturbations to inputs with the limit ε = 5, i.e., ‖η‖₂ ≤ 5.
In this experiment, we further investigate how the target perturbation bound ε_{w_T}, with the source perturbation bound fixed at ε_{w_S} = 5, affects knowledge transfer in AT; the results on Spanish NER are summarized in TAB7. The results generally indicate that less training data requires a larger ε to prevent over-fitting, which further validates the necessity of AT in the low-resource case. Finally, we analyze the discriminator weight α in GRAD; the results are summarized in TAB8. From the results, it is interesting to find that α is roughly proportional to the data ratio ρ, which means that more target training data requires a larger α (i.e., a smaller 1 − α, reducing the training emphasis on the target domain) to achieve better performance. In this paper, we develop a transfer learning model, DATNet, for low-resource NER, which aims at addressing two problems remaining in existing work, namely representation difference and resource data imbalance. We introduce two variants of DATNet, DATNet-F and DATNet-P, which can be chosen according to the cross-language/cross-domain use case and the target dataset size. To improve model generalization, we propose dual adversarial learning strategies, i.e., AT and GRAD. Extensive experiments show the superiority of DATNet over existing models; it achieves new state-of-the-art performance on the CoNLL NER and WNUT NER benchmark datasets.
We propose a new architecture termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER) and achieve new state-of-the-art performances on CoNLL and Twitter NER.
1,177
scitldr
Generative adversarial networks (GANs) have evolved into one of the most successful unsupervised techniques for generating realistic images. Even though it has recently been shown that GAN training converges, GAN models often end up in local Nash equilibria that are associated with mode collapse or otherwise fail to model the target distribution. We introduce Coulomb GANs, which pose the GAN learning problem as a potential field, where generated samples are attracted to training set samples but repel each other. The discriminator learns a potential field while the generator decreases the energy by moving its samples along the vector (force) field determined by the gradient of the potential field. Through decreasing the energy, the GAN model learns to generate samples according to the whole target distribution and does not only cover some of its modes. We prove that Coulomb GANs possess only one Nash equilibrium, which is optimal in the sense that the model distribution equals the target distribution. We show the efficacy of Coulomb GANs on LSUN bedrooms, CelebA faces, CIFAR-10, and Google Billion Word text generation. Generative adversarial networks (GANs) excel at constructing realistic images BID28 BID24 BID3 and text BID18. In GAN learning, a discriminator network guides the learning of another, generative network. This procedure can be considered a game between the generator, which constructs synthetic data, and the discriminator, which separates synthetic data from training set data BID16. The generator's goal is to construct data which the discriminator cannot tell apart from training set data. GAN convergence points are local Nash equilibria. At these local Nash equilibria neither the discriminator nor the generator can locally improve its objective. Despite their recent successes, GANs have several problems. First (I), until recently it was not clear whether gradient-based GAN learning could in general converge to one of the local Nash equilibria BID38 BID15; it is even possible to construct counterexamples BID16. Second (II), GANs suffer from "mode collapsing", where the model generates samples only in certain regions, which are called modes. While these modes contain realistic samples, the variety is low and only a few prototypes are generated. Mode collapsing is less likely if the generator is trained with batch normalization, since the network is bound to create a certain variance among its generated samples within one batch. However, batch normalization introduces fluctuations of normalizing constants which can be harmful BID16. To avoid mode collapsing without batch normalization, several methods have been proposed BID5 BID38. Third (III), GANs cannot assure that the density of training samples is correctly modeled by the generator. The discriminator only tells the generator whether a region is more likely to contain samples from the training set or synthetic samples. Therefore the discriminator can only distinguish the support of the model distribution from the support of the target distribution. Beyond matching the support of distributions, GANs with proper objectives may learn to locally align model and target densities via averaging over many training examples. On a global scale, however, GANs fail to equalize model and target densities. The discriminator does not inform the generator globally where probability mass is missing. Consequently, standard GANs are not assured to capture the global sample density and are prone to neglect large parts of the target distribution.
The next paragraph gives an example of this. Fourth (IV), the discriminator of GANs may forget previous modeling errors of the generator, which then may reappear, a property that leads to oscillatory behavior instead of convergence BID16. Recently, problem (I) was solved by proving that GAN learning does indeed converge when discriminator and generator are learned using a two time-scale learning rule BID20. Convergence means that the expected SGD-gradient of both the discriminator objective and the generator objective are zero. Thus, neither the generator nor the discriminator can locally improve, i.e., learning has reached a local Nash equilibrium. However, convergence alone does not guarantee good generative performance. It is possible to converge to sub-optimal solutions which are local Nash equilibria. Mode collapse is a special case of a local Nash equilibrium associated with sub-optimal generative performance. For example, assume a two-mode real world distribution where one mode contains too few and the other mode too many generator samples. If no real world samples lie between these two distinct modes, then the discriminator penalizes moving generated samples outside the modes. Therefore the generated samples cannot be correctly distributed over the modes. Thus, standard GANs cannot capture the global sample density, and the resulting generators are prone to neglect large parts of the real world distribution. A more detailed example is given in Appendix Section A.1. In this paper, we introduce a novel GAN model, the Coulomb GAN, which has only one Nash equilibrium. We later show that this Nash equilibrium is optimal, i.e., the model distribution matches the target distribution. We propose Coulomb GANs to avoid the GAN shortcomings (II) to (IV) by using a potential field created by point charges, analogously to the electric field in physics. The next section introduces the idea of learning in a potential field and proves that its only solution is optimal. We then show how learning the discriminator and generator works in a Coulomb GAN and discuss the assumptions needed for our optimality proof. In Section 3 we see that the Coulomb GAN does indeed work well in practice and that the samples it produces have very large variability and appear to capture the original distribution very well. Related Work. Several GAN approaches have been suggested for bringing the target and model distributions into alignment using not just local discriminator information: Geometric GANs combine samples via a linear support vector machine which uses the discriminator outputs as samples, and are therefore much more robust to mode collapsing BID31. Energy-Based GANs BID41 and their later improvement, BEGANs BID3, optimize an energy landscape based on auto-encoders. McGANs match the mean and covariance of synthetic and target data and are therefore better suited than standard GANs to approximate the target distribution BID34. In a similar fashion, Generative Moment Matching Networks BID30 and MMD nets BID12 directly optimize a generator network to match a training distribution by using a loss function based on the maximum mean discrepancy (MMD) criterion BID17. These approaches were later expanded to include an MMD criterion with learnable kernels and discriminators. The MMD criterion that these later approaches optimize has a form similar to the energy function that Coulomb GANs optimize (cf. Eq.).
However, all MMD approaches end up using either Gaussian or Laplace kernels, which are not guaranteed to find the optimal solution where the model distribution matches the target distribution. In contrast, the Plummer kernel employed in this work has been shown to lead to the optimal solution BID22. We show that even a simplified version of the Plummer kernel, the low-dimensional Plummer kernel, ensures that gradient descent converges to the optimal solution, as stated by Theorem 1. Furthermore, most MMD GAN approaches use the MMD directly as the loss function even though the number of possible samples in a mini-batch is limited; therefore MMD approaches face a sampling problem in high-dimensional spaces. The Coulomb GAN instead learns a discriminator network that gradually improves its approximation of the potential field via learning on many mini-batches.
Figure 1: The vector field of a Coulomb GAN. The basic idea behind the Coulomb GAN: true samples (blue) and generated samples (red) create a potential field (scalar field). Blue samples act as sinks that attract the red samples, which repel each other. The superimposed vector field shows the forces acting on the generator samples to equalize potential differences, and the color shows the potential at each position. Best viewed in color.
The discriminator network also tracks the slowly changing generator distribution during learning. Most importantly, however, our approach is, to the best of our knowledge, the first one for which optimality, i.e., the ability to perfectly learn a target distribution, can be proved. The use of the Coulomb potential for learning is not new. Coulomb Potential Learning was proposed to store arbitrarily many patterns in a potential field with perfect recall and without spurious patterns BID35. Another related work is the Potential Support Vector Machine (PSVM), which minimizes Coulomb potential differences BID21 BID23. BID22 also used a potential function based on Plummer kernels for optimal unsupervised learning, on which we base our work on Coulomb GANs. We assume data samples a ∈ R^m for a model density p_x and a target density p_y. The goal of GAN learning is to modify the model so as to obtain p_x = p_y. We define the difference of densities ρ(a) = p_y(a) − p_x(a), which should be pushed toward zero for all a ∈ R^m during learning. In the GAN setting, the discriminator D(a) is a function D: R^m → R that learns to discriminate between generated and target samples and predicts how likely it is that a is sampled from the target distribution. In conventional GANs, D(a) is usually optimized to approximate the probability of seeing a target sample, or ρ(a), or some similar function. The generator G(z) is a continuous function G: R^n → R^m which maps some n-dimensional random variable z into the space of target samples. z is typically sampled from a multivariate Gaussian or uniform distribution. In order to improve the generator, a GAN uses the gradient of the discriminator ∇_a D(a) with respect to the discriminator input a = G(z) for learning. The objective of the generator is a scalar function D(G(z)), therefore the gradient of the objective function is just a scaled version of the gradient ∇_a D(a), which then propagates further to the parameters of G. This gradient ∇_a D(a) tells the generator in which direction ρ(a) becomes larger, i.e., in which direction the ratio of target examples increases.
The generator changes slightly so that z is now mapped to a new a′ = G′(z), moving the sample generated by z a little bit towards the direction where ρ(a) was larger, i.e., where target examples were more likely. However, ρ(a) and its derivative only take into account the local neighborhood of a, since regions of the sample space that are distant from a do not have much influence on ρ(a). Regions of data space that have strong support in p_y but not in p_x will not be noticed by the generator via discriminator gradients. This restriction to local environments hampers GAN learning significantly. The theoretical analysis of GAN learning can be done at three different levels: (i) in the space of distributions p_x and p_y, regardless of the fact that p_x is realized by G and p_z; (ii) in the space of functions G and D, regardless of the fact that G and D are typically realized in a parametric form, i.e., as neural networks; or (iii) in the space of the parameters of G and D. Level (i) has been used to prove convergence of GAN learning (Proposition 2 of prior work) in a hypothetical scenario where the learning algorithm operates by making small, local moves in p_x space. In order to see that levels (i) and (ii) should both be understood as hypothetical scenarios, remember that in all practical implementations, p_x can only be altered implicitly by making small changes to the generator function G, which in turn can only be changed implicitly by small steps in its parameters. Even if we assume that the mapping from a distribution p_x to the generator G that induced it exists and is unique, this mapping from p_x to the space of G is not continuous. To see this, consider changing a distribution p_x^1 to a new distribution p_x^2 by moving a small amount ε of its density to an isolated region in space where p_x^1 has no support. Let us further assume this region has distance d to any other region of support of p_x^1. By letting ε → 0, the distance between p_x^1 and p_x^2 becomes smaller, yet the distance between the inducing generator functions G_1 and G_2 (e.g., using the supremum norm on bounded functions) does not tend to zero, because for at least one function input z we have ‖G_1(z) − G_2(z)‖ ≥ d. Because of this, we need to go further than the distribution space when analyzing GAN learning. In practice, when learning GANs, we are restricted to small steps in parameter space, which in turn lead to small steps in function space and finally to small steps in distribution space. But not all small steps in distribution space can be realized this way, as shown in the example above. This causes local Nash equilibria in the function space, because even though in distribution space it would be easy to escape by making small steps, such a step would require very large changes in function space and is thus not realizable. In this paper we show that Coulomb GANs do not exhibit any local Nash equilibria in the space of the functions G and D. To the best of our knowledge, this is the first formulation of GAN learning that can guarantee this property. Of course, Coulomb GANs are learned as parametrized neural networks, and as we discuss in Subsection 2.4.2, Coulomb GANs are not immune to the usual issues that arise from parameter learning, such as over- and underfitting, which can cause local Nash equilibria due to a bad choice of parameters. If the density p_x or p_y approaches a Dirac delta-distribution, gradients vanish since the density approaches zero except at the exact location of data points.
Similarly, electric point charges are often represented by Dirac delta-distributions; however, the electric potential created by a point charge has influence everywhere in space, not just locally. The electric potential (Coulomb potential) created by a point charge Q is Φ_C = (1/(4πε_0)) · Q/r, where r is the distance to the location of Q and ε_0 is the dielectric constant. Motivated by this electric potential, we introduce a similar concept for GAN learning: instead of the difference of densities ρ(a), we consider a potential function Φ(a) defined as Φ(a) = ∫ ρ(b) k(a, b) db, with some kernel k(a, b) which defines the influence of a point at b onto a point at a. The crucial advantage of the potential Φ(a) is that each point can influence each other point in space if k is chosen properly. If we minimize this potential Φ(a), we are at the same time minimizing the difference of densities ρ(a): for all kernels k it holds that if ρ(b) = 0 for all b, then Φ(a) = 0 for all a. We must still show (i) that if Φ(a) = 0 for all a, then ρ(b) = 0 for all b, and, even more importantly, (ii) that a gradient optimization of Φ(a) leads to Φ(a) = 0 for all a. This is not the case for every kernel; indeed, only for particular kernels k does gradient optimization of Φ(a) lead to Φ(a) = 0 for all a BID22 (see also Theorem 1 below). An example of such a kernel k is the one leading to the Coulomb potential Φ_C from above, where k(a, b) ∝ 1/‖a − b‖. As we will see in the following, the ability to have samples that influence each other over long distances, like charges in a Coulomb potential, leads to GANs with a single, optimal Nash equilibrium. For Coulomb GANs, the generator objective is derived from electric field dynamics: real and generated samples generate a potential field, where samples of the same class (real vs. generated) repel each other, but attract samples of the opposite class. However, real data points are fixed in space, so the only samples that can move are the generated ones. In turn, the gradient of the potential with respect to the input samples creates a vector field in the space of samples. The generator can move its samples along the forces generated by this field. Such a field is depicted in Figure 1. The discriminator learns to predict the potential function, in order to approximate the current potential landscape of all samples, not just the ones in the current mini-batch. Meanwhile, the generator learns to distribute its samples across the whole field in such a way that the energy is minimized, thus naturally avoiding mode collapse and covering the whole region of support of the data. The energy is minimal, and equal to zero, only if all potential differences are zero and the model distribution is equal to the target distribution. Within an electrostatic field, the strength of the force on one particle depends on its distance to other particles and their charges. If left to move freely, the particles will organize themselves into a constellation where all forces equal out and no potential differences are present. For continuous charge distributions, the potential field is constant without potential differences if charges no longer move, since the forces have equaled out. If the potential field is constant, then the difference of densities ρ is constant, too; otherwise the potential field would have local bumps. The same behavior is modeled within our Coulomb GAN, except that real and generated samples replace the positive and negative particles, respectively, and the real data points remain fixed.
Only the generated samples are allowed to move freely, in order to drive ρ toward zero. The generated samples are attracted by real samples, so they move towards them. At the same time, generated samples repel each other, so they do not clump together, which would lead to mode collapsing. Analogously to electrostatics, the potential Φ(a) from Eq. gives rise to a field E(a) = −∇_a Φ(a) and to an energy function F(ρ) = (1/2) ∫ ρ(a) Φ(a) da. The field E(a) applies a force on charges at a which pushes the charges toward lower-energy constellations. Ultimately, the Coulomb GAN aims to make the potential Φ zero everywhere via the field E(a), which is the negative gradient of Φ. For proper kernels k, it can be shown (i) that Φ can be pushed to zero via the field given by its negative gradient, and (ii) that Φ(a) = 0 for all a implies ρ(a) = 0 for all a, and therefore p_x(a) = p_y(a) for all a BID22 (see also Theorem 1 below). During learning we do not change Φ or ρ directly. Instead, the location a = G(z) to which the random variable z is mapped changes to a new location a′ = G′(z). For the GAN optimization dynamics, we assume that generator samples a = G(z) can move freely, which is ensured by a sufficiently complex generator. Importantly, generator samples originating from random variables z neither disappear nor are newly created, but are conserved. This conservation is expressed by the continuity equation BID39, which describes how the difference between distributions ρ(a) changes as the particles move along the field, i.e., how moving samples during the learning process changes our densities: ρ̇(a) = −∇ · (ρ(a) v(a)), for the sample density difference ρ and unit charges that move with "velocity" v(a) = sign(ρ(a)) E(a). The continuity equation is crucial as it establishes the connection between moving samples and changing the generator density, and thereby ρ. The sign function in the velocity indicates whether positive or negative charges are present at a. The divergence operator "∇·" determines whether samples move toward or away from a for a given field. Basically, the continuity equation says that if the generator density increases, then generator samples must flow into the region, and if the generator density decreases, they flow outwards. We assume that differently charged particles cancel each other. If generator samples are moved away from a location a, then ρ(a) increases, while ρ(a) decreases when generator samples are moved toward a. The continuity equation is also obtained as a first-order ODE for moving particles in a potential field BID11, and therefore describes the dynamics of how the densities change. We obtain ρ̇(a) = −∇ · (|ρ(a)| E(a)). The density difference ρ(a) indicates how many samples are locally available for being moved. At each local minimum and local maximum a of ρ we obtain ∇_a ρ(a) = 0. Using the product rule for the divergence operator, at points a that are minima or maxima, this reduces to ρ̇(a) = −|ρ(a)| ∇ · E(a). In order to ensure that ρ converges to zero, it is necessary and sufficient that sign(∇ · E(a)) = sign(ρ(a)) wherever ∇_a ρ(a) = 0, as this condition ensures a uniform decrease of the maximal absolute density difference |ρ(a_max)|. As discussed before, the choice of kernel is crucial for Coulomb GANs. The m-dimensional Coulomb kernel and the m-dimensional Plummer kernel lead to (i) a Φ that is pushed to zero via the field it creates and (ii) the property that Φ(a) = 0 for all a implies ρ(a) = 0 for all a, and therefore p_x(a) = p_y(a) for all a BID22.
Thus, gradient learning with these kernels has been proved to converge to an optimal solution. However, both the m-dimensional Coulomb kernel and the m-dimensional Plummer kernel lead to numerical instabilities if m is large. Therefore the Coulomb potential Φ(a) for the Coulomb GAN is constructed with a low-dimensional Plummer kernel k with parameters d ≤ m − 2 and ε: k(a, b) = 1/(‖a − b‖² + ε²)^{d/2}. The original Plummer kernel is obtained with d = m − 2. The resulting field and potential energy are E(a) = −∫ ρ(b) ∇_a k(a, b) db and F(ρ) = (1/2) ∫∫ ρ(a) k(a, b) ρ(b) da db. The next theorem states that for freely moving generated samples, ρ converges to zero, that is, p_x = p_y, when using this potential function Φ(a). Theorem 1 (Convergence with low-dimensional Plummer kernel). For a, b ∈ R^m, d ≤ m − 2, and ε > 0, the densities p_x and p_y equalize over time when minimizing the energy F with the low-dimensional Plummer kernel by gradient descent. The convergence is faster for larger d. Proof. See Section A.2. The Coulomb GAN minimizes the electric potential energy from Eq. with a stochastic gradient descent approach based on mini-batches. Appendix Section A.4 contains the equations for the Coulomb potential, field, and energy in this case. Generator samples are obtained by drawing N_x random numbers z_i and transforming them into outputs x_i = G(z_i). Each mini-batch also includes N_y real world samples y_i. This gives rise to a mini-batch specific potential, where in the potential from Eq. we use ρ(a) = p_y(a) − p_x(a) and replace the expectations by empirical means over the drawn samples: Φ̂(a; X, Y) = (1/N_y) ∑_{i=1}^{N_y} k(a, y_i) − (1/N_x) ∑_{i=1}^{N_x} k(a, x_i).
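To make the mini-batch estimate concrete, here is a small numerical sketch of the low-dimensional Plummer kernel and the batch potential as reconstructed above (function names are ours; this is not the authors' released code):

import numpy as np

def plummer_kernel(a, b, d=3, eps=1.0):
    # Low-dimensional Plummer kernel: k(a, b) = 1 / (||a - b||^2 + eps^2)^(d/2).
    # a and b broadcast over their leading dimensions.
    sq_dist = np.sum((a - b) ** 2, axis=-1)
    return (sq_dist + eps ** 2) ** (-d / 2.0)

def batch_potential(a, x_gen, y_real, d=3, eps=1.0):
    # Mini-batch potential Phi_hat(a; X, Y): real samples contribute
    # positively (attract), generated samples negatively (repel).
    attract = plummer_kernel(a[None, :], y_real, d=d, eps=eps).mean()
    repel = plummer_kernel(a[None, :], x_gen, d=d, eps=eps).mean()
    return attract - repel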
The loss L D for the discriminator and L G for the generator are given by: DISPLAYFORM1 Where p(a) = 1/2 N (a; G(z), I)p z (z)dz + 1/2 N (a; y, I)p y (y)dy, i.e., a distribution where each point of support both of the generator and the real world distribution is surrounded with a Gaussian ball of width I similar to BID4, in order to overcome the problem that the generator distribution is only a sub-manifold of R m. These loss functions cause the approximated potential values D(a) that are negative are pushed toward zero. Finally, the Coulomb GAN, like all other GANs, consists of two parts: a generator to generate model samples, and a discriminator that provides its learning signal. Without a discriminator, our would be very similar to GMMNs BID30, as can be seen in Eq., but with an optimal Kernel specifically tailored to the problem of estimating differences between probability distributions. We use each mini-batch only for one update of the discriminator and the generator. It is important to note that the discriminator uses each sample in the mini batch twice: once as a point to generate the mini-batch specific potentialΦ, and once as a point in space for the evaluation of the potentialΦ and its approximation D. Using each sample twice is done for performance reasons, but not strictly necessary: the discriminator could learn the potential field by sampling points that lie between generator and real samples as in BID18, but we are mainly interested in correct predictions in the vicinity of generator samples. Pseudocode for the learning algorithm is detailed in Algorithm 1 in the appendix. Convergence of the GAN learning process was proved for a two time-scales update rule by BID20. A local Nash equilibrium is a pair of generator and discriminator (D *, G *) that fulfills the two conditions DISPLAYFORM0 for some neighborhoods U (D *) and U (G *). We show in the following Theorem 2 that for Coulomb GANs every local Nash equilibrium necessarily is identical to the unique global Nash equilibrium. In other words, any equilibrium point of the Coulomb GAN that is found to be local optimal has to be the one global Nash equilibrium as the minimization of the energy F (ρ) in Eq. leads to a single, global optimum at p y = p x. Theorem 2 (Optimal Solution). If the pair (D *, G *) is a local Nash equilibrium for the Coulomb GAN objectives, then it is the global Nash equilibrium, and no other local Nash equilibria exist, and G * has output distribution p x = p y.Proof. See Appendix Section A.3. To implement GANs in practice, we need learnable models for G and D. We assume that our models for G and D are continuously differentiable with respect to their parameters and inputs. Toward this end, GANs are typically implemented as neural networks optimized by (some variant of) gradient descent. Thus we may not find the optimal G * or D *, since neural networks may suffer from capacity or optimization issues. Recent research indicates that the effect of local minima in deep learning vanishes with increasing depth BID10 BID8 BID25, such that this limitation becomes less restrictive as our ability to train deep networks grows thanks to hardware and optimization improvements. The main problem with learning Coulomb GANs is to approximate the potential function Φ, which is a complex function in a high-dimensional space, since the potential can be very non-linear and non-smooth. When learning the discriminator, we must ensure that enough data is sampled and averaged over. 
We already lessened the non-linearity problem by using a low-dimensional Plummer kernel, but this kernel can still introduce large non-linearities if samples are close to each other. It is crucial that the discriminator learns slowly enough to accurately estimate the potential function induced by the current generator. The generator, in turn, must learn even more slowly, since it must be tracked by the discriminator. These approximation problems are expected to be tackled by the research community in the near future, which would enable optimal GAN learning. The formulation of GAN learning as a potential field naturally solves the mode collapsing issue: the example described in Section A.1, where a normal GAN cannot get out of a local Nash equilibrium, is not a converged solution for the Coulomb GAN. If all probability mass of the generator lies in one of the modes, then both the attracting forces from real-world samples located at the other mode and the repelling forces from the over-represented generator mode will act upon the generator until it generates samples at the other mode as well. In all of our experiments, we used a low-dimensional Plummer kernel of dimensionality d = 3. This kernel both gave the best computational performance and has a low risk of running into numerical issues. We used a batch size of 128. To evaluate the quality of a GAN, the FID metric as proposed by BID20 was calculated using 50k samples drawn from the generator, while the training set statistics were calculated using the whole training set. We compare to BEGAN BID3, DCGAN, and WGAN-GP BID18, both in their original versions and when using the two time-scale update rule (TTUR), using the settings from BID20. We additionally compare to MMD-GAN, which is conceptually very similar to the Coulomb GAN but uses a Gaussian kernel instead of the Plummer kernel. We use the recommended dataset-specific settings and report the highest FID score over the course of training. All images shown in this paper were produced with a random seed and not cherry-picked. The implementation used for these experiments is available online. Appendix Section A.5 contains an additional toy example demonstrating that Coulomb GANs do not suffer from mode collapse when fitting a simple Gaussian mixture of 25 components. To demonstrate the ability of the Coulomb GAN to learn distributions in high-dimensional spaces, we trained a Coulomb GAN on several popular image data sets: the cropped and centered images of celebrities from the Large-scale CelebFaces Attributes ("CelebA") data set BID32; the LSUN bedrooms data set, which consists of over 3 million 64x64 pixel images of the bedrooms category of the large-scale image database LSUN BID40; as well as the CIFAR-10 data set. For these experiments, we used the DCGAN architecture with a few modifications: our convolutional kernels all have a kernel size of 5x5, and our random seed that serves as input to the generator has fewer dimensions: 32 for CelebA and LSUN bedrooms, and 16 for CIFAR-10. Furthermore, the discriminator uses twice as many feature channels in each layer as in the DCGAN architecture. For the Plummer kernel, ε was set to 1. We used the Adam optimizer with a learning rate of 10^-4 for the generator and 5·10^-5 for the discriminator. To improve convergence performance, we used the tanh output activation function BID27. For regularization we used an L2 weight decay term with a weighting factor of 10^-7. Learning was stopped by monitoring the FID metric BID20.
Once learning plateaued, we scaled the learning rate down by a factor of 10 and let training continue until the FID plateaued again. The results are reported in Table 1b, and generated images can be seen in Figure 2 and in Appendix Section A.7. Coulomb GANs tend to outperform standard GAN approaches like BEGAN and DCGAN, but are outperformed by the Improved Wasserstein GAN. However, it is important to note that the Improved Wasserstein GAN used a more advanced network architecture based on ResNet blocks BID18, which we could not replicate due to runtime constraints. Overall, the low FID of Coulomb GANs stems from the fact that the images show a wide variety of different samples. E.g., on CelebA, Coulomb GANs exhibit a very wide variety of faces, eye colors, and orientations.
Figure 3: The most similar pairs found in batches of 1024 generated faces sampled from the Coulomb GAN, with the nearest neighbor from the training data shown as the third image. Distances were calculated as Euclidean distances on the pixel level.
To further investigate how much variation the samples generated by the Coulomb GAN contain, we followed the advice of Arora and Zhang BID2 to estimate the support size of the generator's distribution by checking how large a sample from the generator must be before we start generating duplicates. We were able to generate duplicates with a probability of around 50% when using samples of size 1024, which indicates that the support size learned by the Coulomb GAN is around 1M. This is a strong indication that the Coulomb GAN was able to spread out its samples over the whole target distribution. A depiction is included in Figure 3, which also shows the nearest neighbor in the training set of the generated images, confirming that the Coulomb GAN does not just memorize training images. We repeated the experiments from BID18, where Improved Wasserstein GANs (WGAN-GP) were trained to produce text samples after being trained on the Google Billion Word data set BID6, using the same network architecture as in the original publication. We use the Jensen-Shannon divergence on 4-grams and 6-grams as an evaluation criterion. The results are summarized in the corresponding table. The Coulomb GAN is a generative adversarial network with strong theoretical guarantees. Our theoretical results show that the Coulomb GAN will be able to approximate the real distribution perfectly if the networks have sufficient capacity and training does not get stuck in local minima. Our results show that the potential field used by the Coulomb GAN far outperforms MMD-based approaches due to its low-dimensional Plummer kernel, which is better suited for modeling probability density functions, and is very effective at eliminating the mode collapse problem in GANs. This is because our loss function forces the generated samples to occupy different regions of the learned distribution. In practice, we have found that Coulomb GANs are able to produce a wide range of different samples. However, in our experience, this sometimes leads to a small number of generated samples that are nonsensical interpolations of existing data modes. While these are sometimes also present in other GAN models, we found that our model produces such images at a slightly higher rate. This issue might be solved by finding better ways of learning the discriminator, as learning the correct potential field is crucial for the Coulomb GAN's performance. We also observed that increasing the capacity of the discriminator seems to always increase the generative performance.
We thus hypothesize that the largest issue in learning Coulomb GANs is that the discriminator needs to approximate the potential field Φ very well in a high-dimensional space. In summary, instead of directly optimizing a criterion based on local differences of densities, which can exhibit many local minima, Coulomb GANs are based on a potential field that has no local minima. The potential field is created by point charges in an analogy to the electric field in physics. We have proved that if learning converges, then it converges to the optimal solution if the samples can be moved freely. We showed that Coulomb GANs avoid mode collapsing, model the target distribution more truthfully than standard GANs, and do not overlook high probability regions of the target distribution. A APPENDIX As an example of how a GAN can converge to a Nash equilibrium that exhibits mode collapse, consider a target distribution that consists of two distinct, non-overlapping regions of support C_1 and C_2 that are distant from each other, i.e., the target probability is zero outside of C_1 and C_2. Further assume that 50% of the probability mass is in C_1 and 50% in C_2. Assume that the generator has mode-collapsed onto C_1, which contains 100% of the generator's probability mass. In this situation, the optimal discriminator classifies all points from C_2 as "real" (pertaining to the target distribution) by supplying an output of 1 for them (1 is the target for real samples and 0 the target for generated samples). Within C_1, the other region, the discriminator sees twice as many generated data points as real ones, as 100% of the probability mass of the generator's distribution is in C_1, but only 50% of the probability mass of the real data distribution. So one third of the points seen by the discriminator in C_1 are real, the other two thirds are generated. Thus, to minimize its prediction error for a proper objective (squared error or cross entropy), the discriminator has to output 1/3 for every point from C_1. The optimal output is even independent of the exact form of the real distribution in C_1. The generator will match the shape of the target distribution locally. If the shape is not matched, local gradients of the discriminator with respect to its input would be present and the generator would improve locally. If local improvements of the generator are no longer possible, the shape of the target distribution is matched and the discriminator output is locally constant. In this situation, the expected gradient of the discriminator is the zero vector, because it has reached an optimum. Since the discriminator output is constant in C_1 (and C_2), the generator's expected gradient is the zero vector, too. The situation is also stable even though we still have random fluctuations from the ongoing stochastic gradient descent (SGD) learning: whenever the generator produces data outside of (but close to) C_1, the discriminator can easily detect this and push the generator's samples back. Inside C_1, small deviations of the generator from the shape of the real distribution are detected by the discriminator as well, by deviating slightly from 1/3. Subsequently, the generator is pushed back to the original shape. If the discriminator deviates from its optimum, it will also be forced back to its optimum. So overall, the GAN learning has reached a local Nash equilibrium and has converged in the sense that the parameters fluctuate around the attractor point (fluctuations depend on learning rate, sample size, etc.).
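As a quick sanity check of the 1/3 output (our own arithmetic, not from the source), minimizing the expected squared error over points in C_1 with the stated one-third real, two-thirds generated mixture gives

f(D) = (1/3)(D − 1)^2 + (2/3)(D − 0)^2, f′(D) = (2/3)(D − 1) + (4/3)D = 2D − 2/3 = 0, hence D* = 1/3,

matching the value stated above.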
To achieve true mathematical convergence, BID20 assume decaying learning rates to anneal the random fluctuations, similar to BID37's original convergence proof for SGD. We first recall Theorem 1: Theorem (Convergence with low-dimensional Plummer kernel). For a, b ∈ R^m, d ≤ m − 2, and ε > 0, the densities p_x and p_y equalize over time when minimizing the energy F with the low-dimensional Plummer kernel by gradient descent. The convergence is faster for larger d. In a first step, we prove that for local maxima or local minima a of ρ, the expression sign(∇ · E(a)) = sign(ρ(a)) holds for small enough ε. For proving this equation, we apply the Laplace operator for spherical coordinates to the low-dimensional Plummer kernel. Using the result, we see that the integral ∇ · E(a) = −∫ ρ(b) ∇²_a k(a, b) db is dominated by large negative values of ∇²_a k around a. These negative values can even be decreased by decreasing ε. Therefore we can ensure, by a small enough ε, that at each local minimum and local maximum a of ρ, sign(ρ̇(a)) = −sign(ρ(a)). Thus, the maximal and minimal points of ρ move toward zero. In a second step, we show that new maxima or minima cannot appear and that the movement of Φ toward zero stops at zero and not earlier. Since ρ is continuously differentiable, all points in environments of maxima and minima move toward zero. Therefore the largest |ρ(a)| moves toward zero. We have to ensure that the movement toward zero does not converge to a point apart from zero. We derive that the movement toward zero is lower bounded by ρ̇(a) = −sign(ρ(a)) λ ρ²(a). Thus, the movement slows down at ρ(a) = 0. Solving the differential equation and applying it to the maximum of the absolute value of ρ gives |ρ|_max(t) = 1/(λt + (|ρ|_max(0))^{-1}). Thus, ρ converges to zero over time. DISPLAYFORM0, where the theorem has already been proved for small enough ε BID22. At each local minimum and local maximum a of ρ we have ∇_a ρ(a) = 0. Using the product rule for the divergence operator, Eq. reduces to ρ̇(a) = −|ρ(a)| ∇ · E(a). The term ∇ · E(a) can be expressed as ∇ · E(a) = −∫ ρ(b) ∇²_a k(a, b) db. We next consider ∇²_a k(a, b) for the low-dimensional Plummer kernel. We define the spherical Laplace operator in (m − 1) dimensions as ∇²_{S^{m−1}}; then the Laplace operator in spherical coordinates is (Proposition 2.5 in Frye & Efthimiou BID13): ∇² f = (1/r^{m−1}) ∂/∂r (r^{m−1} ∂f/∂r) + (1/r²) ∇²_{S^{m−1}} f. Note that ∇²_{S^{m−1}} only has second-order derivatives with respect to the angles of the spherical coordinates. With r = ‖a − b‖ we obtain for the Laplace operator applied to the low-dimensional Plummer kernel: DISPLAYFORM4 and in particular DISPLAYFORM5 For d ≤ m − 2 we have (2 + d − m) ≤ 0, and obtain DISPLAYFORM6 and DISPLAYFORM7 Therefore, ∇² k(a, b) is negative with minimum −md ε^{−(d+2)} at r = 0, increasing with r, and increasing with ε for d ≤ m − 4. For d = m − 3 we have to restrict the sphere S_τ(a) in the following to τ < √m and ensure the increase of ∇² k(a, b) with ε. If ρ(a) ≠ 0, then we define a sphere S_τ(a) with radius τ around a for which sign(ρ(b)) = sign(ρ(a)) holds for each b ∈ S_τ(a). Note that ∇² k(a, b) is continuously differentiable. We have DISPLAYFORM8 Using τ, we now bound ∫_{T∖S_τ(a)} ρ(b) ∇²_a k(a, b) db independently of ε, since ρ is a difference of distributions. For small enough ε we can ensure DISPLAYFORM9 Therefore we have sign(∇ · E(a)) = sign(ρ(a)). Therefore, at each local minimum and local maximum a of ρ, we have sign(ρ̇(a)) = −sign(ρ(a)). Therefore the maximal and minimal points of ρ move toward zero.
Since ρ is continuously differentiable, as is the field, the points in an environment of the maximal and minimal points also move toward zero. Points that are not in an environment of the maximal or minimal points cannot become maximal points in an infinitesimal time step. Since the contribution of an environment S_τ(a) dominates the integral in Eq., for small enough ε there exists a global λ > 0, valid for all minima and maxima as well as for all time steps, for which: DISPLAYFORM10 The factor λ depends on k and on the initial ρ. λ is proportional to d. Larger d leads to larger |∇ · E(a)| since the maximum or minimum ρ(a) is upweighted. There might exist initial conditions ρ for which λ → 0, e.g., for infinitely many maxima and minima, but they are impossible in our applications. Therefore maximal or minimal points approach zero at least as fast as given by ρ̇(a) = −sign(ρ(a)) λ ρ²(a). Consequently, ρ converges to the zero function over time, that is, p_x becomes equal to p_y. We first recall Theorem 2: Theorem (Optimal Solution). If the pair (D*, G*) is a local Nash equilibrium for the Coulomb GAN objectives, then it is the global Nash equilibrium, no other local Nash equilibria exist, and G* has output distribution p_x = p_y. Proof. (D*, G*) being in a local Nash equilibrium means that (D*, G*) fulfills the two conditions L_D(D*; G*) ≤ L_D(D; G*) for all D ∈ U(D*) and L_G(G*; D*) ≤ L_G(G; D*) for all G ∈ U(G*), for some neighborhoods U(D*) and U(G*). For Coulomb GANs this means that D* has learned the potential Φ induced by G* perfectly, because L_D is convex in D; thus, if D* is optimal within a neighborhood U(D*), it must be the global optimum. This means that G* is directly minimizing −(1/2) E_{p_z}[Φ(G(z))]. The Coulomb potential energy is, according to Eq., DISPLAYFORM1 DISPLAYFORM2 Only the samples from p_x stem from the generator, where p_x(a) = ∫ δ(a − G(z)) p_z(z) dz. Here δ is the δ-distribution centered at zero. The part of the energy which depends on the generator is DISPLAYFORM3 Theorem 1 guarantees that there are no local minima other than the global one when minimizing F. F has one minimum, F = 0, which implies Φ(a) = 0 and ρ(a) = 0 for all a, and therefore also p_y = p_x according to Theorem 1. Any Φ(a) ≠ 0 would mean that potential differences exist, which in turn would cause forces on generator samples that allow the energy to be minimized further. Since we assumed that the generator can reach the minimum p_y = p_x for any p_y, it will be reached by local (stepwise) optimization of −(1/2) E_{p_z}[Φ(G(z))] with respect to G. Since the pair (D*, G*) is optimal within its neighborhood, the generator has reached this minimum, as there is no other local minimum than the global one. The convergence point is a global Nash equilibrium, because there is no approximation error and zero energy F = 0 is a global minimum for the discriminator and the generator, respectively. Theorem 1 ensures that other local Nash equilibria are not possible. GANs are sample-based, that is, samples are drawn from the model for learning BID22 BID19. Typically this is done in mini-batches, where each mini-batch consists of two sets of samples: the target samples Y = {y_i | i = 1 ... N_y} and the model samples X = {x_i | i = 1 ... N_x}. For such finite samples, i.e., point charges, we have to use delta distributions to obtain unbiased estimates of the model distribution p_x and the target distribution p_y: p̂_x(a; X) = (1/N_x) ∑_{i=1}^{N_x} δ(a − x_i) and p̂_y(a; Y) = (1/N_y) ∑_{i=1}^{N_y} δ(a − y_i), where δ is the Dirac δ-distribution centered at zero. These are unbiased estimates of the underlying distributions, as can be seen by: DISPLAYFORM1
These are unbiased estimates of the underlying distribution, as can be seen by: DISPLAYFORM1 In the rest of the paper, we will drop the explicit parameterization with X and Y for all estimates to unclutter notation, and instead just use the hat sign to denote estimates. In the same fashion as for the distributions, when we use fixed samples X and Y, we obtain the following unbiased estimates for the potential, energy and field given by Eq., Eq., and Eq. FORMULA10: DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 These are again unbiased, e.g.: DISPLAYFORM5 If we draw samples of infinite size, all these expressions for a fixed sample size lead to the equivalent statements for densities. The sample-based formulation, that is, point charges in physical terms, can only have local energy minima or maxima at locations of samples BID11. Furthermore the field lines originate and end at samples, therefore the field guides model samples x toward real world samples y, as depicted in Figure 1. The factors N y and N x in the last equations arise from the fact that −∇ a F gives the force which is applied to a sample with charge. A sample y i is positively charged with 1/N y and follows −∇ yi F while a sample x i is negatively charged with −1/N x and therefore follows −∇ xi F, too. Thus, following the force induced on a sample by the field is equivalent to gradient descent of the energy F with respect to samples y i and x i. We use the synthetic data set introduced by BID31 BID31, the Coulomb GAN used a discriminator network with 2 hidden layers of 128 units, however we avoided batch normalization by using the ELU activation function BID9. We used the Plummer kernel in 3 dimensions (d = 3) with an epsilon of 3 (= 3) and a learning rate of 0.01, both of which were exponentially decayed during the 1M update steps of the Adam optimizer. As can be seen in FIG2, samples from the learned Coulomb GAN very well approximate the target distribution. All components of the original distribution are present at the model distribution at approximately the correct ratio, as shown in FIG3. Moreover, the generated samples are distributed approximately according to the same spread for each component of the real world distribution. Coulomb GANs outperform other compared methods, which either fail to learn the distribution completely, ignore some of the modes, or do not capture the within-mode spread of a Gaussian. The Coulomb GAN is the only GAN approach that manages to avoid a within-cluster collapse leading to insufficient variance within a cluster. Gaussians. For constructing the histogram, 10k samples were drawn from the target and the model distribution. The Coulomb GAN captures the underlying distribution well, does not miss any modes, and places almost all probability mass on the modes. Only the Coulomb GAN captured the withinmode spread of the Gaussians. The following gives the pseudo code for training GANs. Note that when calculating the derivative ofΦ(a i ; X, Y), it is important to only derive with respect to a, and not wrt. X, Y, even if it can happen that e.g. a ∈ X. In frameworks that offer automatic differentiation such as Tensorflow or Theano, this means stopping the possible gradient back-propagation through those parameters. Algorithm 1 Minibatch stochastic gradient descent training of Coulomb GANs for updating the the discriminator weights w and the generator weights θ.while Stopping criterion not met do • Sample minibatch of N x training samples {x 1, . . 
., x Nx} from training set • Sample minibatch of N y generator samples {y 1, . . ., y Ny} from the generator • Calculate the gradient for the discriminator weights: DISPLAYFORM0 • Calculate the gradient for the generator weights: DISPLAYFORM1 • Update weights according to optimizer rule (e.g. Adam): DISPLAYFORM2 end while Images from a Coulomb GAN after training on the LSUN bedroom data set. Images from a Coulomb GAN after training on the CIFAR 10 data set.
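To make the sample-based formulation and Algorithm 1 concrete, here is a minimal PyTorch sketch of one Coulomb GAN update (our illustration, not the authors' code): the Plummer-kernel form, the squared-error regression of D onto the empirical potential, and the dropped constant factors are all assumptions consistent with the text above.

```python
import torch

def plummer_kernel(a, b, d=3, eps=3.0):
    # assumed Plummer kernel: k(a, b) = (||a - b||^2 + eps^2)^(-(d-2)/2),
    # with d = 3 and eps = 3 as in the synthetic experiment above
    dist_sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return (dist_sq + eps ** 2) ** (-(d - 2) / 2.0)

def potential(a, x_model, y_target, d=3, eps=3.0):
    # empirical potential Phi_hat(a; X, Y): target samples contribute
    # positively, model samples negatively; X and Y are detached so the
    # gradient flows only through a (the stop-gradient note above)
    x, y = x_model.detach(), y_target.detach()
    return plummer_kernel(a, y, d, eps).mean(1) - plummer_kernel(a, x, d, eps).mean(1)

def coulomb_gan_step(G, D, y_target, z, opt_d, opt_g):
    # one Algorithm 1 iteration; G and D are torch.nn.Modules, and D maps a
    # batch of points to one scalar potential estimate each
    x_model = G(z)
    a = torch.cat([y_target, x_model]).detach()
    loss_d = ((D(a).squeeze(-1) - potential(a, x_model, y_target)) ** 2).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    loss_g = -D(G(z)).mean()  # minimize -E[Phi_hat(G(z))], D standing in for Phi
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```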
Coulomb GANs can optimally learn a distribution by posing the distribution learning problem as optimizing a potential field
1,178
scitldr
Predicting properties of nodes in a graph is an important problem with applications in a variety of domains. Graph-based Semi Supervised Learning (SSL) methods aim to address this problem by labeling a small subset of the nodes as seeds, and then utilizing the graph structure to predict label scores for the rest of the nodes in the graph. Recently, Graph Convolutional Networks (GCNs) have achieved impressive performance on the graph-based SSL task. In addition to label scores, it is also desirable to have a confidence score associated with them. Unfortunately, confidence estimation in the context of GCN has not been previously explored. We fill this important gap in this paper and propose ConfGCN, which estimates labels scores along with their confidences jointly in GCN-based setting. ConfGCN uses these estimated confidences to determine the influence of one node on another during neighborhood aggregation, thereby acquiring anisotropic capabilities. Through extensive analysis and experiments on standard benchmarks, we find that ConfGCN is able to significantly outperform state-of-the-art baselines. We have made ConfGCN’s source code available to encourage reproducible research. Graphs are all around us, ranging from citation and social networks to knowledge graphs. Predicting properties of nodes in such graphs is often desirable. For example, given a citation network, we may want to predict the research area of an author. Making such predictions, especially in the semisupervised setting, has been the focus of graph-based semi-supervised learning (SSL) BID25. In graph-based SSL, a small set of nodes are initially labeled. Starting with such supervision and while utilizing the rest of the graph structure, the initially unlabeled nodes are labeled. Conventionally, the graph structure has been incorporated as an explicit regularizer which enforces a smoothness constraint on the labels estimated on nodes BID36 BID2 BID31. Recently proposed Graph Convolutional Networks (GCN) BID8 BID14 provide a framework to apply deep neural networks to graphstructured data. GCNs have been employed successfully for improving performance on tasks such as semantic role labeling, machine translation BID1, relation extraction BID28 BID35, event extraction BID20, shape segmentation BID34, and action recognition BID12. GCN formulations for graph-based SSL have also attained state-of-the-art performance BID14 BID18 BID29. In this paper, we also focus on the task of graphbased SSL using GCNs. GCN iteratively estimates embedding of nodes in the graph by aggregating embeddings of neighborhood nodes, while backpropagating errors from a target loss function. Finally, the learned node embeddings are used to estimate label scores on the nodes. In addition to the label scores, it is desirable to also have confidence estimates associated with them. Such confidence scores may be used to determine how much to trust the label scores estimated on a given node. While methods to estimate label score confidence in non-deep graph-based SSL has been previously proposed BID21, confidence-based GCN is still unexplored. Figure 1: Label prediction on node a by Kipf-GCN and ConfGCN (this paper). L 0 is a's true label. Shade intensity of a node reflects the estimated score of label L 1 assigned to that node. Since Kipf-GCN is not capable of estimating influence of one node on another, it is misled by the dominant label L 1 in node a's neighborhood and thereby making the wrong assignment. 
ConfGCN, on the other hand, estimates confidences (shown by bars) over the label scores, and uses them to increase influence of nodes b and c to estimate the right label on a. Please see Section 1 for details. In order to fill this important gap, we propose ConfGCN, a GCN framework for graph-based SSL. ConfGCN jointly estimates label scores on nodes, along with confidences over them. One of the added benefits of confidence over node's label scores is that they may be used to subdue irrelevant nodes in a node's neighborhood, thereby controlling the number of effective neighbors for each node. In other words, this enables anisotropic behavior in GCNs. Let us explain this through the example shown in Figure 1. In this figure, while node a has true label L 0 (white), it is incorrectly classified as L 1 (black) by Kipf-GCN 2. This is because Kipf-GCN suffers from limitations of its neighborhood aggregation scheme BID32. For example, Kipf-GCN has no constraints on the number of nodes that can influence the representation of a given target node. In a k-layer Kipf-GCN model, each node is influenced by all the nodes in its k-hop neighborhood. However, in real world graphs, nodes are often present in heterogeneous neighborhoods, i.e., a node is often surrounded by nodes of other labels. For example, in Figure 1, node a is surrounded by three nodes (d, e, and f) which are predominantly labeled L 1, while two nodes (b and c) are labeled L 0. Please note that all of these are estimated label scores during GCN learning. In this case, it is desirable that node a is more influenced by nodes b and c than the other three nodes. However, since Kipf-GCN doesn't discriminate among the neighboring nodes, it is swayed by the majority and thereby estimating the wrong label L 1 for node a. ConfGCN is able to overcome this problem by estimating confidences on each node's label scores. In Figure 1, such estimated confidences are shown by bars, with white and black bars denoting confidences in scores of labels L 0 and L 1, respectively. ConfGCN uses these label confidences to subdue nodes d, e, f since they have low confidence for their label L 1 (shorter black bars), whereas nodes b and c are highly confident about their labels being L 0 (taller white bars). This leads to higher influence of b and c during aggregation, and thereby ConfGCN correctly predicting the true label of node a as L 0 with high confidence. This clearly demonstrates the benefit of label confidences and their utility in estimating node influences. Graph Attention Networks (GAT) BID29, a recently proposed method also provides a mechanism to estimate influences by allowing nodes to attend to their neighborhood. However, as we shall see in Section 6, ConfGCN, through its use of label confidences, is significantly more effective. Our contributions in this paper are as follows.• We propose ConfGCN, a Graph Convolutional Network (GCN) framework for semisupervised learning which models label distribution and their confidences for each node in the graph. To the best of our knowledge, this is the first confidence-enabled formulation of GCNs.• ConfGCN utilize label confidences to estimate influence of one node on another in a labelspecific manner during neighborhood aggregation of GCN learning.• Through extensive evaluation on multiple real-world datasets, we demonstrate ConfGCN effectiveness over state-of-the-art baselines. ConfGCN's source code and datasets used in the paper are made publicly available 3 to foster reproducible research. 
Semi-Supervised learning (SSL) on graphs: SSL on graphs is the problem of classifying nodes in a graph, where labels are available only for a small fraction of nodes. Conventionally, the graph structure is imposed by adding an explicit graph-based regularization term in the loss function BID36 BID31 BID2. Recently, implicit graph regularization via learned node representation has proven to be more effective. This can be done either sequentially or in an end to end fashion. Methods like DeepWalk BID22, node2vec BID11, and LINE BID27 first learn graph representations via sampled random walk on the graph or breadth first search traversal and then use the learned representation for node classification. On the contrary, Planetoid BID33 learns node embedding by jointly predicting the class labels and the neighborhood context in the graph. Recently, BID14 employs Graph Convolutional Networks (GCNs) to learn node representations. The generalization of Convolutional Neural Networks to non-euclidean domains is proposed by BID6 which formulates the spectral and spatial construction of GCNs. This is later improved through an efficient localized filter approximation BID8. BID14 provide a first-order formulation of GCNs and show its effectiveness for SSL on graphs. propose GCNs for directed graphs and provide a mechanism for edge-wise gating to discard noisy edges during aggregation. This is further improved by BID29 which allows nodes to attend to their neighboring nodes, implicitly providing different weights to different nodes. BID18 propose Graph Partition Neural Network (GPNN), an extension of GCNs to learn node representations on large graphs. GPNN first partitions the graph into subgraphs and then alternates between locally and globally propagating information across subgraphs. An extensive survey of GCNs and their applications can be found in BID5. The natural idea of incorporating confidence in predictions has been explored by BID16 for the task of active learning. BID15 proposes a confidence based framework for classification problems, where the classifier consists of two regions in the predictor space, one for confident classifications and other for ambiguous ones. In representation learning, uncertainty (inverse of confidence) is first utilized for word embeddings by BID30. BID0 further extend this idea to learn hierarchical word representation through encapsulation of probability distributions. BID21 propose TACO (Transduction Algorithm with COnfidence), the first graph based method which learns label distribution along with its uncertainty for semi-supervised node classification. BID3 embeds graph nodes as Gaussian distribution using ranking based framework which allows to capture uncertainty of representation. They update node embeddings to maintain neighborhood ordering, i.e. 1-hop neighbors are more similar to 2-hop neighbors and so on. Gaussian embeddings have been used for collaborative filtering (Dos BID9 and topic modelling BID7 as well. Let G = (V, E, X) be an undirected graph, where V = V l ∪ V u is the union of labeled (V l) and unlabeled (V u) nodes in the graph with cardinalities n l and n u, E is the set of edges and X ∈ R (n l +nu)×d is the input node features. The actual label of a node v is denoted by a one-hot vector Y v ∈ R m, where m is the number of classes. Given G and seed labels Y ∈ R n l ×m, the goal is to predict the labels of the unlabeled nodes. 
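As a concrete reference for this setup, a tiny toy instantiation (ours; sizes are arbitrary) of the quantities just defined:

```python
import numpy as np

n, m, d = 6, 2, 4                          # nodes, classes, feature dimension
X = np.random.rand(n, d)                   # input node features
A = np.zeros((n, n))                       # adjacency matrix built from E
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1  # undirected edges
V_l = np.array([0, 2])                     # labeled (seed) node indices
Y = np.eye(m)[[0, 1]]                      # one-hot seed labels, shape [n_l, m]
# goal: predict labels for the unlabeled nodes V_u = {1, 3, 4, 5}
```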
To incorporate confidence, we additionally estimate a label distribution µ_v ∈ R^m and a diagonal co-variance matrix Σ_v ∈ R^{m×m}, ∀v ∈ V. Here, µ_{v,i} denotes the score of label i on node v, while (Σ_v)_{ii} denotes the variance in the estimation of µ_{v,i}. In other words, (Σ_v^{-1})_{ii} is ConfGCN's confidence in µ_{v,i}. In this section, we give a brief overview of Graph Convolutional Networks (GCNs) for undirected graphs as proposed by BID14. Given a graph G = (V, E, X) as defined in Section 3, the node representation after a single layer of GCN can be defined as DISPLAYFORM0 where W ∈ R^{d×d} denotes the model parameters, A is the adjacency matrix, and D̂_ii = Σ_j (A + I)_{ij}. f is any activation function; we have used ReLU, f(x) = max(0, x), in this paper. Equation 1 can also be written as DISPLAYFORM1 Here, b ∈ R^d denotes the bias, N(v) = {u : {u, v} ∈ E} corresponds to the immediate neighbors of v in graph G, and h_v is the obtained representation of node v. For capturing multi-hop dependencies between nodes, multiple GCN layers can be stacked on top of one another. The representation of node v after the k-th layer of GCN is given as DISPLAYFORM2 where W^k, b^k denote the layer-specific parameters of the GCN. Following BID21, ConfGCN uses a co-variance-matrix-based symmetric Mahalanobis distance for defining the distance between two nodes in the graph. Formally, for any two given nodes u and v, with label distributions µ_u and µ_v and co-variance matrices Σ_u and Σ_v, the distance between them is defined as follows. DISPLAYFORM0 A characteristic of the above distance metric is that if either of Σ_u or Σ_v has large eigenvalues, then the distance will be low irrespective of the closeness of µ_u and µ_v. On the other hand, if Σ_u and Σ_v both have low eigenvalues, then it requires µ_u and µ_v to be close for their distance to be low. Given the above properties, we define r_{uv}, the influence score of node u on its neighboring node v during GCN aggregation, as follows. DISPLAYFORM1 This influence score gives more relevance to neighboring nodes with highly confident, similar labels, while reducing the importance of nodes with low-confidence label scores. This results in ConfGCN acquiring anisotropic capability during neighborhood aggregation. For a node v, ConfGCN's equation for updating the embedding at the k-th layer is thus defined as follows. DISPLAYFORM2 The final node representation obtained from ConfGCN is used for predicting the labels of the nodes in the graph as follows. Ŷ DISPLAYFORM3 where K denotes the number of ConfGCN's layers. Finally, in order to learn the label scores {µ_v} and co-variance matrices {Σ_v} jointly with the other parameters {W^k, b^k}, we include the following three terms in ConfGCN's objective function. For enforcing neighboring nodes to be close to each other, we include L_smooth, defined as DISPLAYFORM4 To impose the desirable property that the label distribution of nodes in V_l should be close to their input label distribution, we incorporate L_label, defined as DISPLAYFORM5 Here, for input labels, we assume a fixed uncertainty of (1/γ) I ∈ R^{L×L}, where γ > 0. We also include the following regularization term, L_reg, to constrain the co-variance matrices to be finite and positive. DISPLAYFORM6 for some η > 0. The first term increases monotonically with the eigenvalues of Σ and the second term prevents them from becoming zero. Additionally, in ConfGCN we include L_const in the objective, to push the label distribution (µ) close to the final model prediction (Ŷ).
DISPLAYFORM7 Finally, we include the standard cross-entropy loss for semi-supervised multi-class classification over all the labeled nodes (V_l). DISPLAYFORM8 The final objective for optimization is the linear combination of the above defined terms. DISPLAYFORM9 where λ_i ∈ R are the weights of the terms in the objective. We optimize L({W^k, b^k, µ_v, Σ_v}) using stochastic gradient descent. We hypothesize that all the terms help in improving ConfGCN's performance, and we validate this in Section 7.4. For evaluating the effectiveness of ConfGCN, we evaluate on several semi-supervised classification benchmarks. Following the experimental setup of BID14 BID18, we evaluate on the Cora, Citeseer, and Pubmed datasets BID23. The dataset statistics are summarized in Table 1. Label mismatch denotes the fraction of edges between nodes with different labels in the training data. The benchmark datasets commonly used for the semi-supervised classification task have substantially low label mismatch rates. In order to examine models on datasets with more heterogeneous neighborhoods, we also evaluate on the Cora-ML dataset BID4. All four datasets are citation networks, where each document is represented using bag-of-words features in the graph, with undirected citation links between documents. The goal is to classify documents into one of the predefined classes. We use the data splits used by BID33 and follow a similar setup for the Cora-ML dataset. Following BID14, an additional 500 labeled nodes are used for hyperparameter tuning. Table 1: Details of the datasets used in the paper. Please refer to Section 6.1 for more details.

Method                 Citeseer  Cora  Pubmed  Cora-ML
LP BID36               45.3      68.0  63.0    -
ManiReg BID2           60.1      59.5  70.7    -
SemiEmb BID31          59.6      59.0  71.1    -
Feat BID33             57.2      57.4  69.8    -
DeepWalk BID22         43.2      67.2  65.3    -
GGNN BID17             68.1      77.9  77.2    -
Planetoid BID33        64.9      75.7  75.7    -
Kipf-GCN BID14         70.3      81.5  79.0    51.6
G-GCN                  71.1      82.0  77.3    50.4
GPNN BID18             69.7      81.8  79.3    60.6
GAT BID29              72.5      83.0  79.0    54.9
ConfGCN (this paper)   73.9      83.5  80.7    80.9

Table 2: Performance comparison of several methods for semi-supervised node classification on multiple benchmark datasets. ConfGCN performs consistently better across all the datasets. Baseline method performances on the Citeseer, Cora and Pubmed datasets are taken from Liao et al. FORMULA0; BID29. We consider only the top-performing baseline methods on these datasets for evaluation on the Cora-ML dataset. Please refer to Section 7.1 for details. We use the same data splits as described in BID33, with a test set of 1000 labeled nodes for testing the prediction accuracy of ConfGCN and a validation set of 500 labeled nodes for optimizing the hyperparameters. The model is trained using Adam with a learning rate of 0.01. The weight matrices along with µ are initialized using Xavier initialization BID10, and Σ is initialized to the identity matrix.
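Because the displayed equations above survive only as placeholders, the following sketch spells out the influence score and smoothness term for diagonal co-variances; the exact symmetric Mahalanobis form is our assumption based on the description, not a verbatim copy of the lost formulas.

```python
import numpy as np

def influence(mu_u, mu_v, var_u, var_v):
    # assumed reconstruction: d_M(u, v) = (mu_u - mu_v)^T (Sigma_u^-1 +
    # Sigma_v^-1) (mu_u - mu_v), with r_uv = 1 / d_M(u, v); var_u, var_v are
    # the diagonals of Sigma_u, Sigma_v. Confident nodes that agree drive
    # d_M toward zero, so their influence score dominates the aggregation.
    diff = mu_u - mu_v
    d_m = float(np.sum(diff * (1.0 / var_u + 1.0 / var_v) * diff))
    return 1.0 / d_m

def l_smooth(edges, mu, var):
    # smoothness term: sum of d_M(u, v) over all edges (u, v) in E
    return sum(1.0 / influence(mu[u], mu[v], var[u], var[v]) for u, v in edges)
```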
For evaluating ConfGCN, we compare against the following baselines:
• Feat BID33 takes only node features as input and ignores the graph structure.
• ManiReg BID2 is a framework for providing data-dependent geometric regularization.
• SemiEmb BID31 augments deep architectures with semi-supervised regularizers to improve their training.
• LP BID36 is an iterative label propagation algorithm which propagates a node's labels to its neighboring unlabeled nodes according to their proximity.
• DeepWalk BID22 learns node features by treating random walks in a graph as the equivalent of sentences.
• Planetoid BID33 provides a transductive and inductive framework for jointly predicting the class label and neighborhood context of a node in the graph.
• GCN BID14 is a variant of convolutional neural networks used for semi-supervised learning on graph-structured data.
• G-GCN is a variant of GCN with edge-wise gating to discard noisy edges during aggregation.
• GGNN BID17 is a generalization of the RNN framework which can be used for graph-structured data.
• GPNN BID18 is a graph-partition-based algorithm which propagates information after partitioning large graphs into smaller subgraphs.
• GAT BID29 is a graph-attention-based method which provides different weights to different nodes by allowing nodes to attend to their neighborhood.
(FIG1 caption: On the x-axis we have quartiles of (a) node entropy and (b) degree, i.e., each bin has 25% of the samples in sorted order. Overall, we observe that the performance of Kipf-GCN and GAT degrades with the increase in node entropy and degree. In contrast, ConfGCN is able to avoid such degradation due to its estimation and use of confidence scores. Refer to Section 7.2 for details.)
In this section, we attempt to answer the following questions: Q1. How does ConfGCN compare against existing methods on the semi-supervised node classification task? (Section 7.1) Q2. How does the performance of methods vary with increasing node degree and label mismatch? (Section 7.2) Q3. What is the effect of ablating different terms in ConfGCN's loss function? (Section 7.4) Q4. How does increasing the number of layers affect ConfGCN's performance? (Section 7.3) The evaluation results for semi-supervised node classification are summarized in Table 2. Results of all other baseline methods on the Cora, Citeseer and Pubmed datasets are taken from BID18 BID29 directly. Overall, we find that ConfGCN outperforms all existing approaches consistently across all the datasets. We observe that on the noisier and more challenging Cora-ML dataset, ConfGCN performs considerably better, giving a nearly 20% absolute increase in accuracy compared to the previous state-of-the-art method. This can be attributed to ConfGCN's ability to model nodes' label distributions along with the confidence scores, which subdues the effect of noisy nodes during neighborhood aggregation. The lower performance of G-GCN compared to Kipf-GCN on Cora-ML shows that calculating edge-wise gating scores using the hidden representation of nodes is not very helpful in suppressing noisy neighborhood nodes, as the representations lack label information or are over-averaged or unstable. Similar reasoning holds for GAT's poor performance on Cora-ML. In this section, we provide an analysis of the performance of Kipf-GCN, GAT and ConfGCN for node classification on the Cora-ML dataset, which has a higher label mismatch rate. We use neighborhood label entropy to quantify label mismatch, which for a node u is defined as follows.
DISPLAYFORM0 Here, label(v) is the true label of node v. The neighborhood label entropy of a node increases with label mismatch amongst its neighbors. The problem of node classification becomes more difficult with increasing node degree; therefore, we also evaluate the performance of the methods with increasing node degree. The results are summarized in FIG1. We find that the performance of both Kipf-GCN and GAT decreases with increasing node entropy and degree. On the contrary, ConfGCN's performance remains consistent and does not degrade with increasing entropy or degree. This shows that ConfGCN is able to use the label distributions and confidences effectively to subdue irrelevant nodes during aggregation. Recently, BID32 highlighted an unusual behavior of Kipf-GCN where its performance degraded significantly with an increasing number of layers. For comparison, we evaluate the performance of Kipf-GCN and ConfGCN on the Citeseer dataset with an increasing number of convolutional layers. The results are summarized in Figure 3. We observe that Kipf-GCN's performance degrades drastically with an increasing number of layers, whereas ConfGCN's decrease in performance is more gradual. We also note that ConfGCN outperforms Kipf-GCN at all layer levels. In this section, we evaluate different ablated versions of ConfGCN by cumulatively eliminating terms from its objective function as defined in Section 5. The results on the Citeseer dataset are summarized in Figure 4. Overall, we find that ConfGCN performs best when all the terms in its loss function (Equation 5) are included. In this paper we present ConfGCN, a confidence-based Graph Convolutional Network which estimates label scores along with their confidences jointly in a GCN-based setting. In ConfGCN, the influence of one node on another during aggregation is determined using the estimated confidences and label scores, thus inducing anisotropic behavior in GCNs. We demonstrate the effectiveness of ConfGCN against recent methods for the semi-supervised node classification task and analyze its performance in different settings. We make ConfGCN's source code available.
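As a closing reference for Section 7.2, here is a minimal sketch of the neighborhood label entropy whose displayed equation was lost above; the standard entropy-of-the-neighbor-label-histogram form is an assumption on our part.

```python
import numpy as np
from collections import Counter

def neighborhood_label_entropy(u, neighbors, label):
    # assumed form: H(u) = -sum_l p_l * log(p_l), where p_l is the fraction
    # of u's neighbors whose true label, label(v), equals l
    counts = Counter(label[v] for v in neighbors[u])
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())
```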
We propose a confidence based Graph Convolutional Network for Semi-Supervised Learning.
1,179
scitldr
In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize. The human vision system is robust in ways that existing computer vision systems are not BID50 BID1. Unlike current deep learning classifiers BID36 BID21 BID60, the human vision system is not fooled by small changes in query images. Humans are also not confused by many forms of corruption such as snow, blur, pixelation, and novel combinations of these. Humans can even deal with abstract changes in structure and style. Achieving these kinds of robustness is an important goal for computer vision and machine learning. It is also essential for creating deep learning systems that can be deployed in safety-critical applications. Most work on robustness in deep learning methods for vision has focused on the important challenges of robustness to adversarial examples BID54 BID5, unknown unknowns BID25 BID23 BID41, and model or data poisoning BID53. In contrast, we develop and validate datasets for two other forms of robustness. Specifically, we introduce the IMAGETNET-C dataset for input corruption robustness and the IMAGENET-P dataset for input perturbation robustness. To create IMAGENET-C, we introduce a set of 75 common visual corruptions and apply them to the ImageNet object recognition challenge BID7. We hope that this will serve as a general dataset for benchmarking robustness to image corruptions and prevent methodological problems such as moving goal posts and cherry picking. We evaluate the performance of current deep learning systems and show that there is wide room for improvement on IMAGENET-C. We also introduce a total of three methods and architectures that improve corruption robustness without losing accuracy. To create IMAGENET-P, we introduce a set of perturbed or subtly differing ImageNet images. Using metrics we propose, we measure the stability of the network's predictions on these perturbed images. Although these perturbations are not chosen by an adversary, currently existing networks exhibit surprising instability on common perturbations. Then we then demonstrate that approaches which enhance corruption robustness can also improve perturbation robustness. For example, some recent architectures can greatly improve both types of robustness. More, we show that the Adversarial Logit Pairing ∞ adversarial example defense can yield substantial robustness gains on diverse and common perturbations. By defining and benchmarking perturbation and corruption robustness, we facilitate research that can be overcome by future networks which do not rely on spurious correlations or cues inessential to the object's class. Adversarial Examples. 
An adversarial image is a clean image perturbed by a small distortion carefully crafted to confuse a classifier. These deceptive distortions can occasionally fool black-box classifiers BID38. Algorithms have been developed that search for the smallest additive distortions in RGB space that are sufficient to confuse a classifier. Thus adversarial distortions serve as type of worst-case analysis for network robustness. Its popularity has often led "adversarial robustness" to become interchangeable with "robustness" in the literature BID2. In the literature, new defenses BID42 BID47 BID44 BID22 often quickly succumb to new attacks BID12 BID5, with some exceptions for perturbations on small images BID52. For some simple datasets, the existence of any classification error ensures the existence of adversarial perturbations of size O(d −1/2), d the input dimensionality BID18. For some simple models, adversarial robustness requires an increase in the training set size that is polynomial in d. BID17 suggest modifying the problem of adversarial robustness itself for increased real-world applicability. Robustness in Speech. Speech recognition research emphasizes robustness to common corruptions rather than worst-case, adversarial corruptions BID39 BID45. Common acoustic corruptions (e.g., street noise, chatter, wind) receive greater focus than adversarial audio, because common corruptions are ever-present and unsolved. There are several popular datasets containing noisy test audio BID27 BID26. Robustness in noisy environments requires robust architectures, and some research finds convolutional networks more robust than fully connected networks BID0. Additional robustness has been achieved through pre-processing techniques such as standardizing the statistics of the input BID40 BID58 BID20 BID35. Studies. Several studies demonstrate the fragility of convolutional networks on simple corruptions. For example, BID28 apply impulse noise to break Google's Cloud Vision API. Using Gaussian noise and blur, BID9 demonstrate the superior robustness of human vision to convolutional networks, even after networks are fine-tuned on Gaussian noise or blur. BID15 compare networks to humans on noisy and elastically deformed images. They find that fine-tuning on specific corruptions does not generalize and that classification error patterns underlying network and human predictions are not similar. BID56; propose different corrupted datasets for object and traffic sign recognition. Robustness Enhancements. In an effort to reduce classifier fragility, BID59 finetune on blurred images. They find it is not enough to fine-tune on one type of blur to generalize to other blurs. Furthermore, fine-tuning on several blurs can marginally decrease performance. BID61 also find that fine-tuning on noisy images can cause underfitting, so they encourage the noisy image softmax distribution to match the clean image softmax. BID8 address underfitting via a mixture of corruption-specific experts assuming corruptions are known beforehand. We now define corruption and perturbation robustness and distinguish them from adversarial perturbation robustness. To begin, we consider a classifier f: X → Y trained on samples from distribution D, a set of corruption functions C, and a set of perturbation functions E. We let P C (c), P E (ε) approximate the real-world frequency of these corruptions and perturbations. Most classifiers are judged by their accuracy on test queries drawn from D, i.e., P (x,y)∼D (f (x) = y). 
Yet in a vast range of cases the classifier is tasked with classifying low-quality or corrupted inputs. In view of this, we suggest also computing the classifier's corruption robustness E c∼C [P (x,y)∼D (f (c(x) = y))]. This contrasts with a popular notion of adversarial robustness, often formulated min δ p <b P (x,y)∼D (f (x + δ) = y), b a small budget. Thus, corruption robustness measures the classifier's average-case performance on corruptions C, while adversarial robustness measures the worst-case performance on small, additive, classifier-tailored perturbations. Average-case performance on small, general, classifier-agnostic perturbations motivates us to define perturbation robustness, namely E ε∼E [P (x,y)∼D (f (ε(x)) = f (x))]. Consequently, in measuring perturbation robustness, we track the classifier's prediction stability, reliability, or consistency in the face of minor input changes. Now in order to approximate C, E and these robustness measures, we designed a set of corruptions and perturbations which are frequently encountered in natural images. We will refer to these as "common" corruptions and perturbations. These common corruptions and perturbations are available in the form of IMAGENET-C and IMAGENET-P.4 THE IMAGENET-C AND IMAGENET-P ROBUSTNESS BENCHMARKS 4.1 THE DATA OF IMAGENET-C AND IMAGENET-P IMAGENET-C Design. The IMAGENET-C benchmark consists of 15 diverse corruption types applied to validation images of ImageNet. The corruptions are drawn from four main categoriesnoise, blur, weather, and digital-as shown in Figure 1. Research that improves performance on this benchmark should indicate general robustness gains, as the corruptions are diverse and numerous. Each corruption type has five levels of severity since corruptions can manifest themselves at varying intensities. Appendix A gives an example of the five different severity levels for impulse noise. Real-world corruptions also have variation even at a fixed intensity. To simulate these, we introduce variation for each corruption when possible. For example, each fog cloud is unique to each image. These algorithmically generated corruptions are applied to the ImageNet BID7 ) validation images to produce our corruption robustness dataset IMAGENET-C. The dataset can be downloaded or re-created by visiting https://github.com/hendrycks/robustness. IMAGENET-C images are saved as lightly compressed JPEGs; this implies an image corrupted by Gaussian noise is also slightly corrupted by JPEG compression. Our benchmark tests networks with IMAGENET-C images, but networks should not be trained on these images. Networks should be trained on datasets such as ImageNet and not be trained on IMAGENET-C corruptions. To enable further experimentation, we designed an extra corruption type for each corruption category (Appendix B), and we provide CIFAR-10-C, TINY IMAGENET-C, IMAGENET 64 × 64-C, and Inception-sized editions. Overall, the IMAGENET-C dataset consists of 75 corruptions, all applied to ImageNet validation images for testing a pre-existing network. IMAGENET-P Design. The second benchmark that we propose tests the classifier's perturbation robustness. Models lacking in perturbation robustness produce erratic predictions which undermines user trust. When perturbations have a high propensity to change the model's response, then perturbations could also misdirect or destabilize iterative image optimization procedures appearing in style transfer BID14, decision explanations BID13, feature visualization BID46, and so on. 
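As an illustration of how the corruption and perturbation robustness expectations above translate into code, a minimal Monte-Carlo sketch (ours; `f`, the corruption/perturbation callables, and `data` are hypothetical stand-ins):

```python
def corruption_robustness(f, corruptions, data):
    # estimates E_{c~C}[ P_{(x,y)~D}( f(c(x)) = y ) ]: average-case accuracy
    # over a set of corruption functions drawn according to P_C
    accs = [sum(f(c(x)) == y for x, y in data) / len(data) for c in corruptions]
    return sum(accs) / len(accs)

def perturbation_robustness(f, perturbations, data):
    # estimates E_{e~E}[ P_{(x,y)~D}( f(e(x)) = f(x) ) ]: prediction
    # stability under small, classifier-agnostic perturbations
    stab = [sum(f(e(x)) == f(x) for x, _ in data) / len(data) for e in perturbations]
    return sum(stab) / len(stab)
```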
Like IMAGENET-C, IMAGENET-P consists of noise, blur, weather, and digital distortions. Also as before, the dataset has validation perturbations; has difficulty levels; has CIFAR-10, Tiny ImageNet, ImageNet 64 × 64, standard, and Inception-sized editions; and has been designed for benchmarking, not training, networks. IMAGENET-P departs from IMAGENET-C by having perturbation sequences generated from each ImageNet validation image; examples are in FIG0. Each sequence contains more than 30 frames, so we counteract an increase in dataset size and evaluation time by using only 10 common perturbations. Common Perturbations. Appearing more subtly than the corruptions from IMAGENET-C, the Gaussian noise perturbation sequence begins with the clean ImageNet image. The following frames in the sequence consist of the same image but with minute Gaussian noise perturbations applied. This sequence design is similar for the shot noise perturbation sequence. However, the remaining perturbation sequences have temporality, so that each frame of the sequence is a perturbation of the previous frame. Since each perturbation is small, repeated application of a perturbation does not bring the image far out-of-distribution. For example, an IMAGENET-P translation perturbation sequence shows a clean ImageNet image sliding from right to left one pixel at a time; with each perturbation of the pixel locations, the resulting frame is still of high quality. The perturbation sequences with temporality are created with motion blur, zoom blur, snow, brightness, translate, rotate, tilt (viewpoint variation through minor 3D rotations), and scale perturbations. IMAGENET-C Metrics. Common corruptions such as Gaussian noise can be benign or destructive depending on their severity. In order to comprehensively evaluate a classifier's robustness to a given type of corruption, we score the classifier's performance across five corruption severity levels and aggregate these scores. The first evaluation step is to take a trained classifier f, which has not been trained on IMAGENET-C, and compute the clean dataset top-1 error rate. Denote this error rate E_clean^f. The second step is to test the classifier on each corruption type c at each level of severity s (1 ≤ s ≤ 5). This top-1 error is written E_{s,c}^f. Before we aggregate the classifier's performance across severities and corruption types, we will make the error rates more comparable, since different corruptions pose different levels of difficulty. For example, fog corruptions often obscure an object's class more than brightness corruptions. We adjust for the varying difficulties by dividing by AlexNet's errors, but any baseline will do (even a baseline with 100% error rates, corresponding to an average of CEs). This standardized aggregate performance measure is the Corruption Error, computed with the formula DISPLAYFORM0. This results in the mean CE or mCE for short. We now introduce a more nuanced corruption robustness measure. Consider a classifier that withstands most corruptions, so that the gap between the mCE and the clean data error is minuscule. Contrast this with a classifier with a low clean error rate which has its error rate spike in the presence of corruptions; this corresponds to a large gap between the mCE and the clean data error. It is possible that the former classifier has a larger mCE than the latter, despite the former degrading more gracefully in the presence of corruptions. The amount that the classifier declines on corrupted inputs is given by the formula Relative CE DISPLAYFORM0.
Averaging these 15 Relative Corruption Errors results in the Relative mCE. This measures the relative robustness or the performance degradation when encountering corruptions. IMAGENET-P Metrics. A straightforward approach to estimating E_{ε∼E}[P_{(x,y)∼D}(f(ε(x)) = f(x))] falls into place when using IMAGENET-P perturbation sequences. Let us denote m perturbation sequences with S = DISPLAYFORM1, where each sequence is made with perturbation p. The "Flip Probability" of network f: X → {1, 2, . . ., 1000} on perturbation sequences S is DISPLAYFORM2 For noise perturbation sequences, which are not temporally related, DISPLAYFORM3. We can recast the FP formula for noise sequences as FP DISPLAYFORM4. As was done with the Corruption Error formula, we now standardize the Flip Probability by the sequence's difficulty for increased commensurability. We have, then, the "Flip Rate" FR DISPLAYFORM5. Averaging the Flip Rate across all perturbations yields the mean Flip Rate or mFR. We do not define a "relative mFR" since we did not find any natural formulation, nor do we directly use predicted class probabilities, due to differences in model calibration BID19. When the top-5 predictions are relevant, perturbations should not cause the list of top-5 predictions to shuffle chaotically, nor should classes sporadically vanish from the list. We penalize top-5 inconsistency of this kind with a different measure. Let the ranked predictions of network f on x be the permutation τ(x) ∈ S_1000. Concretely, if "Toucan" has the label 97 in the output space and "Pelican" has the label 145, and if f on x predicts "Toucan" and "Pelican" to be the most and second-most likely classes, respectively, then τ(x)(97) = 1 and τ(x)(145) = 2. These permutations contain the top-5 predictions, so we use permutations to compare top-5 lists. To do this, we define DISPLAYFORM6. If the top-5 predictions represented within τ(x) and τ(x′) are identical, then d(τ(x), τ(x′)) = 0. More examples of d on several permutations are in Appendix C. Comparing the top-5 predictions across entire perturbation sequences results in the unstandardized Top-5 Distance uT5D DISPLAYFORM7. For noise perturbation sequences, we have uT5D DISPLAYFORM8. Once the uT5D is standardized, we have the Top-5 Distance T5D. We now propose the following protocol. The image recognition network should be trained on the ImageNet training set and on whatever other training sets the investigator wishes to include. Researchers should clearly state whether they trained on these corruptions or perturbations; however, this training strategy is discouraged (see Section 2). We allow training with other distortions (e.g., uniform noise) and standard data augmentation (i.e., cropping, mirroring), even though cropping overlaps with translations. Then the resulting trained model should be evaluated on IMAGENET-C or IMAGENET-P using the above metrics. Optionally, researchers can test with the separate set of validation corruptions and perturbations we provide for IMAGENET-C and IMAGENET-P.
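Since the CE, Relative CE, and d formulas above survive only as placeholders, here is a sketch of the metrics in code (ours): the CE forms follow the verbal description (sum over severities, divided by AlexNet's errors), the Flip Probability counts adjacent-frame disagreements, and the top-5 distance is a reconstruction chosen to reproduce the worked examples listed in Appendix C.

```python
import numpy as np

def corruption_error(err_f, err_alex):
    # CE_c^f = (sum_{s=1..5} E_{s,c}^f) / (sum_{s=1..5} E_{s,c}^AlexNet);
    # the mCE is the mean of CE_c^f over the 15 corruption types
    return sum(err_f) / sum(err_alex)

def relative_ce(err_f, clean_f, err_alex, clean_alex):
    # Relative CE subtracts the clean error before normalizing, measuring
    # how much the classifier declines under corruption
    return sum(e - clean_f for e in err_f) / sum(e - clean_alex for e in err_alex)

def flip_probability(preds):
    # preds: [m, n] top-1 predictions over m temporal sequences of n frames;
    # FP = fraction of adjacent frame pairs whose predictions disagree
    preds = np.asarray(preds)
    return float((preds[:, 1:] != preds[:, :-1]).mean())

def top5_distance(sigma):
    # d(sigma): for each of the top-5 ranks i, count the rank positions
    # (within the top 5) crossed on the way from i to sigma(i); sigma is a
    # permutation of 1..1000 given as a Python list indexed by position
    total = 0
    for i in range(1, 6):
        lo, hi = sorted((i, sigma[i - 1]))
        total += sum(1 for j in range(lo, hi) if j <= 5)
    return total

# the reconstruction matches the Appendix C examples:
assert top5_distance([2, 3, 4, 5, 6] + list(range(7, 1001)) + [1]) == 5
assert top5_distance([1, 2, 3, 5, 6, 4] + list(range(7, 1001))) == 2
assert top5_distance([5, 4, 3, 2, 1] + list(range(6, 1001))) == 12
```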
Consequently, it would seem that architectures have slowly and consistently improved their representations over time. However, it appears that corruption robustness improvements are mostly explained by accuracy improvements. Recall that the Relative mCE tracks a classifier's accuracy decline in the presence of corruptions. Figure 3 shows that the Relative mCEs of many subsequent models are worse than that of AlexNet BID36. Full are in Appendix D. In consequence, from AlexNet to ResNet, corruption robustness in itself has barely changed. Thus our "superhuman" classifiers are decidedly subhuman. On perturbed inputs, current classifiers are unexpectedly bad. For example, a ResNet-18 on Scale perturbation sequences have a 15.6% probability of flipping its top-1 prediction between adjacent frames (i.e., FP ResNet-18 Scale = 15.6%); the uT5DResNet-18 Scale is 3.6. More are in Appendix E. Clearly perturbations need not be adversarial to fool current classifiers. What is also surprising is that while VGGNets are worse than ResNets at generalizing to corrupted examples, on perturbed examples they can be just as robust or even more robust. Likewise, Batch Normalization made VGG-19 less robust to perturbations but more robust to corruptions. Yet this is not to suggest that there is a fundamental trade-off between corruption and perturbation robustness. In fact, both corruption and perturbation robustness can improve together, as we shall see later. Be aware that Appendix F contains many informative failures in robustness enhancement. Those experiments underscore the necessity in testing on a a diverse test set, the difficulty in cleansing corruptions from image, and the futility in expecting robustness gains from some "simpler" models. Histogram Equalization. Histogram equalization successfully standardizes speech data for robust speech recognition BID58 BID20. For images, we find that preprocessing with Contrast Limited Adaptive Histogram Equalization BID48 ) is quite effective. Unlike our image denoising attempt (Appendix F), CLAHE reduces the effect of some corruptions while not worsening performance on most others, thereby improving the mCE. We demonstrate CLAHE's net improvement by taking a pre-trained ResNet-50 and fine-tuning the whole model for five epochs on images processed with CLAHE. The ResNet-50 has a 23.87% error rate, but ResNet-50 with CLAHE has an error rate of 23.55%. On nearly all corruptions, CLAHE slightly decreases the Corruption Error. The ResNet-50 without CLAHE preprocessing has an mCE of 76.7%, while with CLAHE the ResNet-50's mCE decreases to 74.5%.Multiscale Networks. Multiscale architectures achieve greater corruption robustness by propagating features across scales at each layer rather than slowly gaining a global representation of the input as in typical convolutional neural networks. Some multiscale architectures are called Multigrid Networks BID34. Multigrid networks each have a pyramid of grids in each layer which enables the subsequent layer to operate across scales. Along similar lines, Multi-Scale Dense Networks (MSDNets) BID31 use information across scales. MSDNets bind network layers with DenseNet-like BID30 ) skip connections. These two different multiscale networks both enhance corruption robustness, but they do not provide any noticeable benefit in perturbation robustness. Now before comparing mCE values, we first note the Multigrid network has a 24.6% top-1 error rate, as does the MSDNet, while the ResNet-50 has a 23.9% top-1 error rate. 
On noisy inputs, Multigrid networks noticeably surpass ResNets and MSDNets, as shown in Figure 5. Since multiscale architectures have high-level representations processed in tandem with fine details, the architectures appear better equipped to suppress otherwise distracting pixel noise. When all corruptions are evaluated, ResNet-50 has an mCE of 76.7%, the MSDNet has an mCE of 73.6%, and the Multigrid network has an mCE of 73.3%.Feature Aggregating and Larger Networks. Some recent models enhance the ResNet architecture by increasing what is called feature aggregation. Of these, DenseNets and ResNeXts BID60 are most prominent. Each purports to have stronger representations than ResNets, and the evidence is largely a hard-won ImageNet error-rate downtick. Interestingly, the IMAGENET-C mCE clearly indicates that DenseNets and ResNeXts have superior representations. Accordingly, a switch from a ResNet-50 (23.9% top-1 error) to a DenseNet-121 (25.6% error) decreases the mCE from 76.7% to 73.4% (and the relative mCE from 105.0% to 92.8%). More starkly, switching from a ResNet-50 to a ResNeXt-50 (22.9% top-1) drops the mCE from 76.7% to 68.2% (relative mCE decreases from 105.0% to 88.6%). Corruption robustness are summarized in Figure 5. This shows that corruption robustness may be a better way to measure future progress in representation learning than the clean dataset top-1 error rate. Some of the greatest and simplest robustness gains sometimes emerge from making recent models more monolithic. Apparently more representations, more redundancy, and more capacity allow these massive models to operate more stably on corrupted inputs. We saw earlier that making models smaller does the opposite. Swapping a DenseNet-121 (25.6% top-1) with the larger DenseNet-161 (22.9% top-1) decreases the mCE from 73.4% to 66.4% (and the relative mCE from 92.8% to 84.6%). In a similar fashion, a ResNeXt-50 (22.9% top-1) is less robust than the a giant ResNeXt-101 (21.0% top-1). The mCEs are 68.2% and 62.2% respectively (and the relative mCEs are 88.6% and 80.1% respectively). Both model size and feature aggregation are summarized in Figure 6. Consequently, future models with even more depth, width, and feature aggregation may attain further corruption robustness. Feature aggregation and their larger counterparts similarly improve perturbation robustness. While a ResNet-50 has a 58.0% mFR and a 78.3% mT5D, a DenseNet-121 obtains a 56.4% mFR and 76.8% mT5D, and a ResNeXt-50 does even better with a 52.4% mFR and a 74.2% mT5D. Reflecting the corruption robustness findings further, the larger DenseNet-161 has a 46.9% mFR and 69.5% mT5D, while the ResNeXt-101 has a 43.2% mFR and 65.9% mT5D. Thus in two senses feature aggregating networks and their larger versions markedly enhance robustness. Stylized ImageNet. BID16 propose a novel data augmentation scheme where ImageNet images are stylized with style transfer. The intent is that classifiers trained on stylized images will rely less on textural cues for classification. When a ResNet-50 is trained on typical ImageNet images and stylized ImageNet images, the ing model has an mCE of 69.3%, down from 76.7%.Adversarial Logit Pairing. ALP is an adversarial example defense for large-scale image classifiers BID33. Like nearly all other adversarial defenses, ALP was bypassed and has unclear value as an adversarial defense going forward BID11 ), yet this is not a decisive reason dismiss it. 
ALP provides significant perturbation robustness even though it does not provide much adversarial perturbation robustness against all adversaries. Although ALP was designed to increase robustness to small gradient perturbations, it markedly improves robustness to all sorts of noise, blur, weather, and digital IMAGENET-P perturbations; methods that generalize this well are a rarity. In point of fact, a publicly available Tiny ImageNet ResNet-50 model fine-tuned with ALP has a 41% and 40% relative decrease in the mFP and mT5D on TINY IMAGENET-P, respectively. ALP's success in enhancing common perturbation robustness and its modest utility for adversarial perturbation robustness highlight that the interplay between these problems should be better understood. In this paper, we introduced what are to our knowledge the first comprehensive benchmarks for corruption and perturbation robustness. This was made possible by introducing two new datasets, IMAGENET-C and IMAGENET-P. The first of these showed that many years of architectural advancements corresponded to minuscule changes in relative corruption robustness. Therefore benchmarking and improving robustness deserve attention, especially as top-1 clean ImageNet accuracy nears its ceiling. We also saw that classifiers exhibit unexpected instability on simple perturbations. Thereafter we found that methods such as histogram equalization, multiscale architectures, and larger feature-aggregating models improve corruption robustness. These larger models also improve perturbation robustness. However, we found that even greater perturbation robustness can come from an adversarial defense designed for adversarial ℓ∞ perturbations, indicating a surprising interaction between adversarial and common perturbation robustness. In this work, we found several methods to increase robustness, introduced novel experiments and metrics, and created new datasets for the rigorous study of model robustness, a pressing necessity as models are unleashed into safety-critical real-world settings.
(Figure 7: Impulse noise modestly to markedly corrupts a frog, showing our benchmark's varying severities; panels show Clean and Severity 1 through 5.)
In Figure 7, we show the impulse noise corruption type at five different severities. Clearly, IMAGENET-C corruptions can range from negligible to pulverizing. Because of this range, the benchmark comprehensively assesses each corruption type.
(FIG3 panels: Speckle Noise, Gaussian Blur, Spatter, Saturate.)
Directly fitting the types of IMAGENET-C corruptions should be avoided, as it would cause researchers to overestimate a model's robustness. Therefore, it is incumbent on us to simplify model validation. This is why we provide an additional form of corruption for each of the four general types. These are available for download at https://github.com/hendrycks/robustness. There is one corruption type for each noise, blur, weather, and digital category in the validation set. The first corruption type is speckle noise, an additive noise where the noise added to a pixel tends to be larger if the original pixel intensity is larger. Gaussian blur is a low-pass filter where a blurred pixel is a result of a weighted average of its neighbors, and farther pixels have decreasing weight in this average. Spatter can occlude a lens in the form of rain or mud. Finally, saturate is common in edited images where images are made more or less colorful. See FIG3 for instances of each corruption type.
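For the two simplest validation corruptions just described, a small sketch (ours; parameter values are illustrative, not the benchmark's) for float images in [0, 1]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def speckle_noise(x, scale=0.1):
    # additive noise whose magnitude grows with pixel intensity: x + x * n
    n = np.random.normal(scale=scale, size=x.shape)
    return np.clip(x + x * n, 0.0, 1.0)

def gaussian_blur(x, sigma=1.0):
    # low-pass filter: each output pixel is a weighted average of its
    # neighbors, with farther pixels weighted less (H x W x C input assumed)
    return gaussian_filter(x, sigma=(sigma, sigma, 0.0))
```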
For some readers, the following function may be opaque: DISPLAYFORM0 where σ = (τ(x))^{-1} τ(x′) and the empty sum is understood to be zero. A high-level view of d is that it computes the deviation between the top-5 predictions of two prediction lists. For simplicity, we find the deviation between the identity and σ rather than between τ(x) and τ(x′). In consequence, we can consider d as a function of a single permutation σ; if σ leaves the top-5 ranks unchanged, then d(σ) = 0. Also, d((2, 3, 4, 5, 6, . . ., 1)) = 5. Distinctly, d((1, 2, 3, 5, 6, 4, 7, 8, . . .)) = 2. As a final example, d((5, 4, 3, 2, 1, 6, 7, 8, 9, . . .)) = 12. It may be that we want perturbation robustness for all predictions, including classes with lesser relevance. In such cases, it is still common that the displacement of the top prediction matters more than the displacement of, say, the 500th-ranked class. For this there are many possibilities, such as the measure d(σ) = Σ_{i=1}^{1000} w_i |w_i − w_{σ(i)}| such that w_i = 1/i. This uses a Zipfian assumption about the rankings of the classes: the first class is n times as relevant as the nth class. Other possibilities involve using logarithms rather than hyperbolic functions, as in the discounted cumulative gain BID37. One could also use the class probabilities provided by the model (should they exist). However, such a measure could make it difficult to compare models, since some models tend to be more uncalibrated than others BID19. As progress is made on this task, researchers may be interested in perturbations which are more likely to cause unstable predictions. To accomplish that, researchers can simply compare a frame with the frame two frames ahead rather than just one frame ahead. We provide concrete code for this slight change in the metric at https://github.com/hendrycks/robustness. For nontemporal perturbation sequences, i.e., noise sequences, we provide sequences where the noise perturbation is larger. IMAGENET-C corruption relative robustness results are in BID32. IMAGENET-P mFR values are in TAB7; mT5D values are tabulated there as well. Stability Training. Stability training is a technique to improve the robustness of deep networks BID61. The method's creators found that training on images corrupted with noise can lead to underfitting, so they instead propose minimizing the cross-entropy from the noisy image's softmax distribution to the softmax of the clean image. The authors evaluated performance on images with subtle differences and suggested that the method provides additional robustness to JPEG corruptions. We fine-tune a ResNet-50 with stability training for five epochs. For training with noisy images, we corrupt images with uniform noise, where the maximum and minimum of the uniform noise are tuned over {0.01, 0.05, 0.1}, and the stability weight is tuned over {0.01, 0.05, 0.1}. Across all noise strengths and stability weight combinations, the models with stability training tested on IMAGENET-C have larger mCEs than the baseline ResNet-50's mCE. Even on unseen noise corruptions, stability training does not increase robustness. However, the perturbation robustness slightly improves. The best model according to the IMAGENET-P validation set has an mFR of 57%, while the original ResNet's mFR is 58%. An upshot of this failure is that benchmarking robustness-enhancing techniques requires a diverse test set. Image Denoising. An approach orthogonal to modifying model representations is to improve the inputs using image restoration techniques. Although general image restoration techniques are not yet mature, denoising techniques are. We thus attempt to restore an image with the denoising technique called non-local means BID3. The amount of denoising applied is determined by the noise estimation technique of BID10. Therefore clean images should receive nearly no modifications from the restoration method, while noisy images should undergo considerable restoration. We found that denoising increased the mCE from 76.7% to 82.1%. A plausible account is that the non-local means algorithm stripped the images of their subtle details even when images lacked noise, despite having the non-local means algorithm governed by the noise estimate. Therefore, the gains in noise robustness were wiped away by subtle blurs to images with other types of corruptions, showing that targeted image restoration can prove harmful for robustness. 10-Crop Classification. Viewing an object at several different locations may lead to a more stable prediction. With this intuition in mind, we perform 10-crop classification. 10-crop classification is executed by cropping all four corners and the center of an image. These crops and their horizontal mirrors are processed through a network to produce 10 predicted class probability distributions. We average these distributions to compute the final prediction. Of course, a prediction informed by 10 crops rather than a single central crop is more accurate. Ideally, this revised prediction should be more robust too. However, the gains in mCE do not outpace the gains in accuracy on a ResNet-50. In all, 10-crop classification is a computationally expensive option which contributes to classification accuracy but not noticeably to robustness. Smaller Models. All else equal, "simpler" models often generalize better, and "simplicity" frequently translates to model size. Accordingly, smaller models may be more robust. We test this hypothesis with CondenseNets BID29. A CondenseNet attains its small size via sparse convolutions and pruned filter weights. An off-the-shelf CondenseNet (C = G = 4) obtains a 26.3% error rate and an 80.8% mCE. On the whole, this CondenseNet is slightly less robust than larger models of similar accuracy. Even more pruning and sparsification yields a CondenseNet (C = G = 8) with both deteriorated performance (28.9% error rate) and robustness (84.6% mCE). Here again robustness is worse than that of larger models. Though models fashioned for mobile devices are smaller and in some sense simpler, this does not improve robustness. Another goal for machine learning is to learn the fundamental structure of categories. Broad categories, such as "bird," have many subtypes, such as "cardinal" or "bluejay." Humans can observe previously unseen bird species yet still know that they are birds. A test of learned fundamental structure beyond superficial features is subtype robustness. In subtype robustness we test generalization to unseen subtypes which share essential characteristics of a broader type. We repurpose the ImageNet-22K dataset for a closer investigation into subtype robustness. Subtype Robustness. A natural image dataset with a hierarchical taxonomy and numerous types and subtypes is ImageNet-22K, an ImageNet-1K superset. In this subtype robustness experiment, we manually select 25 broad types from ImageNet-22K, listed in the next paragraph. Each broad type has many subtypes. We call a subtype "seen" if and only if it is in ImageNet-1K and a subtype of one of the 25 broad types.
Smaller Models. All else equal, "simpler" models often generalize better, and "simplicity" frequently translates to model size. Accordingly, smaller models may be more robust. We test this hypothesis with CondenseNets BID29. A CondenseNet attains its small size via sparse convolutions and pruned filter weights. An off-the-shelf CondenseNet (C = G = 4) obtains a 26.3% error rate and an 80.8% mCE. On the whole, this CondenseNet is slightly less robust than larger models of similar accuracy. Even more pruning and sparsification yields a CondenseNet (C = G = 8) with both deteriorated performance (28.9% error rate) and robustness (84.6% mCE). Here again robustness is worse than that of larger models. Though models fashioned for mobile devices are smaller and in some sense simpler, this does not improve robustness. Another goal for machine learning is to learn the fundamental structure of categories. Broad categories, such as "bird," have many subtypes, such as "cardinal" or "bluejay." Humans can observe previously unseen bird species yet still know that they are birds. A test of learned fundamental structure beyond superficial features is subtype robustness. In subtype robustness we test generalization to unseen subtypes which share essential characteristics of a broader type. We repurpose the ImageNet-22K dataset for a closer investigation into subtype robustness. Subtype Robustness. A natural image dataset with a hierarchical taxonomy and numerous types and subtypes is ImageNet-22K, an ImageNet-1K superset. In this subtype robustness experiment, we manually select 25 broad types from ImageNet-22K, listed in the next paragraph. Each broad type has many subtypes. We call a subtype "seen" if and only if it is in ImageNet-1K and a subtype of one of the 25 broad types. The subtype is "unseen" if and only if it is a subtype of the 25 broad types and is from ImageNet-22K but not ImageNet-1K. In this experiment, the correct classification decision for an image of a subtype is the broad type label. We take pre-trained ImageNet-1K classifiers which have not been trained on unseen subtypes. Next we fine-tune the last layer of these pre-trained ImageNet-1K classifiers on seen subtypes so that they predict one of the 25 broad types. Then, we test the accuracy on images of seen subtypes and on images of unseen subtypes. Accuracy on unseen subtypes is our measure of subtype robustness. Seen and unseen accuracies are shown in FIG5, with the ImageNet-1K classification accuracy before fine-tuning on the horizontal axis. Despite only having 25 classes and having trained on millions of images, these classifiers demonstrate a subtype robustness performance gap that should be far less pronounced. We also observe that the architectures proposed so far hardly deviate from the trendline.
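A minimal sketch of the subtype-robustness protocol follows; the subtype-to-broad-type mapping shown is hypothetical (illustrative entries only), and the training loop is schematic rather than the authors' exact setup.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_BROAD_TYPES = 25

# Hypothetical mapping from ImageNet-1K class index to broad-type index,
# built from the manually selected hierarchy (entries illustrative only).
seen_subtype_to_broad = {10: 0, 11: 0, 94: 1}

model = resnet50(pretrained=True)
for p in model.parameters():
    p.requires_grad = False                       # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_BROAD_TYPES)  # fine-tuned layer

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, broad_labels):
    """One last-layer update on seen-subtype images labeled by broad type."""
    optimizer.zero_grad()
    loss = criterion(model(images), broad_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def broad_type_accuracy(loader):
    """Seen accuracy when `loader` holds seen subtypes; subtype robustness
    when it holds unseen ImageNet-22K subtypes of the 25 broad types."""
    model.eval()
    correct = total = 0
    for images, broad_labels in loader:
        correct += (model(images).argmax(1) == broad_labels).sum().item()
        total += broad_labels.numel()
    return correct / total
```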
We propose ImageNet-C to measure classifier corruption robustness and ImageNet-P to measure perturbation robustness
1,180
scitldr
The paper explores a novel methodology in source code obfuscation through the application of text-based recurrent neural network (RNN) encoder-decoder models for ciphertext generation and key generation. Sequence-to-sequence models are incorporated into the model architecture to generate obfuscated code, generate the deobfuscation key, and support live execution. Quantitative benchmark comparisons to existing obfuscation methods indicate significant improvements in stealth and execution cost for the proposed solution, and experiments regarding the model's properties yield positive results regarding its character variation, dissimilarity to the original codebase, and the consistent length of the obfuscated code. The field of code obfuscation has aimed to tackle reverse-engineering of code bases for years. The entire basis of this methodology is that if a program is constructed with logic not easily recognizable by a reader, the logic would be preserved intact and the software would be intrinsically protected. Traditional tactics include creative uses of whitespace, redundant logical operations, and unnecessary conditional operations, among others. The common issue with obfuscation is that it can be reverse-engineered; the only limiting factor for a malicious actor is the amount of time needed to discover the logic. DeepObfusCode is a proposed methodology that uses neural networks to convert plaintext source code into ciphertext, using the propagating architecture of neural networks to compound the randomness factor in the creation of the ciphertext. Yet at the same time, neural networks have the ability to learn statistical patterns and generate weights to convert one text to another, in our case from the ciphertext to the plaintext. This would eventually permit users to simply load the ciphertext and the key to self-execute the program without foreign users viewing the inner workings of the code. From an academic standpoint, this methodology redirects obfuscation research towards complete obfuscation, in contrast to incremental obfuscation, and suggests the usage and development of asymmetric key infrastructure in obfuscation. Beyond sequence-to-sequence network models, further obfuscation models could be built with a greater degree of resilience, and other deep learning methods could be harnessed to develop alternative techniques to obfuscate code. The methodology can be adopted for more efficient, more effective obfuscation by developers protecting their proprietary codebase or by cloud computing services wishing to guarantee confidentiality to customers. The algorithmic architecture could be further incorporated into larger frameworks or infrastructures to render homomorphic encryption and ensure complete anonymity of the codebase during execution from the execution provider. The objective of code obfuscation is to mask the computational processes and logic behind software code, to protect trade secrets, intellectual property, or confidential data. As such, there have been eight generalized methods of code obfuscation, namely: (i) name obfuscation; (ii) data obfuscation; (iii) code flow obfuscation; (iv) incremental obfuscation; (v) intermediate code optimization; (vi) debug information obfuscation; (vii) watermarking; (viii) source code obfuscation. Source code obfuscation, the focus of this paper, is the process of hiding the meaning behind the source code, so that if a third party obtains the code, the code renders itself un-understandable.
Under each branch of obfuscation method, there are sub-techniques to reduce the understandability of code that are shared by the other obfuscation methods, including control ordering (changing the execution order of program statements), control computation (changing the control flow of the program, such as inserting dead code to increase code complexity), data aggregation (changing the access to data structures after their conversion into other types), and renamed identifiers (replacing identifiers with meaningless strings as new identifiers to reduce readability and comprehension of what certain functions or methods do). While existing methods tend to require manual altering of the source code with the aforementioned techniques, the proposed method performs complete obfuscation with relatively randomly generated characters as the code output. Malicious attackers or readers of the code will not be able to reverse-engineer the code based on the readability of the obfuscated code, and a well-lodged key file of the model weights (e.g. kept on the execution server) would prohibit de-obfuscation. Existing methods of evaluating obfuscation techniques have tended towards qualitative surveys of the difficulty and time taken to de-obfuscate by students, software engineers, or computer scientists. To quantitatively compare the performance of the proposed source code obfuscation method against existing code obfuscation methods, we will modify a framework that has been used to compare obfuscation methods before, but which is also malleable enough for us to adapt to our specific comparison use case. The four original comparison metrics are: (i) Code potency: this metric arbitrarily computes the degree of obfuscation with traditional complexity measures, with specific focus on control flow and data obfuscation; an example implementation would be frequency counts of misleading statements. (ii) Resilience: this metric measures the ability of the obfuscated text to withstand attacks from automated tools. (iii) Stealth: this metric tests the difficulty of manual de-obfuscation by humans; it inherently checks how quickly adversarial de-obfuscators can detect misdirecting statements and correctly interpret supposedly-confusing identifiers, and tests their ability to uncover the logic and process behind the code with minimal or no prior knowledge of the program. (iv) Execution cost: this metric measures (a) the incremental time required to perform the obfuscation, and (b) the incremental time required to execute the obfuscated code; this is contrasted with the time taken for the original code without obfuscation. Since the proposed obfuscation method generates a ciphertext completely different from the original source text and cannot be reverse-engineered by manual human de-obfuscation, the former two metrics (code potency and resilience) are not used as comparison metrics for this method. The main metrics for evaluating DeepObfusCode are stealth and execution time. Neural networks have had recent applications in the field of cryptography and steganography. There has been a recent implementation of style transfer in steganography, where secret images are encrypted within a cover image, and the secret can be obtained by passing the encrypted image through a secondary network. There is another implementation of n-dimensional convolutional neural networks used to encrypt input data.
Another implementation involves the use of a multilayer perceptron (MLP) model to encrypt satellite images, with decryption via a secondary MLP model. Implementations of neural networks that can execute on encrypted data have also been a recent development. This indicates a growing interest in encrypting data through the use of neural networks. Unlike prior work on data encryption through deep learning techniques, the proposed architecture opens an alternative application, which is to encrypt source code itself through the use of deep learning, then use the generated model file to decrypt and execute the code. RNN encoder-decoders, or sequence-to-sequence models, trained to predict an output text from provided input text, have had growing applications in statistical machine translation, grammatical error correction, and text summarization. The notion of taking input sentences and converting them to labeled output sentences with a trained model of weights for character-by-character prediction is the basis for the proposed obfuscation method. The proposed obfuscation utilizes the advantage of text-based encoder-decoders in character generation and in calculating weights to convert one string into another. The architecture behind the obfuscation is, first, a primary recurrent neural network (RNN) encoder-decoder model with randomly-set weights taking the original code text as input to generate the obfuscated text. Then a secondary RNN encoder-decoder model is passed two arguments, the generated ciphertext and the original code as the corresponding label, and is trained for a number of iterations to calculate weights that regenerate the original text; the weights of this network are the generated key. This section discusses the architecture and implementation in further detail. Figure 1: Overview of ciphertext generation. We pass the source code and a character set as inputs to initialize both character sets for the encoder-decoder model, then randomly assign weights to generate the ciphertext, an obfuscated version of the source code. To generate the obfuscated code, we first take the original legible text (source code) and a full character set (a string containing all characters, including letters, numbers and punctuation). We perform text processing on both text strings to prepare the inputs for the encoder and decoder models. First we create unique character sets for both strings, then create two dictionaries for both strings mapping index to character, and two other dictionaries mapping character to index. Finally we vectorize both strings. We use the source code character set as input for the encoder model and both text character sets for the decoder model. We randomly generate weights a given number of times n (the randomness index; in our experiment we set n = 10), and set the model weights to the generated weights array. The variation in the weights array determines the degree of randomness in the ciphertext. The generated weights alter the character value generated for each segment of the string, as captured by the Ciphertext Generation function C(p), where c refers to the ciphertext (obfuscated code), p refers to the plaintext (source code), N refers to each weight position, f_n and w_rand are the n-th feature and randomized weight respectively, and Z(p) is a normalization constant that is independent of the weights. The output of the model is an output sequence, which we decode by referencing the index-to-character dictionary to convert the output sequence into an output character string, iteratively appending characters. The resulting output is the ciphertext.
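As a simplified, self-contained stand-in for the randomly weighted encoder-decoder (a per-character random RNN rather than the full Keras seq2seq model, so the ciphertext length here matches the plaintext length), the following sketch illustrates the character-dictionary preprocessing and random-weight generation steps:

```python
import string
import numpy as np

def build_index(chars):
    """Index/character dictionaries, as described in the text."""
    chars = sorted(set(chars))
    c2i = {c: i for i, c in enumerate(chars)}
    i2c = {i: c for i, c in enumerate(chars)}
    return c2i, i2c

def generate_ciphertext(source_code, hidden=64, seed=None):
    """Per-character random-weight recurrent encoder with a readout over
    the full character set; the random weights are the sole source of
    randomness in the resulting ciphertext."""
    rng = np.random.default_rng(seed)
    charset = string.printable            # stand-in for the full character set
    in_c2i, _ = build_index(source_code)
    _, out_i2c = build_index(charset)

    # Randomly generated weight arrays (the "randomness index" n in the
    # text would re-draw these n times; a single draw is shown here).
    W_in = rng.normal(size=(hidden, len(in_c2i)))
    W_h = rng.normal(size=(hidden, hidden))
    W_out = rng.normal(size=(len(out_i2c), hidden))

    h, out = np.zeros(hidden), []
    for ch in source_code:
        x = np.zeros(len(in_c2i))
        x[in_c2i[ch]] = 1.0               # one-hot vectorization
        h = np.tanh(W_in @ x + W_h @ h)
        out.append(out_i2c[int(np.argmax(W_out @ h))])
    return "".join(out)

print(generate_ciphertext("print('hello')", seed=0))
```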
After generating the ciphertext, we use it along with the original source code plaintext to generate a key. We apply the same text processing steps as before, set the ciphertext as input to the encoder and both texts as inputs to the decoder, and train the model for a certain number of iterations. For our experiments, we tended to use 2000 iterations, though early stopping mechanisms could be used to lower training time for shorter source code; the iteration count should be the number of iterations needed to ensure the output text is executable and identical to the source code text. After the training and model weight setting process is complete, we export the encoder and decoder model files in HDF5 format (the key), and export the metadata (the index-to-char and char-to-index dictionaries) in pickle format. The Key Generation function K(p, c) accepts the arguments c (ciphertext) and p (plaintext), and sets the weights while minimizing the loss function

K(p, c) = Σ_n w_n f_n(p|c) + log Z(c),

where f_n and w_n are the n-th feature and weight (after training) respectively, and Z(c) is a normalization constant that is independent of the weights. After the ciphertext is decoded from the output sequence, we test whether the code is executable; if it fails, we retrain the model (a pre-determined loss threshold that ensures correct execution would facilitate early stopping in iterative training); if it passes, the ciphertext and key pair are retained. During live execution, we have three inputs: the obfuscated code (ciphertext), the key (model files), and the metadata files; the function K(c, k) is modified to accept c (ciphertext) and k (model file, along with the metadata file). For our own experiments, the model and metadata files were separate; for execution in live systems, it would be possible to combine them into a single file if preferred. Figure 3: Overview of live execution. To run the obfuscated code on any server or system, one passes the obfuscated code into an execution engine that takes the ciphertext and the lodged model files as inputs to execute the withheld code. When we pass all three through the model container, the output value is executed as soon as it is returned, i.e. Exec(K(c, k)).
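A sketch of the live-execution engine follows; the file names are placeholders, and decode_sequence is a hypothetical helper standing in for the standard Keras seq2seq inference loop.

```python
import pickle
from tensorflow import keras

def decode_sequence(ciphertext, encoder, decoder, meta):
    """Hypothetical helper: the standard Keras seq2seq inference loop.
    Encode the ciphertext, then repeatedly feed the decoder its previous
    output character (mapped through `meta`'s dictionaries) until a stop
    token, accumulating the recovered plaintext. Omitted for brevity."""
    raise NotImplementedError

def execute_obfuscated(ciphertext, encoder_path, decoder_path, meta_path):
    """Sketch of live execution, Exec(K(c, k)): load the lodged key (HDF5
    model files) and metadata (pickled dictionaries), decode the
    ciphertext back to source, then execute it."""
    encoder = keras.models.load_model(encoder_path)   # key, part 1
    decoder = keras.models.load_model(decoder_path)   # key, part 2
    with open(meta_path, "rb") as f:
        meta = pickle.load(f)   # index-to-char / char-to-index dictionaries
    source = decode_sequence(ciphertext, encoder, decoder, meta)
    exec(source)   # run the recovered plaintext program
```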
To evaluate the obfuscation method against existing obfuscation techniques, the method is tested on two aspects: stealth and execution cost. Other parameters such as code potency or resilience are not applicable to this method, as those comparison metrics require some form of the original code to be preserved; since this obfuscation method completely regenerates the code base to be indistinguishable from the original code, such testing (e.g. searching for misleading control flow) will not work. The objective of testing the stealthiness of obfuscated code is to measure the complexity of, or distance in randomness from, the original code. Hence the logic behind our comparison is to first obtain recognized well-obfuscated code and its deobfuscated copy (the original code), obfuscate the original code using our proposed method, and then compare the distance in similarity between the original code and each of the obfuscated codes. Obfuscated code samples were obtained through the International Obfuscated C Code Contest (IOCCC), and de-obfuscated code samples of the selected obfuscated code were found on forums. The obfuscated samples were winners in the IOCCC contest, signifying high quality, and were manually created by programmers. A total of five obfuscated-deobfuscated paired samples were used for the experiment, which can be found at https://github.com/dattasiddhartha-1/dsource. Obfuscated code of reputable, unbiased quality and its deobfuscated counterpart is generally difficult to obtain in large samples; the aforementioned samples are used for benchmarking and comparison, but not for testing the inherent properties of the proposed architecture. To test the obfuscation method extensively, for each de-obfuscated sample we performed the obfuscation to obtain the ciphertext for 100 iterations (returning 500 samples of obfuscated code in total, 100 per de-obfuscated code sample). We then used the Levenshtein distance to measure the distance between two code samples; the distance is the number of deletions, insertions, or substitutions required to transform the source string into the target string. After calculating the Levenshtein distance between the original de-obfuscated code and the IOCCC obfuscated code, we calculated the Levenshtein distances between the original de-obfuscated code and each respective sample's DeepObfusCode obfuscated code and took the mean Levenshtein distance for the sample. The results are tabulated in Table 1. From the table, we can observe that in general the DeepObfusCode obfuscation has a greater degree of randomness, or dissimilarity from the original source text, compared to the benchmark IOCCC obfuscated text, with an average magnitude improvement of 1.2614 (the average of the proposed:benchmark ratio across all samples) and a standard deviation of 0.7176. Set 2 of the experiment indicates the benchmark outperforming the proposed method. Inspection of set 2 reveals a greater proportion of repetitive characters as junk code insertions, which serve to confuse de-obfuscators but also dilute the similarity (conversely inflating the Levenshtein distance), since the proportion of overlapping characters to the total number of characters is lower. While the total collection indicates a general improvement in dissimilarity, removing set 2 would indicate an average magnitude improvement of 1.5287 and a standard deviation of 0.5352. This helps justify that the proposed obfuscation method performs well in the aspect of stealth.
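The distance computation can be reproduced with a standard dynamic-programming implementation; mean_obfuscation_distance mirrors the 100-iteration averaging described above (function names are ours).

```python
def levenshtein(a, b):
    """Edit distance: minimum deletions, insertions, and substitutions
    required to transform string `a` into string `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def mean_obfuscation_distance(original, obfuscate, runs=100):
    """Average Levenshtein distance between the de-obfuscated source and
    `runs` independently generated ciphertexts, as in the experiment."""
    return sum(levenshtein(original, obfuscate(original))
               for _ in range(runs)) / runs

print(levenshtein("kitten", "sitting"))  # 3
```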
The primary focus of this section is to measure how ciphertext generation time (time to encrypt) and key generation time (time to decrypt) vary with the length of the source code (plaintext length). The experimental design starts with the random generation of strings varying from length 1 to 4000 (to vary the plaintext length). Then we set loggers to record information on the six main variables we would like to test: (i) length of the string input, (ii) randomness metric (Levenshtein distance), (iii) execution time for encryption, (iv) execution time for decryption, (v) character variation, (vi) average character length of the ciphertext. As execution time depends on the device running the simulation, for reference, the simulation was run in a Python Jupyter notebook on Windows 10 with an Nvidia GeForce 1070 GTX graphics card (running at a 10% GPU usage level). The plot of ciphertext generation time against the length of the source code shows no distinct pattern, while the plot of key generation time against the length of the source code shows a linear pattern. This suggests that ciphertext generation is not length-dependent and can be executed without significant incremental cost. Since the length of the source code affects the training time required for the same number of iterations (greater length increases training time per epoch), longer source code requires more time to generate a key, so this obfuscation method may be more suitable for smaller code bases or systems with sufficient computing resources. We can also infer that key generation takes linear time, i.e. its time complexity is O(n). Beyond execution cost, this experiment yielded additional information about the properties of the obfuscation model. Plotting ciphertext character variation and ciphertext length against plaintext length reveals: (i) the character variation is widely distributed regardless of the length of the plaintext input, which further supports the notion of randomness in ciphertext generation, as the ciphertext is based purely on the randomness in the model weight generation; (ii) the ciphertext length is kept low (72 characters on average) regardless of the plaintext length, which reduces obfuscated code storage requirements. The fact that the cipher generation algorithm produces short random ciphertexts further implies that it would be difficult for malicious actors to reverse-engineer the ciphertext by setting random weights or training a model without a known output text. For the model used in the experiment, the model file consists of 3 layers of arrays: the main layer contains 8 arrays, each with its own array length, of which each sub-array (except the third and sixth) contains an array of 1,024 values. One would have to randomly generate 32-bit floats for 975,872 values, at least to an approximate range of the actual values, before being able to generate readable de-obfuscated code, and even then one would need to generate values accurate to the 8th decimal place to obtain de-obfuscated text without any lapse in meaning. The correlation matrix in Figure 7 summarizes the relationships between the properties tested in the experiment. This paper presents a novel obfuscation methodology using RNN encoder-decoder models to generate ciphertext from source code, generating and utilizing the model weights as keys. Figure 7: Correlation matrix of properties. Compared to existing obfuscation methods, it is at least on par in terms of stealth and is expected to outperform for larger code bases in terms of obscurity and readability; and though key generation may take a significant amount of time for larger code bases or require more computational resources, it would be less time-intensive than manually obfuscating the source code. This would be a good use case for services that have confidential source code in plaintext but would prefer ciphertext while retaining the ability to execute it.
Obfuscate code using seq2seq networks, and execute using the obfuscated code and key pair
1,181
scitldr
We propose a method for joint image and per-pixel annotation synthesis with GANs. We demonstrate that a GAN has a good high-level representation of the target data that can be easily projected to semantic segmentation masks. This method can be used to create a training dataset for training a separate semantic segmentation network. Our experiments show that such a segmentation network successfully generalizes to real data. Additionally, the method outperforms supervised training when the number of training samples is small, and works on a variety of different scenes and classes. The source code of the proposed method will be publicly available.
GAN-based method for joint image and per-pixel annotation synthesis
1,182
scitldr
Vision-Language Navigation (VLN) is the task in which an agent is commanded to navigate photo-realistic unknown environments following natural language instructions. Previous research on VLN has primarily been conducted on the Room-to-Room (R2R) dataset, with only English instructions. The ultimate goal of VLN, however, is to serve people speaking arbitrary languages. Towards multilingual VLN with numerous languages, we collect a cross-lingual R2R dataset, which extends the original benchmark with corresponding Chinese instructions. But it is time-consuming and expensive to collect large-scale human instructions for every existing language. Based on the newly introduced dataset, we propose a general cross-lingual VLN framework to enable instruction-following navigation for different languages. We first explore the possibility of building a cross-lingual agent when no training data for the target language is available. The cross-lingual agent is equipped with a meta-learner to aggregate cross-lingual representations and a visually grounded cross-lingual alignment module to align the textual representations of different languages. Under the zero-shot learning scenario, our model shows competitive results even compared to a model trained with all target language instructions. In addition, we introduce an adversarial domain adaptation loss to improve the transferring ability of our model when given a certain amount of target language data. Our methods and dataset demonstrate the potential of building a cross-lingual agent to serve speakers of different languages. Recently, the Vision-Language Navigation (VLN) task, which requires the agent to follow natural language instructions and navigate in houses, has drawn significant attention. In contrast to some existing navigation tasks, where the agent has an explicit representation of the target and so knows whether the goal has been reached, an agent in the VLN task can only infer the target from natural language instructions. Therefore, in addition to the usual visual challenges in navigation tasks, language understanding and cross-modal alignment are essential to completing the VLN task. However, existing benchmarks for the VLN task are all monolingual, in that they contain only English instructions. Specifically, the navigation agents are trained and tested with only an English corpus and are thus unable to serve non-English speakers. To fill this gap, one can collect the corresponding instructions in the language in which the agent is expected to operate. But this is neither scalable nor practical, as there are thousands of languages on this planet and collecting large-scale data for each language would be very expensive and time-consuming. Therefore, in this paper, we study the task of cross-lingual VLN to endow an agent with the ability to understand multiple languages. First, can we learn a model that has been trained on existing English instructions but is still able to perform reasonably well on a different language (e.g. Chinese)? This is indeed a zero-shot learning scenario where no training data for the target language is available. An intuitive approach is to train the agent with English data and, at test time, use a machine translation system to translate the target language instructions to English, which are then fed into the agent for testing (see the upper part of Figure 1).
The inverse solution is also rational: we can translate all English instructions into the target language and train the agent on the translated data, so it can be directly tested with target language instructions (see the lower part of Figure 1). The former agent is tested on translated instructions, while the latter is trained on translated instructions. Both solutions suffer from translation errors and deviation from the corresponding human-annotated instructions. But meanwhile, the former is trained on human-annotated English instructions (which we view as "golden" data) and the latter is tested on "golden" target language instructions. Motivated by this fact, we design a cross-lingual VLN framework that learns to benefit from both solutions. As shown in Figure 1, we combine these two principles and introduce a meta-learner, which learns to produce beliefs for a human-annotated instruction and its translation pair and dynamically fuses the cross-lingual representations for better navigation. In this case, however, training and inference are mismatched: during training, the agent takes source human language and target machine translation (MT) data as input, while during inference it needs to navigate with target human instructions and source MT data. To better align the source and target languages, we propose a visually grounded cross-lingual alignment module that aligns the paired instructions via the same visual features, because they describe the same demonstration path. The cross-lingual alignment loss can also implicitly alleviate translation errors by aligning the human language and its MT pair in the latent visual space. After obtaining an efficient zero-shot agent, we investigate whether, given a certain amount of data for the target language, we can learn a better adaptation model to improve source-to-target knowledge transfer. The meta-learner and the visually grounded cross-lingual alignment module provide a foundation for the circumstances in which the agent has access to the source language and (partial) target language instructions for training. To further leverage the fact that the agent has access to target language training data, we introduce an adversarial domain adaptation loss to alleviate the domain shift between human-annotated and MT data, thus enhancing the model's transferring ability. To validate our methods, we collect a cross-lingual VLN dataset (XL-R2R) by extending the English instructions in the R2R dataset with complementary Chinese instructions. Overall, our contributions are four-fold: (1) we collect the first cross-lingual VLN dataset to facilitate navigation models towards accomplishing instructions in various languages such as English and Chinese, and conduct analysis of the English and Chinese corpora; (2) we introduce the task of cross-lingual vision-language navigation and propose a principled meta-learning method that dynamically utilizes the augmented MT data for zero-shot cross-lingual VLN; (3) we propose a visually grounded cross-lingual alignment module for better cross-lingual knowledge transfer; (4) we investigate how to transfer knowledge between human-annotated and MT data and introduce an adversarial domain adaptation loss to improve navigation performance given a certain amount of human-annotated target language data.
The cross-lingual vision-language navigation task is defined as follows: we consider an embodied agent that learns to follow natural language instructions and navigate from a starting pose to a goal location in photo-realistic 3D indoor environments. Formally, given an environment E, an initial pose p_1 = (v_1, φ_1, θ_1) (spatial position, heading, and elevation angles) and natural language instructions x_{1:N}, the agent takes a sequence of actions a_{1:T} to finally reach the goal G. Thus the VLN dataset D is defined as a set of |D| tuples {(E, p_1, x_{1:N}, G)}. Note that we omit the example subscript here for simplicity. At each time step t, the agent at pose p_t receives a new observation I_t = E(p_t), which is a raw RGB image pictured by the mounted camera. It then takes an action a_t leading to a new pose p_{t+1} = (v_{t+1}, φ_{t+1}, θ_{t+1}). Taking actions sequentially, the agent stops when a stop action is taken. A cross-lingual VLN agent learns to understand multiple languages and navigate to the goal. Without loss of generality, we consider a bilingual situation coined as cross-lingual VLN. For this specific task, we built the XL-VLN dataset D, which extends the VLN dataset and includes a bilingual version of the instructions. Specifically, D comprises a source-language part and a target-language part, where S and T indicate the source and target language domains respectively. The source language domain S contains instructions in the source language covering the full VLN dataset D (including training and testing splits), while the target language domain T consists of a fully annotated testing set and a training set in the target language that covers a varying percentage of the trajectories of the training set in D (which may range from 0% to 100%). The agent is allowed to leverage both source and target language training sets and is expected to perform navigation given an instruction from either the source or target language testing sets. In this study, we first focus on a more challenging setting where no human-annotated target language data are available for training (0%), i.e., with no training data for the target language and access only to the source language training set, the agent is required to follow a target language instruction x^T_{1:N} to navigate in houses. We then investigate the agent's transferring ability by gradually increasing the percentage of human-annotated target language instructions used for training (0%, 10%, ..., 100%). We build a Cross-Lingual Room-to-Room (XL-R2R) dataset, the first cross-lingual dataset for the vision-language navigation task. The XL-R2R dataset includes 4,675/340/783 trajectories for the train/validation-seen/validation-unseen sets, preserving the same split as the R2R dataset. The official testing set of R2R is unavailable because the testing trajectories are held out for challenge use. Each trajectory is described with 3 English and 3 Chinese instructions independently annotated by different workers. Data Collection. We keep the English instructions of the R2R dataset and collect Chinese instructions via a public Chinese crowdsourcing platform. The Chinese instructions are annotated by native speakers through an interactive 3D WebGL environment, following the R2R dataset guidance. More details can be found in the Appendix. Data Analysis. The XL-R2R dataset includes 5,798 trajectories in total and 17,394 instructions across both languages.
The bilingual instructions are compared from four perspectives for a broad understanding: vocabulary, instruction length, number of sub-instructions per instruction, and part-of-speech tags. Removing words with a frequency of less than 5, we obtain an English vocabulary of 1,583 words and a Chinese one of 1,134 words. The Chinese instructions are relatively shorter than the English ones and less likely to be long sentences (Figure 2a). The instructions usually consist of several sub-instructions separated by punctuation tokens, and the number of sub-instructions per instruction is distributed similarly across languages (Figure 2b). Figures 2c and 2d show that nouns and verbs, which often refer to landmarks and actions respectively, are more frequent in the Chinese data (32.9% and 29.0%) than in the English data (24.3% and 13.7%). We present a general cross-lingual VLN framework in Figure 3. It is based on an encoder-decoder architecture and composed of three novel modules: a cross-lingual meta-learner (Meta), a visually grounded cross-lingual alignment module (txt2img), and an adversarial domain adaptation module. Particularly, as shown in Figure 3, both English and Chinese instructions are encoded by a shared encoder. The shared decoder then takes the encoded contextual embeddings of both languages as input. In addition, the txt2img module is introduced to align h^{en}_t and h^{zh}_t in the visual space, with the visual feature I_t as an anchor point, improving cross-lingual knowledge transfer via a visually grounded cross-lingual alignment loss. The adversarial domain adaptation module is particularly designed for the transfer setting, where human-annotated target language instructions are also provided. We employ a domain discriminator that tries to distinguish human-annotated instructions from machine translation (MT) instructions, and a gradient reversal layer to reverse the gradients back-propagated to the encoder, so that the encoder is in effect trying to generate indistinguishable representations for both human-annotated and MT data and to align the distributions of the two domains. We employ the same sequence-to-sequence architecture for both languages. Receiving a natural language instruction x^L_{1:N}, L ∈ {S, T}, the agent encodes it with an embedding matrix followed by an LSTM encoder to obtain contextual word representations c^L_{1:N}. The decoder LSTM takes the concatenation of the current image feature I_t and the previous action embedding a_{t−1} as input, and updates the hidden state s_{t−1} to s_t, aware of the historical trajectory: s_t = LSTM([I_t; a_{t−1}], s_{t−1}). An attention mechanism is used to compute a weighted context representation, grounded on the instruction c^L_{1:N}. To bridge the gap between source and target languages, we leverage a machine translation (MT) system to translate the source language in the training data into the target language. During testing, the MT system translates the target language instruction into the source language. The MT data serves as augmented data for zero-shot or low-resource settings, as well as associating the two human languages in general. We take two instructions (the human language instruction and its MT pair) as input for both training and testing. We observed that, even when one instruction is a direct translation of the other, feeding the paired instructions into the same encoder and decoder often generates different predictions during execution.
At each time step, the agent observes the local visual environment with two instructions at hand, which may lead to different next positions. It remains a challenging question which language representation the agent should trust more. Therefore, we propose a cross-lingual meta-learner that helps the agent make this judgment. At each time step, the cross-lingual meta-learner decides which language representation to place more faith in, i.e., "learning to trust". The meta-learner is a softmax layer which takes the concatenation of the two hidden states h^S_t and h^T_t as input and produces a probability α_t representing the belief in the source language representation. The final hidden vector used for predicting actions is defined as a mixture of the representations in the two languages: h_t = α_t h^S_t + (1 − α_t) h^T_t. Finally, the predicted action distribution for the next time step is computed from h_t. To better ground and align the two languages to the images they describe, we map h^S_t and h^T_t into the latent space of image representations such that their similarity is maximized. In other words, we use the image space as an anchor point to align cross-lingual representations. Let I_t be the latent representation of the local visual environment on the target trajectory at time step t (e.g. the final layer of a ResNet); the alignment loss is formulated as the L2 distance between the projected text representations ψ(W_S h^S_t), ψ(W_T h^T_t) and the projected image feature ψ(W_I I_t), where ψ denotes a non-linear activation such as ReLU or tanh, and W_I, W_S and W_T are the projection matrices. The L2 distance measures the similarity between contextual word and image features in the same vector space. The intuition behind such an aligning mechanism is that, since the human instruction and the MT instruction both describe the same trajectory, their representations should be close to the visual environment in some way (a projected latent space in our case). This module ensures consistency among the cross-lingual representations and the visual inputs. During training, we have human-annotated data for the source language and machine-translated data for the target language, and the opposite situation at testing. To bridge the gap between training and inference, we leverage an adversarial domain adaptation loss to make the context representations indistinguishable across domains in the transfer setting, where a certain amount of human-annotated instructions for the target language is available. A sentence vector is computed as the mean of the context vectors c_{1:N}, then forwarded to a domain discriminator through the gradient reversal layer. With the gradient reversal layer, the gradients minimizing the domain classification error are passed back with opposed sign to the language encoder, which adversarially encourages the encoder to be domain-agnostic. We then minimize the domain classification loss between y_H, the domain label of an instruction indicating whether it comes from human annotation or machine translation, and ŷ_H, the approximation produced by the domain discriminator.
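A minimal PyTorch sketch of the two trainable pieces described above, the "learning to trust" fusion and the gradient reversal layer, is given below; module and variable names are ours, and domain_head is assumed to be a linear layer producing one logit.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated (scaled)
    gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class CrossLingualMeta(nn.Module):
    """'Learning to trust': a softmax layer over the concatenated hidden
    states yields the belief alpha_t in the source representation; the
    fused state is the alpha-weighted mixture of the two languages."""
    def __init__(self, hidden):
        super().__init__()
        self.meta = nn.Linear(2 * hidden, 2)

    def forward(self, h_src, h_tgt):
        alpha = self.meta(torch.cat([h_src, h_tgt], dim=-1)).softmax(-1)
        alpha_src = alpha[..., :1]            # belief in the source language
        return alpha_src * h_src + (1 - alpha_src) * h_tgt

def domain_adaptation_loss(c_mean, domain_head, y_human, lambd):
    """Adversarial loss on the mean context vector; the reversed gradient
    pushes the encoder toward domain-agnostic representations."""
    logits = domain_head(GradReverse.apply(c_mean, lambd))
    return nn.functional.binary_cross_entropy_with_logits(
        logits.squeeze(-1), y_human.float())
```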
Among the reported metrics, CLS measures the fidelity to the described path, unlike previous metrics, which are mostly based on goal completion. Table 1: Performance comparison in the zero-shot setting; columns report PL, NE ↓, SR ↑, SPL ↑ and CLS ↑ on validation seen and validation unseen. meta+txt2img equips the meta-learner with the txt2img module (cf. Figure 1). The first three models are all for zero-shot learning; the last one, train w/ AN, trains the agent with 100% Chinese human-annotated data. All models are tested with Chinese human instructions. We first report results for the zero-shot setting, to show the effectiveness of our two components, the meta-learner and txt2img. We compare with two models: a seq2seq model trained with human-annotated Chinese instructions (collected in the XL-R2R dataset), and one trained with MT Chinese instructions translated from English. Results are shown in Table 1. First, there is a clear gap between training with human-annotated and MT data, indicating the insufficiency of using only an MT system for zero-shot learning. Second, our meta-learner can successfully aggregate the information of the annotated data and MT data, which enables efficient zero-shot learning. Third, the txt2img module further improves vision-language alignment, which especially helps the agent generalize better on the unseen data. Besides, even though the agent does not have access to the target annotation data, it achieves competitive results compared to training with 100% annotated data. To investigate the potential of transferring knowledge from English to Chinese, we draw learning curves by utilizing varying percentages of Chinese annotations for training (see Figure 4). The starting point is our zero-shot setting, where one has no access to human-annotated data for the target language, and the endpoint is when one has 100% of the target language training data. The figures demonstrate that the proposed adversarial domain adaptation module provides consistent improvement over other methods, for both seen and unseen environments. The approach works for both low-resource and high-resource settings and is capable of transferring knowledge steadily as the size of the target data grows. Besides, our transfer method trained with 40% of the Chinese human data achieves performance similar to training with 100% of the Chinese human data. This demonstrates the potential of building a functioning cross-lingual VLN agent by collecting a large-scale dataset for a certain language (i.e., English) and a small amount of data for other languages. One can also observe that pretraining with English data and MT Chinese data helps the model learn useful encodings that are especially valuable when only limited Chinese training data are available. To enable cross-lingual VLN, we examine four models (see Figure 5) equipped with our meta-learner and txt2img modules: (1) Base-Bi, which has two separate encoder-decoders; (2) Shared Enc, which has a shared language encoder; (3) Shared Dec, which has a shared policy decoder; (4) Shared Enc-Dec, which shares both the encoder and the decoder, with different word embeddings for the different languages. These models take English and Chinese natural language instructions as input, for both training and testing. They are also compared with Base-mono, a single encoder-decoder model trained and tested with Chinese human instructions only. Table 2: Performance comparison for cross-lingual VLN models. All models are trained and tested with English and Chinese annotation data. Results are averaged over 3 runs. Table 2 shows the results of the four architectures on the validation seen and unseen parts. First, the performance of the multilingual models is consistently improved over the monolingual model (Base-mono), indicating the potential of cross-lingual learning for improving navigation results. Second, sharing parameters can further boost navigation performance.
Finally, Shared Enc and Shared Enc-Dec produce similar results, which motivates us to use the Shared Enc-Dec design, since it yields competitive results with fewer parameters required. For a more intuitive understanding of the meta-learner, we visualize the confidences assigned to each language in Figure 6. In this case, the meta-learner places more trust in the human-annotated Chinese instruction, which is of better quality. More specifically, at time step 10, when the meta-learner has the highest faith in the Chinese instruction, we visualize the textual attention over the whole instruction at that time step. Evidently, the corresponding textual attention on the Chinese command makes more sense than that on the machine-translated English command. The agent is supposed to keep turning left and then move forward to the green plant. The attention on the Chinese instruction assigns 0.25 to "turn left" and nearly zero attention to "head towards the door", which was already completed by previous actions, while the attention on the English instruction is more uniform and less accurate. Figure 6: Case study, showing the textual attention at time step 10; we choose a successfully executed instruction from the validation set for illustration. Vision and Language Grounding. Over the past years, deep learning approaches have boosted the performance of computer vision and natural language processing tasks. A large body of benchmarks has been proposed to facilitate this research, including image and video captioning, VQA, and visual dialog. These tasks require grounding in both visual and textual modalities, but are mostly limited to a fixed visual input. We thus focus on the task of vision-language navigation (VLN), where an agent needs to actively interact with the visual environment by following language instructions. Vision-Language Navigation. Several approaches have been proposed for the VLN task on the R2R dataset. For example, one line of work presented a planned-ahead module combining model-free and model-based reinforcement learning methods; another introduced a speaker which can synthesize new instructions and implement pragmatic reasoning. Subsequent methods extend the speaker-follower model with Reinforced Cross-modal Matching, self-monitoring, back-translation, etc. Previous works mainly improve navigation performance through data augmentation or by leveraging efficient search methods. In this paper, we address the task from a cross-lingual perspective, aiming to build an agent that can execute instructions in different languages. Cross-lingual Language Understanding. Learning cross-lingual representations is a crucial step in making natural language tasks scalable to all the world's languages. Recently, cross-lingual studies on typical NLP tasks have achieved success, for example in part-of-speech tagging, sentiment classification, and named entity recognition. These studies successfully disentangle linguistic knowledge into language-common and language-specific parts and learn both kinds of knowledge with individual modules. Moreover, cross-lingual image and video captioning aim to bridge vision and language towards a deeper understanding, by learning a cross-lingual model grounded on visual inputs. Our dataset and method address cross-lingual representation learning for the vision-language navigation task. To our knowledge, we are the first to study cross-lingual learning in a dynamic visual environment, where the agent needs to interact with its surroundings and take a sequence of actions.
In this paper, we introduce a new task, cross-lingual vision-language navigation, to study cross-lingual representation learning situated in a navigation task where cross-modal interaction with the real world is involved. We collect a cross-lingual R2R dataset and conduct pilot studies towards solving this challenging but practical task. The proposed cross-lingual VLN framework proves effective in cross-lingual knowledge transfer. There are still many promising future directions for this task and dataset, e.g. incorporating recent advances in VLN to greatly improve the model capacity. It would also be valuable to extend the cross-lingual setting to support numerous different languages in addition to English and Chinese. We follow the same preprocessing procedure as previous work. A ResNet-152 pretrained on ImageNet is used to extract image features, which are 2,048-d vectors. Instructions are clipped to a maximum length of 80. Words are embedded into a 256-d vector space, and the action embedding is 32-d. The hidden size of the encoder and decoder LSTMs is 512. The dropout ratio is 0.5. The meta-learner is a single fully connected layer. The dimension of the vision-language alignment vector space is set to 1,024. Each episode consists of no more than 40 actions. The network is optimized via the ADAM optimizer with an initial learning rate of 0.001, a weight decay of 0.0005, and a batch size of 100. The learning rate of the domain adaptation loss is scheduled with an adaptation factor λ_p, where γ is set to 10 and p is the number of learning steps; we use 0.2λ_p to train the domain discriminator. We run each model for 30,000 iterations, evaluate the models every 500 iterations, and report the iteration with the highest SPL.
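The exact form of the adaptation factor is not spelled out above; assuming it follows the standard schedule from the cited domain adaptation literature (Ganin & Lempitsky), it can be sketched as:

```python
import numpy as np

def adaptation_factor(p, gamma=10.0):
    """Assumed standard DANN schedule: lambda_p = 2 / (1 + exp(-gamma * p)) - 1,
    with p the training progress (learning steps, normalized to [0, 1])
    and gamma = 10 as stated in the text."""
    return 2.0 / (1.0 + np.exp(-gamma * p)) - 1.0

# The discriminator's learning rate then follows 0.2 * lambda_p, ramping
# the adversarial signal up smoothly as training progresses.
for p in (0.0, 0.25, 0.5, 1.0):
    print(p, 0.2 * adaptation_factor(p))
```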
For evaluating our proposed approach on the unseen test set, we participated in the Vision-and-Language Navigation challenge and submitted our results to the test server. Here we treat Chinese as the source language and English as the target language. Hence, for zero-shot learning, the agent has 100% of the Chinese annotated data but no English annotated data. The agent is commanded to follow English human instructions during testing. Results are shown in Table 3. For zero-shot learning, our method (meta+txt2img) improves over the model trained with MT data only. For transfer learning, our method can efficiently transfer knowledge between Chinese and English data. These results are coherent with those reported on the validation set (see Table 1 and Figure 4). Table 3: Performance comparison on the English test set. The first two rows are for zero-shot learning; the last two rows are trained with access to 100% of the target training data (i.e. English annotated instructions). Meta-learner. To validate the effectiveness of the meta-learner, we compare it with a simple ensemble, which assigns equal confidence to the two languages at all time steps, without any learnable parameters. The results are summarized in Table 4. Our meta-learner has higher performance on the validation unseen set, suggesting that "learning to trust" is important for cross-lingual vision-language navigation. Table 4: Ablation study for the meta-learner. Reported values are averages of 5 individual runs; ensemble assigns equal weight to each language, while meta-learner is our basic framework in Figure 1. Both are evaluated on Chinese human instructions. Adversarial Domain Adaptation Loss. To demonstrate that the domain adaptation loss indeed enhances knowledge transfer between the two languages, we compare it with our vanilla zero-shot model, a meta-learner equipped with a txt2img module. Table 5 shows that, as the size of the target training data grows, although the vanilla model can also benefit from the augmented data, its performance stops growing once the data size reaches 40% or 60%. Meanwhile, the domain adaptation loss provides a more consistent and steady improvement. At the endpoint (100%), the SPL is 22.14 vs. 21.50, which demonstrates its efficiency and the potential of transferring knowledge between different languages. We also compare the statistics of the Chinese annotated dataset with a machine-translated one. The annotated instructions are more likely to contain fewer words as well as fewer sub-instructions. Besides, nouns and verbs, which usually represent landmarks and actions in the VLN task, are more frequent in the annotated instructions than in the machine-translated ones.
We introduce a new task and dataset on cross-lingual vision-language navigation, and propose a general cross-lingual VLN framework for the task.
1,183
scitldr
Deep generative models have advanced the state of the art in semi-supervised classification; however, their capacity for deriving useful discriminative features in a completely unsupervised fashion, for classification in difficult real-world data sets where adequate manifold separation is required, has not been adequately explored. Most methods rely on defining a pipeline that first derives features via generative modeling and then applies clustering algorithms, separating the modeling and discriminative processes. We propose a deep hierarchical generative model which uses a mixture of discrete and continuous distributions to learn to effectively separate the different data manifolds, and which is trainable end-to-end. We show that by specifying the form of the discrete variable distribution we impose a specific structure on the model's latent representations. We test our model's discriminative performance on the task of CLL diagnosis against baselines from the field of computational FC, as well as from the Variational Autoencoder literature. Variational Autoencoders (VAEs) have recently shown remarkable performance in unsupervised generative modeling of high-dimensional data generated by complex distributions BID19, as well as in semi-supervised classification where only a small subset of the data set is labeled. While the interaction between the generative and the classification capabilities of semi-supervised models has recently been explored in the literature, there has been little investigation of the discriminative capabilities of a purely unsupervised framework, with most works focusing on the task of unsupervised clustering BID17 BID36 BID16. Furthermore, most of these works have been evaluated mostly on benchmark data sets, which do not capture the difficulties that are often encountered in real-world data. For instance, there has been no investigation of the performance of these methods on data sets with significant class imbalance. The question that is then posed is whether deep generative models can be used effectively as unsupervised classifiers, which can, in essence, be cast as a question of what types of features and architectural choices are required to achieve good classification performance in an unsupervised manner. To examine the aforementioned questions we propose a deep hierarchical generative model and evaluate its performance on a difficult real-world data set. In principle, we train our model in a completely unsupervised fashion; however, in our experiments we rely on labeled data to measure our model's performance using suitable metrics for the problem domain, as well as to derive a stopping criterion for training. Our model outperforms established state-of-the-art baselines used in the field of the problem domain. Our contributions are summarized in the following:
• A framework which utilizes a hierarchy of continuous representations that conclude in a discrete variable explicitly representing categories, resulting in complex, expressive, invariant and interpretable representations BID1, which are crucial for separating widely overlapping manifolds and achieving good classification in significantly imbalanced data sets.
• Controllable representation structure through specification of the form of the aforementioned discrete variable, to better suit the task at hand in a given problem scenario.
In this section we review techniques and frameworks that provide a useful backdrop for the derivation of our own model.
Variational Autoencoders (VAEs), as presented in BID19, use at most two layers of stochastic latent variables; generalizations to multiple layers of latent variables have since been introduced BID6. Generation in a deep generative model is achieved by a top-down stochastic pass through the model, defined as:

p_θ(x, z_{1:L}) = p_θ(z_L) [ ∏_{i=1}^{L−1} p_θ(z_i | z_{i+1}) ] p_θ(x | z_1),

where L is the number of stochastic hidden layers and z_l denotes the latent variables in layer l. As in a standard VAE, the dependence of each layer on the previous one is considered to be non-linear and is modeled by multi-layered perceptrons (MLPs). Similarly, inference is carried out by a bottom-up stochastic pass through the model's layers:

q_φ(z_{1:L} | x) = q_φ(z_1 | x) ∏_{i=1}^{L−1} q_φ(z_{i+1} | z_i).

Optimization of a deep generative model is akin to that of a standard VAE; namely, the reparameterization trick BID19 is applied to each layer of stochastic latent variables. Until recently, models trained with the variational objective employed mainly Gaussian latent stochastic variables, optimizing indirectly through discrete variables wherever they were used (e.g. integrating over the discrete variable). This was due to the inability to backpropagate through discrete variables because of their discontinuous nature. BID15 and BID26 independently developed a continuous relaxation of discrete random variables. The resulting distribution was presented as the Gumbel-Softmax distribution and the Concrete distribution respectively, but essentially has the same functional form. From this point on, for the sake of clarity, we adopt the latter name to refer to this distribution. A simple way to sample from a discrete distribution is to employ the Gumbel-Max trick BID11 BID28 BID25. The Gumbel distribution produces samples according to −log(−log(u)), where u ∼ Uniform(0, 1). Given parameters α_1, ..., α_k and samples g_i from the Gumbel distribution, a sample z from the Categorical distribution can be drawn according to:

z = one_hot( argmax_{i ∈ {1,...,k}} [ g_i + log α_i ] ),

where z is represented as a one-hot vector. Samples from the Concrete distribution are produced by replacing the argmax operation with the softmax function:

y_i = exp((log α_i + g_i) / τ) / Σ_{j=1}^{k} exp((log α_j + g_j) / τ), for i = 1, ..., k.

Crucially, the reparameterization trick can now be applied in a manner similar to Gaussian samples. The probability density function of the Concrete distribution is the following:

p_{α,τ}(y) = (k − 1)! τ^{k−1} ∏_{i=1}^{k} ( α_i y_i^{−τ−1} / Σ_{j=1}^{k} α_j y_j^{−τ} ).

As the temperature parameter τ approaches 0, samples from the Concrete distribution more accurately approximate one-hot samples from a Categorical distribution, and in the limit the two distributions become identical. We refer the reader to BID15 and BID26 for more details.
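The sampling procedures above can be illustrated with a short NumPy sketch (our own, with a small eps guard added for numerical stability):

```python
import numpy as np

def sample_gumbel(shape, rng, eps=1e-20):
    """Gumbel(0, 1) samples via -log(-log(u)), u ~ Uniform(0, 1)."""
    u = rng.uniform(size=shape)
    return -np.log(-np.log(u + eps) + eps)

def gumbel_max(log_alpha, rng):
    """Exact one-hot categorical sample via the Gumbel-Max trick."""
    z = np.zeros_like(log_alpha)
    z[np.argmax(log_alpha + sample_gumbel(log_alpha.shape, rng))] = 1.0
    return z

def concrete_sample(log_alpha, tau, rng):
    """Relaxed sample: softmax((log_alpha + g) / tau). As tau -> 0 this
    approaches the one-hot Gumbel-Max sample, enabling reparameterization."""
    y = (log_alpha + sample_gumbel(log_alpha.shape, rng)) / tau
    y = np.exp(y - y.max())     # numerically stable softmax
    return y / y.sum()

rng = np.random.default_rng(0)
log_alpha = np.log(np.array([0.7, 0.2, 0.1]))
print(gumbel_max(log_alpha, rng))
print(concrete_sample(log_alpha, tau=0.5, rng=rng))
```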
Particularly for CLL, the limit for an MRD-positive diagnosis is set at 1 leukemic cell per 10,000 white blood cells in blood or bone marrow BID2. The problem, from a manifold learning and generative modeling point of view, is to adequately separate the two data manifolds of healthy and leukemic cells. This problem is difficult not just because of the sheer size of the healthy cell population, which leads to significant manifold overlap, but also because there are other manifolds present in the data, e.g. representing different types of cell populations in the blood sample, with many different factors of variation, some of which are known (cell granulation, cell size, etc.) and some of which may be unknown. Most clustering algorithms traditionally used in computational FC, particularly those that "infer" the number of clusters automatically, are unable to separate the manifolds of interest given a particular problem, because they act directly on the input/data space without making any assumptions about the latent structure present in a data set. As a result, they are sensitive to noise, and ultimately one has to resort to merging clusters. Furthermore, for the clustering to be interpretable, significant amounts of hyperparameter tuning are necessary during the clustering proper, the cluster merging phase, or both (e.g. the number of nearest neighbors to be taken into account, the appropriate distance metric, linkage, etc.), resulting in an impractical, computationally expensive overall solution. An alternative to clustering algorithms would be to learn a low-dimensional feature mapping as in traditional deterministic autoencoders and then perform clustering in feature space, optimizing the two tasks either separately or jointly BID36. While this is a viable strategy, possibly alleviating some of the problems discussed above, depending on the problem domain, the deterministic mapping could potentially lack expressiveness due to the low-dimensionality requirement, as it would need to compress information to avoid the curse of dimensionality. Furthermore, these methods are sensitive to noise, and so they overfit the data, being unable to disentangle the different factors of variation BID1. A practical and principled solution in this problem domain should be able to adequately separate the multiple manifolds underlying the data and model the signal of interest, remaining invariant to other factors of variation. Furthermore, the candidate model should be able to learn feature representations that enforce a structure facilitating the above and, as a result, be able to correctly classify each cell in an unsupervised fashion. Ideally it should be trained and optimized end-to-end without resorting to clustering algorithms that are separated from the feature representation learning process. Latent variable models like VAEs and deep generative models seem like an ideal fit to the above description. They address the problem of generating new samples with the help of a recognition/inference model which learns the global structure of the training data. In theory, by employing this encoder, such a model could learn representations that would prove useful in unsupervised discriminative tasks. In practice, however, this process is a lot more complicated. First of all, recognition models with inadequate expressiveness are known to encode only local information, because they model well only the regions of the posterior distribution that lie near the training examples, being unable to generalize to the full posterior manifold BID37.
When one uses powerful and expressive generative models as in Bowman et al., Serban et al., and BID9, this is further exacerbated to the point where generation on the one hand and representation learning on the other become competing tasks, with the model preferring to encode mainly local information using the generative/decoding model p(x|z) and ignore the latent code. From a representation learning standpoint, which is what we are interested in for this particular problem, the generative model is now perceived as the regularizer, and we need to make sure that the recognition model is expressive enough to model our true posterior as closely as possible, as well as provide an interpretable way of making a diagnosis. As such, we propose a framework that addresses both these needs jointly. To address the issues discussed in Section 3.2 we introduce a deep generative model composed of L layers of continuous stochastic latent variables. Beyond alleviating the aforementioned issues, we also aim to model a set of latent factors affecting the cell measurements and, eventually, the diagnosis. We introduce a hierarchy of continuous latent variables to express these latent factors. The diagnosis is itself represented as a single (relaxed) binary variable after this cascade of layers. For the sake of brevity we will refrain from explicitly stating the continuous relaxation on this discrete variable for the remainder of this text, except when necessary. To highlight the close relationship between the discrete diagnosis and the continuous latent factors, and for practical reasons, we denote the mixture of the discrete and continuous variables by h = (y, z_1, …, z_L). The generative model assumes the following form:

p_θ(x, y, z_1, …, z_L) = p_θ(x | z_1) p(y) p_θ(z_L | y) ∏_{i=1}^{L−1} p_θ(z_i | z_{i+1}),

with

p_θ(z_i | z_{i+1}) = N(z_i | μ_θ(z_{i+1}), Σ_θ(z_{i+1})),

where

p_θ(z_L | y) = N(z_L | μ_θ(y), Σ_θ(y)).

Parameters μ_θ(·), Σ_θ(·) are non-linear functions of the next layer in the hierarchy computed by feedforward neural networks, and θ are the generative parameters. The prior on the discrete variable p(y) is set to be the discrete uniform probability. The observation model is defined as

p_θ(x | h) = p_θ(x | z_1)

and describes the observations conditioned on the latent variables. The inference model is described below:

q_φ(h | x) = q_φ(z_1 | x) q_φ(y | z_L) ∏_{i=1}^{L−1} q_φ(z_{i+1} | z_i),

(Figure 1: PCA plots of latent representations for different numbers of classes K of the discrete variable y. Increasing K imposes a different structure on the model, encouraging it to learn and separate different manifolds. E.g., setting K = 4, the model successfully separates the 4 different cell populations present in the data: lymphocytes, monocytes, granulocytes and destroyed red blood cells.)

with

q_φ(z_{i+1} | z_i) = N(z_{i+1} | μ_φ(z_i), Σ_φ(z_i)) and q_φ(y | z_L) = Concrete(y | α_φ(z_L), τ),

where, similar to the generative model, the parameters μ_φ(·), Σ_φ(·), α_φ(·) are functions that describe non-linear relationships between stochastic layers and are computed by feedforward neural networks, with φ being the variational parameters. The temperature parameter τ governs the degree of relaxation of the discrete variable and can either be constant, annealed during training, or learned. In the case of the first stochastic layer z_1, layer z_0 refers to the observed data x. The function α_φ(z_L) models the relationship between the last Gaussian layer and the discrete variable y. By conditioning on the discrete variable y, we are enforcing a tight coupling between the latent factors of variation (represented by z) and categorical features (represented by y). As a result, the activity of the discrete variable y and the continuous variables z_1, …, z_L is highly correlated.
More than simply capturing the modes of the continuous variable z_L, the discrete variable y imposes a particular structure that is propagated through all the continuous layers z_1, …, z_L of the feature hierarchy. This allows us a degree of control with respect to the structure of the latent variables, which we can exploit depending on the task at hand. Figure 1 illustrates such an example: by increasing K we facilitate multiple manifold learning. Both in the inference and the generative models, covariances assume a diagonal structure. Our model's overall architecture can be seen in FIG2. Employing a relaxed discrete variable allows us to avoid marginalization over all its possible values. At the same time, however, we are using a continuous surrogate loss in place of the original discrete one BID32 BID26:

L(θ, φ; x) = E_{q_φ(h|x)}[log p_θ(x | h)] − KL(q_φ(h | x) || p_θ(h)).

To induce more distributed and thus expressive latent codes we adopt deterministic warm-up per Sønderby et al., where we begin training our model with a deterministic autoencoder and then gradually introduce the KL term, to prevent the approximate posterior from collapsing onto the prior early in training and disconnecting latent units. Thus we introduce a λ term in the expression above, which we linearly anneal during training from 0 to 1:

L(θ, φ; x) = E_{q_φ(h|x)}[log p_θ(x | h)] − λ · KL(q_φ(h | x) || p_θ(h)).

The gradient estimates of the lower bound will be biased with respect to the original discrete loss, but unbiased and low-variance with respect to the surrogate loss BID26. To obtain low-variance updates to our parameters through stochastic gradient optimization of the bound, we use the reparameterization trick for both discrete and continuous variables concurrently, where h is expressed as a deterministic function of the form presented in BID19 and Section 2.2: each Gaussian layer is sampled as z = μ_φ + σ_φ ⊙ ε with ε ∼ N(0, I), and y is sampled via the Concrete transformation of Gumbel noise. We note that q_φ(h|x) is used to denote the Concrete density, and the KL term is computed according to eq. 20 in BID26. Finally, we also make use of batch normalization introduced by BID14, since it has been shown both to improve convergence time and to allow for the training of the upper layers in deep generative models. In our experiments we use two real-world data sets of deidentified patient data, which correspond to flow cytometric measurements of two different blood samples. We investigate our model's ability to learn useful representations via its performance as an "unsupervised classifier". Detailed explanations of the data sets can be found in Appendix A. In our experiments we compare our model's predictive capacity with that of popular baselines from the field of computational FC, as well as generative models. Because in this particular problem one population (i.e. class) vastly outnumbers the other, one cannot rely on accuracy to correctly estimate the models' performance, as a model that predicted only the over-represented class would consistently get high accuracy scores but in reality would have limited predictive power. Instead, we make use of confusion-matrix-based metrics, which are popular in medical applications. More specifically, we use the true positive rate (TPR, also called sensitivity or recall) and the true negative rate (TNR, also called specificity). Because we found it difficult to retain optimal representations throughout the training procedure, we use early stopping, where we check the model's performance on the test set at a fixed frequency.
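As a small illustration of the deterministic warm-up, here is a sketch of how the λ-weighted objective could be assembled for one minibatch; the annealing rate of 0.002 follows the appendix, while the function shape and the example numbers are our own.

```python
def annealed_loss(recon_ll, kl, step, anneal_rate=0.002):
    # Deterministic warm-up: lambda grows linearly from 0 to 1 so the KL term is
    # introduced gradually, preventing early posterior collapse onto the prior.
    lam = min(1.0, step * anneal_rate)
    elbo = recon_ll - lam * kl   # lambda-weighted lower bound
    return -elbo, lam            # minimize the negative bound

loss, lam = annealed_loss(recon_ll=-120.3, kl=15.7, step=250)  # illustrative values
print(f"lambda={lam:.2f}, loss={loss:.2f}")
```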
To derive a useful criterion we also turn to a confusion-matrix-based quantity, the Matthews correlation coefficient (MCC), introduced by BID27. As a correlation coefficient, MCC takes values in [−1, 1], with 1 suggesting that predictions and ground truth are in complete accordance, −1 that they are in complete discord, and 0 that predictions are random. MCC is regarded as a good measure for estimating classifier performance on imbalanced data sets BID3. Formally, it is defined as follows:

MCC = (TP · TN − FP · FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)).

We check MCC on the test set during training at fixed intervals and save the model parameters once it has crossed a set threshold. In general, we noticed in our experiments that a threshold of 0.5 yields good results with very small variance in performance. In our experiments we investigate our model's predictive performance, i.e. its capacity to correctly classify cell measurements. Our model's predictions are based on hard assignment of the probabilities obtained by the predictive distribution q_φ(y|z). That is, given a minibatch of observations {x^(i)}_{i=1}^{M}, probabilities of samples drawn from q_φ(y|z) are thresholded at 0.5. Probabilities that lie above this threshold are considered positive cell measurements. As baselines we choose the 3 best-performing algorithms from BID35, who compare a wide set of clustering algorithm implementations used in computational FC on two different tasks: multiple population identification and rare population detection. We are interested in the latter task, which is closer to our own. The first baseline, X-shift BID31, is based on k-Nearest Neighbors (kNN) density estimation: local density maxima become centroids and the remaining points are connected to those centroids via ascending density paths. Rclusterpp BID22, the second baseline, is an efficient implementation (with respect to memory requirements) of hierarchical clustering. The last baseline, flowMeans BID0, is based on k-means clustering but can identify concave cell populations using multiple clusters. We argue that to successfully learn and separate the two manifolds of healthy and pathological cells, more expressive and distributed representations are necessary, with explicit steps taken during training to enforce these characteristics. Additionally, the proposed model must be a mixture of continuous and discrete distributions to appropriately represent different cell attributes and the category respectively. To illustrate the merit of these points, we also include 2 methods from the VAE literature: a vanilla VAE BID19 followed by a linear support vector machine (denoted by VAE+SVM in Tables 1 and 2), and a version of our own model within the β-VAE framework BID13. The motivation for VAE+SVM is the fact that, given adequate manifold separation, a linear classifier such as a support vector machine should in principle be able to correctly classify observations. The β-VAE framework introduces a hyperparameter β to the variational objective, similar to the λ we are using for deterministic warm-up. For β > 1, the model is forced to learn more compressed and disentangled latent representations, which is what BID13 argue for. Learning disentangled representations is certainly closely related to successfully separating multiple data manifolds, and we consider a representation efficient and disentangled in this scenario if it leads to good predictive performance, implying adequate manifold separation. In short, we treat "unsupervised classification" as a proxy for manifold separation.
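A minimal sketch of the MCC-based stopping criterion described above; the helper names are ours, and only the two 0.5 thresholds come from the text.

```python
import numpy as np

def mcc(tp, tn, fp, fn):
    # Matthews correlation coefficient; returns 0 when any marginal is empty.
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0

def should_checkpoint(y_true, y_prob, threshold=0.5, mcc_threshold=0.5):
    # Hard-assign probabilities at 0.5, then compare test-set MCC to the
    # stopping threshold used for saving model parameters.
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    y_true = np.asarray(y_true)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return mcc(tp, tn, fp, fn) >= mcc_threshold

print(should_checkpoint([0, 0, 1, 1], [0.1, 0.4, 0.8, 0.9]))  # True: perfect split
```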
Finally, to illustrate the merits of generative modeling in the task of separating widely overlapping data manifolds, we also include a deterministic 2-layer MLP classifier which is trained in a supervised way, i.e. using cross entropy as a cost function. The results of our experiments are shown in Tables 1 and 2, where we denote our model by HCDVAE (Hierarchical Continuous-Discrete VAE). The baselines developed for the domain of FC are in essence clustering algorithms, and as such they suffer from issues clustering algorithms traditionally suffer from, the most important of which are sensitivity to parameter configurations and dependence on initialization schemes. Concordantly, they exhibited high variance across multiple runs, so we present the best results across 30 runs for each algorithm. Most of the baselines were able to achieve good predictive performance for the healthy cell population across most runs, which is to be expected since it is overrepresented in the data set, but average and erratic performance for pathological cells. With the exception of Rclusterpp, which achieved good overall performance on the second data set, all algorithms on both data sets exhibited a tendency either to be sensitive (high TPR), sacrificing healthy cell predictive accuracy, or to be specific (high TNR), sacrificing pathological cell predictive accuracy. The supervised MLP classifier is clearly unable to separate the two manifolds, "overfitting" the healthy cell population. This suggests that, at least for binary problems in which classes are severely imbalanced, training supervised classifiers (i.e. training on log-likelihood objectives with explicit label information) results in subpar performance compared to generative modeling. This is not surprising, as appropriate generative models incorporate more information about the data in their representations than just its labels, resulting in greater discriminative power. The performance of VAE+SVM suggests that the topmost latent layer of the vanilla VAE is not able to separate the healthy/pathological manifolds. Presumably this is the case because the approximate posterior has collapsed early onto the prior and rendered most units inactive, making the overall representation uninformative with respect to the global structure of the data. Deep VAE architectures are also known to be difficult to train, with top layers being slow to learn useful representations or even unable to learn at all. This is not the case, however, with β-VAE and HCDVAE. The predictive performance of both approaches is similar; however, opting for a more distributed representation seems to yield marginally better predictive capacity. Visually, the two approaches seem to result in similar latent representations, as can be seen in FIG4, where they collapse the majority of healthy cells into a compact cluster and separate the pathological cell manifold, remaining largely invariant to other factors present in the data. A possible explanation for the marginally better performance of HCDVAE is that its denser representation captures the activity of more explanatory factors in the data set, which β-VAE's representations miss due to excessive compression. We further hypothesize that the mixture of discrete and continuous variables, which HCDVAE and β-VAE share in our experiments, provides a powerful posterior which benefits both approaches, resulting in similar performance.
The discrete variable y at the top of the feature hierarchy, representing cell state (or alternatively category), is largely invariant to perturbations in the factors modeled by the lower latent layers. Overall, HCDVAE achieves near human-level performance on both data sets, especially in the second case. We note that the ground-truth diagnosis was crosschecked by 2 different human experts. Models that use mixtures of discrete and continuous variables and are trained with the variational objective have previously been discussed in the literature. Kingma et al. proposed a model for semi-supervised classification (M2) based on a discrete class variable y and a continuous "style" variable z. Optimization is performed using the reparameterization trick for z and marginalizing over y. This model is similar in structure to our own, and the authors also raise the point of learning better discriminative features which are more clearly separable, making classification easier (the M1 model). Another similarity is that the distribution q_φ(y|x) can be used as a classifier, performing classification through inference. A crucial difference is that we are using a relaxed discrete variable, removing the need to marginalize over its possible values. Additionally, we are enforcing dependence between the continuous and discrete variables. BID24 presented a method that also employs discrete and continuous variables, also in the context of semi-supervised learning, where the continuous variables model the formation of natural clusters in the data and the discrete variables, representing class information, refine this clustering scheme. BID17 use both discrete and continuous latent variables to construct a structured VAE that uses conjugate priors to create more flexible approximating posteriors in the context of switching linear dynamical systems. Works that attempt to alleviate the non-informativeness of the prior have also been presented in the literature. BID8 present a VAE variant for the task of unsupervised clustering, introducing a Gaussian mixture prior to enforce multi-modality on the inference distribution while employing the minimum information constraint to avoid cluster degeneracy. BID10 develop tree-based priors, which they integrate into VAEs and train jointly for learning representations in videos, with the overall model being able to learn a hierarchical representation of the data set. Another approach presented a model which combines a discrete component consisting of a bipartite Boltzmann machine with binary units and a continuous component consisting of multiple continuous layers. To perform optimization using the reparameterization trick, the binary variables are marginalized out. While the proposed "discrete VAE" learns the class of objects in images, it is significantly more complex in its architecture and still relies on marginalization of the discrete variables.

A DATA SETS

Both data sets represent measurements on blood samples taken from patients and comprise 500,000 cells. Each cell measurement is represented by a 12-dimensional vector, corresponding to 10 different markers for each cell, the time stamp of the measurement, and a label indicating healthy/pathological state. The first dimension corresponds to the time stamp, while the next 2 correspond to the forward and side scatter of the light. The remaining dimensions correspond to the protein markers used to make a diagnosis. The data sets' last column represents the label for each measurement.
The data set used in the first experiment contains 107 pathological cells, while the one used in the second experiment contains 103 pathological cells. All patient identifiers are removed and the data sets are anonymized. For our experiments we shuffle and then split the data sets into training and test sets with a 75/25 split. To speed up training, we ensure that there is at least one example of a pathological cell in every minibatch and reshuffle the training set every 20 iterations. We used early stopping to stabilize training, with the stopping criterion being a threshold score of 0.5 for MCC, as reported in the main text. For these experiments we used 3 layers of latent variables of 128, 64 and 32 units respectively. The discrete variable was chosen to be a Bernoulli variable, and p_θ(x|h) is a Gaussian. In the inference model, every Gaussian latent layer is parameterized by feedforward neural networks with 2 hidden layers of 256 units each. The non-linearity used is the rectifier function, while the mean and variance parameters are given by linear and softplus layers respectively. The mean parameter of the relaxed Bernoulli distribution is also given by a feedforward neural network with the same architecture as above. For the generative model, every Gaussian latent layer is also parameterized by feedforward neural networks with 2 hidden layers of 128 rectifier units each. The mean and variance parameters are computed as before. All networks are optimized with minibatch gradient descent using the Adam optimizer BID18 with an initial learning rate of 10^−4, which we decay exponentially with a decay step of 3500 and a base of 0.99. The minibatch size was set to 100. The relaxation parameter τ of the relaxed Bernoulli distribution was fixed at 0.66 per BID26. The λ term was linearly annealed from 0 to 1 with a step of 0.002. For the baselines, all parameter values were chosen according to BID35. The above architecture is shared between the models we denote by β-VAE and HCDVAE in Tables 1 and 2. The VAE part of the model we denote by VAE+SVM has the same Gaussian layer architecture.
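For concreteness, a sketch of the learning-rate decay and λ annealing schedules as we read them from the description above; whether the decay exponent is staircased in the original implementation is not stated, so the continuous form here is an assumption.

```python
def learning_rate(step, base_lr=1e-4, decay_base=0.99, decay_steps=3500):
    # Exponential decay: lr = base_lr * decay_base ** (step / decay_steps).
    return base_lr * decay_base ** (step / decay_steps)

def kl_lambda(step, rate=0.002):
    # Linear annealing of the lambda term from 0 to 1.
    return min(1.0, step * rate)

print(learning_rate(0), learning_rate(35000))  # 1e-4 -> ~0.9e-4 after 35k steps
print(kl_lambda(100), kl_lambda(1000))         # 0.2 -> 1.0
```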
Unsupervised classification via deep generative modeling with controllable feature learning evaluated in a difficult real-world task
1,184
scitldr
Representation learning over graph-structured data has received significant attention recently due to its ubiquitous applicability. However, most advancements have been made in static graph settings, while efforts for jointly learning dynamics of the graph and dynamics on the graph are still in an infant stage. Two fundamental questions arise in learning over dynamic graphs: (i) How to elegantly model dynamical processes over graphs? (ii) How to leverage such a model to effectively encode evolving graph information into low-dimensional representations? We present DyRep, a novel modeling framework for dynamic graphs that posits representation learning as a latent mediation process bridging two observed processes, namely dynamics of the network (realized as topological evolution) and dynamics on the network (realized as activities between nodes). Concretely, we propose a two-time scale deep temporal point process model that captures the interleaved dynamics of the observed processes. This model is further parameterized by a temporally attentive representation network that encodes temporally evolving structural information into node representations, which in turn drives the nonlinear evolution of the observed graph dynamics. Our unified framework is trained using an efficient unsupervised procedure and has the capability to generalize over unseen nodes. We demonstrate that DyRep outperforms state-of-the-art baselines for dynamic link prediction and time prediction tasks, and we present extensive qualitative insights into our framework. Representation learning over graph-structured data has emerged as a keystone machine learning task due to its ubiquitous applicability in a variety of domains such as social networks, bioinformatics, natural language processing, and relational knowledge bases. Learning node representations that effectively encode high-dimensional and non-Euclidean graph information is a challenging problem, but recent advances in deep learning have enabled important progress towards addressing it BID4 BID17 BID15 BID14, with the majority of approaches focusing on advancing the state-of-the-art in the static graph setting. However, several domains now present highly dynamic data that exhibit complex temporal properties in addition to the earlier cited challenges. For instance, social network communications, financial transaction graphs or longitudinal citation data contain fine-grained temporal information on nodes and edges that characterizes the dynamic evolution of a graph and its properties over time. These recent developments have created a conspicuous need for principled approaches to advance graph embedding techniques for dynamic graphs. We focus on two pertinent questions fundamental to representation learning over dynamic graphs: (i) What can serve as an elegant model for dynamic processes over graphs? A key modeling choice in existing representation learning techniques for dynamic graphs (BID16 BID14) assumes that graph dynamics evolve as a single time-scale process. In contrast to these approaches, we observe that most real-world graphs exhibit at least two distinct dynamic processes that evolve at different time scales: Topological Evolution, where the number of nodes and edges is expected to grow (or shrink) over time, leading to structural changes in the graph; and Node Interactions, which relate to activities between nodes that may or may not be structurally connected.
Modeling the interleaved dependencies between these non-linearly evolving dynamic processes is a crucial next step for advancing formal models of dynamic graphs.

(Figure 1 caption fragments: (b) Evolving Representations. (c) Communication Events (k = 1), where nodes interact with each other. For both these processes, t_{p,k=0} < (t_1, t_2, t_3, t_4, t_5)_{k=1} < t_{q,k=0} < (t_6, t_7)_{k=1} < t_{r,k=0}.)

(ii) How can one leverage such a model to learn dynamic node representations that are effectively able to capture evolving graph information over time? Existing techniques in this direction can be divided into two approaches. (a) Discrete-Time Approach, where the evolution of a dynamic graph is observed as a collection of static graph snapshots over time (BID16). These approaches tend to preserve (encode) very limited structural information and capture temporal information at a very coarse level, which leads to loss of information between snapshots and a lack of ability to capture fine-grained temporal dynamics. Another challenge in such approaches is the selection of an appropriate aggregation granularity, which is often misspecified. (b) Continuous-Time Approach, where evolution is modeled at a finer time granularity in order to address the above challenges. While existing approaches have been demonstrated to be very effective in specific settings, they either model simple structural and complex temporal properties in a decoupled fashion BID14 or use simple temporal models (e.g. the exponential family). But several domains exhibit highly nonlinear evolution of structural properties coupled with complex temporal dynamics, and it remains an open problem to effectively model and learn informative representations capturing the various dynamical properties of such complex systems. As noted in BID5, an important requirement for effectively learning over such dynamical systems is the ability to express the dynamical processes at different scales. We propose that any dynamic graph must be minimally expressed as a result of two fundamental processes evolving at different time scales: an Association Process (dynamics of the network), which brings change in the graph structure and leads to long-lasting information exchange between nodes; and a Communication Process (dynamics on the network), which relates to activities between (not necessarily connected) nodes and leads to temporary information flow between them BID15 BID1. We then posit our goal of learning node representations as modeling a latent mediation process that bridges the above two observed processes, such that learned representations drive the complex temporal dynamics of both processes and these processes subsequently lead to the nonlinear evolution of node representations. Further, the information propagated across the graph is governed by the temporal dynamics of the communication and association histories of nodes with their neighborhoods. For instance, in a social network, when a node's neighborhood grows, it changes that node's representation, which in turn affects her social interactions (association → embedding → communication). Similarly, when a node's interaction behavior changes, it affects the representations of her neighbors and herself, which in turn changes the structure and strength of her connections due to link addition or deletion (communication → embedding → association).
We call this phenomenon evolution through mediation and illustrate it graphically in FIG0. In this work, we propose a novel representation learning framework for dynamic graphs, DyRep, to model the interleaved evolution of two observed processes through the latent mediation process expressed above and effectively learn richer node representations over time. Our framework ingests dynamic graph information in the form of association and communication events over time and updates the node representations as they appear in these events. We build a two-time scale deep temporal point process approach to capture the continuous-time fine-grained temporal dynamics of the two observed processes. We further parameterize the conditional intensity function of the temporal point process with a deep inductive representation network that learns functions to compute node representations. Finally, we couple the structural and temporal components of our framework by designing a novel Temporal Attention Mechanism, which induces temporal attentiveness over neighborhood nodes using the learned intensity function. This allows us to capture the highly interleaved and nonlinear dynamics governing node representations over time. We design an efficient unsupervised training procedure for end-to-end training of our framework. We demonstrate consistent and significant improvement over state-of-the-art representative baselines on two real-world dynamic graphs for the tasks of dynamic link prediction and time prediction. We further present an extensive qualitative analysis through embedding visualization and ablation studies to discern the effectiveness of our framework. Representation learning approaches for static graphs either perform node embedding BID4 BID17 BID15 BID14 or sub-graph embedding, which can also utilize convolutional neural networks BID3. Among them, GraphSage is an inductive method for learning functions to compute node representations that can be generalized to unseen nodes. Most of these approaches only work with static graphs or can model evolving graphs without temporal information. Dynamic network embedding is pursued through various techniques such as matrix factorization, structural properties, CNN-based approaches, deep recurrent models BID14, and random walks. There exists a rich body of literature on temporal modeling of dynamic networks that focuses on link prediction tasks, but their goal is orthogonal to our work, as they build task-specific methods and do not focus on representation learning. Authors in BID14 proposed models for learning dynamic embeddings, but none of them consider time at a finer level, and they do not capture both topological evolution and interactions simultaneously. In parallel, research on deep point process models includes parametric approaches to learning intensity functions using recurrent neural networks BID9 and GAN-based approaches to learning intensity functions. More detailed related works are provided in Appendix F. Stochastic point processes BID7 are random processes whose realizations comprise discrete events in time, t_1, t_2, …. A temporal point process is one such stochastic process that can be equivalently represented as a counting process, N(t), which contains the number of events up to time t. The common way to characterize temporal point processes is via the conditional intensity function λ(t), a stochastic model of the rate at which events happen given the previous events.
Formally, λ(t)dt is the conditional probability of observing an event in the tiny window [t, t + dt) given the history, i.e. λ(t)dt = P{event in [t, t + dt) | T(t)}, where T(t) = {t_k | t_k < t} is the history until t. Similarly, for t > t_n and given history T = {t_1, …, t_n}, we characterize the conditional probability that no event happens during [t_n, t) as S(t|T) = exp(−∫_{t_n}^{t} λ(τ) dτ). We represent each observed event as a tuple (u, v, t, k), where u and v are the two nodes involved in an event, t represents the time of the event, and k ∈ {0, 1}; we use k = 0 to signify events from the topological evolution process (association) and k = 1 to signify events from the node interaction process (communication). Persistent edges in the graph appear only through topological events, while interaction events do not contribute to them. Hence, k represents an abstraction of the scale (evolution rate) associated with the processes that generate topological (dynamics of the network) and interaction (dynamics on the network) events respectively. We then represent the complete set of P observed events ordered by time in the window [0, T] as O = {(u, v, t, k)_p}_{p=1}^{P}. Here, t_p ∈ R_+, 0 ≤ t_p ≤ T. Appendix B discusses a marked point process view of such an event set. Node Representation. Let z^v ∈ R^d represent the d-dimensional representation of node v. As the representations evolve over time, we qualify them as functions of time: z^v(t) is the representation of node v updated after an event involving v at time t. We use z^v(t̄) for the most recently updated embedding of node v just before t. Dynamic Graph Setting. Let G_{t_0} = (V_{t_0}, E_{t_0}) be the initial snapshot of a graph at time t_0. Please note that G_{t_0} may be empty or it may contain an initial structure (association edges), but it will not have any communication history. Our framework observes the evolution of the graph as a stream of events O, and hence any new node will always be observed as a part of such an event. This induces a natural ordering over nodes as available from the data. As our method is inductive, we never learn node-specific representations and rather learn functions to compute node representations. In this work, we only support growth of the network, i.e. we only model the addition of nodes and structural edges, and we leave deletion as future work. Further, for a general description of the model, we will assume that edges in the graph do not have types and nodes do not have attributes, but we discuss the details of how to use our model to accommodate these features in Appendix B. The key idea of DyRep is to build a unified architecture that can ingest evolving information over graphs and effectively model the evolution through mediation phenomenon described in Section 1. To achieve this, we design a two-time scale temporal point process model of the observed processes and parameterize it with an inductive representation network, which subsequently models the latent mediation process of learning node representations. The rationale behind our framework is that the observed set of events are realizations of the nonlinear dynamic processes governing the changes in the topological structure of the graph and the interactions between the nodes in the graph. Now, when an event is observed between two nodes, information flows from the neighborhood of one node to the other and affects the representations of the nodes accordingly. While a communication event (interaction) only propagates local information across two nodes, an association event changes the topology and thereby has a more global effect.
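Before moving to the model, a small numerical illustration of the survival probability S(t|T) just defined; the trapezoidal integration and the constant-rate example are our own choices.

```python
import numpy as np

def survival(intensity, t_n, t, n_grid=1000):
    # S(t | T) = exp(-integral_{t_n}^{t} lambda(tau) dtau), via a trapezoidal rule.
    taus = np.linspace(t_n, t, n_grid)
    vals = np.array([intensity(tau) for tau in taus])
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(taus))
    return np.exp(-integral)

# Constant-rate (Poisson) sanity check: S(t) = exp(-rate * (t - t_n)).
print(survival(lambda tau: 0.5, t_n=0.0, t=2.0))  # ~ exp(-1.0) ~ 0.368
```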
The goal is to learn node representations that encode information evolving due to such local and global effects and further drive the dynamics of the observed events.

3.1 MODELING TWO-TIME SCALE OBSERVED GRAPH DYNAMICS

The observations over a dynamic graph contain temporal point patterns of two interleaved complex processes, in the form of communication and association events respectively. At any time t, the occurrence of an event from either of these processes is dependent on the most recent state of the graph, i.e., two nodes will participate in an event based on their most current representations. Given an observed event p = (u, v, t, k), we define a continuous-time deep model of the temporal point process using the conditional intensity function λ_k^{u,v}(t), which models the occurrence of event p between nodes u and v at time t:

λ_k^{u,v}(t) = f_k( g_k(t̄) ),

where t̄ signifies the timepoint just before the current event. The inner function g_k(t̄) computes the compatibility of the most recently updated representations of the two nodes, z^u(t̄) and z^v(t̄), as follows:

g_k(t̄) = ω_k^T · [ z^u(t̄) ; z^v(t̄) ].

Here [;] signifies concatenation and ω_k ∈ R^{2d} serves as the model parameter that learns time-scale-specific compatibility. g_k(t̄) is a function of node representations learned through a representation network described in Section 3.2. This network parameterizes the intensity function of the point process model, which serves as a unifying factor. Note that the dynamics are not two simple point processes dependent on each other; rather, they are related through the mediation process and in the embedding space. Further, a well-curated attention mechanism is employed to learn how the past drives the future. The choice of the outer function f_k needs to account for two critical criteria: 1) the intensity needs to be positive; 2) as mentioned before, the dynamics corresponding to communication and association processes evolve at different time scales. To account for this, we use a modified version of the softplus function parameterized by a dynamics parameter ψ_k to capture this time-scale dependence:

f_k(x) = ψ_k log(1 + exp(x / ψ_k)),

where x = g_k(t̄) in our case and ψ_k (> 0) is a scalar time-scale parameter learned as part of training. ψ_k corresponds to the rate of events arising from the corresponding process. In 1D event sequences, this formulation corresponds to the nonlinear transfer function used in prior work. We build a deep recurrent architecture that parameterizes the intensity function above and learns functions to compute node representations. Specifically, after an event has occurred, the representations of both participating nodes need to be updated to capture the effect of the observed event, based on the principles of:

Self-Propagation. Self-propagation can be considered a minimal component of the dynamics governing an individual node's evolution. A node evolves in the embedded space with respect to its previous position (e.g. set of features) and not in a random fashion.

Exogenous Drive. Some exogenous force may smoothly update the node's current features during the time interval (e.g. between two global events involving that node).

Localized Embedding Propagation. Two nodes involved in an event form a temporary (communication) or a permanent (association) pathway for information to propagate from the neighborhood of one node to the other node.
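Before turning to the representation network, here is a numpy sketch evaluating the intensity defined above for a single node pair; the dimensions and random parameter values are illustrative, and in the real model ω_k and ψ_k are learned.

```python
import numpy as np

def intensity(z_u, z_v, omega_k, psi_k):
    # g = omega_k^T [z_u ; z_v]: compatibility of the two most recent embeddings.
    g = omega_k @ np.concatenate([z_u, z_v])
    # f_k(g) = psi_k * log(1 + exp(g / psi_k)): a softplus rescaled by the
    # time-scale parameter psi_k; logaddexp keeps it numerically stable.
    return psi_k * np.logaddexp(0.0, g / psi_k)

rng = np.random.default_rng(0)
d = 4
z_u, z_v = rng.normal(size=d), rng.normal(size=d)
lam_comm = intensity(z_u, z_v, rng.normal(size=2 * d), psi_k=1.0)   # k = 1 (communication)
lam_assoc = intensity(z_u, z_v, rng.normal(size=2 * d), psi_k=0.1)  # k = 0 (association, slower scale)
print(lam_comm, lam_assoc)
```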
This corresponds to the influence of the nodes at second-order proximity passing through the other node participating in the event (see Appendix A for a pictorial depiction). To realize the above processes in our setting, we first describe an example setup: consider nodes u and v participating in any type of event at time t. Let N_u and N_v denote the neighborhoods of nodes u and v respectively. We discuss two key points here: 1) node u serves as a bridge passing information from N_u to node v, and hence v receives the information in an aggregated form through u; 2) while each neighbor of u passes its information to v, the information that node u relays is governed by an aggregate function parameterized by u's communication and association history with its neighbors. With this setup, for any event at time t, we update the embeddings of both nodes involved in the event using a recurrent architecture. Specifically, for the p-th event of node v, we evolve z^v as:

z^v(t_p) = σ( W^{struct} h^u_{struct}(t̄_p) + W^{rec} z^v(t̄^v_p) + W^t (t_p − t̄^v_p) ),

where h^u_{struct} ∈ R^d is the output representation vector obtained from the aggregator function on node u's neighborhood and z^v(t̄^v_p) is the recurrent state obtained from the previous representation of node v. t_p is the time point of the current event, t̄_p signifies the timepoint just before the current event, and t̄^v_p represents the time point of the previous event involving node v. The initial representation z^v(0) of a node v may be initialized either using input node features from the data set or a random vector, as per the setting. The above equation (Eq. 4) is a neural-network-based functional form parameterized by W^{struct}, W^{rec} ∈ R^{d×d} and W^t ∈ R^d, which govern the aggregate effect of the three inputs (graph structure, previous embedding and exogenous drive) respectively. This formulation is inductive (supports unseen nodes) and flexible (supports node and edge types), as discussed in Appendix B. The Localized Embedding Propagation principle above captures rich structural properties based on neighborhood structure, which is key to any representation learning task over graphs. However, for a given node, not all of its neighbors are uniformly important, and hence it becomes extremely important to capture information from each neighbor in some weighted fashion. Recently proposed attention mechanisms have shown great success in dealing with variable-sized inputs, focusing on the most relevant parts of the input to make decisions. However, existing approaches consider attention a static quantity. In dynamic graphs, the changing neighborhood structure and interaction activities between nodes evolve the importance of each neighbor to a node over time, thereby making attention itself a temporally evolving quantity. Further, this quantity is dependent on the temporal history of association and communication of neighboring nodes through evolving representations. To this end, we propose a novel Temporal Point Process based Attention Mechanism that uses temporal information to compute the attention coefficient for a structural edge between nodes. These coefficients are then used to compute the aggregate quantity (h_struct) required for embedding propagation.
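Before formalizing the attention, here is a minimal numpy sketch grounding the embedding update of Eq. 4; the sigmoid matches the σ in the equation, while the random parameter values are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_embedding(h_struct, z_prev, dt, W_struct, W_rec, w_t):
    # z^v(t_p) = sigma(W_struct h_struct + W_rec z_prev + w_t * dt):
    # localized embedding propagation + self-propagation + exogenous (time) drive.
    return sigmoid(W_struct @ h_struct + W_rec @ z_prev + w_t * dt)

rng = np.random.default_rng(1)
d = 4
z_new = update_embedding(
    h_struct=rng.normal(size=d),    # aggregated message from u's neighborhood
    z_prev=rng.normal(size=d),      # z^v at node v's previous event
    dt=0.7,                         # t_p minus node v's previous event time
    W_struct=rng.normal(size=(d, d)),
    W_rec=rng.normal(size=(d, d)),
    w_t=rng.normal(size=d),
)
print(z_new)
```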
Let A(t) ∈ R^{n×n} be the adjacency matrix of graph G_t at time t, and let S(t) ∈ R^{n×n} be a stochastic matrix capturing the strength between pairs of vertices at time t. One can consider S a selection matrix that induces a natural selection process for a node: it will tend to communicate more with other nodes that it wants to associate with or has recently associated with, and it will attend less to non-interesting nodes. We start with the following implication required for the construction of h^u_{struct}: for any two nodes u and v at time t, S_{uv}(t) ∈ (0, 1] if A_{uv}(t) = 1, and S_{uv}(t) = 0 if A_{uv}(t) = 0. Denote N_u(t) = {i : A_{iu}(t) = 1} as the 1-hop neighborhood of node u at time t. To formally capture the difference in the influence of different neighbors, we propose a novel conditional-intensity-based attention layer that uses the matrix S to induce a shared attention mechanism for computing attention coefficients over the neighborhood. Specifically, we perform localized attention for a given node u and compute the coefficients pertaining to the 1-hop neighbors i of node u as:

q_{ui}(t̄) = exp(S_{ui}(t̄)) / Σ_{i′ ∈ N_u(t̄)} exp(S_{ui′}(t̄)),

where q_{ui} signifies the attention weight for neighbor i at time t and hence is a temporally evolving quantity. These attention coefficients are then used to compute the aggregate information h^u_{struct}(t̄) for node u by employing an attended aggregation mechanism across neighbors as follows:

h^u_{struct}(t̄) = max( { σ( q_{ui}(t̄) · (W^h z^i(t̄) + b^h) ) : i ∈ N_u(t̄) } ),

where W^h ∈ R^{d×d} and b^h ∈ R^d are parameters governing the information propagated by each neighbor of u, and z^i(t̄) ∈ R^d is the most recent embedding of node i. The use of the max operator is inspired by learning on general point sets. By applying the max-pooling operator element-wise, the model effectively captures different aspects of the neighborhood. We found max to work slightly better than mean, as it preserves the temporal aspect of the neighborhood, which would be smoothed away if mean were used instead.

Connection to Neural Attention over Graphs. Our proposed temporal attention layer shares the motivation of the recently proposed Graph Attention Networks (GAT) (Veličković et al., 2018) and Gated Attention Networks (GaAN) in the spirit of applying non-uniform attention over the neighborhood. Both GAT and GaAN have demonstrated significant success in the static graph setting. GAT advances GraphSage by employing multi-head non-uniform attention over the neighborhood, and GaAN advances GAT by applying different weights to different heads in the multi-head attention formulation. The key innovation in our model is the parameterization of the attention mechanism by a point process based temporal quantity S that evolves and drives the impact each neighbor has on a given node. Further, unlike static methods, we use these attention coefficients as input to the aggregator function for computing the temporal-structural effect of the neighborhood. Finally, static methods use multi-head attention to stabilize learning by capturing multiple representation spaces, but this is an inherent property of our layer, as representations and event intensities update over time and hence new events help capture multiple representation spaces.
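A sketch of the attended aggregation just described; the per-neighbor transform W^h z^i + b^h follows our reading of the equation above, and the empty-neighborhood guard is our addition.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def h_struct(u, z, S, A, W_h, b_h):
    # Softmax over the S-entries of u's 1-hop neighbors gives attention weights
    # q_ui; an element-wise max over attended neighbor messages yields h_struct^u.
    nbrs = np.where(A[u] == 1)[0]
    if nbrs.size == 0:
        return np.zeros(W_h.shape[0])
    s = S[u, nbrs]
    q = np.exp(s - s.max())
    q /= q.sum()
    msgs = [sigmoid(q_i * (W_h @ z[i] + b_h)) for q_i, i in zip(q, nbrs)]
    return np.max(msgs, axis=0)

rng = np.random.default_rng(2)
n, d = 5, 4
A = (rng.uniform(size=(n, n)) > 0.5).astype(int)
np.fill_diagonal(A, 0)
S = A * rng.uniform(size=(n, n))
print(h_struct(0, rng.normal(size=(n, d)), S, A, rng.normal(size=(d, d)), rng.normal(size=d)))
```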
Construction and Update of S. We construct a single stochastic matrix S (used to parameterize the attention above) to capture complex temporal information. At the initial timepoint t = t_0, we construct S(t_0) directly from A(t_0). Specifically, for a given node v, we initialize the elements of the corresponding row vector S_v(t_0) as:

S_{vu}(t_0) = 1/|N_v(t_0)| if A_{vu}(t_0) = 1, and 0 otherwise.

After observing an event o = (u, v, t, k) at time t > t_0, we make updates to A and S according to k. Specifically, A only gets updated for association events (k = 0, a change in structure). Note that S is the parameter of a structural temporal attention, which means temporal attention is only applied over the structural neighborhood of a node. Hence, the values of S are only updated/active in two scenarios: (a) the current event is an interaction between nodes which already have a structural edge (A_{uv}(t) = 1 and k = 1), and (b) the current event is an association event (k = 0). Given the neighborhood of node u, b represents the base attention for each edge, which is a uniform attention based on neighborhood size. Whenever an event involving u occurs, this attention changes in the following ways. For case (a), the attention values of the corresponding S entries are updated using the intensity of the event. For case (b), we repeat the same as (a) but also adjust the attention (by b′ − b, with b′ and b being the new and old base attention respectively) for the edges to other neighbors, as the neighborhood size grows in this case. From a mathematical viewpoint, this update resembles a standard temporal point process formulation, where the term coming from b serves as the base attention while λ can be viewed as an endogenous, intensity-based attention. Algorithm 1 outlines the complete update scenarios:

Algorithm 1 Update Algorithm for S and A
Input: event o = (u, v, t, k); most recently updated A(t̄) and S(t̄). Output: A(t) and S(t).
for each node j ∈ {u, v} do
  if k = 1 and A_{uv}(t̄) = 1 then (communication between structurally connected nodes)
    y ← S_j(t̄); y_i ← b + λ, where i is the other node involved in the event (λ computed in Eq. 2)
  else if k = 0 then (association event)
    A_{uv}(t) ← 1; A_{vu}(t) ← 1; b′ ← 1/|N_j(t)|; x ← b′ − b
    y ← S_j(t̄); y_i ← b′ + λ, where i is the other node involved in the event (λ computed in Eq. 2)
    y_w ← y_w − x for all w ≠ i with y_w ≠ 0
  end if
  Normalize y and set S_j(t) ← y
end for
return S(t), A(t)

In the directed graph case, updates to A will not be symmetric, which will subsequently affect the neighborhood structure and attention flow of a node. Appendix A provides a pictorial depiction of the complete DyRep framework discussed in this section. We provide an extensive ablation study in Appendix C that can help discern the contribution of all the above components in achieving our goal. The complete parameter space for the current model is

Ω = { W^{struct}, W^{rec}, W^t, W^h, b^h, {ω_k}_{k∈{0,1}}, {ψ_k}_{k∈{0,1}} }.

For a set O of P observed events, we learn these parameters by minimizing the negative log likelihood:

L = −Σ_{p=1}^{P} log( λ_p(t_p) ) + ∫_0^T Λ(τ) dτ,

where λ_p(t_p) represents the intensity of event p at time t_p and Λ(τ) = Σ_{u=1}^{n} Σ_{v=1}^{n} Σ_{k∈{0,1}} λ^{u,v}_k(τ) represents the total intensity over events that do not happen (the survival term). While it is intractable (it would require O(n²k) time) and unnecessary to compute the integral in the log-likelihood equation for all possible non-events in a stochastic setting, we can locally optimize L using mini-batch stochastic gradient descent, where we estimate the integral using a novel sampling technique. Algorithm 2 in Appendix H adopts a simple variant of the Monte Carlo trick to compute the survival term of the log-likelihood equation. Specifically, in each mini-batch, we sample non-events instead of considering all pairs of non-events (which can number in the millions). Let m be the mini-batch size and N be the number of samples. The complexity of Algorithm 2 is then O(2mkN) per batch, where the factor of 2 accounts for the update happening for two nodes per event; this demonstrates linear scalability in the number of events, which is desired to tackle web-scale dynamic networks. The overall training procedure is adopted from BID14, where Backpropagation Through Time (BPTT) training is conducted over a global sequence, thereby maintaining the dependencies between events across sequences while avoiding gradient-related issues. Implementation details are left to Appendix G.
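As a rough sketch of the sampled objective (the exact pairing and normalization of Algorithm 2 live in the appendix and are not shown here, so this is our approximation): the event term uses exact intensities, while the survival integral is approximated by intensities of sampled non-event pairs.

```python
import numpy as np

def minibatch_nll(events, intensity_fn, nodes, n_samples=5, seed=0):
    # -sum_p log lambda_p(t_p) + a sampled estimate of the survival integral:
    # for each event endpoint, draw a few non-partner nodes and average their
    # intensities as a Monte Carlo stand-in for the full double sum over pairs.
    rng = np.random.default_rng(seed)
    loss = 0.0
    for (u, v, t, k) in events:
        loss -= np.log(intensity_fn(u, v, t, k))
        for w in (u, v):
            candidates = [n for n in nodes if n not in (u, v)]
            sampled = rng.choice(candidates, size=n_samples, replace=False)
            loss += sum(intensity_fn(w, o, t, k) for o in sampled) / n_samples
    return loss

# Toy usage with a constant intensity (illustrative only).
events = [(0, 1, 0.5, 1), (2, 3, 0.9, 0)]
print(minibatch_nll(events, lambda u, v, t, k: 0.3, nodes=range(10)))
```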
(Dataset statistics fragment: #Communications: 604,649; clustering coefficient: 0.087.) These datasets cover a range of configurations, as the Social dataset is a small network with a high clustering coefficient and over 2M events. In contrast, the Github dataset forms a large network with a low clustering coefficient and sparse events, thus allowing us to test the robustness of our model. Further, the Github dataset contains several unseen nodes which were never encountered during training. We study the effectiveness of DyRep by evaluating our model on the dynamic link prediction and event time prediction tasks.

Dynamic Link Prediction. When any two nodes in a graph have an increased rate of interaction events, they are more likely to get involved in further interactions, and eventually these interactions may lead to the formation of a structural link between them. Similarly, formation of a structural link may increase the likelihood of interactions between the newly connected nodes. To understand how well our model captures these phenomena, we ask questions like: which is the most likely node u that would undergo an event with a given node v governed by dynamics k at time t? The conditional density of such an event at time t can be computed as

f^{u,v}_k(t) = λ^{u,v}_k(t) · exp( −∫_{t̄}^{t} λ^{u,v}_k(s) ds ),

where t̄ is the time of the most recent event on either dimension u or v. We use this conditional density to find the most likely node. For a given test record (u, v, t, k), we replace v with the other entities in the graph and compute the density as above. We then rank all the entities in descending order of density and report the rank of the ground-truth entity. Please note that the latest embeddings of the nodes are updated even during test time, while the parameters of the model remain fixed. Hence, when ranking the entities, we remove any entities that create a pair already seen in the test set. We report the Mean Average Rank (MAR) and HITS@10 metrics for dynamic link prediction.

Event Time Prediction. This is a relatively novel application where the aim is to compute the next time point at which a particular type of event (structural or interaction) may occur. Given a pair of nodes (u, v) and event type k at time t, we use the above formulation to compute the conditional density at time t. The next time point t̂ for the event can then be computed as

t̂ = ∫_t^∞ τ f^{u,v}_k(τ) dτ,

where the integral does not have an analytic form, and hence we estimate it using the Monte Carlo trick. For a given test record (u, v, t, k), we compute the next time this communication event may occur and report the Mean Absolute Error (MAE) against the ground truth. GAT is designed for supervised learning; in Appendix A (ablation studies), we report results for one version where we only update attention based on association events, which is a temporal analogue of GAT. For event time prediction, we compare our model against (i) Know-Evolve, which has the ability to predict time in multi-relational dynamic graphs, and (ii) a Multi-dimensional Hawkes Process (MHP) model BID8, where all events in the graph are considered dyadic.

5.4 EVALUATION SCHEME

We divide our test sets into n (= 6) slots based on time and report the performance for each time slot, thus providing a comprehensive temporal evaluation of the different methods. This method of reporting is expected to provide fine-grained insights into how various methods perform over time as they move farther from the learned training history.
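Returning to the time prediction formula above, here is a numpy sketch that estimates t̂ by numerical integration on a truncated grid (the paper uses a Monte Carlo estimate; the grid-based quadrature and the horizon are our simplifications).

```python
import numpy as np

def expected_next_time(intensity, t0, horizon=50.0, n_grid=5000):
    # t_hat = integral_{t0}^{inf} tau * f(tau) dtau, with
    # f(tau) = lambda(tau) * exp(-integral_{t0}^{tau} lambda(s) ds).
    taus = np.linspace(t0, t0 + horizon, n_grid)
    lams = np.array([intensity(tau) for tau in taus])
    dts = np.diff(taus)
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (lams[1:] + lams[:-1]) * dts)])
    f = lams * np.exp(-cum)
    return np.sum(0.5 * (taus[1:] * f[1:] + taus[:-1] * f[:-1]) * dts)

# Constant-rate sanity check: expected next event time is t0 + 1/rate.
print(expected_next_time(lambda tau: 0.5, t0=0.0))  # ~ 2.0
```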
For dynamic baselines that do not explicitly model time (DynGem, DynTrd, GraphSage) and static baselines (Node2Vec), we adopt a sliding-window training approach with a warm-start method: we learn on the initial train set and test on the first slot; we then add the data from the first slot to the train set, remove an equal amount of data from the start of the train set, and retrain the model using the embeddings from the previous training run.

Communication Event Prediction Performance. We first consider the task of predicting communication events between nodes which may or may not have a permanent edge (association) between them. FIG2 (a-b) shows the corresponding results. Social Evolution: our method significantly and consistently outperforms all the baselines on both metrics. While the performance of our method drops a little over time, this is expected due to the temporal recency effect on node evolution. Know-Evolve can capture event dynamics well and shows a consistently better rank than the others, but its performance deteriorates significantly on the HITS@10 metric over time. We conjecture that features learned through edge-level modeling limit the predictive capacity of the method over time. The inability of DynGem (snapshot-based dynamic), DynTrd, and GraphSage (inductive) to significantly outperform Node2vec (a transductive static baseline) demonstrates that discrete-time, snapshot-based models fail to capture the fine-grained dynamics of communication events. Github dataset: we demonstrate performance comparable to both Know-Evolve and GraphSage on the rank metric. We would like to note that the overall performance of all methods on the rank metric is low. As we reported earlier, the Github dataset is very sparse with a very low clustering coefficient, which makes it a challenging dataset to learn from. It is expected that for a large number of nodes with no communication history most of the methods will show comparable performance, but our method outperforms all others when some history is available. This is demonstrated by our significantly better performance on the HITS@10 metric, where we are able to make highly accurate predictions for nodes for which we learn better history. This can also be attributed to our model's ability to capture the effect of evolving topology, which Know-Evolve misses. Finally, we do not see a significant decrease in the performance of any method over time in this case, which can again be attributed to the roughly uniform distribution of nodes with no communication history across time slots.

Association Event Prediction Performance. Association events are not available for all time slots, so FIG2 (c-d) reports the aggregate numbers for this task. For both datasets, our model significantly outperforms the baselines on this task. Specifically, our model's strong performance on the HITS@10 metric across both datasets demonstrates its robustness in learning accurately from various properties of the data. On the Social Evolution dataset, the number of association events is very small (only 485), and hence our strong performance shows that the model is able to capture the influence of communication events on associations. On the Github dataset, the network grows through new nodes, and our model's strong performance across both metrics demonstrates its inductive ability to generalize over new nodes across time. An interesting observation was the poor performance of DynTrd, which seems due to its objective of completing triangles: the Github dataset is very sparse and has very few possibilities for triadic closure.

Time Prediction Performance.
Communication Event Prediction Performance. We first consider the task of predicting communication events between nodes which may or may not have a permanent edge (association) between them. FIG2 (a-b) shows the corresponding results. Social Evolution. Our method significantly and consistently outperforms all the baselines on both metrics. While the performance of our method drops a little over time, this is expected due to the temporal recency effect on node evolution. Know-Evolve captures event dynamics well and shows consistently better rank than the others, but its performance on the HITS@10 metric deteriorates significantly over time. We conjecture that features learned through edge-level modeling limit the predictive capacity of the method over time. The inability of DynGem (snapshot-based dynamic), DynTrd, and GraphSage (inductive) to significantly outperform Node2vec (a transductive static baseline) demonstrates that discrete-time snapshot-based models fail to capture the fine-grained dynamics of communication events. Github dataset. We demonstrate comparable performance with both Know-Evolve and GraphSage on the rank metric. We note that the overall performance of all methods on the rank metric is low: as reported earlier, the Github dataset is very sparse with a very low clustering coefficient, which makes it a challenging dataset to learn from. It is expected that for the large number of nodes with no communication history most methods show comparable performance, but our method outperforms all others when some history is available. This is demonstrated by our significantly better performance on the HITS@10 metric, where we make highly accurate predictions for nodes for which we learn a better history. This can also be attributed to our model's ability to capture the effect of evolving topology, which is missed by Know-Evolve. Finally, we do not see a significant decrease in the performance of any method over time in this case, which can again be attributed to the roughly uniform distribution of nodes with no communication history across time slots.

Association Event Prediction Performance. Association events are not available for all time slots, so FIG2 (c-d) reports the aggregate numbers for this task. For both datasets, our model significantly outperforms the baselines. Specifically, our model's strong performance on the HITS@10 metric across both datasets demonstrates its robustness in learning accurately from varied properties of the data. On the Social Evolution dataset, the number of association events is very small (only 485), and hence our strong performance shows that the model is able to capture the influence of communication events on associations. On the Github dataset, the network grows through new nodes, and our model's strong performance across both metrics demonstrates its inductive ability to generalize to new nodes over time. An interesting observation was the poor performance of DynTrd, which seems to be due to its objective of completing triangles: the Github dataset is very sparse and has very few possibilities for triadic closure.

Time Prediction Performance. Figure 3 demonstrates consistently better performance than the state-of-the-art baselines for event time prediction on both datasets. While Know-Evolve models both processes as two different relations between entities, it does not explicitly capture the variance in the time scales of the two processes. Further, Know-Evolve does not consider the influence of the neighborhood, which may lead to capturing weaker temporal-structural dynamics across the graph. MHP uses a specific parametric intensity function, which fails to account for intricate dependencies across the graph.

Qualitative Performance. We conducted a series of qualitative analyses to understand the discriminative power of the evolving embeddings learned by DyRep. We compare our embeddings against GraphSage embeddings, as it is a state-of-the-art embedding method that is also inductive. Figure 4 (a-b) shows the t-SNE embeddings learned by DyRep (left) and GraphSage (right), respectively. The visualization demonstrates that DyRep embeddings have more discriminative power, as they effectively capture distinctive and evolving structural features over time, in line with the empirical evidence. Figure 4 (c-d) shows the use case of two associated nodes (19 and 26) that have a persistent edge but little communication, for the above two methods. DyRep keeps the embeddings nearby although not in the same cluster (cos. dist. = 0.649), which demonstrates its ability to learn the dynamics of association with little communication between two nodes. For GraphSage, the embeddings are on opposite ends of the cluster (cos. dist. = 1.964). We provide more extensive analysis in Appendix D.

We introduced a novel modeling framework for dynamic graphs that effectively and efficiently learns node representations by posing representation learning as a latent mediation process bridging the dynamic processes of topological evolution and node interactions. We proposed a deep temporal point process model, parameterized by a temporally attentive representation network, that models these complex and nonlinearly evolving dynamic processes and learns to encode structural-temporal information over the graph into low-dimensional representations. Our superior evaluation performance demonstrates the effectiveness of our approach compared to state-of-the-art methods. We present this work as the first generic and unified representation learning framework that adopts a novel modeling paradigm for dynamic graphs, supports a wide range of dynamic graph characteristics, and can potentially have many exciting adaptations. As part of our framework, we also propose a novel temporal point process based attention mechanism that can attend over the neighborhood based on the history of communication and association events in the graph. Currently, DyRep does not support network shrinkage for the following reasons: (i) it is difficult to procure data with fine-grained deletion timestamps, and (ii) the temporal point process model requires more sophistication to support deletion. For example, one can augment the model with a survival process formulation to account for the absence of a node/edge at a future time. Another interesting future direction could be to support encoding higher-order dynamic structures.

Figure caption: the update for each node involved in the event contains h_struct. For node u, the update comes from h_struct^v (green flow), and for node v, the update comes from h_struct^u (red flow). Note that all embeddings are dynamically evolving, hence the information flow after every event is different and evolves in a complex fashion.
With this mechanism, information is passed from the neighbors of node u to node v, and from the neighbors of node v to node u. (i) Interaction events create a temporary pathway: such events can occur between nodes which are not connected; in that case, this flow occurs only once and does not make u and v neighbors of each other (e.g., meeting at a conference). (ii) Topological events create a permanent pathway: in this case, u and v become neighbors of each other and hence contribute to each other's structural properties going forward (e.g., becoming academic friends). The difference in the number of blue arrows on each side signifies the different importance of each node to node u and node v, respectively.

Overall Embedding Update Process. As a starting point, the neighborhood only includes nodes connected by a structural edge. On observing an event, we update the embeddings of the two nodes involved in the event using Eq. 4. For a node u, the first term of Eq. 4 (Localized Embedding Propagation) requires h_struct, the information passed from the neighborhood N(v) of node v to node u via node v (one can visualize v as the message passer from its neighborhood to u). This information is used to update the embedding of node u. However, we posit that node v does not relay an equal amount of information from each of its neighbors to node u. Rather, node v receives the information to be relayed based on its communication and association history with its neighbors (which relates to the importance of each neighbor). This requires computing attention coefficients on the structural edges between node v and its neighbors. For any edge, we want this coefficient to depend on the rate of events between the two nodes, thereby emulating the real-world phenomenon that one gains more information from the people one interacts with more. Hence, we parameterize our attention module with the temporal point process parameter S_uv. Algorithm 1 outlines the process of computing the value of this parameter, where i ∈ N(u) denotes a node in the neighborhood of node u.

Figure 6: Temporal Point Process based Self-Attention. This figure illustrates the computation of h_struct^u for node u, to be passed to node v, for the same event described before between nodes u and v at time t with any k. h_struct^u is computed by aggregating information from the neighbors of u. Nodes that are closely connected or interact frequently tend to attend more to each other, compared to nodes that are not connected, or nodes that rarely interact even in the presence of a connection. Further, every node has a specific attention span for every other node, and therefore attention itself is a temporally evolving quantity. DyRep computes this temporally evolving attention based on the association and communication history between connected nodes. The attention coefficient function (the q's) is parameterized by S, which is computed using the intensity of events between connected nodes. This attention mechanism allows the importance of neighbors to a particular node (u in this case) to evolve, which aligns with real-world phenomena.
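A minimal sketch of this intensity-parameterized attention is given below. It assumes the scores in S are turned into coefficients with a softmax over the structural neighborhood; the exact functional form used in Algorithm 1 may differ.

```python
import numpy as np

def attention_coefficients(S, A, u):
    """Softmax attention of node u over its structural neighbors,
    parameterized by the point-process quantity S (a sketch of the
    mechanism described above).
    S: n x n matrix of intensity-derived scores; A: n x n adjacency."""
    neighbors = np.flatnonzero(A[u])        # structural neighborhood N(u)
    scores = np.exp(S[u, neighbors])        # more events -> larger S -> more attention
    q = scores / scores.sum()               # normalized coefficients
    return dict(zip(neighbors.tolist(), q))
```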
Connection to Marked Point Processes. From a mathematical viewpoint, for any event e at time t, any information other than the time point can be considered part of a mark space describing the events. Hence, for DyRep, given a one-dimensional timeline, one can consider O = {((u, v, k)_p, t_p)}_{p=1}^{P} as a marked process, with the triple (u, v, k) representing the mark. However, from a machine learning perspective, using a single-dimensional process with such marks does not allow one to efficiently and effectively discover or model the structure in the point process that is useful for learning the intricate dependencies between events, the participants of the events, and the dynamics governing those events. Hence, it is often important to extract the information out of the mark space and build an abstraction that helps to discover the structure in the point process and makes learning parameter-efficient. In our case, this translates to two components:

1. The nodes in the graph are considered as dimensions of the point process, making it a multi-dimensional point process where an event represents interaction/structure between the dimensions, thus allowing us to explicitly capture dependencies between nodes.

2. The topological evolution of a network happens at a much different temporal scale than activities on a fixed-topology network (e.g., the rate of making friends vs. liking a post on a social network). However, both of these processes affect each other's evolution in a complex and nonlinear fashion. Abstracting k to associate it with these different scales of evolution facilitates expressing dynamic graphs at two time scales in a principled manner. It also provides the ability to explicitly capture the influential dynamics BID5 of topological evolution on the dynamics of network activities, and vice versa, through the learned embeddings, i.e., evolution through mediation. Note that this distinction in the use of mark information is also important, as we learn representations for nodes (dimensions) but not for k. It is important to realize that k, representing two different scales of event dynamics, is not the same as an edge or interaction type. For instance, in the case of a typed persistent edge (e.g., wasbornIn, livesIn) or a typed interaction (e.g., visit, fight), one would add the type as another component of the mark space to represent an event, with k still signifying the different dynamic scales.

Figure caption: (c) u has a topological event with node 4; b changes to 0.2 from its previous value of 0.25. S_u4 and S_4u are updated based on the intensity of the event. Next, the attention for all other neighbors of both nodes (shown only for u here) is adjusted to reflect the change in neighborhood size. The matrix S is used for computing attention and hence does not get updated for interaction events between nodes which do not have an edge (e.g., if a pair without an edge has an interaction event, S_12 is not updated as the nodes are not neighbors).

Comparison to BID14. In a similar vein to the above, the point process specification of BID14 can also be considered a marked process that models typed interaction dynamics at a single time scale and does not model topological evolution. In contrast, our method explicitly models the dynamic graph process at two time scales. While both models use a point process based formulation for modeling temporal dynamics, there are several significant methodological differences between the two approaches. Deep Point Process Model: while one can augment the event specification in BID14 with additional mark information, that by itself is not adequate to achieve DyRep's modeling of dynamical processes over graphs at multiple time scales. We employ a softplus function for f_k, which contains a dynamics-specific scale parameter ψ_k, to achieve this, while BID14 uses an exponential (exp) function for f with no scale parameter. Their intensity formulation attains a Rayleigh distribution, which imposes a specific assumption about the underlying dynamics: it models fads, where the intensity of events drops rapidly after increasing. Our two-time-scale model is more general and induces modularization, where each of the two components allows complex, nonlinear, and dependent dynamics towards a non-zero steady-state intensity.
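The contrast between the two intensity parameterizations can be made concrete. Below is a minimal sketch of a scaled-softplus intensity with a per-dynamics scale parameter; the variable names and example values are illustrative.

```python
import numpy as np

def intensity(g_k, psi_k):
    """Scaled-softplus conditional intensity in the spirit described above:
    lambda_k = psi_k * log(1 + exp(g_k / psi_k)),
    where g_k is the compatibility score of the node pair and psi_k is the
    dynamics-specific scale parameter (one per time scale k). This stays
    positive and tends towards a non-zero steady state, unlike an exp(g)
    formulation, which imposes a rapid rise-and-drop (fad-like) pattern."""
    return psi_k * np.log1p(np.exp(g_k / psi_k))

# Two time scales: slow association (small psi) vs fast communication (large psi).
print(intensity(0.3, psi_k=0.1), intensity(0.3, psi_k=5.0))
```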
Graph Structure: as argued in prior work, the key idea behind representation learning over graphs is to capture both the global position and the local neighborhood structure of a node in its representation. Hence, significant research effort has been invested in devising methods to incorporate graph structure into the computation of node representations. Aligned with these efforts, DyRep proposes a novel and sophisticated Localized Embedding Propagation principle that dynamically incorporates graph structure from both the local neighborhood and faraway nodes (as interactions are allowed between nodes that do not have an edge). Contrary to that, BID14 uses only single-edge-level information, specific to the relational setting, in its representations. Deep Temporal Point Process Based Self-Attention: for learning over graphs, attention has been shown to be extremely valuable, as the importance of nodes differs significantly relative to each other. The state-of-the-art approaches have focused solely on static graphs, with Graph Attention Networks (Veličković et al., 2018) being the most recent one. Our attention mechanism for dynamic graphs presents a significant and principled advancement over the existing state-of-the-art graph-based neural self-attention techniques, which only support static graphs. As Know-Evolve does not incorporate graph structure, it does not use any kind of attention mechanism.

Support for Node Attributes and Edge Types. Node types and attributes are supported in our work. In Eq. 4, z_v(t_p^v) induces recurrence on node v's embedding, but when node v is observed for the first time, z_v(t_p^v) = x_v, where x_v is randomly initialized or contains the raw node features available in the data (which also include the type). One can also add an extra term to Eq. 4 to support high-dimensional node attributes. Further, we also support different types of edges. If either a structural edge or an interaction has a type associated with it, our model can trivially support it in Eq. 3 and in the first term of Eq. 4, h_struct. Currently, for computing h_struct, the formulation aggregates over nodes; however, this aggregation can be augmented with edge type information, as is conventionally done in many representation learning frameworks. Further, for a more direct effect, Eq. 3 can include the edge type as a third feature vector in the concatenation for computing g_k.

Support for new nodes. As mentioned in Section 2.3 of the main paper, the data contains a set of dyadic events ordered in time. Hence, each event involves two nodes u and v, and a new node will always appear as part of such an event. As mentioned above, the initial embedding of any new node u is given by z_u(t_p^u), which can be randomly initialized or set using the raw feature vector x_u of the node. This allows the computation of the intensity function for the event involving the new node in Eq. 1. Due to the inductive ability of our framework, we can then compute the embedding of the new node using Eq. 4. Two cases are possible: either one of the two nodes is new, or both nodes are new.
The mechanism for these two cases works as follows:

- Only one new node in the observed event: to compute the embedding of the new node, h_struct is computed using the neighborhood of the existing (other) node, z_u(t_0) is the feature vector of the node (or random), and the drift is 0. To compute the new embedding of the existing node, h_struct is the feature vector of the new node, self-propagation uses the most recent embedding of the node, and the drift is based on the previous time point.

- Both nodes in the observed event are new: h_struct is the feature vector of the other node, z_u(t_0) is the feature vector of the node (or random), and the drift is 0.

Finally, Algorithm 1 does not need to handle new nodes any differently. As described in the paper, both A and S are indexed by time and hence the matrices get updated after every event. The starting dimensions of the two matrices can be specified in two ways: (i) construct both matrices with dimension equal to the total possible number of nodes in the dataset and set the rows belonging to unseen nodes to 0; or (ii) expand the dimensions of the matrices as new nodes are seen. While we implement the first option, (ii) would be required in a real-world streaming scenario.

The DyRep framework unifies several components that contribute to its effectiveness in learning rich node representations over complex and nonlinear processes in dynamic graphs. In this section, we provide insights into each component and how it is indispensable to the learning mechanism by performing an ablation study on various design choices of our model. Specifically, DyRep can be divided into three main parts: the multi-time-scale point process model, the representation update formulation, and the conditional intensity based attention mechanism. We focus on the design choices available in each component and evaluate them on the large Github dataset. DyRep in FIG8 denotes the full model.

Multiple Time-Scale Processes. For this component, we perform two major tests:

• DyRep-Comm. In this variant, we make Eq. 1 time-scale independent (i.e., remove k) and train only on communication events, but we evaluate on both communication and association events. Note that this is possible because our framework can compute representations for unseen nodes; during training, the representation parameters are learned based on communication events only. We observe that, compared to the full model, the performance degrades for both types of events, but the decline is more prominent for association events than for communication events.

• DyRep-Assoc. In this variant, similarly to the above, we make Eq. 1 time-scale independent and train only on association events, but we evaluate on both communication and association events. Again, the performance degrades for both types of events compared to the full model, but the decline is more prominent for communication events than for association events.

The above two experiments show that considering events at a single time scale, without distinguishing between the two processes, hurts performance. Performance suffers more when communication events are not considered, which may be due to the greater availability of communication events owing to their rapid frequency. We also performed a small test training on all events but using a single scale parameter (ψ). The performance for both dynamics degrades, which demonstrates the effectiveness of ψ_k.

Representation Update Formulation. For this component, we focus on Eq. 4 and switch off its components to observe their effects.
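For reference, here is a minimal sketch of the three-term update in the spirit of Eq. 4; the weight shapes and the choice of nonlinearity are assumptions, not the paper's exact parameterization.

```python
import numpy as np

sigma = np.tanh  # placeholder nonlinearity

def update_embedding(z_prev, h_struct, dt, W_struct, W_rec, W_t):
    """Sketch of a three-term node-embedding update:
      - localized embedding propagation: W_struct @ h_struct
      - self-propagation:                W_rec @ z_prev
      - exogenous drive (drift):         W_t * dt, with dt the elapsed time
    The ablations DyRep-No-Struct / DyRep-No-SP below correspond to
    zeroing the first / second term, respectively."""
    return sigma(W_struct @ h_struct + W_rec @ z_prev + W_t * dt)
```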
• DyRep-No-SP. In this variant, we switch off the self-propagation component. We observe that the overall performance is not hurt significantly by removing self-propagation. In general, this term provides a very weak feature and mainly captures the recurrent evolution of a node's own latent features, independent of others. The deviation increases for association events, which may be because there are a few nodes that have links but highly varying communication frequency, so most of their features are either self-propagated or completely associated with others.

• DyRep-No-Struct. In this variant, we remove the structural part of the model and, as one would expect, the performance drops drastically in both scenarios. This provides evidence for the necessity of building sophisticated structural encoders for dynamic graphs.

Intensity Attention Mechanism. For this component, we focus on Section 3.2, which builds the novel intensity based attention mechanism. Specifically, we carry out the following tests:

• DyRep-No-Att. Here we completely remove the attention from the structural component, and we see a significant drop in performance.

• DyRep-S-Comm. In this variant, we focus on Algorithm 1 and update the S matrix only for communication events, but not for association events. This leads to slightly worse performance, which shows how the S matrix helps mediate the two processes: not considering association events leads to a loss of information.

• DyRep-S-Assoc. In this variant, we focus on Algorithm 1 and update the S matrix only for association events, but not for communication events. This leads to a significant drop in performance, again validating the need for using both processes; its pronounced effect also suggests that communication events (dynamics on the network) play the dominant role here.

We assess the quality of the learned embeddings and the ability of the model to capture both temporal and structural information. Let t_0 be the time point when training ended. Let t_1 be the time point when the first test slot ends.

Effect of Association and Communication on Embeddings. We conducted this experiment on the Social Evolution dataset. We consider three use cases to demonstrate how the interactions and associations between nodes changed their representations, and we visualize them to observe the effect.

• Nodes that did not have an association before testing but got linked during the first test slot. Nodes 46 and 76 got associated in the test period between test points 0 and 1. This reduced the cosine distance in both models, but DyRep shows a more prominent effect of this association, as should be the case. DyRep reduces the cosine distance from 1.231 to 0.005. Also, the DyRep embeddings for these two points initially belong to different clusters but later converge to the same cluster. In GraphSage, the cosine distance reduces from 1.011 to 0.199 and the embeddings remain in their original clusters. FIG10 shows the visualization of the embeddings at the two time points for both methods. This demonstrates that our embeddings can capture association events effectively.

• Nodes that did not have an association but had many communication events. Nodes 27 and 70 are such a use case. The DyRep embeddings place the nodes within the top-5 nearest neighbors of each other, in the same cluster, with a cosine distance of 0.005, which aligns with the fact that nodes with a large number of events tend to develop similar features over time.
• Temporal evolution of DyRep embeddings. In FIG0 we visualize the embedding positions of the nodes (tracked in red) as they evolve through time, forming and breaking away from clusters.

Static Embedding Approaches. Representation learning approaches for static graphs can be broadly classified into two categories. Node embedding approaches aim to encode structural information pertaining to a node to produce its low-dimensional representation (BID4; BID17; BID15; BID14). As they learn each individual node's representation, they are inherently transductive. Recently, GraphSage was proposed as an inductive method for learning functions to compute node representations that can generalize to unseen nodes. Sub-graph embedding techniques learn to encode higher-order graph structures into low-dimensional vector representations. Further, various approaches use graph convolutional neural networks (BID3; BID8).

Dynamic Embedding Approaches. One line of work extends skip-gram based network embedding to the dynamic setting, where the graph is observed as discrete-time snapshots and the goal is to learn embeddings that preserve the optimality of the skip-gram objective. NetWalk is a discrete-time dynamic embedding approach, specifically designed for anomaly detection, which uses clique-based embedding techniques to learn vertex representations. Recently, BID14 proposed Know-Evolve, a deep recurrent architecture to model multi-relational timestamped edges that addresses the communication process. Unlike our approach, Know-Evolve models all edges at a single timescale, works in a setting restricted to relational graphs, and uses only edge-level structural information with no attention mechanism. DANE proposes a network embedding method in a dynamic environment, but its dynamics consist of changes in node attributes over time, so it can be considered orthogonal to our approach. Another work proposes a dynamic network formation model that learns node representations by employing a Hawkes process to model the temporal evolution of each node's neighborhood; that work only considers association events. A further approach proposes a continuous-time embedding framework that employs a temporal version of traditional random walks to capture temporally evolving neighborhood information in a simple manner.

Other models for dynamic networks. There exists a rich body of literature on temporal modeling of dynamic networks that focuses on link prediction tasks, but its goal is orthogonal to ours, as these works build task-specific methods and do not focus on representation learning. Further, several approaches in the graph mining and temporal relational learning communities (BID10) consider dynamic networks but are orthogonal to our current work. Research on learning dynamic embeddings has also progressed in the linguistics community, where the aim is to learn temporally evolving word embeddings (BID2). Some other approaches propose models for learning dynamic embeddings in graph data, but none of them consider time at a finer level, and they do not capture both topological evolution and interactions. One work proposes subgraph pattern neural networks, focusing on the evolution of subgraphs instead of single nodes and links; it builds a novel neural network architecture for supervised learning where the hidden layers represent the subgraph patterns observed in the data and the output layer performs prediction. Another work induces a dynamic graph from videos based on the visual correlation of object proposals spanning the video.
They further propose an LSTM based architecture to capture temporal dependencies over this induced graph and perform object detection. Yet another work proposes a dynamic probabilistic model for the bipartite case of user-item recommendation, where the goal is to learn the evolution of user and item latent features in the context of Poisson factorization, thus treating the evolution processes of users' and items' latent features as independent of each other.

Deep Temporal Point Process Models. Recently, BID9 showed that fixed parametric forms of point processes lead to model misspecification issues, ultimately affecting performance on real-world datasets. BID9 therefore propose a data-driven alternative that instead learns the conditional intensity function from observed events, thereby increasing its flexibility. Following that work, there has been increased interest in learning the conditional intensity function using deep learning, as well as in intensity-free approaches using GANs for learning with deep generative temporal point process models.

G IMPLEMENTATION DETAILS

G.1 ADDITIONAL DATASET DETAILS

We performed a hyperparameter search for best performance for our method and all the baselines, and used the following hyperparameters to obtain the reported results:

- For the Social dataset: Num nodes = 100, Num Dynamics = 2, bptt (sequence length) = 200, embed_size = 32, hidden_unit_size = 32, nsamples (for survival) = 5, gradient_clip = 100, and no dropout.

- For the Github dataset: Num nodes = 12328, Num Dynamics = 2, bptt (sequence length) = 300, embed_size = 256, hidden_unit_size = 256, nsamples (for survival) = 5, gradient_clip = 100.

For the baselines, we used the implementations provided by their authors, and we report the range of configurations used for the baselines here: max_iter = {1000, 5000, 10000}, bptt = {100, 200, 300}, lr = {0.0005, 0.005, 0.5, 0.1, 1}, embed_E = {32, 64, 128, 256}, embed_R = {32, 64, 128, 256}, hidden = {32, 64, 128, 256}, warm = 0, t_scale = 0.0001, w_scale = 0.1, num_epochs = {10, 50, 100, 500, 1000}. As mentioned in the experiment section, we always train baselines with warm-start in a sliding-window training fashion. The code provided by the authors is implemented in C++. GraphSage: the code was implemented in Tensorflow by the authors; we use only the unsupervised training module to generate embeddings. Node2Vec: we use the original Python code with a few changes to the hyperparameters. We fix q in
Models Representation Learning over dynamic graphs as latent hidden process bridging two observed processes of Topological Evolution of and Interactions on dynamic graphs.
1,185
scitldr
With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact. Unfortunately, negative results from game theory show there is little hope of understanding or controlling general n-player games. We therefore introduce smooth markets (SM-games), a class of n-player games with pairwise zero-sum interactions. SM-games codify a common design pattern in machine learning that includes some GANs, adversarial training, and other recent algorithms. We show that SM-games are amenable to analysis and optimization using first-order methods.

As artificial agents proliferate, it is increasingly important to analyze, predict and control their collective behavior. Unfortunately, despite almost a century of intense research since von Neumann, game theory provides little guidance outside a few special cases such as two-player zero-sum games, auctions, and potential games. Nash equilibria provide a general solution concept, but are intractable in almost all cases, for many different reasons. These and other negative results suggest that understanding and controlling societies of artificial agents is near hopeless. Nevertheless, human societies, of billions of agents, manage to organize themselves reasonably well and mostly progress with time, suggesting game theory is missing some fundamental organizing principles.

In this paper, we investigate how markets structure the behavior of agents. Market mechanisms have been studied extensively. However, prior work has been restricted to concrete examples, such as auctions and prediction markets, and strong assumptions, such as convexity. Our approach is more abstract and more directly suited to modern machine learning, where the building blocks are neural nets. Markets, for us, encompass discriminators and generators trading errors in GANs and agents trading wins and losses in StarCraft.

The paper introduces a class of games where optimization and aggregation make sense. The phrase requires unpacking. "Optimization" means gradient-based methods. Gradient descent (and friends) are the workhorse of modern machine learning; even when gradients are not available, gradient estimates underpin many reinforcement learning and evolutionary algorithms. "Aggregation" means weighted sums. Sums and averages are the workhorses for analyzing ensembles and populations across many fields. "Makes sense" means we can draw conclusions about the gradient-based dynamics of the collective by summing over properties of its members.

As motivation, we present some pathologies that arise in even the simplest smooth games. Examples in section 2 show that coupling strongly concave profit functions to form a game can lead to uncontrolled behavior, such as spiraling to infinity and excessive sensitivity to learning rates. Hence, one of our goals is to understand how to 'glue together' agents such that their collective behavior is predictable. Section 3 introduces a class of games where simultaneous gradient ascent behaves well and is amenable to analysis. In a smooth market (SM-game), each player's profit is composed of a personal objective and pairwise zero-sum interactions with other players. Zero-sum interactions are analogous to monetary exchange (my expenditure is your revenue), double-entry bookkeeping (credits balance debits), and conservation of energy (actions cause equal and opposite reactions). SM-games explicitly account for externalities.
Remarkably, building this simple bookkeeping mechanism into games has strong implications for the dynamics of gradient-based learners. SM-games generalize adversarial games and codify a common design pattern in machine learning; see section 3.1. Section 4 studies SM-games from two points of view: firstly, from that of a rational, profit-maximizing agent that makes decisions based on first-order profit forecasts; secondly, from that of the game as a whole. SM-games are not potential games, so the game does not optimize any single function. A collective of profit-maximizing agents is not rational, because they do not optimize a shared objective. We therefore introduce the notion of legibility, which quantifies how the dynamics of the collective relate to that of individual agents. Finally, section 5 applies legibility to prove some basic theorems on the dynamics of SM-games under gradient ascent. We show that (i) Nash equilibria are stable; (ii) if profits are strictly concave, then gradient ascent converges to a Nash equilibrium for all learning rates; and (iii) the dynamics are bounded under reasonable assumptions.

The results are important for two reasons. Firstly, we identify a class of games whose dynamics are, at least in some respects, amenable to analysis and control; the kinds of pathologies described in section 2 cannot arise in SM-games. Secondly, we identify the specific quantities, forecasts, that are useful to track at the level of individual firms and that can be meaningfully aggregated to draw conclusions about global dynamics. It follows that forecasts should be a useful lever for mechanism design.

A wide variety of machine learning markets and agent-based economies have been proposed and studied, e.g., Kakade et al. (2003; 2005). The goal of this paper is different: rather than propose another market mechanism, we abstract an existing design pattern and elucidate some of its consequences for interacting agents. Our approach draws on work studying convergence in generative adversarial networks, related minimax problems, and monotone games. We consider dynamics in continuous time, dw/dt = ξ(w), in this paper. Discrete dynamics, w_{t+1} ← w_t + ξ(w_t), require a more delicate analysis. In particular, we do not claim that optimizing GANs and SM-games is easy in discrete time. Rather, our analysis shows that it is relatively easy in continuous time, and therefore possible in discrete time with some additional effort. The contrast is with smooth games in general, where gradient-based methods have essentially no hope of finding local Nash equilibria even in continuous time.

Figure 1: Effect of learning rates in two games. Note: the x-axis is log-scale. Left: 'half a game' (example 2). Right: minimal SM-game (example 3). Top: both players have the same learning rate. Bottom: the second player has 1/8 the learning rate of the first (which is the same as for the top row). Reducing the learning rate of the second player destabilizes the dynamics in 'half a game', whereas the SM-game is essentially unaffected.

Vectors are column vectors. The notations S ≻ 0 and v ≻ 0 refer to a positive-definite matrix and a vector with all entries positive, respectively. Rather than losses, we work with profits. Proofs are in the appendix. We use economic terminology (firms, profits, forecasts, and sentiment) even though the examples of SM-games, such as GANs and adversarial training, are taken from mainstream machine learning. We hope the economic terminology provides an invigorating change of perspective.
The underlying mathematics is no more than first- and second-order derivatives. Smooth games model interacting agents with differentiable objectives. They are the kind of games that are played by neural nets. In practice, the differentiability assumption can be relaxed by replacing gradients with gradient estimates.

Definition 1. A smooth game consists of n players [n] = {1, ..., n}, equipped with twice continuously differentiable profit functions {π_i : R^d → R}_{i=1}^n. Player i controls the parameters w_i, with w = (w_1, ..., w_n) ∈ R^d. If players update their actions via simultaneous gradient ascent, then a smooth game yields a dynamical system specified by the differential equation dw/dt = ξ(w), where ξ(w) := (∇_{w_1} π_1(w), ..., ∇_{w_n} π_n(w)). The setup can be recast in terms of minimizing losses by setting ℓ_i := −π_i.

Smooth games are too general to be tractable, since they encompass all dynamical systems.

Lemma 1. Every continuous dynamical system on R^d, for any d, arises as simultaneous gradient ascent on the profit functions of a smooth game.

The next two sections illustrate some problems that arise in simple smooth games.

Definition 2. We recall some solution concepts from dynamical systems and game theory:
• A stable fixed point w* satisfies ξ(w*) = 0 and v^⊤ J(w*) v < 0 for all vectors v ≠ 0.
• A local Nash equilibrium w* has neighborhoods U_i of w*_i for all i, such that π_i(w*_i, w*_{-i}) > π_i(w_i, w*_{-i}) for all w_i ∈ U_i \ {w*_i} and all players i.
(Note that some works use a different notion of stable fixed point, requiring J to have positive eigenvalues.)

Example 1 below shows that stable fixed points and local Nash equilibria do not necessarily coincide. The notion of classical Nash equilibrium is ill-suited to nonconcave settings. Intuitively, a fixed point is stable if all trajectories sufficiently nearby flow into it. A joint strategy is a local Nash equilibrium if each player is harmed by making a small unilateral deviation. Local Nash differs from the classic definition in two ways. It is weaker, because it only allows small unilateral deviations; this is necessary since players are neural networks and profits are not usually concave. It is also stronger, because unilateral deviations decrease (rather than not increase) profits. A game is a potential game if ξ = ∇φ for some function φ.

Example 1 (potential game). Fix a small ε > 0 and consider the two-player game with profit functions π_1(w) = w_1 w_2 − (ε/2) w_1² and π_2(w) = w_1 w_2 − (ε/2) w_2². The game has a unique local Nash equilibrium at w = (0, 0), with π_1 = 0 = π_2. The game is chosen to be as nice as possible: π_1 and π_2 are strongly concave functions of w_1 and w_2, respectively. The game is a potential game, since ξ = (w_2 − εw_1, w_1 − εw_2) = ∇φ for φ(w) = w_1 w_2 − (ε/2)(w_1² + w_2²). Nevertheless, the game exhibits three related problems. Firstly, the Nash equilibrium is unstable: players at the Nash equilibrium can increase their profits via the joint update w ← (0, 0) + η · (1, 1), so that π_1(w) = η²(1 − ε/2) = π_2(w) > 0. The existence of a Nash equilibrium where players can improve their payoffs by coordinated action suggests the incentives are not well-designed. Secondly, the dynamics can diverge to infinity: starting at almost any initial point and applying simultaneous gradient ascent causes the norm ‖w^(t)‖_2 to increase without limit as t → ∞, and at an accelerating rate, due to a positive feedback loop between the players' parameters and profits. Finally, players impose externalities on each other: the decisions of the first player affect the profits of the second, and vice versa. Obviously, players must interact for a game to be interesting; however, positive feedback loops arise because the interactions are not properly accounted for. In short, simultaneous gradient ascent does not converge to the Nash equilibrium, and can diverge to infinity. It is open to debate whether the fault lies with gradients, the concept of Nash, or the game structure. In this paper, we take gradients and Nash equilibria as given and seek to design better games.
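The divergence in example 1 is easy to reproduce numerically. The sketch below runs discrete-time simultaneous gradient ascent; the values of ε, the step size, and the starting point are illustrative.

```python
import numpy as np

# Gradient ascent on example 1: pi_1 = w1*w2 - (eps/2)*w1^2,
# pi_2 = w1*w2 - (eps/2)*w2^2, so xi(w) = (w2 - eps*w1, w1 - eps*w2).
eps, lr = 0.1, 0.01
w = np.array([0.1, 0.1])          # start near the Nash equilibrium at (0, 0)
for step in range(2000):
    xi = np.array([w[1] - eps * w[0], w[0] - eps * w[1]])
    w = w + lr * xi               # simultaneous gradient ascent
print(np.linalg.norm(w))          # the norm grows without bound (positive feedback)
```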
It is open to debate whether the fault lies with gradients, the concept of Nash, or the game structure. In this paper, we take gradients and Nash equilibria as given and seek to design better games. Gradient-based optimizers rarely follow the actual gradient. For example RMSProp and Adam use adaptive, parameter-dependent learning rates. This is not a problem when optimizing a function. Suppose f (w) is optimized with reweighted gradient (∇f) η:= (η 1 ∇ 1 f, . . ., η n ∇ n f) where η 0 is a vector of learning rates. Even though (∇f) η is not necessarily the gradient of any function, it behaves like ∇f because they have positive inner product when ∇f = 0: Parameter-dependent learning rates thus behave well in potential games where the dynamics derive from an implicit potential function ξ(w) = ∇φ(w). Severe problems can arise in general games. 1 use a different notion of stable fixed point that requires J has positive eigenvalues. Example 2 ("half a game"). Consider the following game, where the w 2 -player is indifferent to w 1: The dynamics are clear by inspection: the w 2 -player converges to w 2 = 0, and then the w 1 -player does the same. It is hard to imagine that anything could go wrong. In contrast, behavior in the next example should be worse because convergence is slowed down by cycling around the Nash: Example 3 (minimal SM-game). A simple SM-game, see definition 3, is. Figure 1 shows the dynamics of the games, in discrete time, with small learning rates and small gradient noise. In the top panel, both players have the same learning rate. Both games converge. Example 2 converges faster -as expected -without cycling around the Nash. In the bottom panels, the learning rate of the second player is decreased by a factor of eight. The SM-game's dynamics do not change significantly. In contrast, the dynamics of example 2 become unstable: although player 1 is attracted to the Nash, it is extremely sensitive to noise and does not stay there for long. One goal of the paper is to explain why SM-games are more robust, in general, to differences in relative learning rates. Tools for automatic differentiation (AD) such as TensorFlow and PyTorch include stop gradient operators that stop gradients from being computed. For example, let ). The use of stop gradient means f is not strictly speaking a function and so we use ∇ AD to refer to its gradient under automatic differentiation. which is the simultaneous gradient from example 2. Any smooth vector field is the gradient of a function augmented with stop gradient operators, see appendix D. Stop gradient is often used in complex neural architectures (for example when one neural network is fed into another leading to multiplicative interactions), and is thought to be mostly harmless. Section 2.2 shows that stop gradients can interact in unexpected ways with parameter-dependent learning rates. It is natural to expect individually well-behaved agents to also behave well collectively. Unfortunately, this basic requirement fails in even the simplest examples. Maximizing a strongly concave function is well-behaved: there is a unique, finite global maximum. However, example 1 shows that coupling concave functions can cause simultaneous gradient ascent to diverge to infinity. The dynamics of the game differs in kind from the dynamics of the players in isolation. Example 2 shows that reducing the learning rate of a well-behaved (strongly concave) player in a simple game destabilizes the dynamics. 
How collectives behave is sensitive not only to profits, but also to relative learning rates. Off-the-shelf optimizers such as Adam modify learning rates under the hood, which may destabilize some games. Let us restrict to more structured games. Take an accountant's view of the world, where the only thing we track is the flow of money. Interactions are pairwise. Money is neither created nor destroyed, so interactions are zero-sum. If we model the interactions between players by differentiable functions g ij (w i, w j) that depend on their respective strategies then we have an SM-game. All interactions are explicitly tracked. There are no externalities off the books. Positive interactions, g ij > 0, are revenue, negative are costs, and the difference is profit. The model prescribes that all firms are profit maximizers. More formally: Definition 3 (SM-game). A smooth market is a smooth game where interactions between players are pairwise zero-sum. The profits have the form The functions f i can act as regularizers. Alternatively, they can be interpreted as natural resources or dummy players that react too slowly to model as players. Dummy players provide firms with easy (non-adversarial) sources of revenue. Humans, unlike firms, are not profit-maximizers; humans typically buy goods because they value them more than the money they spend on them. Appendix C briefly discusses extending the model. SM-games codify a common design pattern: 1. Optimizing a function. A near-trivial case is where there is a single player with profit π 1 (w) = f 1 (w). 2. Generative adversarial networks and related architectures like CycleGANs are zero or near zero sum (; ;). 3. Zero-sum polymatrix games are SM-games where f i (w i) ≡ 0 and g ij (w i, w j) = w i A ij w j for some matrices A ij. Weights are constrained to probability simplices. The games have nice properties including: Nash equilibria are computed via a linear program and correlated equilibria marginalize onto Nash equilibria . 4. Intrinsic curiosity modules use games to drive exploration. One module is rewarded for predicting the environment and an adversary is rewarded for choosing actions whose outcomes are not predicted by the first module . The modules share some weights, so the setup is nearly, but not exactly, an SM-game. 5. Adversarial training is concerned with the minmax problem , y i obtains a star-shaped SM-game with the neural net (player 0) at the center and n adversaries -one per datapoint (x i, y i) -on the arms. 6. Task-suites where a population of agents are trained on a population of tasks, form a bipartite graph. If the tasks are parametrized and adversarially rewarded based on their difficulty for agents, then the setup is an SM-game. 7. Homogeneous games arise when all the coupling functions are equal up to sign (recall An example is population self-play which lives on a graph where g ij (w i, w j):= P (w i beats w j) − 1 2 comes from the probability that policy w i beats w j. Monetary exchanges in SM-games are quite general. The error signals traded between generators and discriminators and the wins and losses traded between agents in StarCraft are two very different special cases. How to analyze the behavior of the market as a whole? Adam Smith claimed that profit-maximizing leads firms to promote the interests of society, as if by an invisible hand (Smith, 1776). More formally, we can ask: Is there a measure that firms collectively increase or decrease? 
It is easy to see that firms do not collectively maximize aggregate profit (AP) or aggregate revenue (AR): maximizing aggregate profit would require firms to ignore interactions with other firms, while maximizing aggregate revenue would require firms to ignore costs. In short, SM-games are not potential games; there is no function that they optimize in general. However, it turns out that the dynamics of SM-games aggregate the dynamics of individual firms, in a sense made precise in section 4.3.

Give an objective function to an agent. The agent is rational, relative to the objective, if it chooses actions because it forecasts they will lead to better outcomes as measured by the objective. In SM-games, agents are firms, the objective is profit, and forecasts are computed using gradients. Firms aim to increase their profit. Applying the first-order Taylor approximation obtains π_i(w_i + v_i, w_{-i}) = π_i(w) + v_i^⊤ ∇_{w_i} π_i(w) + {h.o.t.}, where {h.o.t.} refers to higher-order terms. Firm i's forecast of how profits will change if it modifies production by v_i is therefore f_{v_i}(w) := v_i^⊤ ∇_{w_i} π_i(w). Forecasts encode how individual firms expect profits to change, ceteris paribus.

How does profit maximizing by individual firms look from the point of view of the market as a whole? Summing over all firms obtains Σ_i [π_i(w_i + v_i, w_{-i}) − π_i(w)] ≈ f_v(w), where f_v(w) := Σ_i f_{v_i}(w) is the aggregate forecast. Unfortunately, the left-hand side of this sum is incoherent: it adds up the changes in profit that would be experienced by firms updating their production in isolation. However, firms change their production simultaneously; updates are not ceteris paribus, and so profit is not a meaningful macroeconomic concept. The following minimal example illustrates the problem.

Example 4. Suppose π_1(w) = w_1 w_2 and π_2(w) = −w_1 w_2. Fix w = (w_1, w_2) and let v = (w_2, −w_1). The sum of the changes in profit expected by the firms, reasoning in isolation, is f_{v_1}(w) + f_{v_2}(w) = w_2² + w_1² > 0, whereas the actual change in aggregate profit is zero, because π_1(x) + π_2(x) = 0 for any x.

Tracking aggregate profits is therefore not useful. The next section shows forecasts are better behaved.
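Example 4 can be verified in a few lines. The sketch below computes the individual ceteris paribus forecasts and contrasts them with the (identically zero) change in aggregate profit; the numeric values of w are arbitrary.

```python
import numpy as np

# Example 4: pi_1 = w1*w2, pi_2 = -w1*w2; update direction v = (w2, -w1).
w = np.array([1.0, 2.0])
v = np.array([w[1], -w[0]])

# Individual ceteris-paribus forecasts: f_{v_i} = v_i * d(pi_i)/d(w_i).
f1 = v[0] * w[1]          # d(pi_1)/d(w1) = w2, so f1 = w2^2
f2 = v[1] * (-w[0])       # d(pi_2)/d(w2) = -w1, so f2 = w1^2
print(f1 + f2)            # = w1^2 + w2^2 > 0: both firms expect gains

# Actual change in aggregate profit is identically zero:
agg = lambda x: x[0] * x[1] + (-(x[0] * x[1]))    # pi_1 + pi_2 = 0 everywhere
print(agg(w + 0.1 * v) - agg(w))                  # 0.0: the forecasts mislead
```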
Finally, we study the dynamics of gradient-based learners in SM-games. Suppose firms use gradient ascent. Firm i's updates are, infinitesimally, in the direction v i = ξ i (w) so that dwi dt = ξ i (w). Since updates are gradients, we can simplify our notation. Define firm i's forecast as f i (w):= We allow firms to choose their learning rates; firms with higher learning rates are more responsive. Define the η-weighted dynamics ξ η (w):= (η 1 ξ 1, . . ., η n ξ n) and η-weighted forecast as In this setting, proposition 2 implies that Proposition 3 (legibility under gradient dynamics). Fix dynamics dw dt:= ξ η (w). Sentiment decomposes additively: Thus, we can read off the aggregate dynamics from the dynamics of forecasts of individual firms. The pairwise zero-sum structure is crucial to legibility. It is instructive to take a closer look at example 1, where the forecasts are not legible. Suppose π 1 (w) = w 1 w 2 − 2 w 2 1 and π 2 (w) = w 1 w 2 − 2 w 2 2. Then ξ(w) = (w 2 − w 1, w 1 − w 2) and the firms' sentiments are df1 dt = − (w 2 − w 1) 2 and df2 dt = − (w 1 − w 2) 2 which are always non-positive. However, the aggregate sentiment is df dt which for small is dominated by w 1 w 2, and so can be either positive or negative. When w = we have Each firm expects their forecasts to decrease, and yet the opposite happens due to a positive feedback loop that ultimately causes the dynamics to diverge to infinity. We provide three fundamental on the dynamics of smooth markets. Firstly, we show that stability, from dynamical systems, and local Nash equilibrium, from game theory, coincide in SM-games: Theorem 4 (stability). A fixed point in an SM-game is a local Nash equilibrium iff it is stable. Thus, every local Nash equilibrium is contained in an open set that forms its basin of attraction. Secondly, we consider convergence. Lyapunov functions are tools for studying convergence. Given dynamical system dw dt = ξ(w) with fixed point w *, recall that V (w) is a Lyapunov function if: If a dynamical system has a Lyapunov function then the dynamics converge to the fixed point. Aggregate forecasts share properties (i) and (ii) with Lyapunov functions. (i) Shared global minima: f η (w) = 0 iff f η (w) = 0 for all η, η 0, which occurs iff w is a stationary point, ξ i (w) = 0 for all i. (ii) Positivity: f η (w) > 0 for all points that are not fixed points, for all η 0. We can therefore use forecasts to study convergence and divergence across all learning rates: Theorem 5. In continuous time, for all positive learning rates η 0, * is a stable fixed point (S ≺ 0), then there is an open neighborhood U w * where dfη dt (w) < 0 for all w ∈ U \ {w *}, so the dynamics converge to w * from anywhere in U. * is an unstable fixed point (S 0), there is an open neighborhood U w * such that dfη dt (w) > 0 for all w ∈ U \ {w *}, so the dynamics within U are repelled by w *. The theorem explains why SM-games are robust to relative differences in learning rates -in contrast to the sensitivity exhibited by the game in example 2. If a fixed point is stable, then for any dynamics dw dt = ξ η (w), there is a corresponding aggregate forecast f η (w) that can be used to show convergence. The aggregate forecasts provide a family of Lyapunov-like functions. Finally, we consider the setting where firms experience diminishing returns-to-scale for sufficiently large production vectors. The assumption is realistic for firms in a finite economy since revenues must eventually saturate whilst costs continue to increase with production. 
We provide three fundamental results on the dynamics of smooth markets. Firstly, we show that stability, from dynamical systems, and local Nash equilibrium, from game theory, coincide in SM-games:

Theorem 4 (stability). A fixed point in an SM-game is a local Nash equilibrium iff it is stable.

Thus, every local Nash equilibrium is contained in an open set that forms its basin of attraction. Secondly, we consider convergence. Lyapunov functions are tools for studying convergence. Given a dynamical system dw/dt = ξ(w) with fixed point w*, recall that V(w) is a Lyapunov function if (i) V(w*) = 0, (ii) V(w) > 0 for w ≠ w*, and (iii) dV/dt(w) < 0 for w ≠ w*. If a dynamical system has a Lyapunov function, then the dynamics converge to the fixed point. Aggregate forecasts share properties (i) and (ii) with Lyapunov functions: (i) shared global minima: f_η(w) = 0 iff f_{η'}(w) = 0 for all η, η' ≻ 0, which occurs iff w is a stationary point, ξ_i(w) = 0 for all i; and (ii) positivity: f_η(w) > 0 at all points that are not fixed points, for all η ≻ 0. We can therefore use forecasts to study convergence and divergence across all learning rates:

Theorem 5. In continuous time, for all positive learning rates η ≻ 0: (i) if w* is a stable fixed point (S ≺ 0), then there is an open neighborhood U ∋ w* where df_η/dt(w) < 0 for all w ∈ U \ {w*}, so the dynamics converge to w* from anywhere in U; (ii) if w* is an unstable fixed point (S ≻ 0), then there is an open neighborhood U ∋ w* such that df_η/dt(w) > 0 for all w ∈ U \ {w*}, so the dynamics within U are repelled by w*.

The theorem explains why SM-games are robust to relative differences in learning rates, in contrast to the sensitivity exhibited by the game in example 2. If a fixed point is stable, then for any dynamics dw/dt = ξ_η(w) there is a corresponding aggregate forecast f_η(w) that can be used to show convergence; the aggregate forecasts provide a family of Lyapunov-like functions. Finally, we consider the setting where firms experience diminishing returns-to-scale for sufficiently large production vectors. The assumption is realistic for firms in a finite economy, since revenues must eventually saturate whilst costs continue to increase with production.

Theorem 6 (boundedness). Suppose all firms have negative sentiment for sufficiently large values of w_i. Then the dynamics are bounded for all η ≻ 0.

The theorem implies that the kind of positive feedback loops that caused example 1 to diverge to infinity cannot occur in SM-games.

One of our themes is that legibility allows us to read off the dynamics of games. We make the claim visually explicit in this section. Let us start with a concrete game. Figure 3AB plots the dynamics of the SM-game in example 5, under two different learning rates for player 1. There is an unstable fixed point at the origin and an ovoidal cycle. Dynamics converge to the cycle from both inside and outside the ovoid. Changing player 1's learning rate (panel B) squashes the ovoid. Panels CD provide a cartoon map of the dynamics: there are two regions, the interior and exterior of the ovoid, plus the boundary formed by the ovoid itself. In general, the phase space of any SM-game is carved into regions where the sentiment df_η/dt(w) is positive and negative, with boundaries where sentiment is zero. The dynamics can be visualized as operating on a landscape where the height at each point w corresponds to the value of the aggregate forecast f_η(w). The dynamics do not always ascend or always descend the landscape; rather, sentiment determines whether the dynamics ascend, descend, or remain on a level-set. Since sentiment is additive, the decision to ascend or descend comes down to a weighted sum of the sentiments of the firms. Changing learning rates changes the emphasis given to different firms' opinions, and thus changes the shapes of the boundaries between regions in a relatively straightforward manner. SM-games can thus express richer dynamics than potential games (cycles will not occur when performing gradient ascent on a fixed objective), yet they still admit a relatively simple visual description in terms of a landscape and decisions about which direction to go (upwards or downwards). Computing the landscape for general SM-games, as for neural nets, is intractable.

Machine learning has got a lot of mileage out of treating differentiable modules like plug-and-play lego blocks. This works when the modules optimize a single loss and the gradients chain together seamlessly. Unfortunately, agents with differing objectives are far from plug-and-play. Interacting agents form games, and games are intractable in general. Worse, positive feedback loops can cause individually well-behaved agents to collectively spiral out of control. It is therefore necessary to find organizing principles (constraints) on how agents interact that ensure their collective behavior is amenable to analysis and control. The pairwise zero-sum condition that underpins SM-games is one such organizing principle, which happens to admit an economic interpretation. Our main result is that SM-games are legible: changes in aggregate forecasts are the sum of how individual firms expect their forecasts to change. It follows that we can translate properties of individual firms into guarantees on collective convergence, stability, and boundedness in SM-games; see theorems 4-6. Legibility is a local-to-global principle, whereby we can draw qualitative conclusions about the behavior of collectives based on the nature of their individual members. Identifying and exploiting games that embed local-to-global principles will become increasingly important as artificial agents become more common.

This section provides a physics-inspired perspective on smooth markets.
Consider a dynamical system with n particles moving according to the differential equations dw_i/dt = η_i ξ_i(w). The kinetic energy of a particle is mass times velocity squared, mv²; in our case, the energy of the i-th particle is η_i² ‖ξ_i(w)‖², where we interpret the learning rate squared, η_i², of particle i as its mass and ξ_i as its velocity. The total energy of the system is the sum over the kinetic energies of the particles: E(w) = Σ_i η_i² ‖ξ_i(w)‖². For example, in a Hamiltonian game, energy is conserved. Energy is measured in joules (kg·m²·s⁻²). The rate of change of energy with respect to time is power, measured in joules per second or watts (kg·m²·s⁻³). Conservation of energy means that a (closed) Hamiltonian system, in aggregate, generates no power. The existence of an invariant function makes Hamiltonian systems easy to reason about in many ways. Smooth markets are more general than Hamiltonian games, in that total energy is not necessarily conserved. Nevertheless, they are much more constrained than general dynamical systems. Legibility, proposition 3, says that the total power (the total rate of energy generation) in smooth markets is the sum of the powers (rates of energy generation) of the individual particles: dE/dt = Σ_i dE_i/dt.

Example where legibility fails. Once again, it is instructive to look at a concrete example where legibility fails. Recall the potential game in example 1, with profits π_1(w) = w_1 w_2 − (ε/2)w_1² and π_2(w) = w_1 w_2 − (ε/2)w_2², and sentiments df_1/dt = −ε(w_2 − εw_1)² and df_2/dt = −ε(w_1 − εw_2)². Physically, the negative sentiments df_1/dt < 0 and df_2/dt < 0 mean that each "particle" in the system, considered in isolation, is always dissipating energy. Nevertheless, as shown in section 5.1, the system as a whole has df/dt = 2ξ_1ξ_2 − ε(ξ_1² + ξ_2²), which is positive for some values of w. Thus, the system as a whole can generate energy through interaction effects between the (dissipative) particles.

Proof of lemma 1. Lemma 1. Every continuous dynamical system on R^d, for any d, arises as simultaneous gradient ascent on the profit functions of a smooth game. Proof. Specifically, we mean that every dynamical system of the form dw/dt = ξ(w), for smooth ξ, arises this way: treat each coordinate as a player and assign player i the profit π_i(w) := ∫_0^{w_i} ξ_i(s, w_{-i}) ds, so that ∇_{w_i} π_i(w) = ξ_i(w) and simultaneous gradient ascent recovers the original system.

Proof of proposition 2. Before proving proposition 2, we first prove a lemma. Lemma 7 (generalized Helmholtz decomposition). The Jacobian decomposes into J(w) = S(w) + A(w), where S(w) and A(w) are symmetric and antisymmetric, respectively, for all w ∈ R^d. Proof. Follows immediately by setting S := ½(J + J^⊤) and A := ½(J − J^⊤). Proposition 2. Sentiment is additive: D_v f_v(w) = Σ_i D_{v_i} f_{v_i}(w). Proof. For any collection of updates, we need to show that D_v f_v(w) = v^⊤ J v = v^⊤ S v + v^⊤ A v = v^⊤ S v = Σ_i v_i^⊤ S_ii v_i = Σ_i D_{v_i} f_{v_i}(w), because A is antisymmetric and S is block-diagonal.

Proof of proposition 3. First we prove a lemma. Lemma 8. Under the dynamics dw/dt = ξ_η(w), we have df_η/dt = ξ_η^⊤ J ξ_η. Proof. Observe by direct computation that ∇f_η(w) = J(w)^⊤ ξ_η(w). It is then easy to see that df_η/dt = ⟨∇f_η, ξ_η⟩ = ξ_η^⊤ (S + A) ξ_η, where S = S^⊤ since S is symmetric. By antisymmetry of A, we have that v^⊤ A v = 0 for all v. The expression thus simplifies to ξ_η^⊤ S ξ_η = Σ_i η_i² ξ_i^⊤ S_ii ξ_i, by the block-diagonal structure of S. Proposition 3 (legibility under gradient dynamics). Fix dynamics dw/dt := ξ_η(w). Proof. Applying the chain rule obtains df_η/dt = ⟨∇f_η(w), dw/dt⟩ = ⟨∇f_η(w), ξ_η(w)⟩, where the second equality follows by construction of the dynamical system as dw/dt = ξ_η(w). Lemma 8 shows that this equals Σ_i η_i² ξ_i^⊤ S_ii ξ_i, the weighted sum of the firms' individual sentiments, as required.

Proof of theorem 4. Theorem 4. A fixed point in an SM-game is a local Nash equilibrium iff it is stable. Proof. Suppose that w* is a fixed point of the game, that is, suppose ξ(w*) = 0. Recall from lemma 7 that the Jacobian of ξ decomposes uniquely into two components, J(w) = S(w) + A(w), where S ≡ S^⊤ is symmetric and A + A^⊤ ≡ 0 is antisymmetric. It follows that v^⊤ J v = v^⊤ S v + v^⊤ A v = v^⊤ S v, since A is antisymmetric. Thus, w* is a stable fixed point iff S(w*) ≺ 0 is negative definite.
In an SM-game, the antisymmetric component is arbitrary and the symmetric component is block diagonal, where blocks correspond to players' parameters. That is, $S_{ij} = 0$ for $i \neq j$, because the interactions between players $i$ and $j$ are pairwise zero-sum and are therefore necessarily confined to the antisymmetric component of the Jacobian. Since $S$ is block-diagonal, it follows that $S$ is negative definite iff the submatrices $S_{ii}$ along the diagonal are negative definite for all players $i$; and $S_{ii}(w^*) = \nabla^2_{ii}\pi_i(w^*)$ is negative definite iff $\pi_i$ is strictly concave in the parameters controlled by player $i$ at $w^*$. The result follows.

Proof of theorem 5. Theorem 5. In continuous time, for all positive learning rates $\eta \succ 0$, the dynamics converge to a stable Nash equilibrium whenever each firm's profit is strictly concave in its own parameters (a symmetric statement holds in the strictly convex case). Proof. We prove the first part. The second follows by a symmetric argument. First, strict concavity implies $\nabla^2_{ii}\pi_i$ is negative definite for all $i$. Second, since $S$ is block-diagonal, with zeros in all blocks $S_{ij}$ for pairs of players $i \neq j$, it follows that $S$ is also negative definite. Observe that $\frac{df_\eta}{dt} = \xi_\eta^\top S\,\xi_\eta < 0$ for all $\xi_\eta \neq 0$ since $S$ is negative definite. Thus, simultaneous gradient ascent on the profits acts to infinitesimally reduce the function $f_\eta(w)$. Since $\xi_\eta$ reduces $f_\eta$, it will converge to a stationary point satisfying $\nabla f_\eta = 0$. Observe that $\nabla f_\eta = 0$ iff $\xi_\eta = 0$, since $\nabla f_\eta = J^\top \xi_\eta$ and the symmetric component $S$ of the Jacobian is negative definite. Finally, observe that all stationary points of $f_\eta$, and hence of $\xi_\eta$, are stable fixed points of $\xi_\eta$ because $S$ is negative definite, which implies that the fixed point is a Nash equilibrium.

Proof of theorem 6. Consider the dynamical system defined by $\frac{dw}{dt} = \xi_\eta$. Since we are operating in continuous time, all that is required is to show that $f_\eta(w^{(t)}) = g' < g$ implies that $f_\eta(w^{(t+\epsilon)}) < g$ for all sufficiently small $\epsilon > 0$. By assumption, $\frac{df_\eta}{dt}(w) < 0$ for all $w$ in a sufficiently small ball centered at $w^{(t)}$. In other words, the dynamics $\frac{dw}{dt} = \xi_\eta$ reduce $f_\eta$, and the result follows.

Definition 3 proposes a model of monetary exchange in smooth markets. It ignores some major aspects of actual markets. For example, SM-games do not model inventories, investment, borrowing or interest rates. Moreover, in practice money is typically exchanged in return for goods or services, which are ignored by the model. In this section, we sketch one way to extend SM-games to model the exchange of both money and goods, although still without accounting for inventories, which would more significantly complicate the model. The proposed extension is extremely simplistic. It is provided to indicate how the model's expressive power can be increased, and the complications that arise.

Suppose $\pi_i(w) = f_i(w) + \sum_{j \neq i}\big[\alpha_{ij}\,\omega_{ij}(w_i, w_j) - g_{ij}(w_i, w_j)\big]$. The functions $\omega_{ij}$ measure the amount of goods (say, widgets) that are exchanged between firms $i$ and $j$. We assume that $\omega_{ij} + \omega_{ji} \equiv 0$, since widgets are physically passed between the firms and therefore one firm's increase must be the other's decrease. For two firms to enter into an exchange it must be that they subjectively value the widgets differently, hence we introduce the parameters $\alpha_{ij}$. Note that if $\alpha_{ij} = 1$ for all $ij$ then the model is equivalent to an SM-game. The transaction between firms $i$ and $j$ is net beneficial to both firms if $\alpha_{ij}\cdot\omega_{ij}(w_i, w_j) > g_{ij}(w_i, w_j)$ and, simultaneously, $\alpha_{ji}\cdot\omega_{ji}(w_i, w_j) > g_{ji}(w_i, w_j)$. We can interpret the inequalities as follows. First suppose that $\omega_{ij}$ and $g_{ij}$ always have the same sign. The assumption is reasonable so long as firms do not pay to give away widgets.
Further assume without loss of generality that $\omega_{ij}$ and $g_{ij}$ are both greater than zero; in other words, firm $i$ is buying widgets from firm $j$. The above inequalities can then be rewritten as

$\underbrace{\alpha_{ij}\cdot\omega_{ij}(w_i, w_j)}_{\text{amount firm } i \text{ values the widgets}} \;>\; \underbrace{g_{ij}(w_i, w_j)}_{\text{amount firm } i \text{ pays}}$ and $\underbrace{\alpha_{ji}\cdot\omega_{ij}(w_i, w_j)}_{\text{amount firm } j \text{ values the widgets}} \;<\; \underbrace{g_{ij}(w_i, w_j)}_{\text{amount firm } j \text{ is paid}}.$

It follows that both firms benefit from the transaction.

Implications for dynamics. The off-block-diagonal terms of the symmetric and antisymmetric components of the game Jacobian are

$S_{ij} = \frac{\alpha_{ij} - \alpha_{ji}}{2}\cdot\nabla^2_{ij}\,\omega_{ij}(w_i, w_j)$ and $A_{ij} = \frac{\alpha_{ij} + \alpha_{ji}}{2}\cdot\nabla^2_{ij}\,\omega_{ij}(w_i, w_j),$

where it is easy to check that $S_{ij} = S_{ji}$ and $A_{ij} + A_{ji} = 0$. The off-block-diagonal terms of $S$ have consequences for how forecasts behave: schematically, the aggregate sentiment picks up correction terms of the form $\xi_i^\top S_{ij}\,\xi_j$ for $i \neq j$ on top of the sum of the individual sentiments.

When are near SM-games well-behaved? If $\alpha_{ij} = \alpha_{ji}$ for all $i, j$ then the correction is zero; if $\alpha_{ij} \sim \alpha_{ji}$ then the corrections due to different valuations of goods will be negligible, and the game should be correspondingly well-behaved.

What can go wrong? The corrected decomposition implies that the dynamics of near SM-games (specifically, whether the dynamics are increasing or decreasing the aggregate forecast) cannot be explained in terms of the sum of the sentiments of individual firms. The correction terms involve interactions between the dynamics of different firms and the (second-order) quantities of goods exchanged. In principle, these terms could be arbitrarily large positive or negative numbers. Concretely, the correction terms involving couplings between the dynamics of different firms can lead to positive feedback loops, as in example 1, where the dynamics spiral off to infinity even though both players have strongly concave profit functions.

Lemma 9. Any smooth vector field can be constructed as the gradient of a function augmented with stop-gradient operators. Proof. Suppose $\xi$ is a smooth vector field on $\mathbb{R}^d$. Define $g(w) = \sum_{i=1}^d \xi_i(\hat{w})\cdot w_i$, where $\hat{w} := \operatorname{stop\_gradient}(w)$. Since automatic differentiation treats $\hat{w}$ as a constant, it follows that $\nabla_{AD}\,g(w) = \xi(w)$, as required.
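Lemma 9's construction translates directly into autodiff code. The following sketch, ours, uses PyTorch's detach as the stop-gradient operator to realize a pure rotation field, which is not a classical gradient field, as the autodiff gradient of a scalar function.

```python
import torch

# Lemma 9 in code: realize an arbitrary smooth vector field xi as the
# autodiff gradient of g(w) = <xi(stop_gradient(w)), w>. Here xi is a
# curl-only rotation field, so no ordinary potential function exists.
def xi(w):
    return torch.stack([-w[1], w[0]])

w = torch.tensor([0.3, -1.2], requires_grad=True)

# detach() plays the role of the stop_gradient operator.
g = torch.dot(xi(w.detach()), w)
g.backward()

print(w.grad)          # tensor([ 1.2000,  0.3000])
print(xi(w.detach()))  # matches: the autodiff gradient of g equals xi(w)
```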
We introduce a class of n-player games suited to gradient-based methods.
1,186
scitldr
While counter machines have received little attention in theoretical computer science since the 1960s, they have recently achieved a newfound relevance to the field of natural language processing (NLP). Recent work has suggested that some strong-performing recurrent neural networks utilize their memory as counters. Thus, one potential way to understand the success of these networks is to revisit the theory of counter computation. Therefore, we choose to study the abilities of real-time counter machines as formal grammars. We first show that several variants of the counter machine converge to express the same class of formal languages. We also prove that counter languages are closed under complement, union, intersection, and many other common set operations. Next, we show that counter machines cannot evaluate boolean expressions, even though they can weakly validate their syntax. This has implications for the interpretability and evaluation of neural network systems: successfully matching syntactic patterns does not guarantee that a counter-like model accurately represents underlying semantic structures. Finally, we consider the question of whether counter languages are semilinear. This work makes general contributions to the theory of formal languages that are of particular interest for the interpretability of recurrent neural networks.

It is often taken for granted that modeling natural language syntax well requires a hierarchically structured grammar formalism. Early work in linguistics established that finite-state models are insufficient for describing the dependencies in natural language data. Instead, a formalism capable of expressing the relations in terms of hierarchical constituents ought to be necessary. Recent advances in deep learning and NLP, however, challenge this long-held belief. Neural network formalisms like the long short-term memory network (LSTM) have been shown to perform well on tasks requiring structure sensitivity, even though it is not obvious that such models have the capacity to represent hierarchical structure. This mismatch raises interesting questions for both linguists and practitioners of NLP. It is unclear what about the LSTM's structure lends itself towards good linguistic representations, and under what conditions these representations might fall short of grasping the structure and meaning of language.

Recent work has suggested that the expressive capacity of LSTMs resembles that of counter machines. One line of work studied LSTMs with fully saturated weights (i.e., the activation functions evaluate to their asymptotic values instead of intermediate rational values) and showed that such models can express simplified counter languages. Other work, on the other hand, showed that the general counter languages are an upper bound on the expressive capacity of saturated LSTMs. Thus, there seems to be a strong theoretical connection between LSTMs and counter automata. Several works also report experimental results suggesting that some class of counter languages matches the learnable capacity of LSTMs trained by gradient descent.

Taking the counter machine as a simplified formal model of the LSTM, we study the formal properties of counter machines as grammars. We do this with the hope of understanding to what degree counter machines, and LSTMs by extension, have computational properties well-suited for representing the structure of natural language.
The contributions of this paper are as follows:

• We prove that general counter machines, incremental counter machines, and stateless counter machines have equivalent expressive capacity, whereas simplified counter machines are strictly weaker than the general class.
• We demonstrate that counter languages are closed under complement, union, intersection, and many other common operations.
• We show that counter machines are incapable of representing the deep syntactic structure or semantics of boolean expressions, even though they can validate whether a boolean expression is well-formed.
• We prove that a certain subclass of the counter languages are semilinear, and conjecture that this holds for all counter languages.

Informally, we can think of counter automata as finite-state automata that have been augmented by a finite number of integer-valued counters. While processing a string, the machine can update the values of the counters, and the counters can in turn inform the machine's state transitions. Early results in theoretical computer science established that a 2-counter machine with unbounded computation time is Turing-complete. However, restricting computation to be real-time (i.e., one iteration of computation per input token) severely limits the counter machine's computational capacity. A similar fact holds for recurrent neural networks like LSTMs.

We study the capabilities of several types of real-time counter automata. The first counter automaton we introduce is the general counter machine. This machine can manipulate the counters by adding or subtracting from them. The other variants that we will go on to define are special cases of this general machine. For $m \in \mathbb{Z}$, we write $+m$ to denote the function $\lambda x.\,x + m$. By $\times 0$, we denote the constant zero function $\lambda x.\,0$.

Given an input string x and a counter machine, we perform computation by processing x one token at a time. For each token, we use $u$ to update the counters and $\delta$ to update the state according to the current input token, the current state, and a finite mask of the current counter values. We formalize this in Definition 2.2. As a preliminary remark on notation, we use $z(x)$ to denote the zero-check function $z(x) = 0$ if $x = 0$ and $1$ otherwise. Given a vector $\mathbf{x}$, we use $z(\mathbf{x})$ to represent this function broadcast over each element of the vector.

Definition 2.2 (Counter machine computation). Let $\langle q, c\rangle \in Q \times \mathbb{Z}^k$ be a configuration of machine M. Upon reading input $x_t \in \Sigma$, we define the transition $\langle q, c\rangle \to_{x_t} \langle \delta(x_t, q, z(c)),\; u(x_t, q, z(c))(c)\rangle$.

Definition 2.3 (Real-time acceptance). For any string $x \in \Sigma^*$ with length n, a counter machine accepts x if there exist states $q_1, \dots, q_n$ and counter configurations $c_1, \dots, c_n$ such that, starting from the initial configuration, $\langle q_{t-1}, c_{t-1}\rangle \to_{x_t} \langle q_t, c_t\rangle$ for each t, and the final configuration $\langle q_n, c_n\rangle$ is accepting.

Definition 2.4 (Real-time language acceptance). A counter machine accepts a language L if, for each $x \in \Sigma^*$, it accepts x if and only if $x \in L$. We denote the set of languages that are acceptable in real time by a general counter machine as CL. We will use the terms "accept" and "decide" interchangeably, as accepting and deciding a language are equivalent for real-time automata.

Now, we can consider various restrictions of the general counter machine, and the corresponding classes of languages acceptable by such automata. First, we present the simplified counter machine discussed in prior work. The counter update function in the simplified counter machine has two important constraints compared to the general machine. First, it can only be conditioned by the input symbol at each time step.
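As a concrete companion to Definitions 2.2-2.4, here is a minimal simulator for a general real-time counter machine. The class interface and the aⁿbⁿ example are our own illustration, not the paper's formalism verbatim.

```python
# A minimal real-time counter machine: the update function u and transition
# function delta see the input token, the state, and the zero-mask z(c).
def z(counters):
    return tuple(0 if c == 0 else 1 for c in counters)

class CounterMachine:
    def __init__(self, q0, u, delta, accept, k):
        self.q0, self.u, self.delta, self.accept, self.k = q0, u, delta, accept, k

    def run(self, x):
        q, c = self.q0, [0] * self.k
        for tok in x:
            mask = z(c)
            updates = self.u(tok, q, mask)   # one update function per counter
            q = self.delta(tok, q, mask)
            c = [f(ci) for f, ci in zip(updates, c)]
        return self.accept(q, z(c))

# Example over Sigma = {a, b}: accept a^n b^n with one counter.
inc = lambda x: x + 1    # the +1 update
dec = lambda x: x - 1    # the -1 update
m = CounterMachine(
    q0="A",
    u=lambda t, q, mk: (inc,) if t == "a" else (dec,),
    delta=lambda t, q, mk: ("DEAD" if q == "DEAD" or (q == "B" and t == "a")
                            else ("A" if t == "a" else "B")),
    accept=lambda q, mk: q == "B" and mk == (0,),
    k=1,
)
print(m.run("aaabbb"), m.run("aaabb"), m.run("abab"))  # True False False
```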
Second, it can only increment or decrement its counters instead of being able to add or subtract arbitrary constants.

Definition 2.5 (Simplified counter machine). A counter machine is simplified if $u$ has the form $u : \Sigma \to \{-1, +0, +1, \times 0\}^k$, i.e., each counter's update is determined by the input symbol alone.

Another variant that we consider is the incremental counter machine. This machine is also constrained to have only increments and decrements on its counters, but the counter update function is allowed to depend on the state and counter value.

Definition 2.6 (Incremental counter machine). A counter machine is incremental if $u$ has the form $u : \Sigma \times Q \times \{0, 1\}^k \to \{-1, +0, +1\}^k$.

Finally, we define a stateless variant of the counter machine. Removing state from the counter machine is equivalent to allowing it to only have one state $q_0$.

Definition 2.7 (Stateless counter machine). A counter machine is stateless if $Q = \{q_0\}$.

Our first result relating counter classes is to show that the simplified counter languages are a proper subset of the general counter languages. The weakness of the simplified machine is that the update function is conditioned only by the input symbol. Thus, languages like $a^m b^{2m}$, which require switching counting behavior, cannot be decided correctly. We formalize this in Theorem 3.1.

Theorem 3.1 (Weakness of SCL). Let SCL be the set of languages acceptable in real time by a simplified counter machine. Then SCL ⊂ CL.

Proof. Consider the language $a^m b^{2m}$. This is trivially acceptable by a 1-counter machine that adds 2 for each a and subtracts 1 for each b. On the other hand, we shall show that it cannot be accepted by any simplified machine. Assume by way of contradiction that such a simplified machine M exists. Tracking the ratio between a's and b's requires infinite state. Thus, the counters of M, as opposed to the finite state, must encode whether $2m = l$ for strings of the form $a^m b^l$. Let c be the value of some counter in M. Since the update is determined by the input symbol alone, we can decompose c into the update contributed by a's and the update contributed by b's: on input $a^m b^l$, $c = m \cdot u(a) + l \cdot u(b)$ with $u(a), u(b) \in \{-1, 0, +1\}$ (up to $\times 0$ resets, which erase the a contribution). Exhausting all the possible functions that c can compute, we get $c \in \{0,\ \pm m,\ \pm l,\ \pm(m + l),\ \pm(m - l)\}$. We ignore the first four options for $z(c)$, as they clearly do not relate m to l. The final option checks whether the ratio is 1, not 2. Thus, $z(c)$ cannot distinguish whether $2m = l$.

Note that this argument breaks down if we allow the counter update to depend on the state. In that case, we can build a machine that has two counters and two states: $q_0$ adds 1 to the first counter while it reads a, and then decrements the first counter and increments the second counter when it reads b. When the first counter is empty and the second counter is not empty, $q_0$ transitions to $q_1$, which decrements the second counter. We accept if and only if both counters are 0 after $x_n$.

Unlike the simplified counter machine, the incremental machine has the same linguistic capacity as the general machine. We can simulate each counter of a general machine with a finite amount of overhead. This provides a reduction from general to incremental machines.

Theorem 3.2 (Generality of ICL). Let ICL be the set of languages acceptable in real time by an incremental counter machine. Then ICL = CL.

Proof. Let d be the maximum amount that is ever added to or subtracted from a counter c in M. We simulate c in the incremental machine M′ using a counter c′ and a value $q \in \mathbb{Z} \bmod d$ encoded in finite state. We will implement a "ring-counter" encoding of c such that $c = d \cdot c' + q$. To simulate a $\times 0$ update on c, we apply $\times 0$ to c′, and transition state such that $q := 0$. To simulate a $+m$ update on c for some $m \in \mathbb{Z}$, we first change state such that $q := (q + m) \bmod d$.
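The 1-counter machine from the proof of Theorem 3.1 can be checked directly. The sketch below adds 2 per a and subtracts 1 per b; the a*b* ordering check, which the formal machine folds into its finite state, is made explicit here.

```python
# Standalone check of the general 1-counter machine for a^m b^{2m}:
# add 2 per 'a', subtract 1 per 'b', accept iff the counter ends at 0.
def accepts_am_b2m(x):
    c, seen_b = 0, False
    for tok in x:
        if tok == "a":
            if seen_b:            # 'a' after 'b': wrong shape, reject
                return False
            c += 2                # general machines may add constants > 1
        elif tok == "b":
            seen_b = True
            c -= 1
        else:
            return False
    return c == 0

for s in ["aabbbb", "aabbb", "abb", "ba", ""]:
    print(repr(s), accepts_am_b2m(s))  # True False True False True
```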
Next, we apply the following update to c′: we add $\lfloor (q + m) / d \rfloor \in \{-1, 0, +1\}$, i.e., we increment or decrement c′ exactly when the ring counter wraps around. We can compute $z(c)$ by checking whether $z(c') = 0$ and $q = 0$.

Similarly, restricting a counter machine to be stateless does not weaken its expressive capacity. We show how to reduce an arbitrary stateful machine M to a stateless machine M′ that has been augmented with additional counters. The key idea here is that we can use the additional counters as a one-hot vector that tracks the state of the original machine. At initialization, the new counter vector q encodes the initial state since $q = 0 = \omega(0)$, where $\omega(i)$ denotes the encoding of state i. Furthermore, we maintain the invariant that, at any given time, $q = \omega(i)$ for some state i. Thus, the additional counters now encode the current state. Let $x \,\Vert\, y$ denote the concatenation of vectors x and y. We define the acceptance mask in M′ over the concatenated counter vector accordingly; an analogous transformation allows us to update the counters inherited from M. The last step is to properly update the new counters q. For each transition $\delta(x_t, q_i, b) = q_j$ in M, we update q by adding $-\omega(i) + \omega(j)$. This ensures that the updated value of q is one-hot, since, starting from $q = \omega(i)$, we obtain $\omega(i) - \omega(i) + \omega(j) = \omega(j)$.

The general counter machine, incremental counter machine, and stateless counter machine all converge to the same linguistic capacity, which we call CL. The simplified counter machine, however, has a linguistic capacity SCL that is strictly weaker than CL.

Another way to understand the counter languages is through their closure properties. It turns out that the real-time counter languages are closed under a wide array of common operations, including complement, intersection, union, set difference, and symmetric set difference. The general result in Theorem 4.1 implies these closure properties, as well as many others.

Theorem 4.1 (General set operation closure). Let P be an m-ary operation over languages. If there exists an m-ary boolean function p such that, for all x, $x \in P(L_1, \dots, L_m) \iff p\big(\mathbb{1}[x \in L_1], \dots, \mathbb{1}[x \in L_m]\big) = 1$, then CL and SCL are both closed under P.

Proof. First, we construct counter machines $M_1, \dots, M_m$ that decide the counter languages $L_1, \dots, L_m$. We define a new machine M′ that, on input x, simulates $M_1, \dots, M_m$ in parallel, and accepts if and only if p applied to the machines' acceptance decisions returns 1.

Let Λ be a placeholder for either CL or SCL, and let $L_1, L_2 \in \Lambda$. By Theorem 4.1, Λ is closed under complement ($\bar{L}_1$), union ($L_1 \cup L_2$), intersection ($L_1 \cap L_2$), set difference ($L_1 \setminus L_2$), and symmetric set difference ($L_1 \triangle L_2$), among other operations.

We now study the ability of counter machines to represent the language $L_m$ (Definition 5.1). Like natural language, $L_m$ has a deep structure recursively composed from hierarchical constituents.

Definition 5.1 ($L_m$). For any m, let $L_m$ be the language generated by:

<exp> -> <VALUE>
<exp> -> <UNARY> <exp>
<exp> -> <BINARY> <exp> <exp>
...
<exp> -> <m-ARY> <exp> ... <exp>

Prior work shows that, by implementing Algorithm 1, even a 1-counter machine can decide $L_m$ in real time. Algorithm 1 uses a counter, initialized to 0, to keep track of the depth at any given index. If the depth counter reaches −1 at the end of the string, the machine has verified that the string is well-formed. We define the arity of a <VALUE> as 0, and the arity of an <m-ARY> operation as m.

Algorithm 1 Deciding $L_m$
1: procedure DECIDE(x)
2:   for each $x_t \in x$ do
3:     c ← c + arity($x_t$) − 1
4:   return c = −1

While Algorithm 1 decides $L_m$, we observe that it is agnostic to the deep structure of the input in that it does not represent the dependencies between tokens. This means that it could not be used to evaluate these expressions, for example. Based on this observation, we prove that no counter machine can evaluate boolean expressions, due to the deep structural sensitivity that semantic evaluation (as opposed to syntactic acceptance) requires.
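A direct port of Algorithm 1 follows. We add one guard beyond the pseudocode: the running depth must stay non-negative on every proper prefix, a condition a real-time counter machine can track in finite state; without it, postfix-like strings such as "0 0 and" would pass the final check. The token set and arity table are illustrative.

```python
# Algorithm 1 in Python: one counter tracks depth over a prefix-notation
# expression; acceptance requires the counter to end at exactly -1.
ARITY = {"0": 0, "1": 0, "not": 1, "and": 2, "or": 2}

def decide_Lm(tokens):
    if not tokens:
        return False
    c = 0
    for t in tokens[:-1]:
        c += ARITY[t] - 1
        if c < 0:                  # a proper prefix already formed a full <exp>
            return False
    return c + ARITY[tokens[-1]] - 1 == -1

print(decide_Lm("and 0 1".split()))           # True
print(decide_Lm("and or 0 1 not 1".split()))  # True
print(decide_Lm("and 0".split()))             # False (missing an argument)
print(decide_Lm("0 0 and".split()))           # False (caught by the guard)
```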
We view boolean evaluation as a simpler formal analogy to evaluating the compositional semantics of natural language. To be more formal, consider an instance of $L_2$ with values {0, 1} and binary operations {∧, ∨}. We assign the following semantics to the terminals: $[\![0]\!] = 0$ and $[\![1]\!] = 1$. Furthermore, our semantics evaluates each nonterminal by applying the denotation of each syntactic argument to the semantic arguments of the operation. For example, $[\![\wedge\, x\, y]\!] = [\![x]\!] \wedge [\![y]\!]$. We also define semantics for non-constituent prefixes via function composition. For example, the prefix ∧ ∨ denotes the function $\lambda\langle a, b, c\rangle.\ (a \vee b) \wedge c$. Finally, we define the language B as the set of expressions x where $[\![x]\!] = 1$ under these semantics.

Theorem 5.1 (Weak evaluation). For any k, a real-time k-counter machine cannot decide B.

Proof. Assume by way of contradiction that such an evaluation can be performed. We consider an input x that contains a prefix of p operators followed by a suffix of p + 1 values. For the machine to evaluate x correctly, the configuration after $x_p$ must encode which boolean function the prefix specifies. However, a counter machine with k counters only has $O(p^k)$ configurations after reading p characters. We show by induction over p that a p-length prefix of operators can encode $\geq 2^p$ boolean functions. Since the counter machine does not have enough configurations to encode all the possibilities, we reach a contradiction.

Base case. With p = 0, we have a null prefix followed by one value that determines $[\![x]\!]$. Therefore, we can represent exactly 1 ($= 2^0$) function, which is the identity.

Inductive case. The expression has a prefix of operators $x_{1:p+1}$ followed by values $x_{p+2:2p+3}$. We decompose the semantics of the full expression into the leading operator $x_1$ applied to the remainder, yielding a candidate function $f_\wedge$ or $f_\vee$ according to whether $x_1$ is ∧ or ∨. To show these are distinct, consider the minimal sequence of values that will satisfy them according to a right-to-left ordering of the sequences. For $f_\wedge$, this minimal sequence ends in 1, whereas for $f_\vee$ it must end in a 0. Therefore, f can take 2 unique values for each value of $[\![x_{2:2p+2}]\!]$. Thus, a (p + 1)-length sequence of prefixes can encode $\geq 2 \cdot 2^p = 2^{p+1}$ boolean functions.

Theorem 5.1 shows how counter machines cannot represent certain hierarchical dependencies, even when the generated language is within the counter machine's weak expressive capacity. This is analogous to how CFGs can weakly generate Dutch center embedding, even though they cannot assign the correct cross-serial dependencies between subjects and verbs.

Semilinearity is a condition that has been proposed as a desired property for any formalism of natural language syntax. Intuitively, semilinearity ensures that the set of string lengths in a language is not unnaturally sparse. Regular, context-free, and a variety of mildly context-sensitive languages are known to be semilinear. The semilinearity of CL is an interesting open question if we aim to understand the abilities of counter machines as grammars. We first define semilinearity over sets of vectors before considering languages. To start, we introduce the notion of a linear set:

Definition 6.1 (Linear set). A set $S \subseteq \mathbb{N}^k$ is linear if there exist $W \in \mathbb{N}^{k \times m}$ and $b \in \mathbb{N}^k$ such that $S = \{\,W\lambda + b \mid \lambda \in \mathbb{N}^m\,\}$.

Semilinearity, then, is a weaker condition that specifies that a set is made up of a finite number of linear components:

Definition 6.2 (Semilinear set). A set $S \subseteq \mathbb{N}^k$ is semilinear if it is the finite union of linear sets.

To apply this definition to a language L, we translate each sentence $x \in L$ into a vector by taking $\Psi(x)$, the Parikh mapping of x. The Parikh mapping of a sentence is, in more familiar machine learning terms, just its bag-of-tokens representation.
For example, the Parikh mapping of abaa with respect to Σ = {a, b} is ⟨3, 1⟩.

Definition 6.3 (Semilinear language). A language L is semilinear if $\{\Psi(x) \mid x \in L\}$ is semilinear.

While we do not prove that the general counter languages are semilinear, we do prove it for a dramatically restricted subclass of the counter languages. We define QSCL as the set of languages acceptable by a counter machine that is both simplified (Definition 2.5) and stateless (Definition 2.7), and show that this class is indeed semilinear.

Theorem 6.1. Every language in QSCL is semilinear.

Proof. We express L as a finite union of finite intersections of sets of the form $\{x \mid c_i(x) = b_i\}$. Since semilinear languages are closed under finite union and intersection, the problem reduces to showing that $\{x \mid c_i(x) = b_i\}$ is semilinear. We apply the following trick: we factor each such string as a prefix ending in the last reset of counter i followed by a reset-free suffix, i.e., as $\Sigma^* Z\, L(b, i)$ (strings with no reset are handled analogously), where Z is the set of all tokens that set counter i to 0, and $L(b, i)$ is the set of suffixes after the last occurrence of some token in Z, for every string in L. Since semilinear languages are closed under concatenation, and $\Sigma^*$ and the finite language Z are trivially semilinear, we just need to show that $L(b, i)$ is semilinear. Counter i cannot be set to zero on strings of $L(b, i)$, so we can write $c_i(x) = \sum_{\sigma \in \Sigma} \#_\sigma(x)\, u_i[\sigma]$, where $\#_\sigma(x)$ is the number of occurrences of σ in x, and $u_i$ denotes the vector of possible updates to counter i, where each index corresponds to a different $\sigma \in \Sigma$. So, $L(b, i)$ is the linear language whose Parikh image is the solution set of the linear equation $u_i \cdot \Psi(x) = b_i$.

Although the proof of Theorem 6.1 is nontrivial, it should be noted that QSCL is quite a weak class. Such languages have limited ability to even detect the relative order of tokens in a string. We hope this argument might be extended to show that SCL or CL is semilinear.

We have shown that many variants of the counter machine converge to express the same class of formal languages, which supports the claim that CL is a robustly defined class. We also proved that real-time counter languages are closed under a large number of common set operations. This provides tools for future work investigating real-time counter automata. We also showed that counter automata are incapable of evaluating boolean expressions, even though they are capable of verifying that boolean expressions are syntactically well-formed. This has a clear parallel in the domain of natural language, where deciding whether a sentence is grammatical is a different task than representing its deep syntactic or semantic structure. A general take-away from our results is that just because a counter machine (or LSTM) is sensitive to surface patterns in linguistic data does not mean it can build correct semantic representations. Counter memory can be exploited to weakly match patterns in language, which might provide the wrong kinds of inductive bias for achieving sophisticated natural language understanding. Finally, we asked whether counter languages are semilinear as another way of studying their linguistic capacity. We concluded only that a quite weak subclass of the counter languages are semilinear, and encourage future work to address the general case.
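The Parikh mapping and Definition 6.1 are easy to make executable. The brute-force linear-set membership test below is our own illustration.

```python
import itertools
from collections import Counter

# The Parikh mapping and a brute-force membership test for a linear set
# {W @ lam + b : lam in N^m}, following definitions 6.1-6.3.
def parikh(x, sigma):
    counts = Counter(x)
    return tuple(counts[s] for s in sigma)

def in_linear_set(v, W, b, bound=20):
    m = len(W[0])
    for lam in itertools.product(range(bound + 1), repeat=m):
        if all(sum(W[i][j] * lam[j] for j in range(m)) + b[i] == v[i]
               for i in range(len(v))):
            return True
    return False

sigma = ("a", "b")
print(parikh("abaa", sigma))                        # (3, 1)
# Parikh image of {a^n b^n} is the linear set {(1,1)*lam + (0,0)}.
W, b = [[1], [1]], [0, 0]
print(in_linear_set(parikh("aabb", sigma), W, b))   # True
print(in_linear_set(parikh("aab", sigma), W, b))    # False
```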
We study the class of formal languages acceptable by real-time counter automata, a model of computation related to some types of recurrent neural networks.
1,187
scitldr
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries, however, these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias": they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL, the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.

Text summarization is the process of automatically generating natural language summaries from an input document while retaining the important points. By condensing large quantities of information into short, informative summaries, summarization can aid many downstream applications such as creating news digests, search, and report generation. There are two prominent types of summarization algorithms. First, extractive summarization systems form summaries by copying parts of the input BID5 BID22. Second, abstractive summarization systems generate new phrases, possibly rephrasing or using words that were not in the original text BID4.

Neural network models based on the attentional encoder-decoder model for machine translation BID0 were able to generate abstractive summaries with high ROUGE scores. However, these systems have typically been used for summarizing short input sequences (one or two sentences) to generate even shorter summaries. For example, the summaries on the DUC-2004 dataset generated by the state-of-the-art system by BID40 are limited to 75 characters. Prior work also applied abstractive summarization models to the CNN/Daily Mail dataset BID8, which contains input sequences of up to 800 tokens and multi-sentence summaries of up to 100 tokens. But the accompanying analysis illustrates a key problem with attentional encoder-decoder models: they often generate unnatural summaries consisting of repeated phrases.

We present a new abstractive summarization model that achieves state-of-the-art results on the CNN/Daily Mail and similarly good results on the New York Times dataset (NYT) BID30. To our knowledge, this is the first end-to-end model for abstractive summarization on the NYT dataset.

[Figure: Illustration of the encoder and decoder attention functions combined. The two context vectors (marked "C") are computed from attending over the encoder hidden states and decoder hidden states. Using these two contexts and the current decoder hidden state ("H"), a new word is generated and added to the output sequence.]

We introduce a key attention mechanism and a new learning objective to address the repeating phrase problem: (i) we use an intra-temporal attention in the encoder that records previous attention weights for each of the input tokens, while a sequential intra-attention model in the decoder takes into account which words have already been generated by the decoder.
(ii) We propose a new objective function that combines the maximum-likelihood cross-entropy loss used in prior work with rewards from policy gradient reinforcement learning to reduce exposure bias. Our model achieves 41.16 ROUGE-1 on the CNN/Daily Mail dataset. Moreover, we show, through human evaluation of generated outputs, that our model generates more readable summaries compared to other abstractive approaches.

In this section, we present our intra-attention model based on the encoder-decoder network BID33. In all our equations, $x = \{x_1, x_2, \dots, x_n\}$ represents the sequence of input (article) tokens, $y = \{y_1, y_2, \dots, y_{n'}\}$ the sequence of output (summary) tokens, and $\Vert$ denotes the vector concatenation operator. The input is read by a bidirectional LSTM encoder producing hidden states $h^e_i$, and the output is generated by a single decoder LSTM with hidden states $h^d_t$.

At each decoding step t, we use an intra-temporal attention function to attend over specific parts of the encoded input sequence, in addition to the decoder's own hidden state and the previously generated word BID31. This kind of attention prevents the model from attending over the same parts of the input on different decoding steps. Prior work has shown that such an intra-temporal attention can reduce the amount of repetition when attending over long documents. We define $e_{ti}$ as the attention score of the hidden input state $h^e_i$ at decoding time step t: $e_{ti} = f(h^d_t, h^e_i)$, where f can be any function returning a scalar $e_{ti}$ from the $h^d_t$ and $h^e_i$ vectors. While some attention models use functions as simple as the dot-product between the two vectors, we choose to use a bilinear function: $f(h^d_t, h^e_i) = h^{d\top}_t W^e_{\mathrm{attn}}\, h^e_i$.

We normalize the attention weights with the following temporal attention function, penalizing input tokens that have obtained high attention scores in past decoding steps. We define new temporal scores $e'_{ti}$:

$e'_{ti} = \exp(e_{ti})$ if $t = 1$, and $e'_{ti} = \dfrac{\exp(e_{ti})}{\sum_{j=1}^{t-1} \exp(e_{ji})}$ otherwise.

Finally, we compute the normalized attention scores $\alpha^e_{ti}$ across the inputs and use these weights to obtain the input context vector $c^e_t$: $\alpha^e_{ti} = \frac{e'_{ti}}{\sum_{j=1}^{n} e'_{tj}}$ and $c^e_t = \sum_{i=1}^{n} \alpha^e_{ti}\, h^e_i$.

While this intra-temporal attention function ensures that different parts of the encoded input sequence are used, our decoder can still generate repeated phrases based on its own hidden states, especially when generating long sequences. To prevent that, we can incorporate more information about the previously decoded sequence into the decoder. Looking back at previous decoding steps will allow our model to make more structured predictions and avoid repeating the same information, even if that information was generated many steps away. To achieve this, we introduce an intra-decoder attention mechanism. This mechanism is not present in existing encoder-decoder models for abstractive summarization. For each decoding step t, our model computes a new decoder context vector $c^d_t$. We set $c^d_1$ to a vector of zeros since the generated sequence is empty on the first decoding step. For $t > 1$, we use the following equations:

$e^d_{tt'} = h^{d\top}_t W^d_{\mathrm{attn}}\, h^d_{t'}, \quad \alpha^d_{tt'} = \dfrac{\exp(e^d_{tt'})}{\sum_{j=1}^{t-1} \exp(e^d_{tj})}, \quad c^d_t = \sum_{j=1}^{t-1} \alpha^d_{tj}\, h^d_j.$

A closely-related intra-RNN attention function has been introduced by BID3, but their implementation works by modifying the underlying LSTM function, and they do not apply it to long sequence generation problems. This is a major difference with our method, which makes no assumptions about the type of decoder RNN, and is thus simpler and more widely applicable to other types of recurrent networks.

To generate a token, our decoder uses either a token-generation softmax layer or a pointer mechanism to copy rare or unseen tokens from the input sequence. We use a switch function that decides at each decoding step whether to use the token generation or the pointer BID7.
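A small numpy sketch of the intra-temporal attention above; hidden sizes, the bilinear matrix, and the random states are placeholders.

```python
import numpy as np

# Sketch of the intra-temporal attention normalization: bilinear scores,
# temporal rescaling by past exp-scores, softmax, and the context vector.
rng = np.random.default_rng(0)
n, d = 6, 4                       # input length, hidden size
H_enc = rng.normal(size=(n, d))   # encoder hidden states h^e_i
W_attn = rng.normal(size=(d, d))  # bilinear attention matrix

past_exp_scores = np.zeros(n)     # running sum of exp(e_ji) over past steps j
for t, h_dec in enumerate(rng.normal(size=(3, d)), start=1):
    e = H_enc @ W_attn @ h_dec            # bilinear scores e_ti, one per input
    if t == 1:
        e_prime = np.exp(e)
    else:
        e_prime = np.exp(e) / past_exp_scores  # penalize repeatedly-used inputs
    alpha = e_prime / e_prime.sum()            # normalized weights alpha^e_ti
    context = alpha @ H_enc                    # input context vector c^e_t
    past_exp_scores += np.exp(e)
    print(t, np.round(alpha, 3))
```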
We define $u_t$ as a binary value, equal to 1 if the pointer mechanism is used to output $y_t$, and 0 otherwise. In the following equations, all probabilities are conditioned on $y_1, \dots, y_{t-1}, x$, even when not explicitly stated. Our token-generation layer generates the following probability distribution: $p(y_t \mid u_t = 0) = \operatorname{softmax}\big(W_{\mathrm{out}}\,[h^d_t \Vert c^e_t \Vert c^d_t] + b_{\mathrm{out}}\big)$. On the other hand, the pointer mechanism uses the temporal attention weights $\alpha^e_{ti}$ as the probability distribution to copy the input token $x_i$: $p(y_t = x_i \mid u_t = 1) = \alpha^e_{ti}$. We also compute the probability of using the copy mechanism for the decoding step t: $p(u_t = 1) = \sigma\big(W_u\,[h^d_t \Vert c^e_t \Vert c^d_t] + b_u\big)$, where σ is the sigmoid activation function. Putting Equations 9, 10 and 11 together, we obtain our final probability distribution for the output token $y_t$:

$p(y_t) = p(u_t = 1)\, p(y_t \mid u_t = 1) + p(u_t = 0)\, p(y_t \mid u_t = 0).$

The ground-truth value for $u_t$ and the corresponding index i of the target input token when $u_t = 1$ are provided at every decoding step during training. We set $u_t = 1$ either when $y_t$ is an out-of-vocabulary token or when it is a pre-defined named entity (see Section 5).

In addition to using the same embedding matrix $W_{\mathrm{emb}}$ for the encoder and the decoder sequences, we introduce some weight-sharing between this embedding matrix and the $W_{\mathrm{out}}$ matrix of the token-generation layer, similarly to Inan et al. and BID26. This allows the token-generation function to use syntactic and semantic information contained in the embedding matrix.

2.5 REPETITION AVOIDANCE AT TEST TIME

Another way to avoid repetitions comes from our observation that in both the CNN/Daily Mail and NYT datasets, ground-truth summaries almost never contain the same trigram twice. Based on this observation, we force our decoder to never output the same trigram more than once during testing. We do this by setting $p(y_t) = 0$ during beam search when outputting $y_t$ would create a trigram that already exists in the previously decoded sequence of the current beam.

In this section, we explore different ways of training our encoder-decoder model. In particular, we propose reinforcement learning-based algorithms and their application to our summarization task. The most widely used method to train a decoder RNN for sequence generation, called the "teacher forcing" algorithm BID37, minimizes a maximum-likelihood loss at each decoding step. We define $y^* = \{y^*_1, y^*_2, \dots, y^*_{n'}\}$ as the ground-truth output sequence for a given input sequence x. The maximum-likelihood training objective is the minimization of the following loss:

$L_{ml} = -\sum_{t=1}^{n'} \log p(y^*_t \mid y^*_1, \dots, y^*_{t-1}, x).$

However, minimizing $L_{ml}$ does not always produce the best results on discrete evaluation metrics such as ROUGE BID15. This phenomenon has been observed with similar sequence generation tasks, like image captioning with CIDEr BID28 and machine translation with BLEU. There are two main reasons for this discrepancy. The first one, called exposure bias BID27, comes from the fact that the network has knowledge of the ground truth sequence up to the next token during training but does not have such supervision when testing, hence accumulating errors as it predicts the sequence. The second reason is due to the large number of potentially valid summaries, since there are more ways to arrange tokens to produce paraphrases or different sentence orders. The ROUGE metrics take some of this flexibility into account, but the maximum-likelihood objective does not.
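The trigram rule of section 2.5 amounts to a simple check against the current beam's decoded prefix; a sketch (ours):

```python
# Trigram-repetition blocking for beam search: during decoding, a candidate
# token is disallowed (p(y_t) set to 0) if appending it would repeat a
# trigram already present in the beam's prefix.
def creates_repeated_trigram(prefix, candidate):
    seq = prefix + [candidate]
    if len(seq) < 3:
        return False
    new_trigram = tuple(seq[-3:])
    seen = {tuple(seq[i:i + 3]) for i in range(len(seq) - 3)}
    return new_trigram in seen

prefix = "the cat sat the cat".split()
print(creates_repeated_trigram(prefix, "sat"))  # True: repeats "the cat sat"
print(creates_repeated_trigram(prefix, "ran"))  # False: "the cat ran" is new
```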
One way to remedy this is to learn a policy that maximizes a specific discrete metric instead of minimizing the maximum-likelihood loss, which is made possible with reinforcement learning. In our model, we use the self-critical policy gradient training algorithm BID28. For this training algorithm, we produce two separate output sequences at each training iteration: $y^s$, which is obtained by sampling from the $p(y^s_t \mid y^s_1, \dots, y^s_{t-1}, x)$ probability distribution at each decoding time step, and $\hat{y}$, the baseline output, obtained by maximizing the output probability distribution at each time step, essentially performing a greedy search. We define $r(y)$ as the reward function for an output sequence y, comparing it with the ground truth sequence $y^*$ with the evaluation metric of our choice:

$L_{rl} = \big(r(\hat{y}) - r(y^s)\big) \sum_{t=1}^{n'} \log p(y^s_t \mid y^s_1, \dots, y^s_{t-1}, x).$

We can see that minimizing $L_{rl}$ is equivalent to maximizing the conditional likelihood of the sampled sequence $y^s$ if it obtains a higher reward than the baseline $\hat{y}$, thus increasing the reward expectation of our model.

One potential issue of this reinforcement training objective is that optimizing for a specific discrete metric like ROUGE does not guarantee an increase in quality and readability of the output. It is possible to game such discrete metrics and increase their score without an actual increase in readability or relevance BID16. While ROUGE measures the n-gram overlap between our generated summary and a reference sequence, human readability is better captured by a language model, which is usually measured by perplexity. Since our maximum-likelihood training objective (Equation 14) is essentially a conditional language model, calculating the probability of a token $y_t$ based on the previously predicted sequence $\{y_1, \dots, y_{t-1}\}$ and the input sequence x, we hypothesize that it can assist our policy learning algorithm to generate more natural summaries. This motivates us to define a mixed learning objective function that combines equations 14 and 15:

$L_{mixed} = \gamma\, L_{rl} + (1 - \gamma)\, L_{ml},$

where γ is a scaling factor accounting for the difference in magnitude between $L_{rl}$ and $L_{ml}$. A similar mixed-objective learning function has been used in prior work for machine translation on short sequences, but this is its first use in combination with self-critical policy learning for long summarization to explicitly improve readability in addition to evaluation metrics.

Neural encoder-decoder models are widely used in NLP applications such as machine translation BID33, summarization BID4, and question answering BID8. These models use recurrent neural networks (RNNs), such as the long short-term memory (LSTM) network BID9, to encode an input sentence into a fixed vector, and create a new output sequence from that vector using another RNN. To apply this sequence-to-sequence approach to natural language, word embeddings BID20 BID25 are used to convert language tokens to vectors that can be used as inputs for these networks. Attention mechanisms BID0 make these models more performant and scalable, allowing them to look back at parts of the encoded input sequence while the output is generated. These models often use a fixed input and output vocabulary, which prevents them from learning representations for new words. One way to fix this is to allow the decoder network to point back to some specific words or sub-sequences of the input and copy them onto the output sequence BID35. BID7 and BID18 combine this pointer mechanism with the original word generation layer in the decoder to allow the model to use either method at each decoding step.
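The self-critical loss $L_{rl}$ and the mixed objective translate into a few lines of code. The sketch below assumes the sampled sequence's token log-probabilities and the two rewards (e.g., ROUGE-L of $y^s$ and of the greedy baseline $\hat{y}$) have already been computed.

```python
import torch

# Self-critical policy gradient loss plus the mixed ML+RL objective.
# Minimizing l_rl raises the likelihood of y^s exactly when its reward
# beats the greedy baseline's (reward_sample > reward_baseline).
def mixed_objective(log_probs_sample, reward_sample, reward_baseline,
                    nll_ml, gamma=0.9984):
    l_rl = (reward_baseline - reward_sample) * log_probs_sample.sum()
    return gamma * l_rl + (1.0 - gamma) * nll_ml

log_probs = torch.tensor([-1.2, -0.7, -2.1], requires_grad=True)
loss = mixed_objective(log_probs, reward_sample=0.42, reward_baseline=0.35,
                       nll_ml=torch.tensor(3.0))
loss.backward()
print(loss.item(), log_probs.grad)  # positive advantage pushes log-probs up
```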
Reinforcement learning (RL) is a way of training an agent to interact with a given environment in order to maximize a reward. RL has been used to solve a wide variety of problems, usually when an agent has to perform discrete actions before obtaining a reward, or when the metric to optimize is not differentiable and traditional supervised learning methods cannot be used. This is applicable to sequence generation tasks, because many of the metrics used to evaluate these tasks (like BLEU, ROUGE or METEOR) are not differentiable. In order to optimize such metrics directly, BID27 applied the REINFORCE algorithm BID36 to train various RNN-based models for sequence generation tasks, leading to significant improvements compared to previous supervised learning methods. BID1 also use a different kind of reinforcement learning algorithm to optimize BLEU scores in machine translation tasks. While both these methods require an additional neural network, called a critic model, to predict the expected reward and stabilize the objective function gradients, Rennie et al. designed a self-critical sequence training method that does not require this critic model and leads to further improvements on image captioning tasks.

Most summarization models studied in the past are extractive in nature BID5 BID22 BID6, and usually work by identifying the most important phrases of an input document and re-arranging them into a new summary sequence. The more recent abstractive summarization models have more degrees of freedom and can create more novel sequences. Many abstractive models, such as Rush et al., BID4 and others, are based on the neural encoder-decoder architecture (Section 4.1). BID19 extend the encoder-decoder architecture with a variational auto-encoder, and use REINFORCE to train it as well.

A well-studied set of summarization tasks is the Document Understanding Conference (DUC). These summarization tasks are varied, including short summaries of a single document and long summaries of multiple documents categorized by subject. Most abstractive summarization models have been evaluated on the DUC-2004 dataset, and outperform extractive models on that task BID5. However, models trained on the DUC-2004 task can only generate very short summaries of up to 75 characters, and are usually used with one or two input sentences. Prior work applied different kinds of attention mechanisms for summarization on the CNN dataset, and Nallapati et al. used different attention and pointer functions on the CNN and Daily Mail datasets combined. In parallel to our work, BID32 also developed an abstractive summarization model on this dataset, with an extra loss term to increase temporal coverage of the encoder attention function.

We evaluate our model on a modified version of the CNN/Daily Mail dataset BID8, following the same pre-processing steps described in previous work; we refer the reader to that paper for a detailed description. The final dataset contains 287,113 training examples, 13,368 validation examples and 11,490 testing examples. After limiting the input length to 800 tokens and the output length to 100 tokens, the average input and output lengths are respectively 632 and 53 tokens. The New York Times (NYT) dataset BID30 is a large collection of articles published between 1996 and 2007.
Even though this dataset has been used to train extractive summarization systems BID6 BID10 or closely-related models for predicting the importance of a phrase in an article BID39 BID24 BID11, we are the first group to run an end-to-end abstractive summarization model on the article-abstract pairs of this dataset. While CNN/Daily Mail summaries have a similar wording to their corresponding articles, NYT abstracts are more varied, are shorter, and can use a higher level of abstraction and paraphrase. Because of these differences, the two formats are a good complement to each other for abstractive summarization models. We describe the dataset preprocessing and pointer supervision in Section A of the Appendix.

We evaluate the intra-decoder attention mechanism and the mixed-objective learning by running the following experiments on both datasets. We first run maximum-likelihood (ML) training with and without intra-decoder attention (removing $c^d_t$ from Equations 9 and 11 to disable intra-attention) and select the best performing architecture. Next, we initialize our model with the best ML parameters and compare reinforcement learning (RL) with our mixed-objective learning (ML+RL), following our objective functions in Equations 15 and 16. The hyperparameters and other implementation details are described in the Appendix. We report the full-length F-1 scores of the ROUGE-1, ROUGE-2 and ROUGE-L metrics with the Porter stemmer option. For RL and ML+RL training, we use the ROUGE-L score as the reinforcement reward. We also tried ROUGE-2, but we found that it created summaries that almost always reached the maximum length, often ending sentences abruptly.

Our results for the CNN/Daily Mail dataset are shown in Table 1, and those for the NYT dataset in Table 2. We observe that the intra-decoder attention function helps our model achieve better ROUGE scores on CNN/Daily Mail but not on the NYT dataset. Further analysis on the CNN/Daily Mail test set shows that intra-attention increases the ROUGE-1 score of examples with a long ground-truth summary, while decreasing the score of shorter summaries, as illustrated in Figure 3. This confirms our assumption that intra-attention improves performance on longer output sequences, and explains why intra-attention does not improve performance on the NYT dataset, which has shorter summaries on average.

In addition, we can see that on all datasets, both the RL and ML+RL models obtain much higher scores than the ML model. In particular, these methods clearly surpass the previous state-of-the-art model on the CNN/Daily Mail dataset, as well as the lead-3 extractive baseline (taking the first 3 sentences of the article as the summary) and the SummaRuNNer extractive model BID22. BID32 also reported results with a closely-related abstractive model on CNN/Daily Mail, but used a different dataset preprocessing pipeline, which makes direct comparison with our numbers difficult. However, their best model has lower ROUGE scores than their lead-3 baseline, while our ML+RL model beats the lead-3 baseline, as shown in Table 1. Thus, we conclude that our mixed-objective model obtains higher ROUGE performance than theirs.

We also compare our model against extractive baselines (either lead sentences or lead words) and the extractive summarization model built by BID6, which was trained using a version of the NYT dataset that is 6 times smaller than ours but contains longer summaries. We trained our ML+RL model on their dataset and show the results in Table 4.
Similarly to BID6, we report the limited-length ROUGE recall scores instead of full-length F-scores. For each example, we limit the generated summary length or the baseline length to the ground-truth summary length. Our results show that our mixed-objective model has higher ROUGE scores than their extractive model and the extractive baselines.

[Table fragment (ROUGE-1 / ROUGE-2 / ROUGE-L on CNN/Daily Mail): BID22 39.2 / 15.7 / 35.5; SummaRuNNer BID22 39.6 / 16.2 / 35.3; words-lvt2k-temp-att …]

[Table 5 caption: Comparison of human readability scores on a random subset of the CNN/Daily Mail test dataset. All models are with intra-decoder attention.]

We perform human evaluation to ensure that our increase in ROUGE scores is also accompanied by an increase in human readability and quality. In particular, we want to know whether the ML+RL training objective did improve readability compared to RL.

Evaluation setup: To perform this evaluation, we randomly select 100 test examples from the CNN/Daily Mail dataset. For each example, we show the original article and the ground-truth summary, as well as summaries generated by different models, side by side to a human evaluator. The human evaluator does not know which summaries come from which model or which one is the ground truth. Two scores from 1 to 10 are then assigned to each summary, one for relevance (how well the summary captures the important parts of the article) and one for readability (how well-written the summary is). Each summary is rated by 5 different human evaluators on Amazon Mechanical Turk, and the results are averaged across all examples and evaluators.

Results: Our human evaluation results are shown in Table 5. Even though RL has the highest ROUGE-1 and ROUGE-L scores, it produces the least readable summaries among our experiments. The most common readability issue observed in our RL results, as shown in the example of Table 3, is the presence of short and truncated sentences towards the end of sequences. This confirms that optimizing for a single discrete evaluation metric such as ROUGE with RL can be detrimental to model quality. On the other hand, our RL+ML summaries obtain the highest readability and relevance scores among our models, hence solving the readability issues of the RL model while also having a higher ROUGE score than ML. This shows the value of the RL+ML training method. We also report perplexity scores in Table 5. Even though the ML model has the lowest perplexity, it does not have the highest readability. This indicates that perplexity measurements cannot replace human judgment for readability evaluation.

We presented a new model and training procedure that obtains state-of-the-art results in text summarization for the CNN/Daily Mail dataset, improves the readability of the generated summaries, and is better suited to long output sequences. We also ran our abstractive model on the NYT dataset for the first time. We saw that despite their common use for evaluation, ROUGE scores have their shortcomings and should not be the only metric to optimize for summarization models on long sequences. Our intra-attention decoder and combined training objective could be applied to other sequence-to-sequence tasks with long inputs and outputs, which is an interesting direction for further research.

A NYT DATASET

We remove all documents that do not have a full article text, abstract or headline. We concatenate the headline, byline and full article text, separated by special tokens, to produce a single input sequence for each example. We tokenize the input and abstract pairs with the Stanford tokenizer.
We convert all tokens to lower-case and replace all numbers with "0", remove "(s)" and "(m)" marks in the abstracts, and remove all occurrences of the following words, singular or plural, if they are surrounded by semicolons or at the end of the abstract: "photo", "graph", "chart", "map", "table" and "drawing". Since the NYT abstracts almost never contain periods, we consider them multi-sentence summaries if we split sentences based on semicolons. This allows us to make the summary format and evaluation procedure similar to the CNN/Daily Mail dataset. These pre-processing steps give us an average of 549 input tokens and 40 output tokens per example, after limiting the input and output lengths to 800 and 100 tokens.

We created our own training, validation, and testing splits for this dataset. Instead of producing random splits, we sorted the documents by their publication date in chronological order and used the first 90% (589,284 examples) for training, the next 5% for validation, and the remaining 5% for testing. This makes our dataset splits easily reproducible and follows the intuition that, if used in a production environment, such a summarization model would be applied to recent articles rather than random ones.

We run each input and abstract sequence through the Stanford named entity recognizer (NER). For all named-entity tokens in the abstract of type "PERSON", "LOCATION", "ORGANIZATION" or "MISC", we find their first occurrence in the input sequence. We use this information to supervise $p(u_t)$ (Equation 11) and $\alpha^e_{ti}$ (Equation 4) during training. Note that the NER tagger is only used to create the dataset and is no longer needed during testing, thus we are not adding any dependencies to our model. We also add pointer supervision for out-of-vocabulary output tokens if they are present in the input.

For ML training, we use the teacher forcing algorithm, with the only difference that at each decoding step, we choose with a 25% probability the previously generated token instead of the ground-truth token as the decoder input token $y_{t-1}$, which reduces exposure bias BID34. We use $\gamma = 0.9984$ for the ML+RL loss function. We use two 200-dimensional LSTMs for the bidirectional encoder and one 400-dimensional LSTM for the decoder. We limit the input vocabulary size to 150,000 tokens, and the output vocabulary to 50,000 tokens, by selecting the most frequent tokens in the training set. Input word embeddings are 100-dimensional and are initialized with GloVe BID25. Based on these dimensions and sizes, our final model has 16.9M trainable parameters, 15M of which are word embeddings. We train all our models with Adam BID13 with a batch size of 50 and a learning rate of 0.001 for ML training and 0.0001 for RL and ML+RL training. At test time, we use beam search of width 5 on all our models to generate our final predictions.
A summarization model combining a new intra-attention and reinforcement learning method to increase summary ROUGE scores and quality for long sequences.
1,188
scitldr
Knowledge Distillation (KD) is a common method for transferring the "knowledge" learned by one machine learning model (the teacher) into another model (the student), where typically the teacher has a greater capacity (e.g., more parameters or higher bit-widths). To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data, and this data is the only medium by which the teacher's knowledge can be demonstrated. Due to the difference in model capacities, the student may not benefit fully from the same data points on which the teacher is trained. On the other hand, a human teacher may demonstrate a piece of knowledge with individualized examples adapted to a particular student, for instance, in terms of her cultural background and interests. Inspired by this behavior, we design data augmentation agents with distinct roles to facilitate knowledge distillation. Our data augmentation agents generate distinct training data for the teacher and student, respectively. We focus specifically on KD when the teacher network has greater precision (bit-width) than the student network. We find empirically that specially tailored data points enable the teacher's knowledge to be demonstrated more effectively to the student. We compare our approach with existing KD methods on training popular neural architectures and demonstrate that role-wise data augmentation improves the effectiveness of KD over strong prior approaches. The code for reproducing our results will be made publicly available.

In the educational psychology literature, it is generally considered beneficial if teachers can adapt curricula based upon students' prior experiences. These vary widely depending on students' cultural backgrounds, previous educational experiences, interests, and motivations. To help students with different prior experiences comprehend, memorise, and consolidate a piece of knowledge, teachers may provide extra and customized teaching material during their teaching processes. For instance, when teaching the concept of the color pink, a teacher may choose flamingos, sakura (cherry blossoms), or ice cream cones as the example, depending on a student's background.

Knowledge distillation (KD) is a common framework for training machine learning models. It works by transferring knowledge from a higher-capacity teacher model to a lower-capacity student model. Most KD methods can be categorized by how they define the knowledge stored in the teacher (i.e., the "soft targets" of training as defined in existing literature). For instance, the work that originally proposed KD for neural networks defines the output class probabilities (i.e., soft labels) generated by the teacher as the targets for assisting the training of students. A follow-up work defined the soft targets via the feature maps in the teacher model's hidden layers. To train a student network with KD effectively, it is important to distill as much knowledge from the teacher as possible. However, previous methods overlook the importance of the medium by which the teacher's knowledge is demonstrated: the training data points. We conjecture that there exist examples, not necessarily seen and ingested by the teacher, that might make it easier for the student to absorb the teacher's knowledge. Blindly adding more training examples may not be beneficial, because it may slow down training and introduce unnecessary biases.
The analogy with how human teachers adjust their teaching to their students' particular situations (e.g., using the feedback gathered from the students during teaching) suggests that a reasonable yet uninvestigated approach might be to augment the training data for both the teacher and the student according to distinct policies. In this paper, we study whether and how adaptive data augmentation and knowledge distillation can be leveraged synergistically to better train student networks. We propose a two-stage, role-wise data augmentation process for KD. This process consists of: (1) training a teacher network till convergence while learning a schedule of policies to augment the training data specifically for the teacher; (2) distilling the knowledge from the teacher into a student network while learning another schedule of policies to augment the training data specifically for the student. It is worth noting that this two-stage framework is orthogonal to existing methods for KD, which focus on how the knowledge to be distilled is defined; thus, our approach can be combined with previous methods straightforwardly.

Although our proposed method can in principle be applied to any models trained via KD, we focus specifically on how to use it to transfer the knowledge from a full-precision teacher network into a student network with lower bit-width. Network quantization is crucial when deploying trained models on embedded devices, or in data centers to reduce energy consumption. KD-based quantization jointly trains a full-precision model, which acts as the teacher, alongside a low-precision model, which acts as the student. Previous work has shown that distilling a full-precision teacher's knowledge into a low-precision student, followed by fine-tuning, incurs noticeable performance degradation, especially when the bit-widths are below four. We show that it is advantageous to use adaptive data augmentation to generate more training data for the low-precision network based on its specific weaknesses. For example, low-precision networks may have difficulties learning rotation-related patterns, and the data augmentation agent should be aware of this and generate more such data points. A positive side-effect is that the improvement brought by our method is more significant in this low-precision setting than in experiments where all models are full-precision.

Knowledge distillation. KD was initially proposed for model compression, where a powerful wide/deep teacher distills knowledge to a narrow/shallow student to improve its performance. In terms of the definition of the knowledge to be distilled from the teacher, existing models typically use the teacher's class probabilities and/or intermediate features. Among those KD methods that utilize intermediate feature maps, Relational KD (RKD) considers the intra-relationship within the same feature map, while Multi-Head KD (MHKD) and KD using SVD (KD-SVD) utilize the inter-relationship across feature maps. By contrast, we propose to incorporate both the intra- and inter-relationships within and across feature maps.

Automated data augmentation. Manually applying data augmentation rules such as random rotation, flipping, and scaling is a common practice for training neural models on image classification tasks. Several recent works attempt to automate the data augmentation process. Generative adversarial networks and Bayesian optimization have been used for this purpose.
augment training data in the learned feature space by injecting noise and interpolation. learn how to combine pairs of images for data augmentation. AutoAugment searches for the optimal data augmentation policies (e.g., how to rotate) based on reinforcement learning. However, the search process is computationally expensive. Population-based augmentation (PBA) uses an evolution-based algorithm to automatically augmenting data in an efficient way. In contrast to previous approaches, we study the effect of the training data for KD and propose to use automatic data augmentation to train the student better from her teacher. PBA , as an evolutionary search algorithm, learns a dynamic per-epoch schedule of augmentation policies, denoted as A. Since this schedule is epoch-based, it will re-create the augmented dataset every epoch. More concretely, PBA begins with a population of models that are trained in parallel on a small subset of the original training data. The weights of the worst performing models in the population are replaced by those from better performing models (i.e., exploitation), and the policies are mutated to new ones within the pre-defined policy search space (i.e., exploration). After training, PBA usually keeps the learned augmentation schedule of policies but discards the elementary parameters of the models. A different model (e.g., a larger one) can then use the learned schedule to improve its training on the same task. Following the notations in , a KD method aims to minimize the objective function where λ is a hyper-parameter to balance the impact of the KD loss term. In this paper, for classification tasks,, y truth ), where X refers to training sample space, y truth ∈ Y are the ground-truth labels, F S (·) is the student network, and H(·) denotes the cross-entropy. The KD term can be defined as where F(·) is the function of the network and l(·) is a loss function to compute the difference between the teacher network and the student network. For KD methods that use soft labels , the objective can be defined as where F final (x i) is the feature map of the final layer. There exist some KD methods that utilize the intermediate feature maps in complementary ways. For example, Relational KD considers the intra-relationships. That is, given the feature map of layer j, the KD loss can be formulated as: where Φ(·) refers to the potential function measuring the pairwise relationship inside a feature map from student network or teacher network and F j (x i) is the feature map of layer j (which may include the final logits layer). Therefore, this feature-based KD method includes the benefits of using soft labels. On the other hand, some works consider the interrelationships, where the KD term can be formulated as: Here, ϕ(·) measures the inter-relationship between feature maps of different layers, i.e. k = j. In this work, we use DoReFa 2 to quantize both weights and activations. The quantization function Q(·) is defined as: where r is the full-precision value, r q indicates the quantized value, n bits refers to the number of bits to represent this value. With this quantization function, the quantization on weights w is defined as: The back-propagation is approximated by the straight-through estimator and the partial gradient ∂l ∂r w.r.t. the loss l is computed as: 4 THE PROPOSED METHOD Our proposed method has two stages, which will be described in the following subsections. In the first stage, we train a teacher network, denoted as N T, with the help of PBA-based augmentation. 
In the second stage, we further distill the knowledge from N T (pre-trained in the first stage) to the student network, denoted as N S, while learning another augmentation schedule to augment the training data for N S. In general, a teacher can provide better training signals for the student if the teacher's performance increases . As shown in Fig. 1, we apply PBA to learn a dynamic per-epoch schedule of augmentation policies, A T, for N T on a small subset of training data. That is, the augmentation agent's training signal is defined as the feedback of N T's accuracy on a subset of the dataset. After this, we use the discovered schedule A T to augment the whole training dataset and re-train N T on it till convergence. KD methods have shown to be effective at improving the performance of lower-capacity networks using the knowledge from higher-capacity networks. In order to take advantage of this functionality, we apply the KD methods together with data augmentation in stage-β, as shown in Fig. 2. More concretely, we first use PBA to learn an epoch-based augmentation schedule A S for N S on a subset of the dataset. Different from the schedule A T learned in stage-α, A S is learned based on the feedback (i.e., accuracy) from N S, which is trained with KD. In other words, N S receives additional training signals from N T that is pre-trained in stage-α. We then use the learned A S to augment the whole training dataset, and re-train N S on it with the distilled knowledge from N T. Note that, because the learned schedule is epoch-based, we do not use the discovered schedule A T from stage-α to augment the training data as initialization. Figure 2: Concept diagram (stage-β) to augment training datapoints for both N T and N S. The N T has been pre-trained using the method shown in Section 4.1, and is fixed during training. The augmentation agent in stage-β is designed to learn schedules of polices that are different from those learned in stage-α, and thus the agent only receives the feedback from N S. When N S is a low-precision network, following , we share the same network architecture 3 between N T and N S. When N S is a full-precision network, it will have fewer layers compared to N T. We evaluate our approach on two benchmark datasets: CIFAR-10 and CIFAR-100. CIFAR-10 consists of 60,000 32×32 color images in 10 classes, with 6000 images per class and CIFAR-100 has 100 classes with 600 images per class. Both have 50,000 training images and 10,000 test images. We search over a "reduced" CIFAR-10/CIFAR-100 with 4,000 training images and 36,000 validation images, which is the same as in . All the data augmentation models are run with 16 total trials to generate augmentation schedules. Following PBA, in stage-α, we run PBA to create schedules over separate models and then transfer the CIFAR-10 policy to CIFAR-100. However, for student network training in stage-β, we empirically use the respective "reduced" dataset. The data augmentation approaches for the baselines include random crop and horizontal flipping operations. Following , our policy search space has a total of 15 operations, each having two magnitude and discrete probability values. We use discrete probability values from 0% to 100%, in increments of 10%. Magnitudes range from 0 to 9. The models we evaluate on include AlexNet and ResNet18. The number of epochs is 200 and the batch size is set to 128. For full-precision network N T, the learning rate starts from 0.1 and is decayed by 0.1 after every 30% of the total epochs. 
we use SGD with a Nesterov momentum optimizer. The weight decay is set to 5 × 10 −4. For quantization, the learning rate is set to 10 −3 and is divided by 10 every 30% of the total epochs. We use the pre-trained teacher network model as the initial point of student network. We use a smaller weight decay 10 −5 assuming that less regularization is needed by the lower-precision networks. Following DoReFa , the first layer and last layer are not quantized. , during training, we gradually transit the student from learning based on the teacher to training based on the ground-truth labels. This heuristic provides the student with more rich training signals in the early stage but does not force the student to strictly mimic the teacher's behaviors. As for the implementation, we decay the balancing hyper-parameter λ in the KD loss by 0.5 every 60 epochs. As mentioned in Section 3.2, there exist complementary KD methods considering both intra-and inter-relationships within and across feature maps. A natural question is if it would be beneficial to combine them to further boost the performance together with data augmentation. Therefore, we propose a simple extension to these complementary KD methods, dubbed as II-KD, by incorporating intra-relationships inside the feature map and inter-relationships across different feature maps. We incorporate the two relationships into the final objective function as follows: where we only use a single balancing hyper-parameter λ between the original loss and the distillation loss, which does not introduce extra hyper-parameters. More precisely, our KD method incorporates components of three conventional KD methods: RKD , MHGD and KD-SVD . As shown in Eq., we add the three KD terms together with equal coefficients. We use the loss function l(·) following their approaches. For the back-propagation, we clip the gradient for KD loss as in KD-SVD, because this will smoothly post-processes the gradient to limit the impact of KD loss in training. For AlexNet we select the feature maps of ReLU layers after the convolution/max pooling layer. For ResNet18, we select the feature maps of the last ReLU layer of each residual block. We evaluate our proposed KD extension on CIFAR-100 with ResNet18 for different bit-width settings by comparing with various KD methods. For the baseline methods, we use their default settings with a fixed and pre-trained teacher network in the training stage and λ = 1 for the knowledge distillation loss. We set λ = 0.4 for II-KD in Eq., as we have two KD terms. Tab. 1 reports the on various augmented KD methods. We observe that our proposed methods clearly outperforms the other KD methods on all the settings, though the improvements over MHGD and KD-SVD are not huge. The also reveal that only relying soft labels is not as effective as utilizing multiple supervising signals from the teacher. In this subsection, we aim to answer this question: is our two-stage role-wise augmentation with KD effective for network quantization? We conduct experiments on CIFAR-10 and CIFAR-100 datasets under full-precision, 4-bit, and 2-bit settings. From Tab. 2, we can observe that training with learned data augmentation schedules does not improve the performance of low-precision networks too much. Similar to the obtained in , transferring knowledge from the full-precision to the low-precision student usually helps the training of students, which is especially obvious on the CIFAR-100 dataset. Tab. 
2 also clearly shows that our proposed pipeline consistently improves the performance of the lowprecision student networks. For example, the 4-bit N S is comparable with full-precision reference without loss of accuracy for CIFAR-10 and with loss of accuracy within 1.0% on CIFAR-100. When decreasing the precision to 2-bit, the are still promising as compared with other baselines, even though there is a performance gap between the 2-bit and the full-precision models. For instance, our approach usually outperforms the strong baseline, only using II-KD, by more than 1.0%. Here we aim to answer this question: how effective is it if we use A T, learned based on the feedback from N T in stage-α, to dynamically augment the training dataset and train N S on it? Tab. 3 reports the accuracy comparison with different KD methods and augmentation schedules. We can clearly see that augmenting the training dataset for N S with A S consistently outperforms those using the transferred schedules A T among different KD methods. This observation is consistent with our assumption that N S has her own optimal augmentation schedule, A S, that is different from A T for N T. In particular, blindly applying the teacher augmentation schedule A T may negatively influence Table 2: Accuracy (%) on CIFAR-10 and CIFAR-100 datasets with different bit-widths. Vanilla Training for 4-bit and 2-bit refers to training a network based on DoReFa from scratch without learned data augmentation. Teacher after Stage-α refers to using learned schedules discovered by PBA to re-train N T as described in Section 4.1. Student with only II-KD refers to training N S using II-KD but without the learned data augmentation. Student after Stage-β refers to training N S using II-KD and the learned data augmentation. For Vanilla Training and Teacher after Stage-α, we report the accuracy of N T, and for the rest we report the accuracy of N S. the training of N S as compared to only using KD. For example, the learned schedule based on the teacher A T degrades the performance of N S by 0.58% for AlexNet on CIFAR-100 as compared to applying KD methods, as shown in Tab. 2. To analyze the difference on the discovered schedules between N T (i.e., full-precision ResNet18) and N S (i.e., 4-bit ResNet18), we report their augmented schedules quantitatively in terms of normalized probability and magnitude on CIFAR-100 in Fig. 3. We normalize the probability of each epoch by dividing the maximal summation of probabilities for all operations across all epochs. It can be seen that the discovered schedules A S for N S is quite different from A T for N T. In particular, for A T, there is an emphasis on Brightness, Posterize, Rotate, Sharpness and TranslateY, while A S cares more about Contrast, ShearX and TranslateY. Furthermore, we observe that the probability and magnitude increase as the epoch evolves. For A S, in the beginning, KD plays a more important role, and there is no augmentation operation before about epoch 50. As the training continues, the augmentation policies become more important. One possible reason is that, for lowprecision networks, KD methods can provide rich training signals such that data augmentation does not help in the early training phases. Furthermore, we observe that, compared to A T, the schedule for student A S evolves more smoothly in the sense that the policy updating frequency is lower. 
For example, the probability and magnitude values change about every 40 epochs for student, while the policies for teacher update about every 15 epochs. One possible reason is that for the low-precision N S, KD methods make the training process more smooth and it is not necessary to change the augmentation policies too frequently. This is consistent with the observations shown in Tab. 2 that KD can already provide useful training signals. Also, this validates our assumption that N S has her own optimal augmentation schedule A S that is different from A T. (a) Normalized plot of operation probability parameters over time for the teacher network NT. (b) Operation magnitude parameters over time for the teacher network NT. (c) Normalized plot of operation probability parameters over time for the student network NS. (d) Operation magnitude parameters over time for the student network NS. Figure 3: Evolution of magnitude and probability parameters in the learned schedules. Each operation appears in the parameter list twice, and we take the mean values of the parameter. This subsection aims to verify the effectiveness of our proposed methods on more conventional settings where both N T and N S are full-precision networks. We select ResNet18 as the teacher, and ResNet8 as the student to check how our proposed methods affect the student network. Tab. 4 shows that our proposed method outperforms the standard baseline training. The discovered augmentation schedule further boosts the performance of shallow N S based on II-KD, though the improvement is not that significant compared with the obtained when N S is low-precision network. This shows that our proposed method can be used for full-precision training tasks. Table 4: Accuracy (%) on CIFAR-100 with full-precision ResNet8 as N S and full-precision ResNet18 as N T under different settings. Vanilla Training refers to training a full-precision network from scratch. Re-Training with PBA refers to using learned schedules discovered by PBA to re-train N T as described in Section 4.1. Student with only II-KD refers to training N S using II-KD but without the learned data augmentation. Student after Stage-β refers to training N S using II-KD and the learned data augmentation. Previous literature on KD focuses on exploring the knowledge representation and the strategies for distillation. However, both the teacher and student learn from the same training data without adapting the different learning capabilities. To address this issue, we propose customizing distinct agents to automatically augment the training data for the teacher and student, respectively. We have extensively studied the effect of combining data augmentation and knowledge distillation. Furthermore, we propose a simple feature-based KD variant that incorporates both intra-and inter-relationships within and across feature maps. We have empirically observed that the student can learn better from the teacher with the proposed approach, especially in the challenging low-precision scenario, and the learned schedules are different for the teacher and student.
We study whether and how adaptive data augmentation and knowledge distillation can be leveraged simultaneously in a synergistic manner for better training student networks.
1,189
scitldr
Neural network models have shown excellent fluency and performance when applied to abstractive summarization. Many approaches to neural abstractive summarization involve the introduction of significant inductive bias, such as pointer-generator architectures, coverage, and partially extractive procedures, designed to mimic human summarization. We show that it is possible to attain competitive performance by instead directly viewing summarization as language modeling. We introduce a simple procedure built upon pre-trained decoder-transformers to obtain competitive ROUGE scores using a language modeling loss alone, with no beam-search or other decoding-time optimization, and instead rely on efficient nucleus sampling and greedy decoding. Neural network approaches to abstractive summarization generally encode the source document into some hidden state or representation, then decode this representation into a summarized, abstracted version of the source document. These approaches usually rely on a sequence-to-sequence style architecture, and tend to produce fluent, well formed natural language summaries when coupled with beam search or other decoding techniques. A weakness of traditional sequence-to-sequence learning when applied to summarization is the lack of a direct copy mechanism, leading to missing or misrepresented details in decoded summaries. Though attention helps ameliorate this issue by directly learning to focus on specific words or phrases in a source document, many have allowed for an explicit copy mechanism inspired by Pointer Networks, by optimizing a differentiable decision whether to generate new text or directly copy from the source. Peters et al., Devlin et al., Radford et al., and many others have shown the benefits of large-scale pretraining on large, unlabeled corpora on a variety of downstream tasks in transfer learning settings. In particular, it has been shown that large-scale, attention-only language modeling via decoder-only transformers as an unsupervised pretraining task admits the ability to perform zero-shot learning on meaningful tasks involving natural language generation. Motivated by this, we propose a simple method that exhibits competitive performance on abstractive summarization without using sequence-to-sequence architectures or other standard tools in the neural abstractive summarization toolbox, and instead using a decoder-only transformer language model with transfer learning. This further illustrates the utility of finetuning language models trained on open domain text. Transformer Preliminaries Our model builds on previous work utilizing decoder-only Transformers for jointly learning language modeling and sequence transduction in aligned domains, which limits attention to tokens 0, 1,..., n − 1 for predicting token n. Formally, a decoder-only Transformer considers a sequence of one-hot token vectors T = [t 0, t 1, . . ., t n−1] ∈ {0, 1} V ×n, with each t i ∈ {0, 1} V where V is the size of the vocabulary. Given an embedding matrix W E ∈ R d×V and a positional encoding matrix W P ∈ R d×(n−1), the model computes where TRF is the transformer block with self-attention, first introduced in Vaswani et al.. We utilize the modifications provided in Radford et al., such as moving Layer Normalization to the beginning of each transformer block. Decoder-only Sequence Transduction for Summarization Formally, consider a set of paired documents C = {(x, y)}, |C| = N. For a source-summary pair (x, y) ∈ C, the source document x = [x 0, . . 
., x m] and reference summary y = [y 0, . . ., y k] are sequences of one-hot token vectors. To learn this mapping using a language model, we combine x and y using special learnable vectors corresponding to control tokens. In addition, we augment Eq. 1 to include a segment-specific (i.e., source or summary) embedding. Finally, we reset the positional encoding for the summary. Our model is fed three sequences (see Eq. 2): a concatenation of the source document and the summary (S), positional encodings that reset for the summary component (P), and segment-specific encodings for the source and the summary (Q). We represent the start of the source document with α, the beginning of the summary with β, and the end of sequence with δ. Additionally, we encode the source segment with σ and the summary segment with τ. Thus, our model changes Eq. 1 by adding the position encoding modification from Eq. 2 and an additional trainable weight W Q representing the segment encoding Q. The model is trained via maximum likelihood, where we take into account the full likelihood of the source-summary pair. Input Representation Given recent trends moving away from purely word-or character-level representations, we utilize data-driven subword encoding via Byte Pair Encoding (BPE), following the procedure outlined in Radford et al.. For experiments in which we finetune the 117M parameter model from Radford et al., we utilize their prebuilt vocabulary; in ablation studies, we utilize SentencePiece to learn BPE merges. Datasets We train and evaluate our models on the CNN/Daily Mail (CNN-DM) corpus of news articles and summaries, utilizing the non-anonymized version. We use the predefined training, validation, and test splits, and limit source articles to 400 tokens and summaries to 100 tokens at training time. As an additional test, we train and evaluate the best model configuration from the ablation studies above on the Extreme Summarization (XSum) corpus, which contains single sentence summaries of BBC articles. As shown in Narayan et al., the XSum corpus requires models to perform a much higher degree of semantic distillation, as indicated by low n-gram overlap, high n-gram novelty, and poorly performing LEAD-3 baselines. We conduct experiments in two regimes for CNN-DM: first, we finetune the model outlined in Sec. 2 on top of the 117M parameter model release from Radford et al., and second, we perform a full training from scratch in order to ablate the effect of transfer learning. We utilize a context size of 1024 with an embedding dimension of 768, 12 attention heads, and a batch size of 10. We train using the Adam optimizer with a learning rate of 5 × 10 −5 until the loss ceases to decrease on the validation set. For XSum, we use the highest-performing setup from CNN-DM experiments. In lieu of beam search, we compare greedy decoding and nucleus sampling. In both cases, we decode until we reach the stop-token δ (Eq. 2). In the case of nucleus sampling, we perform 5 Table 1: Comparison of our method with select existing methods on the CNN-DM dataset. independent decodings 1 with p = 0.3, then pick the decoding that reports the lowest negative log likelihood score of the completed summary. We use 1/k 0.6 as a likelihood normalization term to avoid a preference for shorter summaries, borrowing directly from Wu et al.. Table 2: Ablation of model components on CNN-DM (Decoded via nucleus sampling procedure). 
We evaluate all models using the ROUGE metric, in particular the F1 variants of ROUGE-1, ROUGE-2, and ROUGE-L which measure unigram overlap, bigram overlap, and longest common subsequence respectively. CNN-DM Our main are displayed in Table 1, where we compare our method (in the bottom section of the table) to existing methods (in the upper portion) on the CNN-DM dataset, and show ablations in Table 2. We note that our models (for ROUGE-1 and -2) are competitive even when using greedy decoding, and without any sequence-to-sequence style architectures or coverage terms, illustrating the power of this approach for abstractive summarization. We note that using a well trained language model and then finetuning yields a significant performance jump (as shown via ablation in Table 2), motivating this method in practical contexts given the recent trends toward large-scale, self-supervised learning approaches. XSum As a secondary evaluation of our approach, we train our best model on the XSum dataset and report ROUGE scores in a direct comparison to the benchmarks reported. Results for these experiments are shown in Table 3. We achieve highly competitive performance relative to models reported in Narayan et al. building on a finetuning approach without using many of the inductive biases traditionally present in summarization methods. This work puts forward a simple approach to abstractive summarization by viewing sequence transduction as a language modeling problem. We show the effectiveness of using decoder-only transformers for this task, in particular, when coupled with recent advances in large-scale language modeling and transfer learning. We show that competitive performance on two benchmark datasets is possible without many of the standard tools in neural abstractive summarization, such as sequence-tosequence modeling, coverage mechanisms, direct ROUGE optimization via reinforcement learning, or beam search, instead relying on a purely language modeling loss and simple decoding mechanisms such as nucleus sampling and greedy decoding. This approach yields highly fluent text, and illustrates the power of unsupervised representation learning-based transfer learning for downstream tasks. Table 3: Comparison of with existing methods on XSum, reported in Narayan et al..
We introduce a simple procedure to repurpose pre-trained transformer-based language models to perform abstractive summarization well.
1,190
scitldr
This paper addresses the problem of incremental domain adaptation (IDA). We assume each domain comes sequentially, and that we could only access data in the current domain. The goal of IDA is to build a unified model performing well on all the encountered domains. We propose to augment a recurrent neural network (RNN) with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of RNN transition. The memory bank provides a natural way of IDA: when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the model capacity. We learn the new memory slots and fine-tune existing parameters by back-propagation. Experiments show that our approach significantly outperforms naive fine-tuning and previous work on IDA, including elastic weight consolidation and the progressive neural network. Compared with expanding hidden states, our approach is more robust for old domains, shown by both empirical and theoretical . Domain adaptation aims to transfer knowledge from a source domain to a target domain in a machine learning system. This is important for neural networks, which are data-hungry and prone to overfitting. In this paper, we focus on incremental domain adaptation (IDA), where we assume different domains come one after another. We only have access to the data in the current domain, but hope to build a unified model that performs well on all the domains that we have encountered (; ;).Incremental domain adaptation is useful in various scenarios. Suppose a company is doing business with different partners over a long period of time. The company can only access the data of the partner with a current contract. However, the machine learning model is the company's property (if complying with the contract). Therefore, it is desired to preserve as much knowledge as possible in the model and not to rely on the availability of the data. Another important application of IDA is a quick adaptation to new domains. If the environment of a deployed machine learning system changes frequently, traditional methods like jointly training all domains require the learning machine to be re-trained from scratch every time. Fine-tuning a neural network by a few steps of gradient updates does transfer quickly, but it suffers from the catastrophic forgetting problem . Suppose we do not know the domain of a data point when predicting; the (single) finetuned model cannot predict well for samples in previous domains, as it tends to "forget" quickly during fine-tuning. A recent trend of domain adaptation in the deep learning regime is the progressive neural network , which progressively grows the network capacity if a new domain comes. Typically, this is done by enlarging the model with new hidden states and a new predictor (FIG0). To avoid interfering with existing knowledge, the newly added hidden states are not fed back to the previously trained states. During training, they fix all existing parameters, and only train the newly added ones. For inference, they use the new predictor for all domains. This is sometimes undesired as the new predictor is trained with only the last domain. In this paper, we propose a progressive memory bank for incremental domain adaptation. Our model augments a recurrent neural network (RNN) with a memory bank, which is a set of distributed, real-valued vectors capturing domain knowledge. The memory is retrieved by an attention mechanism. 
When our model is adapted to new domains, we progressively increase the slots in the memory bank. But different from , we fine-tune all the parameters, including RNN and the existing memory slots. Empirically, when the model capacity increases, the RNN does not forget much even if the entire network is fine-tuned. Compared with expanding RNN hidden states, the newly added memory slots do not contaminate existing knowledge in RNN states, as will be shown by a theorem. We evaluate our approach 1 on Natural Language Inference and Dialogue Response Generation. Experiments support our hypothesis that the proposed approach adapts well to target domains without catastrophic forgetting of the source. Our model outperforms the naïve fine-tuning method, the original progressive neural network, as well as other IDA techniques including elastic weight consolidation (EWC) .Detailed related work is provided in Appendix A. Our model is based on an RNN. At each time step, the RNN takes the embedding of the current word as input, and changes its states accordingly. This can be represented by DISPLAYFORM0 where h i and h i−1 are the hidden states at time steps i and i − 1, respectively. x i is the input at the ith step. Typically, long short term memory (LSTM) or Gated Recurrent Units (GRU) are used as RNN transitions. In the rest of this section, we will describe a memory augmented RNN, and how it is used for incremental domain adaptation (IDA). We enhance the RNN with an external memory bank, as shown in FIG0. The memory bank augments the overall model capacity by storing additional parameters in memory slots. At each time step, our model computes an attention probability to retrieve memory content, which is then fed to the computation of RNN transition. Particularly, we adopt a key-value memory bank, inspired by. Each memory slot contains a key vector and a value vector. The former is used to compute the attention weight for memory retrieval, whereas the latter is the value of memory content. For the ith step, the memory mechanism computes an attention probability α i by DISPLAYFORM0 where m (key) j is the key vector of the jth slot of the memory (among N slots in total). Then the model retrieves memory content by a weighted sum of all memory values, where the weight is the attention probability, given by DISPLAYFORM1 is the value vector of the jth memory slot. We call c i the memory content. Then, c i is concatenated with the current word x i, and fed to the RNN at step i to compute RNN state transition. The memory bank in our model captures distributed knowledge; this is different from other work where memory slots correspond to specific entities . The attention mechanism enables us to train both memory content and its retrieval end-to-end, along with other parameters. The memory bank in Subsection 2.1 can be progressively expanded to adapt a model in a source domain to new domains. This is done by adding new memory slots to the bank which are learned exclusively from the target data. Suppose the memory bank is expanded with another M slots in a new domain, in addition to previous N slots. We then have N + M slots in total. The model computes attention probability over the expanded memory and obtains the attention vector in the same way as Equations -, except that the summation is computed from 1 to N + M. To initialize the expanded model, we load all previous parameters, including RNN weights and the learned N slots, but randomly initialize the progressively expanded M slots. 
During training, we update all parameters by gradient descent. The process is applied whenever a new domain comes, as shown in Algorithm 1 in Appendix A.We would like to discuss the following issues. DISPLAYFORM0 where h We evaluate our approach on natural language inference. This is a classification task to determine the relationship between two sentences, the target labels being entailment, contradiction, and neutral. we train a bi-directional LSTM (BiLSTM), following the original MultiNLI paper . Our BiL-STM achieves an accuracy of 68.37 on the official MultiNLI test set, which is better than 67.51 reported in the original MultiNLI paper using BiLSTM. This shows that our implementation and tuning are fair for the basic BiLSTM, and that our model is ready for the study of IDA. The details of network architecture, training and hyper-parameter tuning are given in Appendix C. We want to compare our approach with a large number of baselines and variants, and thus choose two domains as a testbed: Fic as the source and Gov as the target. We show in Table 1.First, we analyze the performance of RNN and the memoryaugmented RNN (Lines 1-2 vs. Lines 3-4). They have generally similar performance, showing that, in the non-transfer setting, the memory bank does not help the RNN much, and thus is not a typical RNN architecture in previous literature. However, This later confirms that the performance improvement is indeed due to our IDA technique, instead of simply a better neural architecture. We then apply two straightforward methods of domain adaptation: multi-task learning (Line 5) and fine-tuning (Line 6). Multi-task learning jointly optimizes source and target objectives, denoted by "S+T." On the other hand, the fine-tuning approach trains the model on the source first, and then finetunes on the target. In our experiments, these two methods perform similarly on the target domain, which is consistent with (performs significantly worse than multi-task learning, as it suffers from the catastrophic forgetting problem. We notice that, in terms of source performance, the fine-tuning approach (Line 6) is slightly better than trained on the source domain only (Line 3). This is probably because our domains are highly correlated as opposed to , and thus training with more data on target improves the performance on source. However, fine-tuning does achieve the worst performance on source compared with other domain adaptation approaches (among Lines 5-8). Thus, we nevertheless use the terminology "catastrophic forgetting", and our research goal is still to improve IDA performance. The main of our approach are Lines 7 and 8. We see that on both source and target domains, our approach outperforms the fine-tuning method alone where the memory size is not increased (comparing Lines 7 and 6). This verifies our conjecture that, if the model capacity is increased sufficiently, the new domain does not override the learned knowledge much in the neural network. Our proposed approach is also "orthogonal" to the expansion of the vocabulary size, where target-specific words are randomly initialized and learned on the target domain. As seen, this combines well with our memory expansion and yields the best performance on both source and target (Line 8).We now compare an alternative way of increasing model capacity, i.e., expanding hidden states (Lines 9 and 10). 
For fair comparison, we ensure that the total number of model parameters after memory expansion is equal to the number of model parameters after hidden state expansion. We see that the performance of hidden state expansion is poor especially on the source domain, even if we fine-tune all parameters. This experiment provides empirical evidence to our theorem that expanding memory is more robust than expanding hidden states. We also compare the with previous work on IDA. EWC does not achieve satisfactory . We investigate other published papers using the same method and find inconsistent : EWC works well in some applications but performs poorly on others ; even report near random performance with EWC. We also re-implement the progressive neural network . We use the target predictor to do inference for both source and target domains. Progressive neural network also yields low performance, particularly on source, probably because the predictor is trained with only the target domain. We measure the statistical significance of the with one-tailed Wilcoxon's signed-rank test . Each method is compared with Line 8: ↑ and ⇑ denote "significantly better" with p < 0.05 and p < 0.01 respectively. ↓ and ⇓ similarly denote "significantly worse". The absence of an arrow indicates that the performance difference compared with Line 8 is statistically insignificant with p < 0.05. The test shows that our approach is significantly better than others, both on source and target. Having analyzed our approach, baselines, and variants on two domains in detail, we test the performance of IDA with multiple domains, namely, Fic, Gov, Slate, Tel, and Travel. We assume these domains come one after another, and our goal is to achieve high performance on both new and previous domains. TAB3 shows that our approach of progressively growing memory bank achieves the same performance as fine-tuning on the last domain (both with vocabulary expansion). But for all previous 4 domains, we achieve significantly better performance. Our model is comparable to multi-task learning on all domains. It also outperforms EWC and the progressive neural network in all domains; the are consistent with Table 1. This provides evidence of the effectiveness for IDA with more than two domains. It should also be mentioned that multi-task learning requires data from all domains to be available at the same time. It is not an incremental approach for domain adaptation, and thus cannot be applied to the scenarios introduced in Section 1. We include this setting mainly because we are curious about the performance of non-incremental domain adaptation. In this paper, we propose a progressive memory network for incremental domain adaptation (IDA). We augment an RNN with an attention-based memory bank. During IDA, we add new slots to the memory bank and tune all parameters by back-propagation. Empirically, the progressive memory network does not suffer from the catastrophic forgetting problem as in naïve fine-tuning. Our intuition is that the new memory slots increase the neural network's model capacity, and thus, the new knowledge less overrides the existing network. Compared with expanding hidden states, our progressive memory bank provides a more robust way of increasing model capacity, shown by both a theorem and experiments. We also outperform previous work for IDA, including elastic weight consolidation (EWC) and the original progressive neural network. Bayer, J., Osendorfer, C., Korhammer, D., Chen, N., Urban, S., and van der Smagt, P. 
On fast dropout and its applicability to recurrent networks. Rusu et al. FORMULA0 propose a progressive neural network that progressively increases the number of hidden states (FIG0). To avoid overriding existing information, they propose to fix the weights of the learned network, and do not feed new states to old ones. This in multiple predictors, requiring that a data sample is labeled with its domain during the test time. Should different domains be highly correlated to each other, the predictor of a previous domain cannot make use of new data to improve performance. If we otherwise use the last predictor to predict samples from all domains, its performance may be low for previous domains, as the predictor is only trained with the last domain. propose an extension of the progressive network. They identify which existing hidden units are relevant for the new task (with their sparse penalty), and finetune only the corresponding subnetwork. However, sparsity is not common for RNNs in NLP applications, as sparse recurrent connections are harmful. A similar phenomenon is that dropout of recurrent connections is harmful . Our work is related to memory-based neural networks. propose an end-to-end memory network that assigns a slot for an entity, and aggregates information by multiple attention-based layers. In their work, they design the architecture for bAbI question answering, and assign a memory slot for each sentence. Such idea can be extended to various scenarios, for example, assigning slots to external knowledge for question answering and assigning slots to dialog history for a conversation system .Another type of memory in the neural network regime is the neural Turing machine (NTM) . Their memory is not directly parameterized, but is read or written by a neural controller. Therefore, such memory serves as temporary scratch paper, but does not store knowledge itself. In NTM, the memory information and operation are fully distributed/neuralized, as they do not correspond to the program on a true (non-neural) Turing machine. Zhang et al. (2018b) combine the above two styles of memory for task-oriented dialog systems, where they have both slot-value memory and read-and-write memory. Different from the above work, our memory bank stores knowledge in a distributed fashion, where each slot does not correspond to a concrete entity. Our memory is directly parameterized, interacting in a different way from RNN weights. Thus, it provides a natural way of incremental domain adaptation. Our proposed IDA process is shown in Algorithm 1. It is noted that the following theorem does not explicitly prove for IDA, but shows that expanding memory is more stable than expanding hidden states. This is particularly important at the beginning steps of IDA, as the progressively growing parameters are randomly initialized and are basically noise. Although our theoretical analysis uses a restricted setting (i.e., vanilla RNN transition and linear activation), it provides the key insight that our approach is appropriate for IDA. tions. That is, DISPLAYFORM0 where h Proof: We first make a few assumptions. Let h i−1 be the hidden state of the last step. We focus on one step of transition and assume that h i−1 is the same when the model capacity is increased. We consider a simplified case where the RNN has vanilla transition with the linear activation function. We measure the effect of model expansion quantitatively by the expected norm of the difference on h i before and after model expansion. 
Suppose the original hidden state h i is D-dimensional. We assume each memory slot is d-dimensional, and that the additional RNN units when expanding the hidden state are also d-dimensional. We further assume every variable in the expanded memory and expanded weights (W in Figure 2) are iid with zero mean and variance σ 2. This assumption is reasonable as it enables a fair comparison of expanding memory and expanding hidden states. Finally, we assume every variable in the learned memory slots, i.e., m jk, follows the same distribution (zero mean, variance σ 2). This assumption may not be true after the network is trained, but is useful for proving theorems. We compute how the original dimensions in the hidden state are changed if we expand RNN. We denote the expanded hidden states by h i−1 and h i for the two time steps. We denote the weights connecting from h i−1 to h i by W ∈ R D×d. We focus on the original D-dimensional space, denoted as h (s) i. The connection is shown in Figure 2a. We have DISPLAYFORM1 where FORMULA0 is due to the independence and zero-mean assumptions of every element in W and h i−1. FORMULA0 is due to the independence assumption between W and h i−1.Next, we compute the effect of expanding memory slots. DISPLAYFORM2 i is the RNN hidden state after memory expansion. ∆c def = c − c, where c and c are the attention content vectors before and after memory expansion, respectively, at the current time step. 4 W (c) is the weight matrix connecting attention content to RNN states. The connection is shown in Figure 2b. Reusing the of, we immediately obtain DISPLAYFORM3 where ∆c k is an element of the vector ∆c. To prove Equation, it remains to show that Var(∆c k) ≤ σ 2. We now analyze how attention is computed. Let α 1, · · ·, α N +M be the unnormalized attention weights over the N + M memory slots. We notice that α 1, · · ·, α N remain the same after memory expansion. Then, the original attention probability is given by α j = α j /(α 1 + · · · + α N) for j = 1, · · ·, N. After memory expansion, the attention probability becomes α j = α j /(α 1 + · · · + α N +M), illustrated in FIG7. We have 4 We omit the time step in the notation for simplicity. Unnormalized measure DISPLAYFORM0 where DISPLAYFORM1 By our assumption of total attention DISPLAYFORM2 Then, we have DISPLAYFORM3 Here, is due to the assumption that m jk is independent and zero-mean, and is due to the independence assumption between β j and m jk. To obtain, we notice that DISPLAYFORM4 2 ≤ 1, concluding our proof. Note: In the theorem (and in experiments), memory expansion and hidden state expansion are done such that the total number of model parameters remain the same. The condition DISPLAYFORM5 α i,j in our theorem requires that the total attention to existing memory slots is larger than to the progressively added slots. This is a reasonable assumption because: During training, attention is trained in an ad hoc fashion to align information, and thus some of α i,j for 1 ≤ j ≤ N might be learned so that it is larger than a random memory slot; and For a new domain, we do not add a huge number of slots, and thus N +M j=N +1 α i,j will not dominate. We follow the original MultiNLI paper to choose the base model and most of the settings: 300D RNN hidden states, 300D pretrained GloVe embeddings for initialization, batch size of 32, and the Adam optimizer for training. The initial learning rate for Adam is tuned over the set {0.3, 0.03, 0.003, 0.0003, 0.00003}. It is set to 0.0003 based on validation performance. 
For the memory part, we set each slot to be 300D, which is the same as the RNN and embedding size. We tune the number of progressive memory slots in FIG9, which shows the validation performance on the source (Fic) and target (Gov) domains. We see that the performance is close to fine-tuning alone if only one memory slot is added. It improves quickly between 1 and 200 slots, and tapers off around 500. We thus choose to add 500 slots for each domain. Table 3 shows the dynamics of IDA with our progressive memory network. Comparing the upper-triangular values (in gray, showing out-of-domain performance) with diagonal values, we see that our approach can be quickly adapted to the new domain in an incremental fashion. Comparing lower-triangular values with the diagonal, we see that our approach does not suffer from the catastrophic forgetting problem as the performance of previous domains is gradually increasing if trained with more domains. After all data are observed, our model achieves the best performance in most domains (last row in Table 3), despite the incremental nature of our approach. We evaluate our approach on the task of dialogue response generation. Given an input text sequence, the task is to generate an appropriate output text sequence as a response in human-computer dialogue. For the target domain, we manually construct a very small dataset to mimic the scenario where quick adaptation has to be done to a new domain with little training data. In particular, we choose a random subset of 15k messageresponse pairs from the Ubuntu Dialog Corpus , a dataset of conversations about Ubuntu. We use a 9k-3k-3k data split. The base model is a sequence-to-sequence (Seq2Seq) neural network Following previous work, we use BLEU-2 and average Word2Vec embedding similarity (W2V-Sim) (; a) as the evaluation metrics. BLEU-2 is the geometric mean of unigram and bigram word precision penalized by length, and correlates with human satisfaction to some extent . W2V-Sim is defined as the cosine similarity between the averaged Word2Vec embeddings of the model outputs and the ground truths. Intuitively, BLEU measures hard word-level overlap between two sequences, whereas W2V-Sim measures soft similarity in a distributed semantic space. The for dialogue response generation are shown in Table 4. We see that BLEU-2 and W2V similarity are not necessarily consistent. For example, the memory-augmented RNN trained solely on source achieves the best source BLEU-2, whereas the proposed progressive memory has the highest W2V cosine similarity on S. However, our model variants (either expanding the vocabulary or not) achieve the best performance on most metrics (Lines 7 and 8). Moreover, it consistently outperforms all other IDA approaches. Following Experiment I, we conduct statistical test compared with Line 8. The test shows that our method is significantly better than the other IDA methods. you should be able to install the grub cd at the drive TAB9. Sample outputs of our IDA model S→T (F+M+V) from TAB9. In general, the evaluation of dialogue systems is noisy due to the lack of appropriate metrics . Nevertheless, our experiment provides additional evidence of the effectiveness of our approach. It also highlights our model's viability for both classification and generation tasks.
We present a neural memory-based architecture for incremental domain adaptation, and provide theoretical and empirical results.
1,191
scitldr
In compressed sensing, a primary problem to solve is to reconstruct a high dimensional sparse signal from a small number of observations. In this work, we develop a new sparse signal recovery algorithm using reinforcement learning (RL) and Monte CarloTree Search (MCTS). Similarly to orthogonal matching pursuit (OMP), our RL+MCTS algorithm chooses the support of the signal sequentially. The key novelty is that the proposed algorithm learns how to choose the next support as opposed to following a pre-designed rule as in OMP. Empirical are provided to demonstrate the superior performance of the proposed RL+MCTS algorithm over existing sparse signal recovery algorithms. We consider the compressed sensing (CS) problem [1; 2; 3], where for a given matrix A ∈ R m×n, m n, and a (noiseless) observation vector y = Ax 0, we want to recover a k-sparse vector/signal x 0 (k < m). Formally, it can be formulated as: subject to Ax = Ax 0 Related work There is a large collection of algorithms for solving the CS problem. Some foundational and classic algorithms include convex relaxation, matching and subspace pursuit [4; 5; 6] and iterative thresholding [7; 8]. In particular, two well-established methods are (i) Orthogonal Matching Pursuit (OMP) and (ii) Basis Pursuit (BP). OMP recovers x 0 by choosing the columns of A iteratively until we choose k columns. BP recovers x 0 by solving min Ax=y ||x|| 1. Because OMP and BP are extremely well studied theoretically [1; 2] and empirically, we use these two algorithms as the main baseline methods to compare against when evaluating the proposed RL+MCTS algorithm. Recent advancements in machine learning have opened a new frontier for signal recovery algorithms. Specifically, these algorithms take a deep learning approach to CS and the related error correction problem. The works in,, and apply ANNs and RNNs for encoding and/or decoding of signals x 0. Modern generative models such as Autoencoder, Variational Autoencoder, and Generative Adversarial Networks have also been used to tackle the CS problem with promising theoretical and empirical [15; 16; 17]. These works involve using generative models for encoding structured signals, as well as for designing the measurement matrix A. Notably, the empirical in these works typically use structured signals in x 0. For example, in and, MNIST digits and celebrity images are used for training and testing. Our contribution Differently from the above learning-based works, our innovation with machine learning is on signal recovery algorithms (as opposed to signal encoding or measurement matrix design). We do not assume the signals to be structured (such as images), but cope with general sparse signals. This underlying model for x 0 is motivated by the same assumptions in the seminal work on universal phase transitions by Donoho and Tanner in. Moreover, we assume the measurement matrix A is given. Extending to varying matrices A is left for future investigation. In this work, we approach the signal recovery problem using reinforcement learning (RL). Specifically, we leverage the Monte Carlo Tree Search (MCTS) technique with RL, which was shown to achieve outstanding performance in the game of Go [18; 19]. We further introduce special techniques to reduce the computational complexity for dealing with higher signal sparsity in CS. Experimental show that the proposed RL+MCTS algorithm significantly outperforms OMP and BP for matrix A of various sizes. 
In this section, we formulate the sparse signal recovery problem as a special sequential decision making problem, which we will solve using RL and MCTS. In the context of compressed sensing, a key challenge is to correctly choose the columns of A, or equivalently, the support of x 0, such that the problem is solved. To address this problem, we formulate it as a sequential decision making problem: an agent sequentially chooses one column of A at a time until it selects up to k columns such that the constraint in holds and the 0 -loss in is minimized. The MDP for compressed sensing can then be defined as follows. A state s ∈ S is a pair (y, S), where y is the observed signal generated according to x 0, and S ⊆ [n] is the set of the already selected columns of A, where [n] {1, . . ., n}. In our current setup, we assume the matrix A is fixed, so a state is not dependent on the sensing matrix. Terminal states are states s = (y, S) which satisfy one or more of the following conditions: (i) |S| = k (the maximum possible signal sparsity), or (ii) ||A S x s − y|| 2 2 < for some pre-determined. Here, A S stands for the submatrix of A that is constructed by the columns of A indexed by the set S, and x s is the optimal solution given that the signal support is S, For the action space, the set of all feasible actions at state Note that in compressed sensing, when an action a is taken (i.e., a new column of A is selected) for a particular state s = (y, S), the next state s is determined; that is, the MDP transition is deterministic. Finally, we define our reward function R: where α, γ > 0 are fixed hyperparameters, and x s is determined by. Different from existing compressed sensing algorithms, we propose to learn, via RL and MCTS, a policy to sequentially select the columns of A and reconstruct the sparse signal x 0, based on data generated for training. We generate the training data by generating k-sparse signals x 0 and computing the corresponding vectors y = Ax 0 (each k is randomly generated from 1 to m). For each signal y, we then use a "policy network" (to be explained in details later) along with MCTS to choose columns sequentially until k columns have been chosen. The traversed states will be used as our new training data for updating the policy network. Such a strategy allows us to move as much of the computational complexity as possible in testing (i.e., performing the sparse signal recovery task) into training, which shares a similar spirit to the work in. 3 The RL+MCTS Algorithm To learn a policy in the above sequential decision making formulation of CS, we employ a single neural network f θ to jointly model the policy π θ (a|s) and the state-value function V θ (s), where θ is the model parameter (i.e., the weights in a neural network). The policy π θ (a|s) defines a probability over all actions for a given state s, where the action set includes the possible next columns of A to pick and a stopping action. The value V θ (s) defines the long-term reward that an agent receives when we start from the state s and follow the given policy. We design two sets of input features for the policy/value network. The first set of input features is x s extended to a vector in R n with zeros in components whose indices are not in s. The second set of features is motivated by OMP, which is given by λ s:= A T (y − A S x s) ∈ R n, where y − A S x s is the residual vector associated with the solution x s. 
For the root state r in which no columns are chosen, x_r is set to be the n-dimensional zero vector, and λ_r := A^T y. Note that the OMP rule is exactly choosing the next column index whose corresponding component in |λ_s| is the largest, where |·| is the absolute value taken component-wise. The goal of the RL+MCTS algorithm is to iteratively train the policy network f_θ. The high-level training structure is given in Algorithm 1.
Algorithm 1 High-Level Training Procedure
1: initialize: j = 0, θ = θ_0 with θ_0 random, fixed matrix A ∈ R^{m×n}
2: while j < i (where i is a hyperparameter) do
3:   generate training samples from each (y, x_0) pair by building a tree using Monte Carlo Tree Search (MCTS) and the current f_θ
4:   train/update the neural network parameters to get θ̃ using the training samples from step 3
5:   θ ← θ̃; j ← j + 1
6: end while
Most of the details arise in the sample-generation step of Algorithm 1. Similar to the AlphaGo Zero algorithm, the proposed RL+MCTS algorithm uses Monte Carlo Tree Search (MCTS) as a policy improvement operator to iteratively improve the policy network in the training phase. For a randomly generated pair (y, x_0), we use MCTS and the current f_θ to generate new training samples to feed back into the neural network. We note that in the testing phase, MCTS can also be combined with the policy/value network to further boost the performance. Specifically, for each given observation vector y and the desired sparsity k, we run MCTS simulations multiple times to construct a search tree [21; 22; 23]. When training the proposed RL+MCTS algorithm, we employ the following technique for reducing the training complexity. First, we remark that using MCTS as a policy improvement operator can potentially be computationally expensive for a relatively large matrix A (depending on the available computation resources). To address this challenge, we fix the maximum depth d of the MCTS tree; that is, we build the MCTS tree until we reach a depth of d. From then on, we roll out the remaining levels of the tree by simply using the OMP rule (sketched below) to select all remaining columns until a total of k columns are chosen. This technique will be evaluated in the experiments in the next section. In this section, we present experimental results for evaluating our proposed RL+MCTS algorithm and comparing it against two baseline methods: (i) OMP and (ii) BP (i.e., ℓ_1 minimization). We first present results for the proposed RL+MCTS algorithm without limiting the tree depth. In this setting, we train and test on matrices of size 7 × 15 and 15 × 50. The training parameters we use in this experiment are given in Table 2 in Appendix A. We next show the results using the RL+MCTS algorithm with reduced complexity as described in Section 3.3. Specifically, in a single MCTS search, we expand the tree to depth d, and then proceed to follow the OMP rule until a terminal state is reached. We now show the experimental results for this version of the RL+MCTS algorithm. Specifically, we consider the 10 × 100 matrix in our evaluation. The training details of this experiment can be found in Table 2 in Appendix A. We train two models. A) We train a policy/value network using the vanilla RL+MCTS algorithm without a tree depth constraint. B) We train a policy/value network by limiting the tree depth to d = 6, which leads to a 40% reduction in training time per sample. Next, we first test the policy/value network trained from A) above. This policy/value network will select each column without MCTS.
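The OMP roll-out referenced above, which completes an episode once the search tree reaches depth d, can be sketched as follows. Variable names are ours; the logic is the |λ_s| rule stated at the start of this section.

```python
import numpy as np

def omp_rollout(A, y, S, k):
    """Complete the support from depth d onward with the OMP rule: repeatedly
    add the column where |lambda_s| = |A^T (y - A_S x_s)| is largest."""
    S = list(S)
    while len(S) < k:
        if S:
            x_s, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
            res = y - A[:, S] @ x_s
        else:
            res = y
        lam = np.abs(A.T @ res)
        lam[S] = -np.inf                  # never re-select a chosen column
        S.append(int(np.argmax(lam)))
    return S
```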
We then test the policy/value network trained from B) above: first, we use the policy/value network to pick the first column; for all subsequent columns up to k, we invoke the OMP rule. This is equivalent to setting the tree depth during testing to d = 2 and using no MCTS (M = 0). Using the same policy/value network, we also conduct an experiment where d = 6 and the number of MCTS simulations is set to 1500 during testing. From Figure 1 (c), note that the vanilla RL+MCTS policy π_θ(a|s) still performs slightly better than both OMP and BP. We see that training the RL+MCTS algorithm with a fixed tree depth gives us favorable results versus OMP, the vanilla RL+MCTS policy π_θ(a|s), and BP. Average Prediction Times In Table 1, we give the average prediction times per signal in seconds. For OMP and BP, we use the Python libraries sklearn and cvx, respectively. To illustrate the speed during testing, we measure the prediction times on a much less powerful machine than what was used during training. While training was performed on an i7 4790 (3.6 GHz) with a single GTX 780, the testing speeds in Table 1 were measured on a MacBook Air with an Intel i5 clocked at 1.4 GHz and an integrated Intel HD 5000. We expect that the testing speeds can be greatly improved with a more powerful machine and further optimization of the source code. In general, we see that using just the policy/value network for prediction is slower than OMP, but on par with or better than BP. We have shown that the proposed RL+MCTS algorithm is a highly effective sparse signal decoder for the compressed sensing problem assuming no signal structure other than sparsity. Even without using MCTS in testing, the RL+MCTS algorithm's performance exceeds that of existing sparse signal recovery algorithms such as OMP and BP. The flexibility in the RL+MCTS algorithm's design further offers many interesting avenues for future research. For one, it is possible that the features chosen in our model can be further improved. Secondly, since the true signal x_0 is known in training, one may be able to leverage the information about x_0 to increase training sample efficiency. The training hyper-parameters may also be further tuned to improve performance. Broader problem settings, such as noisy observations and varying measurement matrices A, are under active investigation. In this appendix, we include the hyper-parameters of our experiments; see Table 2.
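To reproduce a baseline timing comparison like the one discussed above, the two reference methods are essentially one-liners in standard libraries. The sketch below uses scikit-learn's OrthogonalMatchingPursuit and cvxpy (we substitute cvxpy for the "cvx" library named in the text), with the 10 × 100 problem size from this section; the seed and sparsity are our choices.

```python
import time
import numpy as np
import cvxpy as cp
from sklearn.linear_model import OrthogonalMatchingPursuit

m, n, k = 10, 100, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x0

t = time.perf_counter()
x_omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, y).coef_
print(f"OMP: {time.perf_counter() - t:.4f}s  err={np.linalg.norm(x_omp - x0):.2e}")

x = cp.Variable(n)
t = time.perf_counter()
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()   # basis pursuit
print(f"BP : {time.perf_counter() - t:.4f}s  err={np.linalg.norm(x.value - x0):.2e}")
```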
Formulating sparse signal recovery as a sequential decision making problem, we develop a method based on RL and MCTS that learns a policy to discover the support of the sparse signal.
1,192
scitldr
Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lower-dimensional latent representation space, estimate the latent dynamics model, and then utilize this model for control in the latent space. An important open question is how to learn a representation that is amenable to existing control algorithms. In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR). By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should comprise: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions. These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC). Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function. Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. Further ablation studies support the importance of all three PCC components for learning a good latent space for control. Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents. This decomposition confers several notable benefits. First, it enables the handling of sparse-reward environments by leveraging the dense signal of dynamics prediction. Second, once a dynamics model is learned, it can be shared across multiple tasks within the same environment. While the merits of this decomposition have been demonstrated in low-dimensional environments, scaling these methods to high-dimensional environments remains an open challenge. Recent advancements in generative models have enabled the successful dynamics estimation of high-dimensional decision processes. This procedure of learning dynamics can then be used in conjunction with a plethora of decision-making techniques, ranging from optimal control to reinforcement learning (RL). One particularly promising line of work in this area focuses on learning the dynamics and conducting control in a low-dimensional latent embedding of the observation space, where the embedding itself is learned through this process. We refer to this approach as learning controllable embedding (LCE). There have been two main approaches to this problem: 1) start by defining a cost function in the high-dimensional observation space and learn the embedding space, its dynamics, and the reward function by interacting with the environment in an RL fashion, and 2) first learn the embedding space and its dynamics, and then define a cost function in this low-dimensional space and conduct the control. The latter can later be combined with RL for extra fine-tuning of the model and control.
In this paper, we take the second approach and particularly focus on an important question: what desirable traits should the latent embedding exhibit for it to be amenable to a specific class of control/learning algorithms, namely the widely used class of locally-linear control (LLC) algorithms? We argue from an optimal control standpoint that our latent space should exhibit three properties. The first is prediction: given the ability to encode to and decode from the latent space, we expect the process of encoding, transitioning via the latent dynamics, and then decoding to adhere to the true observation dynamics. The second is consistency: given the ability to encode an observation trajectory sampled from the true environment, we expect the latent dynamics to be consistent with the encoded trajectory. Finally, curvature: in order to learn a latent space that is specifically amenable to LLC algorithms, we expect the (learned) latent dynamics to exhibit low curvature in order to minimize the approximation error of its first-order Taylor expansion employed by LLC algorithms. Our contributions are thus as follows: We propose the Prediction, Consistency, and Curvature (PCC) framework for learning a latent space that is amenable to LLC algorithms and show that the elements of PCC arise systematically from bounding the suboptimality of the solution of the LLC algorithm in the latent space. We design a latent variable model that adheres to the PCC framework and derive a tractable variational bound for training the model. To the best of our knowledge, our proposed curvature loss for the transition dynamics (in the latent space) is novel. We also propose a direct amortization of the Jacobian calculation in the curvature loss to help train with the curvature loss more efficiently. Through extensive experimental comparison, we show that the PCC model consistently outperforms E2C and RCE on a number of control-from-images tasks, and verify, via ablation, the importance of regularizing the model to have consistency and low curvature. We are interested in controlling non-linear dynamical systems of the form s_{t+1} = f_S(s_t, u_t) + w over the horizon T. In this definition, s_t ∈ S ⊆ R^{n_s} and u_t ∈ U ⊆ R^{n_u} are the state and action of the system at time step t ∈ {0, ..., T − 1}, w is the Gaussian system noise, and f_S is a smooth non-linear system dynamics. We are particularly interested in the scenario in which we only have access to the high-dimensional observation x_t ∈ X ⊆ R^{n_x} of each state s_t (n_x ≫ n_s). This scenario has application in many real-world problems, such as visual-servoing, in which we only observe high-dimensional images of the environment and not its underlying state. We further assume that the high-dimensional observations x have been selected such that for any arbitrary control sequence U = {u_t}_{t=0}^{T−1}, the observation sequence {x_t}_{t=0}^{T} is generated by a stationary Markov process, i.e., x_{t+1} ∼ P(·|x_t, u_t), ∀t ∈ {0, ..., T − 1}. A common approach to control the above dynamical system is to solve the following stochastic optimal control (SOC) problem, which minimizes the expected cumulative cost: min_U L(U, P, c, x_0) := E[ c_T(x_T) + Σ_{t=0}^{T−1} c_t(x_t, u_t) | P, x_0 ] (SOC1), where c_t: X × U → R_{≥0} is the immediate cost function at time t, c_T ∈ R_{≥0} is the terminal cost, and x_0 is the observation at the initial state s_0. Note that all immediate costs are defined in the observation space X and are bounded by c_max > 0 and Lipschitz with constant c_lip > 0.
For example, in visual-servoing, (SOC1) can be formulated as a goal tracking problem, where we control the robot to reach the goal observation x_goal, and the objective is to compute a sequence of optimal open-loop actions U that minimizes the cumulative tracking error E[ Σ_t ||x_t − x_goal||_2^2 ]. Since the observations x are high dimensional and the dynamics in the observation space P(·|x_t, u_t) is unknown, solving (SOC1) is often intractable. To address this issue, a class of algorithms has been recently developed that is based on learning a low-dimensional latent (embedding) space Z ⊆ R^{n_z} (n_z ≪ n_x) and latent state dynamics, and performing optimal control there. This class, which we refer to as learning controllable embedding (LCE) throughout the paper, includes recently developed algorithms such as E2C, RCE, and SOLAR. The main idea behind the LCE approach is to learn a triplet: (i) an encoder E: X → P(Z); (ii) a dynamics in the latent space F: Z × U → P(Z); and (iii) a decoder D: Z → P(X). These in turn can be thought of as defining a (stochastic) mapping P̂: X × U → P(X) of the form P̂ = D ∘ F ∘ E. We then wish to solve the SOC in the latent space Z, which we call (SOC2), such that its solution U*_2 achieves performance close to that of the solution of (SOC1). 3 PCC MODEL: A CONTROL PERSPECTIVE As described in Section 2, we are primarily interested in solving (SOC1), whose states evolve under dynamics P, as shown at the bottom row of Figure 1 (a) (blue). However, because of the difficulties in solving (SOC1), mainly due to the high dimension of observations x, LCE proposes to learn a mapping P̂ by solving (SOC2), which consists of a loss function, whose states evolve under dynamics F (after an initial transition by encoder E), as depicted in Figure 1 (b) (green), and a regularization term. The role of the regularizer R_2 is to account for the performance gap between (SOC1) and the loss function of (SOC2), due to the discrepancy between their evolution paths, shown in Figures 1(a) (blue) and 1(b) (green). The goal of LCE is to learn P̂ of the particular form P̂ = D ∘ F ∘ E, described in Section 2, such that the solution of (SOC2) has similar performance to that of (SOC1). In this section, we propose a principled way to select the regularizer R_2 to achieve this goal. Since the exact form of (SOC2) has a direct effect on learning P̂, designing this regularization term in turn provides us with a recipe (loss function) to learn the latent (embedded) space Z. In the following subsections, we show that this loss function consists of three terms that correspond to prediction, consistency, and curvature, the three ingredients of our PCC model. Note that these two SOCs evolve in two different spaces, one in the observation space X under dynamics P, and the other one in the latent space Z (after an initial transition from X to Z) under dynamics F. Unlike P and F, which operate in a single space (X and Z, respectively), P̂ can govern the evolution of the system in both X and Z (see Figure 1 (c)). Therefore, any recipe to learn P̂, and as a result the latent space Z, should have at least two terms, to guarantee that the evolution paths resulting from P̂ in X and Z are consistent with those generated by P and F. We derive these two terms, the prediction and consistency terms in the loss function used by our PCC model, in Sections 3.1 and 3.2, respectively. While these two terms are the result of learning P̂ in general SOC problems, in Section 3.3 we concentrate on the particular class of LLC algorithms (e.g., iLQR) for solving SOC, and add the third term, curvature, to our recipe for learning P̂.
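The encoder-dynamics-decoder triplet composing into P̂ = D ∘ F ∘ E maps naturally onto three small networks. The PyTorch sketch below is schematic only: the paper's actual models operate on images with appropriate architectures, whereas the MLP shapes, latent sizes, and diagonal-Gaussian parameterization here are our own placeholder choices.

```python
import torch
import torch.nn as nn

class LCE(nn.Module):
    """Encoder E: x -> z, latent dynamics F: (z, u) -> z', decoder D: z -> x.
    Their composition induces the learned observation dynamics D o F o E."""
    def __init__(self, x_dim, u_dim, z_dim, hidden=256):
        super().__init__()
        self.E = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 2 * z_dim))
        self.F = nn.Sequential(nn.Linear(z_dim + u_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 2 * z_dim))
        self.D = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, x_dim))

    @staticmethod
    def rsample(stats):
        # Reparameterized sample from a diagonal Gaussian (mu, log_var).
        mu, log_var = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()

    def forward(self, x, u):
        z = self.rsample(self.E(x))                           # z ~ E(.|x)
        z_next = self.rsample(self.F(torch.cat([z, u], -1)))  # z' ~ F(.|z, u)
        return z, z_next, self.D(z_next)                      # mean of D(.|z')
```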
Figures 1(a) (blue) and 1(c) (red) show the transition in the observation space under P and P̂, where x_t is the current observation, and x_{t+1} and x̂_{t+1} are the next observations under these two dynamics, respectively. Instead of learning a P̂ with minimum mismatch to P in terms of some distributional norm, we propose to learn P̂ by solving (SOC3), whose loss function is the same as the one in (SOC1), with the true dynamics replaced by P̂. In Lemma 1 (see Appendix A.1 for proof), we show how to set the regularization term R_3 in (SOC3) such that the control sequence resulting from solving (SOC3), U*_3, performs well under the true dynamics P. In Section 3.1, we provided a recipe for learning P̂ (in the form D ∘ F ∘ E) by introducing an intermediate problem (SOC3) that evolves in the observation space X according to dynamics P̂. In this section we first connect (SOC2), which operates in Z, with (SOC3), which operates in X. For simplicity and without loss of generality, assume the initial cost c_0(x, u) is zero. Lemma 2 (see Appendix A.2 for proof) suggests how we shall set the regularizer in (SOC2) such that its solution performs similarly to that of (SOC3), under their corresponding dynamics models. Lemma 2. Let (U*_3, P̂*_3) be a solution to (SOC3) and (U*_2, P̂*_2) be a solution to (SOC2) with regularizer R_2 and weight λ_2 as defined in Eq. 2 and the accompanying equation; then U*_2 performs similarly to U*_3 under their corresponding dynamics models. Similar to Lemma 1, in Eq. 2 the expectation is over the state-action stationary distribution of the policy used to generate the training samples. Moreover, the D_KL in Eq. 2 compares the two distributions over the next latent state z′, given the current observation x and action u, induced by (SOC2) and (SOC3) (see the paths x_t → z_t → z̃_{t+1} and x_t → z_t → z̃_{t+1} → x̃_{t+1} → ẑ_{t+1} in Figures 1(b) (green) and 1(c) (red)). Therefore R_2(P̂) can be interpreted as a measure of discrepancy between these models, which we term the consistency loss. Although Lemma 2 provides a recipe to learn P̂ by solving (SOC2) with this regularizer, unfortunately the regularizer cannot be computed from the data, i.e., from triples of the form (x_t, u_t, x_{t+1}), because the first term in the D_KL requires marginalizing over the current and next latent states (z_t and z̃_{t+1} in Figure 1 (c)). To address this issue, we propose to use a (computable) regularizer R′_2(P̂), in which the expectation is over (x, u, x′) sampled from the training data. Corollary 1 (see Appendix A.3 for proof) bounds the performance loss resulting from using R′_2(P̂) instead of R_2(P̂), and shows that it is still a reasonable choice. Corollary 1. Let (U*_3, P̂*_3) be a solution to (SOC3) and (U*_2, P̂*_2) be a solution to (SOC2) with the computable regularizer R′_2(P̂) and λ_2 defined as in Lemma 2; then the performance gap admits a bound analogous to that of Lemma 2, in terms of R_3(P̂) and R′_2(P̂) (the exact expression is given in Appendix A.3). 3.3 LOCALLY-LINEAR CONTROL IN THE LATENT SPACE AND CURVATURE REGULARIZATION In Sections 3.1 and 3.2, we derived a loss function to learn the latent space Z. This loss function, which was motivated by the general SOC perspective, consists of two terms that enforce the latent space not only to predict the next observations accurately, but also to be suitable for control. In this section, we focus on the class of locally-linear control (LLC) algorithms (e.g., iLQR) for solving (SOC2), and show how this choice adds a third term, corresponding to curvature, to the regularizer of (SOC2), and as a result, to the loss function of our PCC model. The main idea in LLC algorithms is to iteratively compute an action sequence that improves the current trajectory by linearizing the dynamics around this trajectory, and to use this action sequence to generate the next trajectory (see Appendix B for more details about LLC and iLQR).
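Before moving to the curvature term, note that the consistency comparison of Section 3.2 has a simple closed form once the encoder and latent dynamics are diagonal Gaussians (as they are in the instantiation later in the paper). A sketch follows; the factorized-Gaussian assumption and the KL direction are our reading of the bound, not a verbatim transcription.

```python
import torch

def gaussian_kl(mu_q, log_var_q, mu_p, log_var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ): per-sample sum over latent dims."""
    return 0.5 * (
        log_var_p - log_var_q
        + (log_var_q - log_var_p).exp()
        + (mu_q - mu_p).pow(2) / log_var_p.exp()
        - 1.0
    ).sum(-1).mean()

# Consistency term: compare the encoding of x_{t+1} against the latent dynamics'
# prediction from (z_t, u_t), e.g.:
# consistency = gaussian_kl(enc_mu_next, enc_lv_next, dyn_mu_next, dyn_lv_next)
```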
This procedure implicitly assumes that the dynamics is approximately locally linear. To ensure this in (SOC2), we further restrict the dynamics P̂ and assume that it is not only of the form P̂ = D ∘ F ∘ E, but also that F, the latent space dynamics, has low curvature. One way to ensure this in (SOC2) is to directly impose a penalty over the curvature of the latent space transition function f_Z, where the latent dynamics takes the form z′ = f_Z(z, u) + w and w is Gaussian noise. Consider the resulting SOC problem (SOC-LLC), in which R_2 is the consistency regularizer defined above; U is optimized by an LLC algorithm, such as iLQR; and R_LLC(P̂) is a curvature penalty evaluated at perturbations ε = (ε_z, ε_u) ∼ N(0, δ²I), where δ > 0 is a tunable parameter that characterizes the "diameter" of the latent state-action space in which the latent dynamics model has low curvature. Here, 1/X is the minimum non-zero measure of the sample distribution w.r.t. X, and 1 − η ∈ (0, 1) is a probability threshold. Lemma 4 (see Appendix A.5 for proof and a discussion of how δ affects LLC performance) shows that a solution of (SOC-LLC) has similar performance to a solution of (SOC1), and thus (SOC-LLC) is a reasonable optimization problem to learn P̂, and also the latent space Z. Lemma 4. Let (U*_LLC, P̂*_LLC) be an LLC solution to (SOC-LLC) and U*_1 be a solution to (SOC1). Suppose the nominal latent state-action trajectory {(z̄_t, ū_t)}_{t=0}^{T−1} satisfies the condition (z̄_t, ū_t) ∼ N((z*_{2,t}, u*_{2,t}), δ²I), where {(z*_{2,t}, u*_{2,t})}_{t=0}^{T−1} is the optimal trajectory of (SOC2). Then, with probability 1 − η, the performance gap between U*_LLC and U*_1 is bounded in terms of the (SOC-LLC) regularizers (the exact bound is given in Appendix A.5). In practice, instead of solving (SOC-LLC) jointly for U and P̂, we treat (SOC-LLC) as a bi-level optimization problem: first, solve the inner optimization problem for P̂, i.e., (PCC-LOSS), where R_3(P̂) = −E_{x,u,x′}[log P̂(x′|x, u)] is the negative log-likelihood, and then solve the outer optimization problem, min_U L(U, F*, c̄, z_0), where P̂* = D* ∘ F* ∘ E*, to obtain the optimal control sequence U*. Solving (SOC-LLC) this way is an approximation, in general, but is justified when the regularization parameter λ_LLC is large. Note that we leave the regularization parameters (λ_p, λ_c, λ_cur) as hyper-parameters of our algorithm, and do not use those derived in the lemmas of this section. Since the loss for learning P̂* in (PCC-LOSS) enforces (i) prediction accuracy, (ii) consistency in latent state prediction, and (iii) low curvature over f_Z, through the regularizers R_3, R_2, and R_LLC, respectively, we refer to it as the prediction-consistency-curvature (PCC) loss. The PCC-Model objective in (PCC-LOSS) introduces the optimization problem min over P̂ of λ_p R_3(P̂) + λ_c R_2(P̂) + λ_cur R_LLC(P̂). To instantiate this model in practice, we propose in this section a variational approximation to the intractable negative log-likelihood R_3 and batch-consistency R_2 losses, and an efficient approximation of the curvature loss R_LLC. The negative log-likelihood R_3 admits a variational bound via Jensen's inequality, which holds for any choice of recognition model Q. For simplicity, we assume the recognition model employs bottom-up inference and thus factorizes as Q(z_t, ẑ_{t+1}|x_t, x_{t+1}, u_t) = Q(ẑ_{t+1}|x_{t+1}) Q(z_t|ẑ_{t+1}, x_t, u_t). The main idea behind choosing a backward-facing model is to allow the model to learn to account for noise in the underlying dynamics. We estimate the expectations in the bound via Monte Carlo simulation. To reduce the variance of the estimator, we decompose R_{3,NLE-Bound} further into terms including log P̂(ẑ_{t+1}|z_t, u_t), and note that the entropy H(·) and Kullback-Leibler D_KL(·‖·) terms are analytically tractable when Q is restricted to a suitably chosen variational family; in our experiments, Q(ẑ_{t+1}|x_{t+1}) and Q(z_t|ẑ_{t+1}, x_t, u_t) are factorized Gaussians.
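Returning to the curvature term R_LLC introduced above: it penalizes how far f_Z deviates from its first-order Taylor expansion under random perturbations ε ∼ N(0, δ²I). A minimal PyTorch sketch is below; it implements the plain (non-amortized) form with a Jacobian-vector product, and the choice of expansion point and the default δ are simplifications of the paper's variant.

```python
import torch
from torch.autograd.functional import jvp

def curvature_loss(f_mu, z, u, delta=0.1):
    """|| f(z+eps_z, u+eps_u) - f(z, u) - J_z eps_z - J_u eps_u ||^2 with
    eps ~ N(0, delta^2 I); jvp supplies the Jacobian-vector products."""
    eps_z = delta * torch.randn_like(z)
    eps_u = delta * torch.randn_like(u)
    f0, jvp_out = jvp(f_mu, (z, u), (eps_z, eps_u), create_graph=True)
    return (f_mu(z + eps_z, u + eps_u) - f0 - jvp_out).pow(2).sum(-1).mean()
```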
The derivation of this variational bound is provided in Appendix C.1. Interestingly, the consistency loss R_2 admits a similar treatment. We note that the consistency loss seeks to match the distribution of ẑ_{t+1}|x_t, u_t with that of z_{t+1}|x_{t+1}, which we represent as a KL divergence between the two. Here, P̂(ẑ_{t+1}|x_t, u_t) is intractable due to the marginalization of z_t. We employ the same procedure as above to construct a tractable variational bound. We now make the further simplifying assumption that Q(ẑ_{t+1}|x_{t+1}) = P(z_{t+1}|x_{t+1}). This allows us to rewrite the expression as a subset of the terms in the negative log-likelihood bound. See Appendix C.2 for a detailed derivation. In practice we use a variant of the curvature loss in which the Taylor expansions and gradients are evaluated at the perturbed point. When n_z is large, evaluating and differentiating through the Jacobians can be slow. To circumvent this issue, the Jacobian evaluation can be amortized by treating the Jacobians as the coefficients of the best linear approximation at the evaluation point. This leads to a new amortized curvature loss, in which A and B are function approximators to be optimized. Intuitively, for any given (z, u), the amortized curvature loss seeks the best choice of linear approximation induced by A(z, u) and B(z, u) such that the behavior of F_μ in the neighborhood of (z, u) is approximately linear. In this section, we highlight the key differences between PCC and the closest previous works, namely E2C and RCE. A key distinguishing factor is PCC's use of a nonlinear latent dynamics model paired with an explicit curvature loss. In comparison, E2C and RCE both employed "locally-linear dynamics" of the form z′ = A(z̄, ū) z + B(z̄, ū) u + c(z̄, ū), where z̄ and ū are auxiliary random variables meant to be perturbations of z and u. When contrasted with the amortized formulation above, it is clear that neither A nor B in the E2C/RCE formulation can be treated as the Jacobians of the dynamics, and hence the curvature of the dynamics is not being controlled explicitly. Furthermore, since the locally-linear dynamics are wrapped inside the maximum-likelihood estimation, both E2C and RCE conflate the two key elements, prediction and curvature. This makes controlling the stability of training much more difficult. Not only does PCC explicitly separate these two components; we are also the first to explicitly demonstrate, theoretically and empirically, that the curvature loss is important for iLQR. Furthermore, RCE does not incorporate PCC's consistency loss. Note that PCC, RCE, and E2C are all Markovian encoder-transition-decoder frameworks. Under such a framework, sole reliance on minimizing the prediction loss will result in a discrepancy between how the model is trained (maximizing the likelihood induced by encoding-transitioning-decoding) and how it is used at test time for control (continual transitioning in the latent space without ever decoding). By explicitly minimizing the consistency loss, PCC reduces the discrepancy between how the model is trained and how it is used at test time for planning. Interestingly, E2C does include a regularization term that is akin to PCC's consistency loss. However, as noted by the authors of RCE, E2C's maximization of pair-marginal log-likelihoods of (x_t, x_{t+1}), as opposed to the conditional likelihood of x_{t+1} given x_t, means that E2C does not properly minimize the prediction loss prescribed by the PCC framework.
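The amortization described above replaces the true Jacobians with learned coefficient matrices A(z, u) and B(z, u). One way to realize it is sketched below; the network shapes and sizes are illustrative choices of ours, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AmortizedJacobians(nn.Module):
    """Predict A(z, u) and B(z, u), the coefficients of the best local linear
    approximation of the latent dynamics f_mu around (z, u)."""
    def __init__(self, z_dim, u_dim, hidden=256):
        super().__init__()
        self.z_dim, self.u_dim = z_dim, u_dim
        self.net = nn.Sequential(
            nn.Linear(z_dim + u_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim * (z_dim + u_dim)))

    def forward(self, z, u):
        out = self.net(torch.cat([z, u], dim=-1))
        A = out[..., : self.z_dim * self.z_dim].reshape(-1, self.z_dim, self.z_dim)
        B = out[..., self.z_dim * self.z_dim :].reshape(-1, self.z_dim, self.u_dim)
        return A, B

def amortized_curvature_loss(f_mu, jac, z, u, delta=0.1):
    """Same penalty as the plain curvature loss, but with learned A, B in place
    of Jacobian-vector products (no differentiation through f_mu's Jacobians)."""
    eps_z, eps_u = delta * torch.randn_like(z), delta * torch.randn_like(u)
    A, B = jac(z, u)
    lin = (A @ eps_z.unsqueeze(-1)).squeeze(-1) + (B @ eps_u.unsqueeze(-1)).squeeze(-1)
    return (f_mu(z + eps_z, u + eps_u) - f_mu(z, u) - lin).pow(2).sum(-1).mean()
```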
In this section, we compare the performance of PCC with the two LCE baselines, RCE and E2C, on a number of benchmark control tasks. To generate our training and test sets, each consisting of triples (x_t, u_t, x_{t+1}), we: sample an underlying state s_t and generate its corresponding observation x_t, sample an action u_t, and obtain the next state s_{t+1} according to the state transition dynamics, add to it zero-mean Gaussian noise with variance σ²I_{n_s}, and generate the corresponding observation x_{t+1}. To ensure that the observation-action data is uniformly distributed (see Section 3), we sample the state-action pair (s_t, u_t) uniformly from the state-action space. To understand the robustness of each model, we consider both deterministic (σ = 0) and stochastic scenarios. In the stochastic case, we add noise to the system with different values of σ and evaluate the models' performance under various degrees of noise. Each task has underlying start and goal states that are unobservable to the algorithms; instead, the algorithms have access to the corresponding start and goal observations. We apply control using the iLQR algorithm (see Appendix B), with the same cost function that was used by RCE and E2C, namely c̄(z_t, u_t) = (z_t − z_goal)ᵀ Q (z_t − z_goal) + u_tᵀ R u_t, where z_goal is obtained by encoding the goal observation, and Q = κ·I_{n_z}, R = I_{n_u}. Details of our implementations are specified in Appendix D.3. We report performance in the underlying system, specifically the percentage of time spent in the goal region. A Reproducible Experimental Pipeline In order to measure performance reproducibility, we perform the following 2-step pipeline. For each control task and algorithm, we train 10 models independently, and solve 10 control tasks per model (we do not cherry-pick, but instead perform a total of 10 × 10 = 100 control tasks). We report statistics averaged over all the tasks (in addition, we report the best performing model averaged over its 10 tasks). By adopting a principled and statistically reliable evaluation pipeline, we also address a pitfall of the compared baselines, where the best model needed to be cherry-picked and training variance was not reported. 7 Code will become available with the camera-ready version. 8 For the RCE implementation, we directly optimize the ELBO loss given in the RCE paper. We also tried the approach reported in that paper of increasing the weights of the two middle terms and then annealing them to 1. However, in practice this method is sensitive to the annealing schedule and has convergence issues. 9 See a control demo on the TORCS simulator at https://youtu.be/GBrgALRZ2fw 10 Another possible metric is the average distance to goal, which has a similar behavior. Results Table 1 shows how PCC outperforms the baseline algorithms in the noiseless dynamics case, comparing means and standard deviations of the means on the different control tasks (for the case of added noise to the dynamics, which exhibits similar behavior, refer to Appendix E.1). It is important to note that for each algorithm, the performance metric averaged over all models is drastically different from that of the best model, which justifies our rationale behind using the reproducible evaluation pipeline and avoiding cherry-picking when reporting. Figure 2 depicts 2 instances (randomly chosen from the 10 trained models) of the learned latent space representations on the noiseless dynamics of the Planar and Inverted Pendulum tasks for the PCC, RCE, and E2C models (additional representations can be found in Appendix E.2).
Representations were generated by encoding observations corresponding to a uniform grid over the state space. Generally, PCC has a more interpretable representation of both the Planar and Inverted Pendulum systems than the other baselines, for both the noiseless dynamics case and the noisy case. Finally, in terms of computation, PCC demonstrates faster training, with a 64% improvement over RCE and a 2% improvement over E2C. Ablation Analysis On top of comparing the performance of PCC to the baselines, in order to understand the importance of each component in (PCC-LOSS), we also perform an ablation analysis on the consistency loss (with/without consistency loss) and the curvature loss (with/without curvature loss, and with/without amortization of the Jacobian terms). Table 2 shows the ablation analysis of PCC on the aforementioned tasks. From the numerical results, one can clearly see that when the consistency loss is omitted, the control performance degrades. This corroborates the theoretical results in Section 3.2, which indicate the relationship between the consistency loss and the estimation error between the next-latent dynamics prediction and the next-latent encoding. This further implies that as the consistency term vanishes, the gap between the control objective function and the model training loss widens, due to the accumulation of state estimation error. The control performance also decreases when one removes the curvature loss. This is mainly attributed to the error between the iLQR control algorithm and (SOC2). Although the latent state dynamics model is parameterized with neural networks, which are smooth, without enforcing the curvature loss term the norm of the Hessian (curvature) might still be high. This also confirms the analysis in Section 3.3 relating sub-optimal performance to the curvature of the latent dynamics. Finally, we observe that the performance of models trained without the amortized curvature loss is slightly better than that of their amortized counterparts; however, since the amortized curvature loss does not require computing gradients of the latent dynamics (which means that in stochastic optimization one does not need to estimate its Hessian), we observe relative speed-ups in model training with the amortized version (speed-ups of 6%, 9%, and 15% for the Planar System, Inverted Pendulum, and Cartpole, respectively). In this paper, we argue from first principles that learning a latent representation for control should be guided by good prediction in the observation space and consistency between the latent transitions and the embedded observations. Furthermore, if variants of iterative LQR are used as the controller, low-curvature dynamics are desirable. All three elements of our PCC model are critical to the stability of model training and the performance of the in-latent-space controller. We hypothesize that each particular choice of controller will exert different requirements on the learned dynamics. A future direction is to identify and investigate the additional bias needed for learning an effective embedding and latent dynamics for other types of model-based control and planning methods. where D_TV is the total variation distance between two distributions. The first inequality is based on the result of the above lemma, the second on Pinsker's inequality, and the third on Jensen's inequality. Now consider the expected cumulative KL cost E[ Σ_{t=0}^{T−1} KL(P(·|x_t, u_t) ‖ P̂(·|x_t, u_t)) | P, x_0 ] with respect to some arbitrary control action sequence {u_t}_{t=0}^{T−1}.
Notice that this arbitrary action sequence can always be expressed in form of deterministic policy u t = π (x t, t) with some nonstationary state-action mapping π. Therefore, this KL cost can be written as: where the expectation is taken over the state-action occupation measure t=0 P(x t = x, u t = u|x 0, U) of the finite-horizon problem that is induced by data-sampling policy U. The last inequality is due to change of measures in policy, and the last inequality is due to the facts that (i) π is a deterministic policy, (ii) dU (u t) is a sampling policy with lebesgue measure 1/U over all control actions, (iii) the following bounds for importance sampling factor holds: To conclude the first part of the proof, combining all the above arguments we have the following inequality for any model P and control sequence U: For the second part of the proof, consider the solution of (SOC3), namely (U * 3, P * 3). Using the optimality condition of this problem one obtains the following inequality: Using the in and, one can then show the following chain of inequalities: where U * 1 is the optimizer of (SOC1) and (U * 3, P * 3) is the optimizer of (SOC3). Therefore by letting λ 3 = √ 2T 2 · c max U and R 3 (P) = E x,u KL(P (·|x, u)|| P (·|x, u)) and by combining all of the above arguments, the proof of the above lemma is completed. A.2 PROOF OF LEMMA 2 For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {u t} T −1 t=0, and any model P, consider the following decomposition of the expected cost:. Now consider the following cost function: E[c(x t−1, u t−1) + c(x t, u t) | P, x 0 ] for t > 2. Using the above arguments, one can express this cost as By continuing the above expansion, one can show that where the last inequality is based on Jensen's inequality of (·) function. For the second part of the proof, following similar arguments as in the second part of the proof of Lemma 1, one can show the following chain of inequalities for solution of (SOC3) and (SOC2): where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof. To start with, the total-variation distance D TV x ∈X d P (x |x, u)E(·|x)||(F • E)(·|x, u) can be bounded by the following inequality using triangle inequality: where the second inequality follows from the convexity property of the D TV -norm (w.r.t. convex weights E(·|x), ∀x ). Then by Pinsker's inequality, one obtains the following inequality: We now analyze the batch consistency regularizer: and connect it with the inequality in. Using Jensen's inequality of convex function x log x, for any observation-action pair (x, u) sampled from U τ, one can show that Therefore, for any observation-control pair (x, u) the following inequality holds: By taking expectation over (x, u) one can show that is the lower bound of the batch consistency regularizer. 
Therefore, the above arguments imply that The inequality is based on the property that Equipped with the above additional , the rest of the proof on the performance bound follows directly from the from Lemma 2, in which here we further upper-bound A.4 PROOF OF LEMMA 3 For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {u t} T −1 t=0 and for any model P, consider the following decomposition of the expected cost: Under review as a conference paper at ICLR 2020 Now consider the following cost function: E[c(x t−1, u t−1) + c(x t, u t) | P, x 0 ] for t > 2. Using the above arguments, one can express this cost as Continuing the above expansion, one can show that where the last inequality is based on the fact that and is based on Jensen's inequality of (·) function. For the second part of the proof, following similar arguments from Lemma 2, one can show the following inequality for the solution of (SOC3) and (SOC2): where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof. A Recap of the Result: Let (U * LLC, P * LLC) be a LLC solution to (SOC-LLC) and U * 1 be a solution to (SOC1). Suppose the nominal latent state-action pair {(z z z t, u u u t)} T −1 t=0 satisfies the condition: (z z z t, u u u t) ∼ N ((z * 2,t, u * 2,t), δ 2 I), where {(z * 2,t, u * 2,t} T −1 t=0 is the optimal trajectory of problem (SOC2). Then with probability 1 − η, we have L(U * Discussions of the effect of δ on LLC Performance: The of this lemma shows that when the nominal state and actions are δ-close to the optimal trajectory of (SOC2), i.e., at each time step (z z z t, u u u t) is a sample from the Gaussian distribution centered at (z * 2,t, u * 2,t) with standard deviation δ, then one can obtain a performance bound of LLC algorithm that is in terms of the regularization loss R LLC. To quantify the above condition, one can use Mahalanobis distance to measure the distance of (z z z t, u u u t) to distribution N ((z * 2,t, u * 2,t), δ 2 I), i.e., we want to check for the condition: for any arbitrary error tolerance > 0. While we cannot verify the condition without knowing the optimal trajectory {(z * 2,t, u * 2,t)} T −1 t=0, the above condition still offers some insights in choosing the parameter δ based on the trade-off of designing nominal trajectory {(z z z t, u u u t)} T −1 t=0 and optimizing R LLC. When δ is large, the low-curvature regularization imposed by the R LLC regularizer will cover a large portion of the state-action space. In the extreme case when δ → ∞, R LLC can be viewed as a regularizer that enforces global linearity. Here the trade-off is that the loss R LLC is generally higher, which in turn degrades the performance bound of the LLC control algorithm in Lemma 4. On the other hand, when δ is small the low-curvature regularization in R LLC only covers a smaller region of the latent state-action space, and thus the loss associated with this term is generally lower (which provides a tighter performance bound in Lemma 4). However the performance will only hold when (z z z t, u u u t) happens to be close to (z * 2,t, u * 2,t) at each time-step t ∈ {0, . . ., T − 1}. Proof: For simplicity, we will focus on analyzing the noiseless case when the dynamics is deterministic (i.e., Σ w = 0). Extending the following analysis for the case of non-deterministic dynamics should be straight-forward. 
First, consider any arbitrary latent state-action pair (z, u), such that the corresponding nominal state-action pair (z z z, u u u) is constructed by z z z = z − δz, u u u = u − δu, where (δz, δu) is sampled from the Gaussian distribution N (0, δ 2 I). (The random vectors are denoted as (δz, δu)) By the two-tailed Bernstein's inequality , for any arbitrarily given η ∈ one has the following inequality with probability 1 − η: The second inequality is due to the basic fact that variance is less than second-order moment of a random variable. On the other hand, at each time step t ∈ {0, . . ., T − 1} by the Lipschitz property of the immediate cost, the value function V t (z) = min U t: is also Lipchitz with constant (T − t + 1)c lip. Using the Lipschitz property of V t+1, for any (z, u) and (δz, δu), such that (z z z, u u u) = (z − δz, u − δu), one has the following property: Therefore, at any arbitrary state-action pair (z,ũ), for z z z = z − δz, and u u u =ũ − δu with Gaussian sample (δz, δu) ∼ N (0, δ 2 I), the following inequality on the value function holds w.p. 1 − η: which further implies Now letũ * be the optimal control w.r.t. Bellman operator T t [V t+1](z) at any latent statez. Based on the assumption of this lemma, at each statez the nominal latent state-action pair (z z z, u u u) is generated by perturbing (z,ũ *) with Gaussian sample (δz, δu) ∼ N (0, δ 2 I) that is in form of z z z =z − δz, u u u =ũ − δu. Then by the above arguments the following chain of inequalities holds w.p. 1 − η: Recall the LLC loss function is given by Also consider the Bellman operator w.r.t. latent SOC:, and the Bellman operator w.r.t. LLC:. Utilizing these definitions, the inequality in can be further expressed as This inequality is due to the fact that all latent states are generated by the encoding observations, i.e., z ∼ E(·|x), and thus by following analogous arguments as in the proof of Lemma 1, one has Therefore, based on the dynamic programming that bounds the difference of value function w.r.t. different Bellman operators in finite-horizon problems (for example see Theorem 1.3 in), the above inequality implies the following bound in the value function, w.p. 1 − η: Notice that here we replace η in the in with η/T. In order to prove, we utilize for each t ∈ {0, . . ., T − 1}, and this replacement is the of applying the Union Probability bound (to ensure holds with probability 1 − η). Therefore the proof is completed by combining the above with that in Lemma 3. We follow the same control scheme as in. Namely, we use the iLQR solver to plan in the latent space. Given a start observation x start and a goal observation x goal, corresponding to underlying states {s start, s goal}, we encode the observations to retrieve z start and z goal. Then, the procedure goes as follows: we initialize a random trajectory (sequence of actions), feed it to the iLQR solver and apply the first action from the trajectory the solver outputs. We observe the next observation returned from the system (closed-loop control), and feed the updated trajectory to the iLQR solver. This procedure continues until the it reaches the end of the problem horizon. We use a receding window approach, where at every planning step the solver only optimizes for a fixed length of actions sequence, independent of the problem horizon. Consider the latent state SOC problem At each time instance t ∈ {0, . . 
., T} the value function of this problem is given by Recall that the nonlinear latent space dynamics model is given by: where F µ (z t, u t) is the deterministic dynamics model and F σ F σ is the covariance of the latent dynamics system noise. Notice that the deterministic dynamics model F µ (z t, u t) is smooth, and therefore the following Jacobian terms are well-posed: By the Bellman's principle of optimality, at each time instance t ∈ {0, . . ., T − 1} the value function is a solution of the recursive fixed point equation where the state-action value function at time-instance t w.r.t. state-action pair (z t, u t) = (z, u) is given by In the setting of the iLQR algorithm, assume we have access to a trajectory of latent states and actions that is in form of {(z z z t, u u u t, z z z t+1)} T −1 t=0. At each iteration, the iLQR algorithm has the following steps: 1. Given a nominal trajectory, find an optimal policy w.r.t. the perturbed latent states 2. Generate a sequence of optimal perturbed actions that locally improves the cumulative cost of the given trajectory 3. Apply the above sequence of actions to the environment and update the nominal trajectory 4. Repeat the above steps with new nominal trajectory Denote by δz t = z t − z z z t and δu t = u t − u u u t the deviations of state and control action at time step t respectively. Assuming that the nominal next state z z z t+1 is generated by the deterministic transition F µ (z z z t, u u u t) at the nominal state and action pair (z z z t, u u u t), the first-order Taylor series approximation of the latent space transition is given by To find a locally optimal control action sequence u * t = π * δz,t (δz t) + u u u t, ∀t, that improves the cumulative cost of the trajectory, we compute the locally optimal perturbed policy (policy w.r.t. perturbed latent state) {π * δz,t (δz t)} T −1 t=0 that minimizes the following second-order Taylor series approximation of Q t around nominal state-action pair (z z z t, u u u t), ∀t ∈ {0, . . ., T − 1}: where the first and second order derivatives of the Q−function are given by and the first and second order derivatives of the value functions are given by Notice that the Q-function approximation Q t in is quadratic and the matrix is positive semi-definite. Therefore the optimal perturbed policy π * δz,t has the following closed-form solution: where the controller weights are given by Furthermore, by putting the optimal solution into the Taylor expansion of the Q-function Q t, we get where the closed-loop first and second order approximations of the Q-function are given by. Notice that at time step t the optimal value function also has the following form: Therefore, the first and second order differential value functions can be V t,z (z z z t, u u u t) = Q * t,21 (z z z t, u u u t), V t,zz (z z z t, u u u t) = Q * t,22 (z z z t, u u u t), and the value improvement at the nominal state z z z t at time step t is given by While iLQR provides an effective way of computing a sequence of (locally) optimal actions, it has two limitations. First, unlike RL in which an optimal Markov policy is computed, this algorithm only finds a sequence of open-loop optimal control actions under a given initial observation. Second, the iLQR algorithm requires the knowledge of a nominal (latent state and action) trajectory at every iteration, which restricts its application to cases only when real-time interactions with environment are possible. 
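The backward recursion just derived (second-order expansion of the Q-function, the gains, and the value updates) is the core of iLQR. A compact NumPy sketch of one backward pass follows; the regularization of Q_uu and the packing of the cost derivatives (cz and czz carry T+1 entries, the last being the terminal expansion) are our own simplifications of the derivation above.

```python
import numpy as np

def ilqr_backward(fz, fu, cz, cu, czz, cuu, cuz, reg=1e-6):
    """One iLQR backward pass along a nominal trajectory.
    fz[t], fu[t]: dynamics Jacobians; cz/cu/czz/cuu/cuz: cost derivatives."""
    T = len(fz)
    Vz, Vzz = cz[T], czz[T]            # terminal value expansion
    k_ff = [None] * T                  # feed-forward terms
    K_fb = [None] * T                  # feedback gains
    for t in reversed(range(T)):
        Qz  = cz[t]  + fz[t].T @ Vz
        Qu  = cu[t]  + fu[t].T @ Vz
        Qzz = czz[t] + fz[t].T @ Vzz @ fz[t]
        Quu = cuu[t] + fu[t].T @ Vzz @ fu[t]
        Quz = cuz[t] + fu[t].T @ Vzz @ fz[t]
        Quu_inv = np.linalg.inv(Quu + reg * np.eye(Quu.shape[0]))
        k_ff[t] = -Quu_inv @ Qu
        K_fb[t] = -Quu_inv @ Quz
        # Closed-loop value expansion for the next (earlier) step.
        Vz  = Qz  + K_fb[t].T @ Quu @ k_ff[t] + K_fb[t].T @ Qu + Quz.T @ k_ff[t]
        Vzz = Qzz + K_fb[t].T @ Quu @ K_fb[t] + K_fb[t].T @ Quz + Quz.T @ K_fb[t]
    return k_ff, K_fb
```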
In order to extend the iLQR paradigm into the closed-loop RL setting, we utilize the concept of model predictive control (MPC) and propose the following iLQR-MPC procedure. Initially, given an initial latent state z 0 we generate a single nominal trajectory: {(z z z t, u u u t, z z z t+1)} We derive the bound for the conditional log-likelihood log P (x t+1 |x t, u t). log P (x t+1 |x t, u t) = log zt,ẑt+1 Where (a) holds from the log function concavity, (b) holds by the factorization Q(z t,ẑ t+1 |x t, x t+1, u t) = Q(ẑ t+1 |x t+1)Q(z t |ẑ t+1, x t, u t), and (c) holds by a simple decomposition to the different components. We derive the bound for the consistency loss Consistency (P). Where (a) holds by the assumption that Q(ẑ t+1 | x t+1) = P (z t+1 | x t+1), (b) holds from the log function concavity, and (c) holds by a simple decomposition to the different components. In the following sections we will provide the description of the data collection process, domains, and implementation details used in the experiments. To generate our training and test sets, each consists of triples (x t, u t, x t+1), we: sample an underlying state s t and generate its corresponding observation x t, sample an action u t, and obtain the next state s t+1 according to the state transition dynamics, add it a zero-mean Gaussian noise with variance σ 2 I ns, and generate it's corresponding observation x t+1.To ensure that the observation-action data is uniformly distributed (see Section 3), we sample the state-action pair (s t, u t) uniformly from the state-action space. To understand the robustness of each model, we consider both deterministic (σ = 0) and stochastic scenarios. In the stochastic case, we add noise to the system with different values of σ and evaluate the models' performance under various degree of noise. Planar System In this task the main goal is to navigate an agent in a surrounded area on a 2D plane , whose goal is to navigate from a corner to the opposite one, while avoiding the six obstacles in this area. The system is observed through a set of 40 × 40 pixel images taken from the top view, which specifies the agent's location in the area. Actions are two-dimensional and specify the x − y direction of the agent's movement, and given these actions the next position of the agent is generated by a deterministic underlying (unobservable) state evolution function. Start State: one of three corners (excluding bottom-right). Goal State: bottom-right corner. Agent's Objective: agent is within Euclidean distance of 2 from the goal state. Inverted Pendulum -SwingUp & Balance This is the classic problem of controlling an inverted pendulum from 48 × 48 pixel images. The goal of this task is to swing up an under-actuated pendulum from the downward resting position (pendulum hanging down) to the top position and to balance it. The underlying state s t of the system has two dimensions: angle and angular velocity, which is unobservable. The control (action) is 1-dimensional, which is the torque applied to the joint of the pendulum. To keep the Markovian property in the observation (image) space, similar to the setting in E2C and RCE, each observation x t contains two images generated from consecutive time-frames (from current time and previous time). This is because each image only shows the position of the pendulum and does not contain any information about the velocity. Start State: Pole is resting down (SwingUp), or randomly sampled in ±π/6 (Balance). 
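Stepping back to the iLQR-MPC procedure described at the start of this appendix section, the closed-loop scheme (plan a fixed window, apply the first action, re-encode, warm-start the next plan) can be summarized as a short control loop. All callables here (encode, env_step, ilqr_solve) and the warm-start rule are hypothetical stand-ins for the components described in the text.

```python
def ilqr_mpc(encode, env_step, ilqr_solve, x0, x_goal, horizon, u_init):
    """Receding-horizon control in latent space, per the iLQR-MPC procedure:
    re-plan at every step and apply only the first action (closed loop)."""
    z, z_goal = encode(x0), encode(x_goal)
    u_seq = list(u_init)                      # random nominal actions to start
    for _ in range(horizon):
        u_seq = ilqr_solve(z, z_goal, u_seq)  # optimize the fixed-length window
        x = env_step(u_seq[0])                # apply the first action only
        z = encode(x)                         # re-encode the new observation
        u_seq = u_seq[1:] + [u_seq[-1]]       # shift the window as a warm start
    return z
```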
Agent's Objective: pole's angle is within ±π/6 of an upright position. CartPole This is the visual version of the classic task of controlling a cart-pole system. The goal in this task is to balance a pole on a moving cart, while the cart avoids hitting the left and right boundaries. The control (action) is 1-dimensional, namely the force applied to the cart. The underlying state of the system s_t is 4-dimensional, indicating the angle and angular velocity of the pole, as well as the position and velocity of the cart. Similar to the inverted pendulum, in order to maintain the Markovian property the observation x_t is a stack of two 80 × 80 pixel images generated from consecutive time-frames. Start State: Pole is randomly sampled in ±π/6. Agent's Objective: pole's angle is within ±π/10 of an upright position. 3-link Manipulator - SwingUp & Balance The goal in this task is to move a 3-link manipulator from the initial position (which is the downward resting position) to a final position (which is the top position) and balance it. In the 1-link case, this experiment reduces to the inverted pendulum. In the 2-link case the setup is similar to that of the acrobot, except that we have torques applied to all intermediate joints, and in the 3-link case the setup is similar to that of the 3-link planar robot arm domain that was used in the E2C paper, except that the robotic arms are modeled by simple rectangular rods (instead of real images of robot arms), and our task success criterion requires both swing-up (manipulate to final position) and balance. The underlying (unobservable) state s_t of the system is 2N-dimensional, indicating the relative angle and angular velocity at each link, and the actions are N-dimensional, representing the force applied to each joint of the arm. The state evolution is modeled by the standard Euler-Lagrange equations. Similar to the inverted pendulum and cartpole, in order to maintain the Markovian property, the observation x_t is a stack of two 80 × 80 pixel images of the N-link manipulator generated from consecutive time-frames. In the experiments we evaluate the models for the cases N = 2 (2-link manipulator) and N = 3 (3-link manipulator). Start State: 1st pole with angle π, 2nd pole with angle 2π/3, and 3rd pole with angle π/3, where angle π is the resting position. Agent's Objective: the sum of all poles' angles is within ±π/6 of an upright position. TORCS Simulator This task takes place in the TORCS simulator (specifically on the michegan f1 race track, straight lane only). The goal of this task is to control a car so it remains in the middle of the lane. We restricted the task to consider only steering actions (left/right in the range [−1, 1]), and applied a simple procedure to ensure the velocity of the car is always around 10. We pre-processed the observations given by the simulator (240 × 320 RGB images) to obtain 80 × 80 binary images (white pixels represent the road). In order to maintain the Markovian property, the observation x_t is a stack of two 80 × 80 images (where the two images are 7 frames apart, chosen so that consecutive observations would be somewhat different). The task goes as follows: the car is forced to steer strongly left (action = 1) or strongly right (action = −1) for the initial 20 steps of the simulation (direction chosen randomly), which causes it to drift away from the center of the lane.
Then, for the remaining horizon of the task, the car needs to recover from the drift, return to the middle of the lane, and stay there. Start State: 20 steps of drifting from the middle of the lane by steering strongly left or right (chosen randomly). Agent's Objective: agent (car) is within Euclidean distance of 1 from the middle of the lane (the full width of the lane is about 18). In the following we describe the architectures and hyper-parameters that were used for training the different algorithms. All the algorithms were trained using: • Batch size of 128. • ADAM with α = 5·10⁻⁴, β_1 = 0.9, β_2 = 0.999, and ε = 10⁻⁸. • L_2 regularization with a coefficient of 10⁻³. • An additional VAE loss term given by VAE_t = −E_{q(z|x)}[log p(x|z)] + D_KL(q(z|x) ‖ p(z)), where p(z) = N(0, I). The term was added with a very small coefficient of 0.01. We found this term to be important for stabilizing the training process, as there is no explicit term that governs the scale of the latent space. • λ from the loss term of E2C was tuned using a parameter sweep in {0.25, 0.5, 1}, and was chosen to be 0.25 across all domains, as it performed the best independently for each domain. PCC training specifics: • λ_p was set to 1 across all domains. • λ_c was set to 7 across all domains, after being tuned using a parameter sweep in {1, 3, 7, 10} on the Planar system. • λ_cur was set to 1 across all domains without performing any tuning. • {z̄, ū}, for the curvature loss, were generated from {z, u} by adding Gaussian noise N(0, 0.1²), where σ = 0.1 was set across all domains without performing any tuning. • Motivated by prior work, we added a deterministic loss term in the form of a cross entropy between the output of the generative path given the current observation and action (taking the means of the encoder output and the dynamics model output) and the observation of the next state. This loss term was added with a coefficient of 0.3 across all domains after being tuned using a parameter sweep over {0.1, 0.3, 0.5} on the Planar system. E ADDITIONAL RESULTS E.1 PERFORMANCE ON NOISY DYNAMICS Table 3 shows results for the noisy cases; the entries (mean ± standard deviation) are:
Planar1 1.2 ± 0.6 0.6 ± 0.3 17.9 ± 3.1 5.5 ± 1.2 6.1 ± 0.9 44.7 ± 3.6
Planar2 0.4 ± 0.2 1.5 ± 0.9 14.5 ± 2.3 1.7 ± 0.5 15.5 ± 2.6 29.7 ± 2.9
Pendulum1 6.4 ± 0.3 23.8 ± 1.2 16.4 ± 0.8 8.1 ± 0.4 36.1 ± 0.3 29.5 ± 0.2
Cartpole1 8.1 ± 0.6 6.6 ± 0.4 9.8 ± 0.7 20.3 ± 11 16.5 ± 0.4 17.9 ± 0.8
3-link1 0.3 ± 0.1 0 ± 0 0.5 ± 0.1 1.3 ± 0.2 0 ± 0 1.8 ± 0.3
The following figures depict 5 instances (randomly chosen from the 10 trained models) of the learned latent space representations for both the noiseless and the noisy planar system from the PCC, RCE, and E2C models.
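For reference, the training details listed above can be collected into a single configuration block. Everything below is transcribed from the text; values the text does not state (network widths, number of epochs, etc.) are deliberately omitted rather than guessed.

```python
# PCC training hyper-parameters, transcribed from the appendix text above.
PCC_CONFIG = {
    "batch_size": 128,
    "optimizer": {"name": "adam", "lr": 5e-4, "betas": (0.9, 0.999), "eps": 1e-8},
    "l2_coef": 1e-3,
    "aux_vae_coef": 0.01,        # small VAE term that stabilizes the latent scale
    "lambda_p": 1.0,             # prediction weight
    "lambda_c": 7.0,             # consistency weight (swept over {1, 3, 7, 10})
    "lambda_cur": 1.0,           # curvature weight (untuned)
    "curvature_noise_std": 0.1,  # sigma for the (z_bar, u_bar) perturbations
    "deterministic_ce_coef": 0.3,  # swept over {0.1, 0.3, 0.5} on Planar
}
```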
Learning embedding for control with high-dimensional observations
1,193
scitldr
The interplay between inter-neuronal network topology and cognition has been studied deeply by connectomics researchers and network scientists, and is crucial towards understanding the remarkable efficacy of biological neural networks. Curiously, the deep learning revolution that revived neural networks has not paid much attention to topological aspects. The architectures of deep neural networks (DNNs) do not resemble their biological counterparts in the topological sense. We bridge this gap by presenting initial results of Deep Connectomics Networks (DCNs) as DNNs with topologies inspired by real-world neuronal networks. We show high classification accuracy obtained by DCNs whose architecture was inspired by the biological neuronal networks of C. Elegans and the mouse visual cortex.

Recent advancements in neural network models have emerged through research in network architectures, optimization, and generalization techniques, with convolutional layers inspired by receptive fields and functional architecture in the cat's visual cortex. However, the field of deep neural networks, with all its neuro-biologically inspired building blocks, has mostly left the topology story out. Curiously, in the Cambrian explosion of neural network architectures in the post-AlexNet era, none seem to be inspired by the ideas prevalent in the domain of brain connectomics. The field of neuroscience was drawn into the network sciences when Watts and Strogatz introduced the small-world network model, an example of which is the neuronal network of the nematode Caenorhabditis elegans (C. Elegans). This idea was a sharp departure from the literature at the time, as it considered a network model which was neither completely regular nor completely random: a model of complex networks. Complex networks with small-world topology serve as an attractive model for the organization of brain anatomical and functional networks because a small-world topology can support both segregated/specialized and distributed/integrated information processing. Interestingly, while applications of complex-network-based modeling have been well explored by the neuroscience community, they have been largely unexplored by the machine learning community as an avenue for designing and understanding deep learning architectures. Bridging the gap between neural connectomics and deep learning, we propose initial findings on designing neural network wiring based on connectomic structures, at the intersection of network science, neuroscience and deep learning, and test our findings on image classification.

Small-World Networks. By rewiring regular networks to introduce higher entropy and disorder, Watts and Strogatz proposed small-world networks with high clustering and small average path length. The model is analogous to six degrees of separation in the small-world phenomenon. Small-world networks are present in C. Elegans's connectome, power grid networks, and protein interactions.

Small-world models in neuroscience. Human brain structural and functional networks follow a small-world configuration, and this small-world model captures individual cognition and exhibits a physiological basis. In the field of developmental psychology, the literature shows that small-world modules and hubs are present during the mid-gestation period, and early brain network topology can predict later behavioral and cognitive performance. Using the Erdos-Renyi (ER) (Erdős & Rényi, 1960), Barabasi-Albert (BA) (Albert & Barabási, 2002), and Watts-Strogatz (WS) models, Xie et al.
applied random graphs to image classification and showed that randomly wired neural networks achieve competitive accuracy on the ImageNet benchmark. The success of ResNets and DenseNets, among the first convolutional neural networks (CNNs) to surpass human-level performance on ImageNet, was largely attributed to creative wiring patterns, with skip connections between multiple localized convolutional layers that are analogous to long-range connections across dense localized clusters, akin to small-world structure in neuronal networks. Inspired by small-world structures in deep CNNs, we construct DCNs based on biological neuronal network patterns and determine their effectiveness in image classification.

Figure 1: Wired graphs adopt connectomics structure from the mouse and C. Elegans connectomes. Colored nodes indicate localized clusters in DCNs with small-world structures.

We obtain small-world models of the mouse visual cortex and C. Elegans connectomes. The C. Elegans neuronal network includes 2D spatial positions of the rostral ganglia neurons, while the mouse primary visual cortex was characterized with electron microscopy. We treat the neuronal networks of both C. Elegans and the mouse visual cortex as undirected graphs, which we convert to directed acyclic graphs (DAGs) in the same manner as Xie et al.: we randomly assign vertex indices and set each edge to point from the smaller index to the larger index, which enforces a partial ordering on the vertices. We introduce source and sink nodes by connecting them to the vertices with in-degree 0 and out-degree 0, respectively, to ensure a single input and a single output through the DAG. The source broadcasts copies to the input nodes, and the sink performs an unweighted average over the output nodes.
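A minimal sketch of this undirected-to-DAG conversion using networkx is given below; the connectome itself is stood in for by a small Watts-Strogatz graph, whereas in practice the connectome adjacency data would be loaded instead.

import random
import networkx as nx

g = nx.watts_strogatz_graph(n=30, k=4, p=0.75)   # stand-in for a connectome graph

# Randomly assign vertex indices and orient each edge from the smaller index
# to the larger one; this enforces a partial ordering, so the resulting
# directed graph is acyclic.
perm = list(g.nodes)
random.shuffle(perm)
rank = {v: i for i, v in enumerate(perm)}
dag = nx.DiGraph()
dag.add_nodes_from(g.nodes)
dag.add_edges_from((u, v) if rank[u] < rank[v] else (v, u) for u, v in g.edges)

# Connect a source to all in-degree-0 nodes, and all out-degree-0 nodes to a
# sink, giving the module a single input and a single output.
inputs = [v for v, d in dag.in_degree() if d == 0]
outputs = [v for v, d in dag.out_degree() if d == 0]
dag.add_edges_from(("source", v) for v in inputs)
dag.add_edges_from((v, "sink") for v in outputs)
assert nx.is_directed_acyclic_graph(dag)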
With the exception of our choices in graph topology and number of layers, we inherit our architecture from the "small regime" RandWire architecture of Xie et al., which includes two conv layers, followed by randomly wired modules, and a fully connected softmax layer. Our modifications are the use of one conv layer instead of two, and the use of a single "random wiring" module as opposed to three. Our single "random wiring" module consists of one of the topologies described in the previous subsection. As in the RandWire architecture, each node in the graph performs a weighted sum of its inputs, followed by a ReLU-conv-BN triplet transformation with a 3x3 separable convolution. Copies of the processed output are then broadcast to the other nodes. We train for 50 epochs with the Adam optimizer, a learning rate of 0.001, a batch size of 32, and half-period cosine learning rate decay. For the first conv block before the DAG, we use a 2D conv with kernel size 3, stride 2 and padding 1, followed by BN and ReLU. We evaluated a model with one conv block and a fully connected layer without any DAG as a baseline. Furthermore, we evaluated the C. Elegans and mouse visual cortex DCNs and compared them with the best graph structure in RandWire, the Watts-Strogatz network. These were evaluated on MNIST, while the C. Elegans model was also evaluated on Fashion-MNIST and KMNIST. In the generation of random graphs, Xie et al. found that WS graphs performed better than ER and BA graphs. We thus compared DCNs with a simple convolutional graph-free CNN and with WS graphs as baselines, and we observe that biologically wired DCNs outperform both baselines.

It could be argued that the accuracy improvement could be attributed to the increased number of parameters, so we further conduct experiments where we freeze the graph, thus keeping the same number of parameters as the graph-free CNN baseline. While the frozen WS model performs worse than the baseline, the mouse visual cortex model performs better than the baseline even when frozen, suggesting that the graph topology is significant and independent of the number of parameters. For the C. Elegans DCN, we evaluated the performance across different datasets and obtained consistently competitive results on MNIST, KMNIST, and Fashion-MNIST. Figure 3 shows the distribution of our results compiled from 10 training trials on each dataset. The mean test accuracy on MNIST was 99%, while we achieved 93% on KMNIST and 90% on Fashion-MNIST. We demonstrated initial findings from applying networks inspired by real-world neuronal network topologies to deep learning. Our experiments show the trainability of a DNN based on the neuronal network of C. Elegans and the visual cortex of a mouse, with and without freezing the parameters of the graph modules, which outperforms WS graphs with good theoretical small-world properties. In future work, we will examine more principled methods for constructing a DAG from these networks and examine the impact of spectral properties of the graph topologies used, both in the architectures we proposed and in the RandWire architecture, while extending to other connectomes.
Initial findings at the intersection of network neuroscience and deep learning: networks wired like the C. Elegans and mouse visual cortex connectomes learn to recognize handwritten digits.
1,194
scitldr
Convolutional neural networks and recurrent neural networks are designed with network structures well suited to the nature of spatial and sequential data, respectively. However, the structure of standard feed-forward neural networks (FNNs) is simply a stack of fully connected layers, regardless of the feature correlations in the data. In addition, the number of layers and the number of neurons are manually tuned on validation data, which is time-consuming and may lead to suboptimal networks. In this paper, we propose an unsupervised structure learning method for learning parsimonious deep FNNs. Our method determines the number of layers, the number of neurons at each layer, and the sparse connectivity between adjacent layers automatically from data. The resulting models are called Backbone-Skippath Neural Networks (BSNNs). Experiments on 17 tasks show that, in comparison with FNNs, BSNNs can achieve better or comparable classification performance with far fewer parameters. The interpretability of BSNNs is also shown to be better than that of FNNs.

Deep neural networks have made breakthroughs in all kinds of machine learning tasks BID13 BID22, specifically with convolutional neural networks (CNNs) for tasks with spatial data BID17 and recurrent neural networks (RNNs) for tasks with sequential data. One of the key reasons for the effectiveness of CNNs and RNNs is the well-designed network structures together with the parameter sharing schemes. For example, in the convolution layers of CNNs, each neuron is connected to a local region in the input volume instead of all the input neurons. Besides, the neurons in the same channel share the same set of weights. This design utilizes the local and "stationary" properties of spatial data and consequently forms effective feature extractors. In addition, it also prevents CNNs from having an exploding number of parameters as the networks become deeper and deeper. However, in practice there are also many data which are neither spatial nor sequential, and hence the only applicable neural networks are the standard feed-forward neural networks (FNNs). In contrast to CNNs and RNNs, the network structure of FNNs is simple. It consists of multiple layers of neurons, and each layer is fully connected to the next layer up, without considering any correlations in the data or among neurons. This network structure has two main shortcomings. The first is that there can be high connection redundancy. As the number of layers and the number of neurons at each layer increase, the number of parameters increases quickly, which can cause severe overfitting. The other shortcoming is that ignoring all the correlations existing in the data weakens the model's strength (as a feature extractor) and hurts the model's interpretability. We are interested in learning parsimonious deep feed-forward neural networks. The goal is to learn FNNs which contain as few parameters as possible. Parsimonious FNNs are desirable for several reasons. Firstly, fewer parameters can ease overfitting. Secondly, parsimonious FNNs require less storage and computation than FNNs, which makes it possible to run them on devices like mobile phones. Lastly, parsimonious FNNs can have very flexible structures that differ from task to task depending on the specific data. This would help the models fit the data well and also have good interpretability. In general, it is desirable to solve a problem using the simplest model possible because it implies a good understanding of the problem.
FIG0: The sparse connections (x − h1, h1 − h2) form the Backbone path. The narrow fully-connected layers (x − h3, h1 − h3, h2 − h3) are the Skip-paths. The number of units at h3 is much smaller than that at x, h1 and h2.

Learning parsimonious FNNs is challenging mainly because we need to determine the sparse connectivity between layers. Network pruning is a potential way to achieve this. However, it requires starting from a network which is much larger than necessary for the task at hand. This wastes a lot of computation on useless connections. In addition, network pruning is not able to learn the number of units and the number of layers. In this paper, we assume that data are generated by a sparse probabilistic model with multiple layers of latent variables, and view the feed-forward network to be built as a way to approximate the relationships between the observed variables and the top-level latent variables in the probabilistic model. The level-1 latent variables induce correlations among the observed variables. Therefore, it is possible to determine them by analysing how the observed variables are correlated. Similarly, by analysing how the level-1 latent variables are correlated, we can determine the level-2 latent variables, and so on. We empirically show that our method can significantly reduce the number of parameters in FNNs, and the resulting model still achieves better or comparable results than FNNs in 17 classification tasks.

Network Structure Learning. One early attempt to learn network structure for FNNs is the approach based on constructive algorithms BID1 BID3 BID18. These algorithms start from a small network and gradually add new neurons to the network until some stopping criterion is met (e.g. no more performance gain is observed). They require manually-designed strategies to decide how to connect new neurons to the existing network. Besides, each time new neurons are introduced, the network needs to be retrained completely or partially. Lately, BID0 proposes to learn the structure of deep belief networks by using a cascading Indian buffet process, which is very time-consuming. In BID6, the authors propose a structure learning method, based on hierarchical latent tree analysis BID21 BID5, for RBM-like models. The method automatically determines the number of hidden units and the sparse connections between layers. However, it is not tested on deep models or in supervised learning tasks. Recently, reinforcement learning BID2 BID34 and genetic algorithms BID27 BID31 have also been applied to learning complex structures for CNNs. Generally, these methods require tens of thousands of full training runs before giving a feasible network structure, which is prohibitive for many applications.

Network Pruning. In contrast to constructive algorithms, network pruning starts from a large network and prunes connections or neurons to achieve structure learning. Optimal Brain Damage BID8 and Optimal Brain Surgeon BID12 prune connections based on the Hessian matrix of the loss function. Recently, BID11 proposes to conduct pruning by iteratively pruning connections whose absolute weight value is smaller than a threshold and retraining the network. One drawback of the method is that the retraining process is time-consuming. BID10 proposes Dynamic Network Surgery, which conducts parameter learning and connection pruning simultaneously and avoids the retraining process. Moreover, it also allows mistakenly pruned connections to be rebuilt in subsequent training.
Similar to connection pruning, neuron pruning methods have been proposed and tested in BID28; BID20. The main drawback of all these pruning methods is that they require starting from a network which is larger than necessary for the task at hand. This wastes some computation on the useless connections or neurons. In addition, the number of layers is still set manually instead of being learned from data.

In this section, we present a method for learning parsimonious deep FNNs. The method is called Parsimonious Structure Analysis (PSA). PSA learns a model which contains two parts, as shown in FIG0. The first is the main part of the model, called the Backbone. It is a wide, deep but sparse feed-forward path in the network. The second part is the Skip-paths. It consists of multiple narrow paths, each of which is a fully-connected layer. We call the resulting model a Backbone-Skippath Neural Network (BSNN). We will introduce how PSA learns the Backbone and the Skip-paths in Section 3.1 and Section 3.2 respectively.

Structure learning for neural networks is challenging since, in general, the features in data do not have relationships as apparent as those between the units in convolutional networks. In a convolutional layer, units in a feature map are only connected to a group of units strongly correlated in the spatial dimension at the layer below. This significantly reduces the number of parameters in CNNs and is essential if we want to learn a very sparse structure. The same intuition can be applied in feed-forward neural networks to general data other than images. A hidden unit detecting one particular feature, such as a co-occurrence pattern, should only be connected to a group of units that are strongly correlated in the layer below. However, unlike in CNNs where the spatial correlation is apparent, the correlations of units in feed-forward neural networks are not easy to discover. In PSA, we propose to apply Hierarchical Latent Tree Analysis (HLTA) BID21 BID5 to identify the co-occurrence patterns among units and construct hidden units to explain those patterns. PSA treats the input features as a set of isolated random variables, as in Figure 3 (a). Although no apparent spatial or sequential relationships exist among the variables, PSA seeks to discover the correlations among the variables and groups the highly correlated ones together. It starts by finding the two most correlated variables to form one group and keeps expanding the group if necessary. Let S denote the set of observed variables which haven't been included in any variable group. PSA first computes the mutual information between each pair of observed variables. Then it picks the pair in S with the highest mutual information and uses them as the seeds of a new variable group G. New variables from S are then added to G one by one, in descending order of their mutual information with variables already in G. Each time a new variable is added to G, PSA builds two models (M1 and M2) with G as the observed variables. The two models are the best models with a single latent variable and with two latent variables respectively, as shown in Figure 2. PSA computes the BIC scores of the two models and tests whether the following condition is met:

BIC(M2 | D) − BIC(M1 | D) ≤ δ,

where D is the dataset and δ is a threshold which is usually set to 3 BID5. When the condition is met, the two-latent-variable model M2 is not significantly better than the one-latent-variable model M1. Correlations among variables in G are still well modeled using a single latent variable.
Then PSA keeps on adding new variables to G. If the test fails, PSA takes the subtree in M2 which doesn't contain the newly added variable and identifies the observed variables in it as a finalized variable group. The group is then removed from S, and the above process is repeated on S until all the variables in S are partitioned into disjoint groups. An efficient algorithm, progressive EM, is used to estimate the parameters in M1 and M2.

As shown in Figure 2 (b), after the above process all the observed variables are partitioned into disjoint groups such that the variables in each group are strongly correlated and their correlations can be explained using a single latent variable. PSA then introduces a latent variable for each group and computes the mutual information among the latent variables. After that, it links up the latent variables to form a Chow-Liu tree BID7. The result is a latent tree model BID26 BID32, as shown in Figure 2 (c). Parameter estimation for the model is done using the EM algorithm. Since the model is tree-structured, EM is efficient in this process. While the above procedure gives us a one-layer network, we seek to build a deep model to capture the long-range correlations among variables. We perform the construction of the deep structure in a layer-wise manner. Using the obtained one-layer model, PSA converts the latent variables into observed ones through data completion. With this, another layer of latent variables can be learned in the same manner as the first layer, by grouping the first-layer latent variables and linking up the groups, as in Figure 2 (d). Then the two models can be stacked up to form a three-layer network, with the latent variables in the higher layer capturing longer-range correlations of the observed variables. This procedure can be conducted recursively to build a deep hierarchy until the number of variables at the top layer falls below a threshold K, and it results in a hierarchical latent tree model BID21 BID5.

While the above deep structure captures the most important correlations among the observed variables, the tree structure might cause underfitting when it comes to discovering non-trivial correlations. Thus we introduce additional links to model the salient interactions that are not captured by the tree model. For each latent variable V_l at level l, PSA considers adding connections to link it to more nodes at level l − 1. To do so, PSA considers how closely V_l is related to each node V_{l−1} at level l − 1, given the parent variable Z of V_{l−1}. The strength of correlation is measured using the conditional mutual information:

I(V_l; V_{l−1} | Z) = Σ_{v_l, v_{l−1}, z} P(v_l, v_{l−1}, z) log [ P(v_l, v_{l−1} | z) / (P(v_l | z) P(v_{l−1} | z)) ].

The top N nodes with the highest I(V_l; V_{l−1} | Z) are then connected to V_l (see the sketch below). After expanding the connections for all the layers, PSA removes the links among the variables at the top layer and uses the resulting structure for the Backbone. The process of expanding the tree structure is illustrated in Figure 4.

Figure 4: Expanding the tree structure for the Backbone path: a three-layer structure is first learned (left); new connections are added to all the layers according to empirical conditional mutual information (middle); the connections between variables at the top layer are removed and the structure is finalized (right).
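A minimal sketch of this link-expansion step, assuming a user-supplied estimator cond_mi(v, w, z) of the empirical conditional mutual information over the completed data (the estimator and the node representations are placeholders):

def expand_links(level_l, level_lm1, parent_of, cond_mi, n_links):
    # For each level-l latent variable v, rank all level-(l-1) nodes w by
    # I(v; w | parent_of[w]) and keep the top n_links as new connections.
    links = {}
    for v in level_l:
        scored = [(cond_mi(v, w, parent_of[w]), w) for w in level_lm1]
        scored.sort(key=lambda t: t[0], reverse=True)
        links[v] = [w for _, w in scored[:n_links]]
    return links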
Although the Backbone path is deep and wide, its sparsity can easily lead to a model which cannot capture global features. For example, suppose there is an essential feature which is correlated with all the input features. When the Backbone path is very sparse, even after multiple layers of projections, it is still unlikely that there will be a feature in the model which is projected from all the input features. To tackle this problem, we introduce Skip-paths into our BSNN. FIG0 shows the whole model structure of a BSNN. The path from x to h2 illustrates the Backbone path, whose sparse structure is learned using the method we propose. To complement the model's power of extracting features, narrow Skip-paths (x − h3, h1 − h3, h2 − h3) are added to the model. The Skip-paths take all the feature layers in the Backbone as input and compress them into layers with a small number of units through fully-connected projections. After the structure for the Backbone path and the Skip-paths is determined, a classification or regression layer can be added on top of all the paths, utilizing all the features extracted, as in the sketch below. The network can then be trained using back-propagation algorithms as in normal neural networks.
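A minimal PyTorch sketch of this layout, with the sparse Backbone realized through fixed binary masks on the weight matrices (in practice the masks, layer sizes and depth would come from PSA; here they are illustrative placeholders):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Linear):
    # A linear layer whose weight matrix is multiplied by a fixed binary
    # mask, realizing the sparse connectivity found by structure learning.
    def __init__(self, n_in, n_out, mask):
        super().__init__(n_in, n_out)
        self.register_buffer("mask", mask)  # shape (n_out, n_in)

    def forward(self, x):
        return F.linear(x, self.weight * self.mask, self.bias)

class BSNN(nn.Module):
    def __init__(self, d_x, d_h1, d_h2, d_skip, n_classes, mask1, mask2):
        super().__init__()
        self.b1 = MaskedLinear(d_x, d_h1, mask1)   # sparse Backbone x -> h1
        self.b2 = MaskedLinear(d_h1, d_h2, mask2)  # sparse Backbone h1 -> h2
        self.s0 = nn.Linear(d_x, d_skip)           # narrow Skip-path x -> h3
        self.s1 = nn.Linear(d_h1, d_skip)          # narrow Skip-path h1 -> h3
        self.s2 = nn.Linear(d_h2, d_skip)          # narrow Skip-path h2 -> h3
        self.out = nn.Linear(d_h2 + 3 * d_skip, n_classes)

    def forward(self, x):
        h1 = F.relu(self.b1(x))
        h2 = F.relu(self.b2(h1))
        h3 = F.relu(torch.cat([self.s0(x), self.s1(h1), self.s2(h2)], dim=1))
        # Classification layer on top of all paths.
        return self.out(torch.cat([h2, h3], dim=1))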
In the experiments, we evaluate our method on 17 classification tasks. We consider applications where the data is neither spatial nor sequential. Unlike for CNNs or RNNs, where the structure is designed to exploit spatial or sequential correlation, little effort has been put into learning the structure of feed-forward neural networks, which have highly redundant parameters and are prone to overfitting. Our proposed method learns the structure of a feed-forward neural network from data. It significantly reduces the model complexity and the number of parameters while achieving better or comparable classification performance, and leads to models which are more interpretable. TAB0 gives a summary of all the datasets used in the experiments. We choose 12 tasks for chemical compound classification and 5 tasks for text classification. All the datasets are published by previous researchers and are available to the public. There are about 12,000 environmental chemical compounds in the dataset, each represented by its chemical structure. The tasks are to predict 12 different toxic effects for the chemical compounds. We treat them as 12 binary classification tasks. We filter out sparse features which are present in fewer than 5% of the compounds, and rescale the remaining 1,644 features to zero mean and unit variance. The dataset contains a training set and a test set, and we randomly sample 500 compounds from the training data to build the validation set. All the experiments are run three times and we report the average AUC together with the standard deviations. We use 5 text classification datasets from BID33. After removing stop words, the top 10,000 frequent words in each dataset are selected as the vocabulary, and each document is represented as a bag-of-words over the vocabulary. The validation set is randomly sampled from the original training samples. We run all the experiments three times and report the average classification accuracies with the standard deviations. We compare our model with standard feed-forward neural networks (FNNs) and with sparse neural networks whose weak connections are pruned (Pruned FNNs) on the 17 classification tasks. The models involved in the experiments are as follows:
• BSNN: the Backbone-Skippath Neural Network is the resulting model of our method PSA. For all the tasks, we keep only 5% of the connections in the Backbone path and limit the number of units in the narrow Skip-paths to 100.
• FNN: the Feed-forward Neural Network is a standard fully-connected neural network. It is mainly composed of linear layers and activation functions. Each hidden unit is connected to all neurons in the previous layer. Information flows from low layers to high layers in a feed-forward manner.
• Pruned FNN: the Pruned Feed-forward Neural Network is trained using the method proposed in BID11. We first train a fully-connected FNN from scratch, and then prune out the weak connections with small absolute weight values. The pruned network is then retrained from the initial training phase while keeping the surviving weight parameters.

We learn the structure of BSNNs using PSA. The number of layers, the number of hidden units in each layer and the sparse connections between adjacent layers are automatically determined. After structure learning, we train the sparse model from scratch with random initialization of weights. As for FNNs, we treat the number of hidden units and the number of layers as hyper-parameters of the network and determine the best structure by grid search over all the combinations using validation data. TAB1 shows the space of network structures considered. Following the method in BID16, both "rectangle" and "conic" network shapes are tested. In FNNs with rectangle shape, all the hidden layers have a constant number of units. FNNs with conic shape start with the given number of hidden units and decrease it layer by layer in a geometric progression towards the output layer. For Pruned FNNs, we take the best FNNs as the initial model and perform pruning as in BID11. The pruned model is then retrained to obtain the final model. We implement all the experiments using PyTorch, which is a flexible deep learning framework. We use ReLUs BID25 BID9 as the non-linear activation functions in all the networks. Dropout BID14 BID29 with rate 0.5 is applied after each non-linear projection. We use Adam BID15 as the optimizer to optimize the training objective function. During training, we select models by monitoring the validation loss. Code will be released after the paper is accepted to the conference. TAB2 shows the classification results of BSNNs and FNNs on the Tox21 dataset. The structures of the FNNs are tuned individually for each task. It is clear that BSNNs achieve better AUC scores on 10 out of the 12 classification tasks. Even when they are not better, the average AUC value of BSNNs, e.g. on task SR.MMP, is also very close to that of FNNs. More importantly, BSNNs always contain much fewer parameters than FNNs, with parameter-count ratios ranging from 7% to 40.11%. TAB3 shows the results of BSNNs and FNNs over the 5 text classification tasks. Although BSNNs contain much fewer parameters than FNNs, BSNNs still achieve higher classification accuracy in the first two tasks, and comparable accuracy in the remaining tasks. Note that the parameter-count ratios range from 6.25% to 32.07%. This again confirms that our method learns good parsimonious deep models which can achieve high classification performance with much fewer parameters than standard FNNs. To validate our assumption that the Backbone path in BSNNs captures most of the information in the data and acts as the main part of the model, we remove the narrow Skip-paths in BSNNs and train the model to test its performance on the classification tasks. TAB4 shows the results. As we can see from the results, the Backbone path alone already achieves AUC scores or accuracies which are only slightly worse than those of full BSNNs. Note that the number of parameters in the sparse path is even much smaller than in BSNNs.
Compared with FNNs, the number of parameters is only 2% to 11%, significantly smaller than that of FNNs. However, without the Backbone, the performance of the model is significantly worse due to the insufficient capacity of the remaining narrow path. The results not only show the importance of the Backbone path in BSNNs, but also show that our structure learning method for the Backbone path is effective. To further show the effectiveness of our structure learning method, we introduce a new model called BSNN-FC. For each specific task, the structure of BSNN-FC is exactly the same as that of BSNN, except that the layers in the sparse Backbone path are changed to fully-connected layers. We train BSNN-FC on all the tasks in the Tox21 dataset, and the results are shown in TAB5. From the table we can see that, although BSNN keeps only 5% of the connections in the sparse path, it gives classification results which are very similar to those of BSNN-FC. This shows that our structure learning method successfully removes the useless connections in BSNN-FC. We also compare BSNNs with Pruned FNNs whose weak connections are pruned using the method in BID11. We start from the fully pretrained FNNs reported in TAB2, and prune the connections with the smallest absolute weight values. After pruning, the number of remaining parameters in each FNN is the same as that in the corresponding BSNN for the same task. The comparison between BSNNs and Pruned FNNs is shown in TAB5. Again, BSNNs give higher AUC scores than Pruned FNNs on 10 of the 12 classification tasks. Next we compare the interpretability of BSNNs with that of FNNs and Pruned FNNs on the text datasets. Here is how we interpret hidden units. We feed the data to the networks and do forward propagation to get the values of the hidden units corresponding to each data sample. Then, for each hidden unit, we sort the words in descending order of the correlation between the words and the hidden unit. The top 10 words with the highest correlations are chosen to characterize the hidden unit. Following BID6, we measure the interpretability of a hidden unit by considering how similar the pairs of words in its top-10 list are. The similarity between two words is determined using a word2vec model BID23 trained on part of the Google News dataset, where each word is mapped to a high-dimensional vector. The similarity between two words is defined as the cosine similarity of the two corresponding vectors. High similarity suggests that the two words appear in similar contexts. The interpretability score of a hidden unit is defined as the compactness of its characterizing words and is computed as the average similarity over all pairs of its words. The interpretability score of a model is defined as the average of the interpretability scores of all its hidden units.
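A minimal sketch of this interpretability score, assuming a pretrained word2vec model loaded as gensim KeyedVectors (the loading path and the per-unit top-word lists are placeholders):

from itertools import combinations
import numpy as np
# from gensim.models import KeyedVectors
# wv = KeyedVectors.load_word2vec_format("GoogleNews-vectors.bin", binary=True)

def unit_score(top_words, wv):
    # Average cosine similarity over all pairs of a unit's top-10 words;
    # words missing from the word2vec vocabulary are skipped.
    pairs = [(a, b) for a, b in combinations(top_words, 2) if a in wv and b in wv]
    return float(np.mean([wv.similarity(a, b) for a, b in pairs])) if pairs else 0.0

def model_score(units_top_words, wv):
    # Model-level interpretability: average over all hidden units.
    return float(np.mean([unit_score(words, wv) for words in units_top_words]))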
Table 7 reports the interpretability scores of BSNNs, FNNs and Pruned FNNs on the different datasets. The SogouNews dataset is not included in this experiment since its vocabulary consists of Chinese pinyin characters and most of them do not appear in the Google News word2vec model. We measure the interpretability scores over the top-layer hidden units. For fairness of comparison, all models have approximately the same number of top-layer hidden units. As can be seen, BSNNs significantly outperform FNNs and Pruned FNNs in most cases and are comparable if not better in the rest, showing superior coherency and compactness in the characterizations of the hidden units and thus better model interpretability. Pruned FNNs, on the other hand, reduce the interpretability of FNNs through the pruning strategy. Table 8 illustrates the qualitative interpretability by presenting the characterizing words of hidden units with high interpretability scores in BSNNs. The hidden units are very meaningful across the different datasets. For example, in the Yelp Review dataset, the first hidden unit represents negative opinions on food with words like "tasteless" and "flavorless"; the second hidden unit is more related to food itself, with words like "paprika", "crust" and "unagi". In DBPedia, the first hidden unit is found to be closely related to music, while the second one is more closely related to sport. Similar phenomena can be found in the rest of the table. This shows that the proposed BSNNs, grounded in the statistical structure of the data, have better model interpretability and take a step further towards understandable deep learning models.

Structure learning for deep neural networks is a challenging and interesting research problem. We have proposed an unsupervised structure learning method which utilizes the correlation information in data for learning parsimonious deep feed-forward networks. In comparison with standard FNNs, although the resulting model of our method contains much fewer parameters, it achieves better or comparable classification performance on all kinds of tasks. Our method is also shown to learn models with better interpretability, which is likewise an important problem in deep learning. In the future, we will generalize our method to other networks like RNNs and CNNs.
An unsupervised structure learning method for Parsimonious Deep Feed-forward Networks.
1,195
scitldr
Bayesian inference is used extensively to infer and to quantify the uncertainty in a field of interest from a measurement of a related field when the two are linked by a mathematical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to characterize mathematically. In this work we demonstrate how the approximate distribution learned by a generative adversarial network (GAN) may be used as a prior in a Bayesian update to address both these challenges. We demonstrate the efficacy of this approach by inferring and quantifying uncertainty in a physics-based inverse problem and in an inverse problem arising in computer vision. In the latter example, we also demonstrate how knowledge of the spatial variation of uncertainty may be used to select an optimal strategy for placing the sensors (i.e. taking measurements), where information about the image is revealed one sub-region at a time.

Bayesian inference is a principled approach to quantifying uncertainty in inverse problems that are constrained by a mathematical model (Kaipio and Somersalo, Dashti and Stuart, Polpo et al.). It has found applications in diverse fields such as geophysics (Gouveia and Scales, Martin et al., Isaac et al.), climate modeling (Jackson et al.), chemical kinetics, heat conduction (Wang and Zabaras), astrophysics (Loredo, Asensio Ramos et al.), materials modeling (Sabin et al.) and the detection and diagnosis of disease (Siltanen et al., Kolehmainen et al.). The two critical ingredients of a Bayesian inference problem are an informative prior representing the prior belief about the parameters, and an efficient method for sampling from the posterior distribution. In this manuscript we describe how a deep generative model (a generative adversarial network, GAN) can be used in both of these roles.

In a typical inverse problem, we wish to infer a vector of parameters x ∈ R^N from the measurement of a related vector y ∈ R^P, where the two are related through a forward model y = f(x). A noisy measurement of y is denoted by ŷ = f(x) + η, where η ∈ R^P represents noise. While the forward map is typically well-posed, its inverse is not, and hence inferring x from the measurement ŷ requires techniques that account for this ill-posedness. Classical techniques based on regularization tackle the ill-posedness by using additional information about the sought parameter field, explicitly or implicitly (Tarantola). Bayesian inference offers a different solution to this problem: it models the unknown parameter and the measurements as random variables and allows for the characterization of the uncertainty in the inferred parameter field. For additive noise, the posterior distribution of x, determined using Bayes' theorem after accounting for the observation ŷ, is given by

p^post_X(x|ŷ) = p_l(ŷ|x) p^prior_X(x) / Z,   (1)

where Z is the prior-predictive distribution of y evaluated at ŷ, p^prior_X(x) is the prior distribution of x, and p_l(ŷ|x) is the likelihood, often determined by the distribution of the error in the model, denoted by p_η (for additive noise, p_l(ŷ|x) = p_η(ŷ − f(x))). Despite its numerous applications, Bayesian inference faces significant challenges. These include constructing a reliable and informative prior distribution from a collection of prior measurements, denoted by S = {x^(1), · · ·, x^(S)}, and efficiently sampling from the posterior distribution when the dimension of x is large. In this work we consider the use of GANs (Goodfellow et al.) in addressing these challenges.
These networks are useful in this role because (a) they are able to generate samples of x from p^gen_X(x) while ensuring closeness (in an appropriate measure) between p^gen_X(x) and the true distribution, and (b) they accomplish this by sampling from the much simpler distribution of the latent vector z, whose dimension is much smaller than that of x.

Related work and our contribution: The main idea in this work is to train a GAN using the sample set S, and then use the distribution learned by the GAN as the prior distribution in Bayesian inference. This leads to a useful method for representing complex prior distributions and an efficient approach for sampling from the posterior distribution in terms of the latent vector z. The solution of inverse problems using sample-based priors has a rich history (see Vauhkonen et al., Calvetti and Somersalo for example), as does the idea of dimension reduction in parameter space (Lieberman et al.). However, the use of GANs in these tasks is novel. Recently, a number of authors have considered the use of machine-learning-based methods for solving inverse problems. These include the use of convolutional neural networks (CNNs) to solve physics-driven inverse problems (Adler and Öktem, Jin et al., Patel et al.), and GANs to solve problems in computer vision (Chang et al., Kupyn et al., Yang et al., Ledig et al., Anirudh et al., Isola et al., Zhu et al., Kim et al.). There is also a growing body of work on using GANs to learn regularizers in inverse problems (Lunz et al.) and in compressed sensing (Bora et al. 2017; 2018, Kabkab et al., Wu et al., Shah and Hegde). However, these approaches differ from ours in that they solve the inverse problem as an optimization problem and do not quantify uncertainty in a Bayesian framework. More recently, the approach described in (Adler and Öktem) utilizes GANs in a Bayesian setting; however, the GAN is trained to approximate the posterior distribution, and training is done in a supervised fashion with paired samples of the measurement ŷ and the corresponding true solution x.

Let z ∼ p_Z(z) characterize the latent vector space and let g(z) be the generator of a GAN trained using S. Then, with infinite capacity and sufficient data, the generator learns the true distribution (Goodfellow et al.); that is, p^gen_X = p^true_X. Here p_Z is the multivariate distribution of the latent vector, whose components are iid and typically conform to a Gaussian or a uniform distribution. Now consider a measurement ŷ for which we would like to infer the posterior distribution of x. For this we use (1) and set the prior distribution equal to the distribution learned by the GAN, that is, p^prior_X(x) = p^gen_X(x). Using this, it is easy to show that for any l(x),

E_{p^post_X(x|ŷ)}[l(x)] = E_{p^post_Z(z|ŷ)}[l(g(z))],   (2)

where E is the expectation operator, and

p^post_Z(z|ŷ) ∝ p_l(ŷ|g(z)) p_Z(z).   (3)

Note that the distribution p^post_Z is the analog of p^post_X in the latent vector space. The measurement ŷ updates the prior distribution for x to the posterior distribution; similarly, it updates the prior distribution for z, p_Z, to the posterior distribution p^post_Z. Equation (2) implies that sampling from the posterior distribution for x is equivalent to sampling from the posterior distribution for z and transforming the sample through the generator g; that is, x = g(z) with z ∼ p^post_Z(z|ŷ). Since the dimension of z is typically smaller than that of x, and since the operation of the generator is typically inexpensive, this represents an efficient approach to sampling from the posterior of x.
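A minimal sketch of a MAP estimate computed in the latent space, assuming an additive Gaussian noise model with variance σ² and a standard normal latent prior, so that the negative log posterior of z is ||ŷ − f(g(z))||²/(2σ²) + ||z||²/2 up to a constant (the trained generator g and the forward map f are assumed given; the optimizer settings are illustrative):

import torch

def map_estimate(g, f, y_hat, dim_z, sigma, steps=2000, lr=1e-2):
    # Minimize the negative log posterior of z by gradient descent.
    z = torch.zeros(dim_z, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        misfit = ((y_hat - f(g(z))) ** 2).sum() / (2 * sigma ** 2)
        neg_log_prior = 0.5 * (z ** 2).sum()
        (misfit + neg_log_prior).backward()
        opt.step()
    z_map = z.detach()
    return z_map, g(z_map)   # MAP in latent space and the inferred field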
As mentioned in section 1, we wish to infer and characterize the uncertainty in the vector of parameters x from a noisy measurement ŷ, where f is a known map that connects x and y. We also have several prior measurements of x, contained in the set S. To solve this problem we train a GAN with generator g(z) on S, and then sample x from the posterior p^post_X(x|ŷ) given in (1). Since GANs can represent complex distributions efficiently, this algorithm provides a means of including complex priors that are defined by samples. It also leads to an efficient approach to sampling from p^post_X(x|ŷ), since the dimension of z is typically much smaller (10^1 to 10^2) than that of x (10^4 to 10^7). In Appendix A we describe approaches based on Monte Carlo, Markov chain Monte Carlo and MAP estimation for estimating population parameters of the posterior that make use of this observation.

A problem motivated by physics: We apply our approach to the problem of determining the initial temperature distribution of a solid from a measurement of its current temperature. The inferred field (x) is represented on a 32 × 32 grid on a square, and the forward operator is defined by the solution of the time-dependent heat conduction problem with uniform conductivity. This operator maps the initial temperature to the temperature at time t = 1, and its discrete version is generated by approximating the time-dependent linear heat conduction equation using central differences in space and a backward difference in time. It is assumed that the initial temperature is zero everywhere except in a rectangular region, and it is parameterized by the horizontal and vertical coordinates of two corners of the rectangular region and the value of the temperature field within it. 50,000 initial temperature fields sampled from this distribution are included in the sample set S, which is used to train a Wasserstein GAN (WGAN-GP (Gulrajani et al.)) with an 8-dimensional latent space, a batch size of 64 and a learning rate of 0.0002. The target field we wish to infer is shown in Figure 1a. This field is passed through the forward map to generate the noise-free and the noisy versions (Gaussian noise with zero mean and unit variance) of the measured field, shown in Figures 1b and 1c. We apply the algorithms developed in the previous section to probe the posterior distribution. We first use them to determine the MAP estimate of the posterior distribution of the latent vector (denoted by z_map). The value of g(z_map) is shown in Figure 1d. Comparing this with the true value of the inferred field, shown in Figure 1a, we observe that the MAP estimate is very close to the true value. This agreement is remarkable once we recognize that the ratio of noise to signal is around 30%, and once we compare the MAP estimates obtained using an H1 or an L2 prior (see Figures 1e and 1f) with the true value.

Figure 2: Iterative image recovery with very sparse measurements using uncertainty information: for each digit, the leftmost column represents the true signal (x*) and its noisy version. The following columns represent the sparse measurement, the estimated MAP, and the estimated variance, respectively, at each iteration. The red window in the variance map is the sub-region with maximum variance.

Next, we consider the results obtained by sampling from the MCMC approximation to the posterior distribution of z defined in (3). The MCMC approximation to the mean of the inferred field, computed using (2), is shown in Figure 1g. We observe that the edges and the corners of the temperature field are smeared out.
This indicates uncertainty in recovering the values of the initial field at these locations, which can be attributed to the smoothing nature of the forward operator, especially for the higher modes. A more precise estimate of the uncertainty in the inferred field is provided by the variance of the inferred initial temperature at each spatial location. In Figure 1h we have plotted the point-wise standard deviation (the square root of the diagonal of the covariance) of the inferred field, which is our metric of quantified uncertainty. We observe that it is largest along the edges and at the corners, where the forward operator has smoothed out the initial data and thus introduced large levels of uncertainty in the location of these features. Additional examples of this inverse heat conduction problem with different target fields are shown in Appendix B.

A problem in computer vision: Next we consider a problem in computer vision that highlights the utility of estimating the uncertainty in an inference problem: that of determining the noise-free version of an image from a noisy version of a sub-region of the image. In particular, we consider an iterative version of this problem, where one sub-region is revealed in each iteration, and the user is given the freedom to select this sub-region. We use a strategy based on selecting the region where the variance is maximum, and conclude that we arrive at a very good guess for the image in very few iterations. This task falls under the active learning regime of machine learning and is useful when measurements are expensive. We use 55,000 images from the MNIST dataset to train a WGAN-GP and use it as a prior in Bayesian inference. We select an image from the complementary set, add Gaussian noise with variance 0.8, mask regions within this image, and use it to infer the original image. We utilize a forward map that is zero in the masked region and the identity everywhere else. We begin by masking the entire image, and allow the user to select the sub-region (a square with edge length equal to 1/7th of the original image) in each iteration. We report results when the user selects the sub-region with maximum variance as the sub-region to be revealed in the next iteration. For computing the variance we utilize the algorithms developed in this work. In Figure 2 we show the true image and results from several iterations for two different MNIST digits from the test set. For each iteration, we show the image that was used as the measurement and the corresponding MAP and variance determined using our algorithms. We observe that in the 0th iteration, when nothing is revealed in the measurement, the variance is largest in the center of the image, where most digits assume different intensities. This leads the user to request a measurement in this region in the subsequent iteration. Thereafter, the estimated variance decreases with each iteration, and we converge to an image which is very close to the true image in very few iterations. Additional results for the MNIST and CelebA datasets are provided in Appendix B.
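A minimal sketch of the variance-driven sensing strategy described above: among candidate square sub-regions on a grid, reveal next the one with the largest total pointwise posterior variance (the grid layout and the handling of already-revealed regions are illustrative assumptions):

import numpy as np

def next_window(var_map, win, revealed):
    # var_map: (H, W) pointwise posterior variance; revealed: boolean mask of
    # already-revealed pixels. Returns the top-left corner of the unrevealed
    # window with the largest total variance.
    H, W = var_map.shape
    best, best_score = None, -np.inf
    for i in range(0, H - win + 1, win):
        for j in range(0, W - win + 1, win):
            if revealed[i:i + win, j:j + win].any():
                continue
            score = var_map[i:i + win, j:j + win].sum()
            if score > best_score:
                best, best_score = (i, j), score
    return best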
Here we provide additional examples of the iterative image recovery scheme described in section 3 for the MNIST (figure 4) and CelebA (figure 6) datasets. We also compare the performance of this variance-driven iterative strategy to a random sampling scheme, where the next sub-region is selected randomly (figure 5).

Figure 6: Estimates of the MAP (3rd row), mean (4th row) and variance (5th row) from the limited view of a noisy image (2nd row) using the proposed method. The window to be revealed at a given iteration (shown in a red box) is selected using the variance-driven strategy. The top row indicates the ground truth. For all images, additive Gaussian noise with variance = 1 is used.
Using GANs as priors for efficient Bayesian inference of complex fields.
1,196
scitldr
We argue that the widely used Omniglot and miniImageNet benchmarks are too simple because their class semantics do not vary across episodes, which defeats their intended purpose of evaluating few-shot classification methods. The class semantics of Omniglot is invariably "characters" and the class semantics of miniImageNet, "object category". Because the class semantics are so similar, we propose a new method called Centroid Networks which can achieve surprisingly high accuracies on Omniglot and miniImageNet without using any labels at meta-evaluation time. Our results suggest that those benchmarks are not adapted for supervised few-shot classification, since the supervision itself is not necessary during meta-evaluation. The Meta-Dataset, a collection of 10 datasets, was recently proposed as a harder few-shot classification benchmark. Using our method, we derive a new metric, the Class Semantics Consistency Criterion, and use it to quantify the difficulty of the Meta-Dataset. Finally, under some restrictive assumptions, we show that Centroid Networks are faster and more accurate than a state-of-the-art learning-to-cluster method.

Supervised few-shot classification, sometimes simply called few-shot learning, consists in learning a classifier from a small number of examples. Being able to quickly learn new classes from a small number of labeled examples is desirable from a practical perspective because it removes the need to label large datasets. Typically, supervised few-shot classification is formulated as meta-learning on episodes, where each episode corresponds to two small sets of labeled examples called the support and query sets. The goal is to train a classifier on the support set and to classify the query set with maximum accuracy. The Omniglot and miniImageNet benchmarks have been heavily used to evaluate and compare supervised few-shot classification methods in the last few years. Despite their popularity and their important role in pioneering the few-shot learning field, we argue that the Omniglot and miniImageNet benchmarks should not be taken as gold standards for evaluating supervised few-shot classification, because they rely on consistent class semantics across episodes. Specifically, Omniglot classes always correspond to alphabet characters, while miniImageNet classes always correspond to object categories as defined by the WordNet taxonomy. One consequence is that benchmarks with consistent class semantics have similar class semantics between meta-training and meta-evaluation. Therefore, they are too "easy" because they do not test the ability of supervised few-shot classification methods to adapt to new class semantics. From an applications perspective, being able to adapt to changing class semantics is a desirable feature. For instance, if the application is to organize users' personal photo galleries, different users might want to sort their galleries according to different semantics, such as person identity, place or time. From a methodological perspective, we argue that supervised few-shot classification becomes an awkward task in the ideal case where the class semantics are perfectly consistent. Indeed, if the end goal of every episode is to classify the query set according to the same class semantics, do we even need the support set to define the classes, once the semantics are learned? Consider the characters below, extracted from the "Mongolian" alphabet of Omniglot. How would you group them?
This task is not particularly hard, even if the reader was never shown labeled examples prior to the task, simply because the reader was already familiar with the class semantics of interest (characters) and can generalize them to new classes. This simple observation suggests that when class semantics are consistent, few-shot learning algorithms might not actually need labels during meta-evaluation. To show this, we introduce a new learning-to-cluster method called Centroid Networks which achieves surprisingly high accuracies on Omniglot and miniImageNet without using any labels at meta-evaluation time. The method is very similar to Prototypical Networks, but the key difference is that the labels of the support set can be reliably recovered through clustering whenever the cluster semantics are consistent across tasks. A harder benchmark would involve selecting different cluster semantics across episodes. For example, consider the following set of shapes. In this case, the task remains ambiguous because the clustering semantics (e.g. shape, color, border style) have not been specified. To classify such a set requires either supervision, such as a labeled support set, or somehow knowing the class semantics beforehand. Following that spirit, the Meta-Dataset, a collection of 10 datasets, was recently proposed as a harder and more realistic few-shot classification benchmark. Among other things, such as variable numbers of ways and shots, a key difficulty of the Meta-Dataset is that class semantics vary across episodes, since episodes are generated from a randomly selected dataset. We propose to use Centroid Networks to benchmark how hard this dataset is. In particular, we suggest looking at the gap between the performance of Prototypical Networks and Centroid Networks, which we call the class semantics consistency criterion (CSCC). Our contributions are as follows:
• We first show that Centroid Networks, our proposed approach for performing clustering without labels at meta-evaluation time, can beat a state-of-the-art learning-to-cluster method in the setting of a known number of equally-sized clusters, while being easier to train and orders of magnitude faster to run.
• We show that it is possible to achieve surprisingly high accuracies on Omniglot and miniImageNet without using any labels at meta-evaluation time, using Centroid Networks. This is captured by our proposed metric, the class semantics consistency criterion (CSCC), which is the first to quantify how easy a few-shot classification benchmark is. This highlights the need for harder benchmarks which actually test the ability of supervised few-shot classification methods to adapt to new class semantics.
• We report CSCC on the recently proposed Meta-Dataset, to assess whether it is indeed a harder benchmark for few-shot classification.

Supervised clustering. Supervised clustering is defined in prior work as "learning how to cluster future sets of items [...] given sets of items and complete clusterings over these sets". That line of work uses a structured SVM to learn a similarity metric between pairs of items, then runs a fixed clustering algorithm which optimizes the sum of similarities of pairs in the same cluster. In follow-up work, K-Means is used as the clustering algorithm. A main difference with our work is that we learn a nonlinear embedding function, whereas they assume linear embeddings. Other work also goes under the name of supervised clustering, although it solves a very different problem.
That work proposes a clustering algorithm which repeatedly presents candidate clusterings to a "teacher" and actively requests feedback (supervision).

Learning to cluster. The recent deep learning literature has preferred the term "learning to cluster" to "supervised clustering". Although the task is still the same, the main difference is the learning of a similarity metric using deep networks. Because of this aspect, these works are often classified as falling within the "metric learning" literature. Hsu et al. propose a Constrained Clustering Network (CCN) for learning to cluster based on two distinct steps: learning a similarity metric to predict whether two examples are in the same class, and optimizing a neural network to predict cluster assignments which tend to agree with the similarity metric. CCNs obtained state-of-the-art results when compared against other supervised clustering algorithms; we will thus use CCN as a strong baseline. In our experiments, Centroid Networks improve over CCN on their benchmarks, while being simpler to train and computationally much cheaper.

Semi-supervised & constrained clustering. Semi-supervised clustering consists of clustering data with some supervision of the form "this pair of points should/should not be in the same cluster". Some methods take the pairwise supervision as hard constraints, while others (including CCN) learn metrics which tend to satisfy those constraints. See also the related work sections of those papers.

Supervised few-shot classification. For the unsupervised few-shot classification task, our method may be compared to the supervised few-shot classification literature. In particular, we have compared with Prototypical Networks, which were a source of inspiration for Centroid Networks. Our work is also related to follow-up work on Semi-Supervised Prototypical Networks, in which the support set contains both labeled and unlabeled examples. In this work, we go beyond this by requiring no labels to infer centroids at evaluation time.

Sinkhorn K-Means. The idea of formulating clustering as minimizing a Wasserstein distance between empirical distributions has been proposed several times in the past. Earlier work makes explicit some theoretical links between K-Means and the Wasserstein-2 distance. The most similar work to Sinkhorn K-Means is Regularized Wasserstein-Means, but that work uses another method for solving optimal transport. Specifically using Sinkhorn distances for clustering has even been suggested before. However, as we could not find an explicit description of Sinkhorn K-Means anywhere in the literature, we coin the name and explicitly state the algorithm in Section 5.1. To our knowledge, we are the first to use Sinkhorn K-Means in the context of learning to cluster and to scale it up to more complex datasets like miniImageNet. Note that our work should not be confused with Wasserstein K-Means and similar variants, which consist in replacing the squared L2 base-distance in K-Means with a Wasserstein distance.

Meta-Learning and Unsupervised Learning. Finally, some recent work has explored combinations of unsupervised learning and meta-learning to address various other tasks. One line of work proposes a method to meta-train an unsupervised representation learning model that produces useful features for some given task. That is, at evaluation time, their method produces features without requiring labels, much like Centroid Networks produce centroids without requiring labels.
The difference with their method thus lies in the addressed task: we focus on clustering, while they consider the task of representation/feature learning. Other work considers the opposite: meta-learning that requires no labels for meta-training but that delivers methods that require labels to be run at evaluation time. Specifically, they propose unsupervised approaches to generate episodes for supervised few-shot classification, while we use supervised data to learn an unsupervised clustering algorithm.

The main point of this paper is to discuss the class semantics consistency of few-shot classification benchmarks. Recall the visual examples from the introduction, where we asked the reader to cluster similar images together. The more consistent the class semantics are across episodes, the easier it should be to cluster them. Therefore, for the purpose of evaluating semantics consistency, we propose to consider additional categorization tasks for existing few-shot classification benchmarks. The most common categorization task is supervised few-shot classification, where episodes come with a small training (support) set S = (X_S, Y_S) and a small validation (query) set Q = (X_Q, Y_Q), where X_S, X_Q denote images or data, and Y_S, Y_Q the associated labels. The task is to predict labels for validation images X_Q, and the algorithm has access both to the support set images X_S and labels Y_S. Finally, the predicted labels are compared against Y_Q, and the accuracy is returned. From now on we call this metric the supervised accuracy in order to distinguish it from the clustering and unsupervised accuracies introduced below.

Few-Shot Clustering Task. The task is to cluster the query images X_Q, without access to the support set (X_S, Y_S). For evaluation, the predicted clusters are matched with the ground-truth clusters (which can be obtained from Y_Q) by searching for the one-to-one ground-truth cluster/predicted cluster mapping (i.e. permutation) which results in the highest accuracy. Finding the optimal permutation can be done efficiently using the Hungarian algorithm. The resulting accuracy is called the clustering accuracy; this is a common metric in the literature on learning to cluster. See Figure 1 for an illustration. Few-shot clustering is the simplest clustering task defined here, and can be seen as an episodic version of the learning-to-cluster task. However, clustering accuracy cannot be meaningfully compared with supervised accuracy. On one hand, few-shot clustering is harder than supervised few-shot classification because the support set cannot be used. On the other hand, it may be easier because the query set is clustered jointly (vs. independent predictions for supervised few-shot classification). In particular, 1-shot clustering is trivial because each point already belongs to its own cluster, whereas supervised 1-shot classification is not. Therefore, we propose the unsupervised few-shot classification task, which is by construction strictly harder than supervised few-shot classification.

Unsupervised Few-Shot Classification Task. The task is to cluster the support set images X_S, then to associate each query set image x_Q with one of the predicted clusters. For evaluation, the optimal permutation between predicted clusters and ground-truth clusters (which can be obtained from Y_S) is found in order to maximize the corresponding support set accuracy. Then the unsupervised accuracy is computed after relabeling the query set predictions and comparing them with Y_Q.
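To make the matching step concrete, the following sketch computes clustering accuracy with the Hungarian algorithm via SciPy; the function name and interface are our own illustration, not code from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred, n_clusters):
    """Accuracy under the best one-to-one matching of predicted clusters to classes."""
    # counts[i, j] = number of points with ground-truth class i placed in predicted cluster j
    counts = np.zeros((n_clusters, n_clusters), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    # The Hungarian algorithm minimises cost, so negate counts to maximise matches.
    rows, cols = linear_sum_assignment(-counts)
    return counts[rows, cols].sum() / len(y_true)
```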
Unsupervised accuracy can be compared with supervised accuracy because unsupervised few-shot classification is strictly harder than supervised few-shot classification (the support set classes Y_S are not available and need to be inferred). See Figure 2 for an illustration. We will use this metric to define our novel measure for the difficulty of few-shot learning benchmarks.

Prototypical Networks, or Protonets, is one of the simplest and most accurate supervised few-shot classification methods. The only learnable component of Protonets is the embedding function h_θ : X → Z which maps images to an embedding (feature) space. Given a supervised task T = (K, M, N, S_labeled, Q) to solve, Protonets compute the average embedding (the prototype) of each class j on the support set, c_j = (1/|S_j|) Σ_{i : y_i = j} h_θ(x_i). Each point x from the query set is then classified according to the softmax of its squared distance to the prototypes, p_θ(y = j | x) ∝ exp(−||h_θ(x) − c_j||²). Protonets are trained end-to-end by minimizing the log-loss on the query set.

In this section, we describe our method and explain how it can be applied to few-shot clustering and unsupervised few-shot classification. Centroid Networks (or CentroidNets) consist of two modules: a trainable embedding module and a fixed clustering module. The fact that the only trainable component of Centroid Networks is the embedding function makes implementation and training very simple. The embedding module is the same as in Prototypical Networks and consists of a neural network h_θ : X → Z which maps data (images) x to features z = h_θ(x) in the embedding space. The clustering module takes as input the embedded data (z_i)_{1≤i≤N} and outputs a set of centroids (c_j)_{1≤j≤K} (representatives of each cluster) as well as the (soft) assignment p_{i,j} of each point z_i to each centroid. We use the Sinkhorn K-Means algorithm as our clustering module (Section 5.1).

We propose Sinkhorn K-Means as the clustering module of Centroid Networks. It takes as input a set of N points x_i ∈ R^d (typically learned embeddings) and outputs a set of K centroids c_j ∈ R^d, similarly to K-Means, which can be used to cluster points. Sinkhorn K-Means is based on the Sinkhorn distance (or regularized Wasserstein distance), described more in depth in Appendix A.1. The differences between Sinkhorn and regular K-Means, and their formulations as constrained optimization problems, are discussed in Appendix B.1. In particular, we expect Sinkhorn K-Means to improve performance on the considered tasks (Section B.2).

Step 1: Finding Centroids. We propose an Expectation-Maximization-style procedure to find the centroids which minimize the Sinkhorn distance (Section A.1) between the empirical distribution defined by the data, p(z) = (1/N) Σ_{i=1}^{N} δ_{z_i}, and the empirical distribution defined by the centroids, q(c) = (1/K) Σ_{j=1}^{K} δ_{c_j}. We alternate descent on assignments and centroids. Minimization in the assignments is a call to the Sinkhorn algorithm (Algorithm 1), instead of the usual greedy argmin for K-Means. Minimization in the centroids amounts to setting them equal to the weighted average of the points assigned to them. For simplicity, Algorithm 2 describes the procedure in the case where clusters are balanced (e.g. Omniglot and miniImageNet). Typically, we initialize centroids around zero and add a tiny bit of Gaussian noise to break symmetries. When clusters are not balanced but the cluster weights are known (e.g. Meta-Dataset), the weights can be passed to the Sinkhorn distance (see Algorithm 1). All details can be found in the code. Algorithm 1, Sinkhorn(x, c, γ), computes the entropy-regularized (Wasserstein-2) transport plan between the two empirical distributions. Algorithm 2, Sinkhorn K-Means, alternates the two steps until convergence: run Sinkhorn to obtain the soft assignments p, then
update the centroids c_j to minimize the transport cost; upon convergence, return the centroids c and the assignments p.

Step 2: Clustering Points. Once the centroids are computed, we propose different ways to cluster the data points:

• Softmax conditionals. The conditional probability of point i being assigned to centroid j is given by a softmax on their distance, p(y_i = j | z_i) ∝ exp(−||z_i − c_j||² / T). We add an extra temperature parameter T > 0; larger temperatures yield more uniform assignments. This is the way points are classified in Prototypical Networks.
• Sinkhorn conditionals. The conditional probability of point i being assigned to centroid j is given by normalizing the optimal transport plan p_{i,j} computed previously, p(y_i = j | z_i) = p_{i,j} / Σ_{j'} p_{i,j'}. Although there is no temperature parameter to tune, the Sinkhorn algorithm has a regularization parameter γ > 0, which has a similar effect as the temperature, since using either amounts to rescaling the distance matrix ||h_θ(x_i) − c_j||².

Using Sinkhorn conditionals favors balanced clusters, whereas using Softmax conditionals provides no such guarantees. Given a few-shot clustering or unsupervised few-shot classification episode, we embed the raw data, z_i = h_θ(x_i). Then, we cluster the support set in embedding space using Sinkhorn K-Means. Finally, we associate query set points with predicted clusters by finding their nearest centroid in embedding space. We compute the clustering and unsupervised accuracies following Section 3.1.

The most intuitive way to train Centroid Networks would be to train them end-to-end, by backpropagating through Sinkhorn K-Means, which contains two nested loops. Although this is technically possible after defining smoother versions of the clustering/unsupervised accuracies (by replacing the 0-1 loss with a cross-entropy), we did not have much success with this approach. Instead, we opt for the much simpler approach of training with a supervised surrogate loss. Since we have access to the ground-truth classes during meta-training, we can simply replace the centroids c_j with the average of each class, c_j = (1/|{i : y_i = j}|) Σ_{i : y_i = j} h_θ(x_i). Then, we classify the query set points using either Softmax or Sinkhorn conditionals. Finally, we compute the log-loss on the query set and minimize it using gradient descent. The supervised surrogate loss is very simple, as it removes both the need to find the optimal cluster-class permutation and the need to backpropagate through Sinkhorn K-Means.

Center Loss. In addition to the supervised surrogate loss, we use a center loss penalty. Center losses have been used in metric-learning methods to penalize the variance of each class in embedding space; they have been used, for instance, in addition to the standard log-loss for learning discriminative face embeddings. Using a center loss makes sense because there is no obvious reason why the surrogate loss (basically a cross-entropy) by itself would make the classes more compact in embedding space. However, compact clusters are an implicit assumption of K-Means and Sinkhorn K-Means, which makes the penalty essential for good validation performance. We find experimentally that the center loss helps improve clustering and unsupervised accuracies, at the cost of making supervised accuracy slightly worse (we don't use it for training Protonets).
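A compact NumPy sketch of the procedure described above, assuming balanced clusters (uniform marginals) and squared Euclidean costs; the function names and default values are our own, and this is an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_plan(cost, gamma=1.0, n_iters=100):
    """Entropy-regularized transport plan between N uniform points and K uniform centroids."""
    n, k = cost.shape
    log_kernel = -cost / gamma
    u, v = np.zeros(n), np.zeros(k)            # log-domain scaling vectors
    log_r, log_c = -np.log(n), -np.log(k)      # balanced marginals: 1/N per point, 1/K per cluster
    for _ in range(n_iters):                   # Sinkhorn iterations with the log-sum-exp trick
        u = log_r - logsumexp(log_kernel + v[None, :], axis=1)
        v = log_c - logsumexp(log_kernel + u[:, None], axis=0)
    return np.exp(u[:, None] + log_kernel + v[None, :])   # p[i, j]; rows sum to 1/N, columns to 1/K

def sinkhorn_kmeans(z, k, gamma=1.0, n_steps=20, seed=0):
    """EM-style loop: Sinkhorn for soft balanced assignments, weighted averages for centroids."""
    rng = np.random.default_rng(seed)
    c = 1e-3 * rng.standard_normal((k, z.shape[1]))       # init near zero; noise breaks symmetry
    for _ in range(n_steps):
        cost = ((z[:, None, :] - c[None, :, :]) ** 2).sum(axis=-1)
        p = sinkhorn_plan(cost, gamma)
        c = (p.T @ z) / p.sum(axis=0)[:, None]            # centroids = weighted averages of points
    return c, p
```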
As a preliminary attempt to quantify how consistent class semantics are across episodes, we define the Class Semantics Consistency Criterion as the following ratio:

CSCC := (unsupervised Bayes accuracy) / (supervised Bayes accuracy),

where we define the supervised and unsupervised Bayes accuracies as the highest possible accuracies on a given supervised few-shot classification task and its associated unsupervised counterpart. Except for degenerate cases, the CSCC always varies between 0 (classes are totally inconsistent) and 1 (classes are totally consistent). In practice, we approximate the CSCC by replacing the Bayes accuracies with the supervised accuracy of Protonets and the unsupervised accuracy of Centroid Networks, under the constraint that their backbone networks have exactly the same architecture. We point out that approximate CSCC is not rigorously defined and can potentially depend significantly on the chosen architecture and hyperparameters. However, we see it as a first step towards quantifying the difficulty of few-shot learning benchmarks. The motivations for introducing unsupervised Bayes accuracy and CSCC are discussed more in depth in Sections B.5 and B.4.

We first confirm that Centroid Networks is a reasonable approach by comparing it against a state-of-the-art few-shot clustering method (Section 6.1). Then, we attempt to use Centroid Networks to evaluate the difficulty (in terms of class semantic variability) of current few-shot learning benchmarks (Sections 6.2 and 6.3). In all cases, we train our method by minimizing the surrogate loss with Softmax conditionals combined with a center loss (both improve accuracies). Our method requires little to no tuning across datasets, and to show this, we run all experiments with the following default hyperparameters: temperature T = 1, Sinkhorn regularization γ = 1, center loss weight of 1, and Sinkhorn conditionals for training. The only exceptions are for Omniglot-CCN, where we take a center loss weight equal to 0.1, and for the Meta-Dataset, for which we take γ = 0.1 and a center loss weight of 0.01. Please refer to Appendix B.6 for an ablation study on the effect of each trick.

| Few-shot clustering method | Omniglot-CCN 20-shot Clustering Acc. |
|---|---|
| K-Means (raw features) | 21.7%* |
| CCN (KCL) | 82.4%* |
| CCN (MCL) | 83.3%* |
| Centroid Networks (ours, Protonet arch.) | 86.8% ± 0.6% |
| Centroid Networks (ours, CCN arch.) | 86.6% ± 0.6% |

Table 1: Test clustering accuracies on the Omniglot evaluation set, using the Constrained Clustering Network splits (much harder than the Ravi splits). Numbers with a star (*) are those reported in the CCN paper. We compared both the Protonet Conv4 architecture and the CCN architecture, which has more filters; the differences between the two architectures are not significant. All our accuracy results are averaged over 1000 test episodes with a fixed model, and are reported with 95% confidence intervals.

| Few-shot clustering method | Omniglot 5-way 5-shot | Omniglot 20-way 5-shot | miniImageNet 5-way 5-shot |
|---|---|---|---|
| K-Means (raw images) | 45.2% ± 0.5% | 30.7% ± 0.2% | 41.4% ± 0.4% |
| K-Means (Protonet features) | 83.5% ± 0.8% | 76.8% ± 0.4% | 48.7% ± 0.5% |
| Centroid Networks (ours) | 99.6% ± 0.1% | 99.1% ± 0.1% | 64.5% ± 0.7% |

Table 2: Few-shot clustering accuracies for Centroid Networks vs. K-Means on raw data and on Protonet features.

We start with experiments designed to validate that Centroid Networks are a competitive approach to learning how to categorize examples without labels (an important assumption behind our proposed CSCC).
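Computing the approximate criterion from reported accuracies is then a one-line ratio; the helper below is our own illustration, and the numbers are the miniImageNet figures quoted in Table 3.

```python
def approximate_cscc(supervised_acc, unsupervised_acc):
    """Approximate CSCC: unsupervised accuracy of CentroidNets over supervised accuracy of Protonets."""
    return unsupervised_acc / supervised_acc

print(approximate_cscc(supervised_acc=0.687, unsupervised_acc=0.553))  # ~0.80 for miniImageNet
```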
[Table 1] For this, we consider the specific task of few-shot clustering and compare with Constrained Clustering Networks (CCNs), a recent state-of-the-art learning-to-cluster method, on the same task, which we will denote Omniglot-CCN. Omniglot is resized to 32 × 32 and split into 30 alphabets for training (background set) and 20 alphabets for evaluation. The Omniglot-CCN task consists in clustering each alphabet of the evaluation set individually, after training on the background set. This makes it a harder task than standard few-shot classification on Omniglot, because characters from the same alphabet are harder to separate (more fine-grained), and because the number of ways varies from 20 to 47 characters per set. We run Centroid Networks with all default hyperparameters, except a center loss weight of 0.1. The results given in Table 1 show that Centroid Networks outperform all "flavors" of CCN by a margin (86.8% vs. 83.3% at best). Furthermore, Centroid Networks are also simpler and about 100 times faster than CCN, because they require embedding the data only once, instead of iteratively minimizing a KCL/MCL criterion. However, we wish to point out that Centroid Networks are less flexible than CCNs, as they require specifying the number of clusters and making an assumption on the sizes of the clusters (in our case, equal size). For this reason, CCNs are more appropriate for the general setting where such assumptions cannot be made. That said, Centroid Networks are particularly suited to our CSCC metric for few-shot classification benchmarks, as they are very efficient and otherwise require strictly less information than a supervised few-shot learning method would. Note that extending Centroid Networks to be as flexible as CCNs would be a promising direction for developing new learning-to-cluster methods.

Table 3: Using Centroid Networks to solve Omniglot and miniImageNet without using meta-testing labels (unsupervised few-shot classification). We compare the unsupervised test accuracy of Centroid Networks with the supervised test accuracy of Protonets. Centroid Networks can solve Omniglot almost perfectly (CSCC close to 100%), which suggests the class semantics are extremely consistent, while there is a small gap for miniImageNet (CSCC close to 80%), which suggests the class semantics are fairly consistent. Accuracy results are averaged over 1000 test episodes with a fixed model, and are reported with 95% confidence intervals.

[Table 2] We also compare Centroid Networks with two baselines on Omniglot and miniImageNet (standard splits, see Appendix A.2). We run K-Means with K-Means++ initialization directly on the raw images and show that it performs very poorly even on Omniglot, which confirms the importance of learning an embedding function. We also run K-Means on pretrained Protonet features, which is a more interesting comparison, since at the highest level, our method could be described as just clustering Protonet embeddings. It turns out that Centroid Networks still outperform K-Means on the embeddings by a substantial margin on both Omniglot (99.6% vs. 83.5% for 5-way) and miniImageNet (64.5% vs. 48.7%), which confirms the importance of combining Sinkhorn K-Means and the center loss trick.
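In outline, the K-Means baseline amounts to the following sketch with scikit-learn; the episode embeddings and the helper name are our own illustration. Predicted cluster ids would then be matched to ground truth with the Hungarian step shown earlier.

```python
from sklearn.cluster import KMeans

def kmeans_episode_baseline(embeddings, n_ways, seed=0):
    """Cluster one episode's features (raw pixels or Protonet embeddings) with K-Means++."""
    km = KMeans(n_clusters=n_ways, init="k-means++", n_init=10, random_state=seed)
    return km.fit_predict(embeddings)  # cluster ids in {0, ..., n_ways - 1}
```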
We now come to the main contribution of this work, which is to assess the difficulty of current few-shot learning benchmarks, using CSCC. [Table 3] We report the performance of CentroidNets on unsupervised few-shot classification tasks on Omniglot and miniImageNet. We also report the performance of Prototypical Networks for the standard supervised few-shot classification tasks. This comparison between the models is fair (Section B.3) even if they solve different tasks, because unsupervised few-shot classification is strictly harder than supervised few-shot classification (Section 3.1). Data splits and architectures are the same as in Protonets, and can be found in Section A.2 of the Appendix. For Omniglot, CentroidNets achieves nearly the same accuracy as Prototypical Networks despite using none of the labels of the support set. The CSCCs of 0.99 are nearly equal to the maximum, which supports our hypothesis that Omniglot has nearly perfect class semantics consistency. For miniImageNet, CentroidNets can still achieve an unsupervised accuracy of 55.3%, which is of the same order as the supervised accuracy of 68.7%, despite not using any labels from the support set. The CSCC of 0.80 is not as high as for Omniglot, but still suggests a fairly high amount of class semantics consistency.

[Table 4] We use approximate CSCC to evaluate the difficulty of the Meta-Dataset, under the two originally proposed settings: meta-train on ILSVRC, and meta-train on all datasets. Traffic Sign and MSCOCO are evaluation-only datasets which are excluded from meta-training. Meta-evaluation is done on all datasets. We use the same ResNet-18 architecture and hyperparameters for Protonets and CentroidNets, but CentroidNets is trained with an additional center loss of 0.001 during meta-training. We pretrain on ILSVRC for the all-datasets setting.

Table 4: Using Centroid Networks to solve the Meta-Dataset without using meta-testing labels (unsupervised few-shot classification) under the two originally proposed settings: training on ILSVRC, and training on all datasets except Traffic Sign and MSCOCO. We report supervised test accuracy for Prototypical Networks (reproduced from the official implementation), unsupervised test accuracy for Centroid Networks (ours), and approximate CSCCs (their ratio). All numbers are in percentages; all accuracies are averaged over 600 test episodes.

The first observation is that supervised/unsupervised accuracies and approximate CSCCs are higher when training on all datasets instead of training on ILSVRC only, except for ILSVRC (since it is used in both trainings), Traffic Sign, and MSCOCO. The fact that the CSCC for Traffic Sign and MSCOCO is actually lower when training on more datasets either means that training on ILSVRC alone can sometimes be better for transfer learning, or is a consequence of the sampling scheme not being optimal. Aircraft (53%) and Omniglot (69%) are the ones that benefit the most from training on all datasets in terms of CSCCs. We compare approximate CSCCs inside each sub-table. High CSCCs in the all-datasets sub-table suggest that those datasets have very self-consistent class semantics: Omniglot and Aircraft both have very high CSCCs (more than 80%), while ILSVRC (57%) and Fungi (56%) have the lowest ones. It is less clear how to interpret the CSCCs in the ILSVRC-only sub-table, but high CSCCs might suggest that those datasets have very similar class semantics to ILSVRC. Except for Omniglot (69%), most datasets have fairly low CSCCs. It is interesting to note that some datasets have a higher CSCC than ILSVRC itself. We leave it to future work to determine whether this means that ILSVRC is so inconsistent that it is easier to adapt to other datasets, or whether it is a shortcoming of our metric.
We proposed Centroid Networks for performing clustering without labels at meta-evaluation time, with the idea of using it to assess the difficulty of few-shot classification benchmarks. First, we validate our method by beating a state-of-the-art few-shot clustering method in the setting of a known number of equally-sized clusters, with the advantage that our method is easier to train and orders of magnitude faster to run. Then, we define the CSCC metric from the unsupervised accuracy of Centroid Networks, and use it for quantifying the difficulty of current few-shot learning benchmarks in terms of class semantics consistency. We find that Omniglot has extremely consistent class semantics (CSCC close to 1), and that miniImageNet has fairly high CSCC as well (CSCC close to 0.8), which backs the intuition that its class semantics invariably correspond to object categories. Our results on the Meta-Dataset benchmark show that it has much lower CSCCs than Omniglot in all settings, and lower CSCCs than miniImageNet in the ILSVRC-only setting, which confirms that the Meta-Dataset has harder and more diverse class semantics. As future work, we would like to improve the CSCC by making it more interpretable and less dependent on the backbone architectures.

A APPENDIX: BACKGROUND AND IMPLEMENTATION DETAILS

A.1 SINKHORN DISTANCES. The Wasserstein-2 distance is a distance between two probability masses p and q. Given a base distance d(x, x'), we define the cost of transporting one unit of mass from x to x' as d(x, x')². The Wasserstein-2 distance is defined as the cheapest cost for transporting all mass from p to q. When the transportation plan is regularized to have large entropy, we obtain Sinkhorn distances, which can be computed very efficiently for discrete distributions (entropy regularization makes the problem strongly convex). Sinkhorn distances are the basis of the Sinkhorn K-Means algorithm, which is the main component of Centroid Networks. In Algorithm 1, we describe the Sinkhorn algorithm in the particular case where we want to transport mass from the weighted data points (x_i, R_i) to the weighted centroids (c_j, C_j), where R_i and C_j are the weights of the data points and centroids, respectively. In practice, we leverage the log-sum-exp trick in the implementation to avoid numerical underflows.

A.2 DATA SPLITS AND ARCHITECTURE FOR OMNIGLOT AND miniIMAGENET EXPERIMENTS. For the embedding network for Omniglot and miniImageNet, we reuse exactly the same simple convolutional architecture as in Prototypical Networks, which consists of four stacked blocks (2D convolution with 3 × 3 kernel and stride 1, BatchNorm, ReLU, and 2 × 2 max-pooling), the output of which is flattened (a code sketch of this embedding network is given below). This results in a 64-dimensional embedding for Omniglot and a 1600-dimensional embedding for miniImageNet. For miniImageNet, we pretrain the embedding function using Prototypical Networks to solve 30-way problems instead of 5, which is the trick recommended in the original paper. For the other settings, we train from scratch. Omniglot consists of a total of 1623 classes of handwritten characters from 50 alphabets, with 20 examples per class. Images are grayscale with size 28 × 28. We follow the same protocol as in Prototypical Networks and use the "Vinyals" train/validation/test splits. We consider 5-way 5-shot and 20-way 5-shot settings (15 query points per class). miniImageNet consists of 100 classes, each containing 600 color images of size 84 × 84. We follow the "Ravi" splits: 64 classes for training, 16 for validation, and 20 for testing.
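For reference, a PyTorch sketch consistent with the four-block architecture described above; layer choices such as padding follow the standard Protonet implementation, but the class itself is our illustration.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One block: 3x3 conv (stride 1), BatchNorm, ReLU, 2x2 max-pooling.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class Conv4Embedding(nn.Module):
    def __init__(self, in_ch=1, hidden=64):
        super().__init__()
        self.blocks = nn.Sequential(
            *[conv_block(in_ch if i == 0 else hidden, hidden) for i in range(4)]
        )

    def forward(self, x):
        # 28x28 grayscale Omniglot -> 64-dim; 84x84 RGB miniImageNet (in_ch=3) -> 1600-dim.
        return self.blocks(x).flatten(start_dim=1)
```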
For miniImageNet, we consider the 5-way 5-shot setting (15 query points per class).

We present our version of the Sinkhorn K-Means optimization problem, and compare it with regular K-Means. Both of them can be formulated as a joint minimization in the centroids c_j ∈ R^d (real vectors) and the assignments p_{i,j} ≥ 0 (scalars), which specify how much of each point x_i is assigned to centroid c_j:

• K-Means:
min_{c,p} Σ_{i,j} p_{i,j} ||x_i − c_j||², subject to p_{i,j} ∈ {0, 1/N} and Σ_j p_{i,j} = 1/N.
Note that compared to the usual convention, we have normalized the assignments p_{i,j} so that they sum up to 1.

• Sinkhorn K-Means:
min_{c,p} Σ_{i,j} p_{i,j} ||x_i − c_j||² − γ H(p), subject to p_{i,j} ∈ [0, 1/N], Σ_j p_{i,j} = 1/N and Σ_i p_{i,j} = 1/K,
where H(p) = −Σ_{i,j} p_{i,j} log p_{i,j} is the entropy of the assignments, and γ ≥ 0 is a parameter tuning the entropy penalty term.

Sinkhorn vs. regular K-Means. The first difference is that K-Means only allows hard assignments p_{i,j} ∈ {0, 1/N}, that is, each point x_i is assigned to exactly one cluster c_j. On the contrary, the Sinkhorn K-Means formulation allows soft assignments p_{i,j} ∈ [0, 1/N], but with the additional constraint that the clusters have to be balanced, i.e., the same amount of points is soft-assigned to each cluster, Σ_i p_{i,j} = 1/K. The second difference is the penalty term −γH(p), which encourages high-entropy solutions, i.e., points will tend to be assigned more uniformly over clusters, and clusters more uniformly over points. Adding entropy regularization allows us to compute p_{i,j} very efficiently using the Sinkhorn algorithm. Note that removing the balancing constraint Σ_i p_{i,j} = 1/K in the Sinkhorn K-Means objective would yield a regularized K-Means objective with coordinate update steps identical to EM in a mixture of Gaussians (with p_{i,j} updated using softmax conditionals).

The ablation study in Section B.6 shows that using Sinkhorn K-Means instead of K-Means is the most decisive factor in improving performance. There are mainly two possible explanations:
1. Sinkhorn K-Means is particularly well adapted to the few-shot clustering and unsupervised few-shot classification problems because it strictly enforces the fact that the classes have to follow a given distribution (e.g. balanced), whereas K-Means does not.
2. Sinkhorn K-Means is likely to converge better than K-Means due to the entropy-regularization factor of the Sinkhorn distance.
To illustrate the second point, consider the limit case where the regularization factor of the Sinkhorn distance goes to infinity (γ → ∞). Then, the assignments in Sinkhorn K-Means become uniform (each cluster is assigned equally to all points), and all the centroids converge, in one step, to the average of all the points, reaching the global minimum. This is by no means a rigorous proof, but the limit case suggests that Sinkhorn K-Means converges well for large enough γ. This behavior is to be contrasted with K-Means, for which convergence is well known to depend largely on the initialization.

One could argue that comparing CentroidNets with ProtoNets is unfair because using Sinkhorn K-Means leads to centroids which are weighted averages, whereas ProtoNet prototypes are restricted to be unweighted averages. Therefore, we run Centroid Networks on miniImageNet, but under the constraint that centroids be unweighted averages of the data points. To do so, starting from the soft weights, we reassign each data point only to its closest centroid, and compute the unweighted averages. The comparison between ProtoNets and CentroidNets is now fair in the sense that both prototypes and centroids use unweighted averages.

• Unsupervised accuracy on miniImageNet is 0.5508 ± 0.0072 for the weighted average and 0.5497 ± 0.0072 for the unweighted average. The difference is not significant.
• Clustering accuracy on miniImageNet is 0.6421 ± 0.0069 for the weighted average and 0.6417 ± 0.0069 for the unweighted average. The difference is also not significant.

This experiment suggests that using weighted averages does not bring an unfair advantage, and therefore does not invalidate our comparison. More generally, instead of trying to tune ProtoNets and CentroidNets as well as possible, we try to make ProtoNets and CentroidNets more comparable by using the same architectures and representation.

We define the unsupervised Bayes accuracy of an unsupervised few-shot classification task distribution as the highest achievable unsupervised accuracy. Just like the usual Bayes error is limited by label noise, the unsupervised Bayes accuracy is limited by the cluster-semantic noise of a task. For illustration, consider the following unsupervised few-shot classification task distribution (a short simulation is given after this discussion):
1. Uniformly sample a random dimension 1 ≤ j ≤ D (hidden to the algorithm).
2. Sample random binary vectors x_i of dimension D (entries i.i.d., equal to 1 with probability 1/2; shown to the algorithm) and split them between support and query set.
3. Assign each vector x_i the binary label y_i = x_{i,j} (hidden to the algorithm).
4. The goal is to cluster the support set and associate query set points with the support clusters.
Because the algorithm does not know which dimension j was sampled (i.e. the class semantic), it does not know how to cluster the support set. Therefore, it is just as good to make random predictions on the query set, and the unsupervised Bayes accuracy is 0.5. Now, consider the same task distribution, except that the dimension index j is always fixed to 1. After meta-training, the algorithm can learn a representation mapping each vector to the value of its first dimension only. The support set can then be clustered by grouping all 1s together and all 0s together, and each query point can be unambiguously assigned to one of the clusters. The resulting unsupervised Bayes accuracy is 1. Both task distributions would become equivalent if the algorithm had access to the class semantics j. Therefore, the two unsupervised few-shot tasks differ in difficulty only because of the uncertainty/variability in class semantics, and this is reflected in the difference in unsupervised Bayes accuracy.

CSCC attempts to quantify the importance of the supervision information, which is not directly related to the difficulty of the few-shot learning problem. Indeed, the difficulty of few-shot learning problems can come from many aspects, including but not limited to:
• visual difficulty (how hard it is to train a classifier on all the classes at the same time);
• class semantic consistency (how much the class semantics vary).
If the goal is to design meaningful benchmarks for supervised few-shot classification methods, it is important to understand which aspects make those benchmarks difficult. For instance, consider the limit case of a supervised few-shot classification task in which the same 5 classes are sampled over and over again. The visual difficulty might be extremely high (e.g. very fine-grained classification), which might lead people to believe that it is a good benchmark (because it is hard and all methods achieve low accuracies). However, because there is no variability at all in the class semantics, such a benchmark does not evaluate at all the capacity of few-shot methods to adapt to new tasks.
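A short simulation of the toy construction above (our own illustration) makes the two regimes explicit.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 8, 32
j = rng.integers(D)                     # hidden class semantic: the label-defining coordinate
x = rng.integers(0, 2, size=(N, D))     # support vectors with i.i.d. Bernoulli(1/2) entries
y = x[:, j]                             # hidden labels y_i = x_{i,j}

# A label-free learner that always clusters on coordinate 0:
pred = x[:, 0]
acc = max((pred == y).mean(), ((1 - pred) == y).mean())  # best of the two cluster-label matchings
# If j were fixed to 0 across episodes, acc would be 1; with j resampled per episode,
# no label-free strategy can beat chance (0.5) in expectation.
```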
Our intent is not to introduce CSCC as a proxy for task difficulty (supervised accuracy of SOTA models might be fine for that purpose). Rather, we introduce the CSCC as an attempt to decouple the different axes of difficulty. Dividing the unsupervised Bayes accuracy by the supervised Bayes accuracy is a rough way of normalizing away the visual difficulty (which affects both supervised and unsupervised accuracies) and focusing on the supervision information only.

We conduct an ablation study on Omniglot (5-way 5-shot) and miniImageNet (5-way 5-shot) to determine the effect and importance of the various proposed tricks and components:

• K-Means vs. Sinkhorn K-Means. From comparing O3 to O4, O1 to O5, M6 to M7, and M1 to M8, it appears that using Sinkhorn K-Means instead of K-Means++ is the most beneficial and important factor.
• Center loss. From comparing O2 to O3, O5 to O6, O4 to O8, M7 to M11, and M8 to M9, the center loss seems to be beneficial (although the significance is at the limit of the confidence intervals). It is the second most influential factor.
• Softmax vs. Sinkhorn conditionals (at meta-training and meta-evaluation time). For training, it is not clear whether using Sinkhorn or Softmax conditionals is beneficial. For evaluation, from comparing M1 to M2, M3 to M4, and M5 to M6, it seems that Sinkhorn conditionals are better if the metric is clustering accuracy, while Softmax conditionals might be better if the metric is unsupervised accuracy, although the effect seems to be negligible (note how the color patterns are inverted).
Omniglot and miniImageNet are too simple for few-shot learning because we can solve them without using labels during meta-evaluation, as demonstrated with a method called centroid networks
1,197
scitldr
We learn to identify decision states, namely the parsimonious set of states where decisions meaningfully affect the future states an agent can reach in an environment. We utilize the VIC framework, which maximizes an agent's "empowerment", i.e. the ability to reliably reach a diverse set of states, and formulate a sandwich bound on the empowerment objective that allows identification of decision states. Unlike previous work, our decision states are discovered without extrinsic rewards, simply by interacting with the world. Our results show that our decision states are: 1) often interpretable, and 2) lead to better exploration on downstream goal-driven tasks in partially observable environments.
Identify decision states (where agent can take actions that matter) without reward supervision, use it for transfer.
1,198
scitldr
Inverse problems are ubiquitous in natural sciences and refer to the challenging task of inferring complex and potentially multi-modal posterior distributions over hidden parameters given a set of observations. Typically, a model of the physical process in the form of differential equations is available but leads to intractable inference over its parameters. While the forward propagation of parameters through the model simulates the evolution of the system, the inverse problem of finding the parameters given the sequence of states is not unique. In this work, we propose a generalisation of the Bayesian optimisation framework to approximate inference. The resulting method learns approximations to the posterior distribution by applying Stein variational gradient descent on top of estimates from a Gaussian process model. Preliminary results demonstrate the method's performance on likelihood-free inference for reinforcement learning environments.

We consider the problem of estimating parameters θ of a physical system according to observed data y. The forward model of the system is approximated by a computational model that generates data ŷ_θ based on the given parameter settings θ. In many cases, the corresponding likelihood function p(ŷ_θ | θ) is not available, and one resorts to likelihood-free methods, such as approximate Bayesian computation (ABC), conditional density estimation, etc. For certain applications in robotics and reinforcement learning, however, the number of simulations might be limited by resource constraints, imposing challenges to current approaches. Recent methods address the problem of efficiency in the use of simulations either by constructing conditional density estimators from joint data {θ_i, ŷ_i}_{i=1}^N, using, for example, mixture density networks, or by sequentially learning approximations to the likelihood function and then running Markov chain Monte Carlo (MCMC). In particular, earlier work derives an active learning approach using Bayesian optimisation (BO) to propose parameters for simulations, reducing the number of simulator runs from the typical thousands to a few hundred. This paper investigates an approach to combine the flexible representative power of variational inference methods with the data efficiency of Bayesian optimisation. We present a Thompson sampling strategy to sequentially refine variational approximations to a black-box posterior. Parameters for new simulations are proposed by running Stein variational gradient descent (SVGD) over samples from a Gaussian process (GP). The approach is also equipped with a method to optimally subsample the variational approximations for batch evaluations of the simulator models at each round. In the following, we present the derivation of our approach and preliminary experimental results.

Our goal is to estimate a distribution q that approximates a posterior distribution p(θ|y) over simulator parameters θ ∈ Θ ⊂ R^d given observations y from a target system. We assume no access to a likelihood function p(y|θ), but only to a discrepancy measure ∆_θ between simulator outputs and observations (footnote 1). We take a Bayesian optimisation approach to find the optimal q* by minimising a discrepancy between q and the target p:

q* ∈ argmin_{q ∈ Q} S(q, p(·|y)),   (1)

where S represents the kernelised Stein discrepancy (KSD) (footnote 2). We solve Equation 1 via a black-box approach which does not require gradients of the target distribution p nor its availability in closed form.

Footnotes: 1. The ABC literature offers a plenitude of choices for ∆_θ; our choice for the experiments is given in Section 3. 2. Background details on the KSD are presented in the appendix (Section A.1).
The resulting BO algorithm is composed of a GP model to form an approximate likelihood, a Thompson sampling acquisition function to select candidate distributions, and a kernel herding procedure to optimally select samples of simulator parameters. A standard BO approach would place a GP to model the map from q's parameters to the corresponding KSD. However, such a parameter space holds a weak connection with the original Θ and is possibly higher-dimensional. We choose to bypass this step by learning q directly via Stein variational gradient descent (SVGD). Applying SVGD directly to Equation 1 would require gradients of the target log p. In our case, we have that:

∇_θ log p(θ|y) = ∇_θ log p(θ) + ∇_θ log p(y|θ).   (2)

As p(y|θ) is unavailable, we use a GP to model g : θ → −∆_θ, which defines a synthetic likelihood function, i.e.:

p̃(y|θ) ∝ exp(g(θ)).   (3)

The simulations-observations discrepancy ∆_θ is possibly expensive to evaluate and not differentiable, due to the need of running a black-box simulator. The GP then provides an approximation which is cheap to evaluate and whose sample functions are differentiable for smooth kernels, allowing us to apply SVGD in the BO loop.

We propose selecting candidate distributions q_n ∈ Q based on a GP posterior sampling approach known as Thompson sampling, which has been successfully applied to BO problems in the case of selecting point candidates θ ∈ Θ. Thompson sampling accounts for uncertainty in the model by sampling functions from the GP posterior. For models based on finite feature maps, such as sparse spectrum Gaussian processes (SSGPs) (Lázaro-Gredilla et al.), the Thompson sampling approach reduces to sampling weights w_n from a multivariate Gaussian (Appendix A.2), so that:

g_n(θ) = µ_0(θ) + w_n^T φ(θ)   (4)

constitutes a sample from the posterior of an SSGP with mean function µ_0 and feature map φ. Recalling the objective in Equation 1, we can now define the acquisition function as:

h(q | D_{n−1}) := −S(q, p̂_n),   (5)

where p̂_n(θ) ∝ p(θ) e^{g_n(θ)} corresponds to an approximation to the target posterior p(θ|y) based on g_n.

SVGD represents the variational distribution q as a set of particles {θ_i}_{i=1}^M forming an empirical distribution. The particles are initialised as i.i.d. samples from the prior p(θ) and optimised via a sequence of smooth perturbations:

θ_i ← θ_i + η_t ζ(θ_i), where ζ(θ) := (1/M) Σ_{j=1}^M [ k(θ_j, θ) ∇_{θ_j} log p̂_n(θ_j) + ∇_{θ_j} k(θ_j, θ) ],   (6)

where k(θ, θ') = φ(θ)^T φ(θ') corresponds to the SSGP kernel, and η_t is a small step size. Intuitively, the first term in the definition of ζ guides the particles to the local maxima of log p̂_n, i.e. the modes of p̂_n, while the second term encourages diversification by repelling nearby particles. In contrast to the true posterior, the gradients of log p̂_n are available as:

∇_θ log p̂_n(θ) = ∇_θ log p(θ) + ∇_θ g_n(θ).   (7)

Gradients of sample functions are always defined for SSGP models with differentiable mean functions, since the feature maps are smooth. For a uniform prior, which we use in experiments, also note that ∇_θ log p(θ) = 0 almost everywhere. Having selected a distribution q_n, we need to run evaluations of ∆_θ from samples θ ∼ q_n to update the GP model with. Representing q by a large number of particles M improves exploration of the approximate posterior surface, allowing SVGD to find distant modes. However, we should not use the large number of particles directly as sample parameters to run the simulator with, since simulations are expensive. Therefore, we select S ≪ M query parameters {θ_{n,j}}_{j=1}^S ⊂ Θ by optimally subsampling the candidate q_n. Kernel herding constructs a set of samples which minimises the error of empirical estimates for expectations under a given distribution q. This error is bounded by the maximum mean discrepancy (MMD) between the kernel embedding of q and its subsampled version. In the case of SSGPs, the kernel herding procedure reduces to the following algorithm:

θ_{j+1} ∈ argmax_{θ ∈ Θ} α_j^T φ(θ),  α_{j+1} := α_j + ψ_q − φ(θ_{j+1}),   (8)

for j ∈ {0, ..., S − 1} and α_0 = ψ_q := E_{θ∼q}[φ(θ)]. However, instead of naively herding with the original feature map φ, we make use of the information encoded by the GP to select samples which will be the most informative for the model. Such information is encoded by the GP posterior kernel:

k_N(θ, θ') := φ(θ)^T Σ_N φ(θ'),   (9)

where Σ_N is the covariance matrix of the GP weights posterior (defined in Appendix A.2). The posterior kernel provides an embedding for q given by:

ψ_q^n := E_{θ∼q}[Σ_N^{1/2} φ(θ)],   (10)

which accounts for the previously observed locations in the GP data. Replacing ψ_q by ψ_q^n in Equation 8 yields the sampling scheme we use. The distributional Bayesian optimisation (DBO) algorithm is summarised in Algorithm 1.

Algorithm 1: DBO
Input: f, Q, N, S
for n ∈ {1, ..., N} do
    q_n ∈ argmax_{q ∈ Q} h(q | D_{n−1})         # Maximise acquisition function via SVGD
    {θ_{n,i}}_{i=1}^S ∼ Herding(q_n, D_{n−1})    # Sample simulator parameters
    for i ∈ {1, ..., S} do
        z_{n,i} := −∆_{θ_{n,i}}                  # Collect observation
    end for
end for

In this section, we present experimental results evaluating DBO in synthetic data scenarios. As a baseline, we compare the method against mixture density networks (MDNs), which were learnt from a dataset of parameters sampled from the prior p(θ) and the corresponding simulator outputs ŷ_θ. The experiment evaluates the proposed method on OpenAI Gym's cart-pole environment (footnote 3). We fix a given setting for its physics parameters θ_real and generate a dataset y of 10 trajectories by executing randomly sampled actions. Summary statistics γ were the same as in prior work. The discrepancy was set to ∆_θ := ||γ_θ − γ_real||² / σ². We place a uniform prior p(θ) with bounds specific to the environment. Further details on the experimental setup are described in Appendix B. An open-source implementation can be found online (footnote 4). The results in Figure 1 show that the method is able to recover the target system's curve-shaped posterior and obtains better approximations to the posterior than the MDN approach. We can also see that, in terms of MMD, DBO provides a better overall approximation than the MDN.

This paper presented a Bayesian optimisation approach to inverse problems on simulator parameters. Preliminary results demonstrated the potential of the method for reinforcement learning applications. In particular, the results show that distributional Bayesian optimisation is able to provide a more sample-efficient approach than other likelihood-free inference methods when inferring parameters of a classical reinforcement learning environment. Future work includes further scalability and theoretical analysis of the method.

Footnotes: 3. OpenAI Gym: https://gym.openai.com 4. Code available at: https://github.com/rafaol/dbo-aabi2019

A.2 SSGP POSTERIOR. We can represent any function g sampled from the SSGP posterior as g(θ) = µ_0(θ) + w^T φ(θ), θ ∈ Θ, where the posterior over the weights w given observations z_N = [z_1, ..., z_N]^T is Gaussian:

w | D_N ∼ N(w_N, Σ_N), with w_N := A_N^{-1} Φ_N z_N, Σ_N := σ² A_N^{-1}, and A_N := Φ_N Φ_N^T + σ² I,

with Φ_N = [φ(θ_1), ..., φ(θ_N)] ∈ R^{2M×N} and σ² denoting the observation noise variance. The posterior over g is then determined by:

µ_N(θ) := µ_0(θ) + φ(θ)^T w_N,  σ_N²(θ) := σ² φ(θ)^T A_N^{-1} φ(θ),

where µ_N and σ_N² denote the GP posterior mean and variance functions, respectively.
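To make the particle update of Equation 6 concrete, here is a minimal NumPy sketch of one SVGD step; for simplicity it uses an RBF kernel rather than the SSGP feature-map kernel, and grad_log_p stands for ∇ log p̂_n (for a uniform prior, this is just ∇g_n). The code is our own illustration, not the released implementation.

```python
import numpy as np

def rbf(a, b, h=1.0):
    return np.exp(-np.sum((a - b) ** 2) / h)

def svgd_step(theta, grad_log_p, h=1.0, eta=1e-2):
    """One SVGD perturbation: theta has shape (M, d); grad_log_p maps (d,) -> (d,)."""
    M, _ = theta.shape
    phi = np.zeros_like(theta)
    for i in range(M):
        for j in range(M):
            k = rbf(theta[j], theta[i], h)
            grad_k = k * (-2.0 / h) * (theta[j] - theta[i])   # gradient of k w.r.t. theta_j
            phi[i] += k * grad_log_p(theta[j]) + grad_k       # attraction + repulsion terms
    return theta + eta * phi / M
```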
Fast incremental updates: To reduce the time complexity of the update of the GP posterior when given a new observation pair (θ_{N+1}, z_{N+1}), prior work proposes using the rank-one decomposition:

A_{N+1} = A_N + φ(θ_{N+1}) φ(θ_{N+1})^T.

To avoid recomputing A_{N+1}^{-1}, one can instead keep track of its Cholesky factors. The latter allows us to update the GP posterior with time complexity O(M²), which is constant with respect to the number of data points N.
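The rank-one Cholesky update underlying this scheme is standard; below is a self-contained sketch (our own illustration) returning the Cholesky factor of L L^T + x x^T in O(M²) operations.

```python
import numpy as np

def chol_rank1_update(L, x):
    """Given lower-triangular L with A = L @ L.T, return L' with L' @ L'.T = A + x @ x.T."""
    L, x = L.copy(), x.copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```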
An approach to combine variational inference and Bayesian optimisation to solve complicated inverse problems
1,199
scitldr