Emoji suggestion systems based on typed text have been proposed to encourage emoji usage and enrich text messaging; however, such systems' actual effects on the chat experience remain unknown. We built an Android keyboard with both lexical (word-based) and semantic (meaning-based) emoji suggestion capabilities and compared these in two different studies. To investigate the effect of emoji suggestion in online conversations, we conducted a laboratory text-messaging study with 24 participants, and also a 15-day longitudinal field deployment with 18 participants. We found that lexical emoji suggestions increased emoji usage by 31.5% over a keyboard without suggestions, while semantic suggestions increased emoji usage by 125.1%. However, suggestion mechanisms did not affect the chatting experience significantly. From these studies, we formulate a set of design guidelines for future emoji suggestion systems that better support users' needs.
Many forms of text-based computer-mediated communication (CMC) lack nonverbal expressions like vocal tones, facial expressions, and gestures that are useful in face-to-face conversations. However, several studies have shown that emojis can facilitate affective communication and serve a non-verbal function in text conversations. Emojis are already widely used in text-based CMC, with nearly every instant messaging platform supporting their entry. Five billion emojis were sent per day on Facebook Messenger in 2017, and half of all Instagram comments included an emoji as of mid-2015. Many mobile keyboards offer emojis as a set of pictographic Unicode characters. As there is a large and growing set of emojis, manually searching for and selecting emojis can be a tedious task that interrupts the flow of text entry. Commercial products that automatically suggest emojis have helped the emoji entry process become more seamless. These products usually come in two variations, lexical and semantic suggestions, as shown in Table 1. With lexical suggestions (e.g., Gboard), relevant emojis appear in a candidate list based on recent keywords typed by the user. With semantic suggestions (e.g., Dango; Figure 1), proposed emojis are based on the meaning of the message's content rather than on specific keywords. Although emojis themselves are known to enrich conversations, the role that different emoji suggestion systems play has not been explored. Instead, prior work on suggestion systems has focused on retrieval precision and recall. But how do different suggestion mechanisms influence emoji usage? How do they differ in terms of usability? How do they influence the chat experience, specifically the engagement and the clarity of conversations? To investigate these questions, we implemented a keyboard capable of offering both lexical and semantic emoji suggestions. We first evaluated the performance of the two suggestion mechanisms with an online study.
Table 1. Examples of lexical and semantic emoji prediction, using sentences such as "I enjoyed the fish tonight very much!", "I love him but he just ignored me...", and "I'm tired of 'happy birthday'". With lexical prediction, the suggested emojis are related to the literal meaning of certain keywords. With semantic prediction, the suggestions focus on the meaning of the sentence.
The results confirmed that semantic suggestions were perceived as being more relevant than lexical suggestions. We then conducted an in-lab study with pairs of participants using three emoji suggestion mechanisms (no suggestions, lexical suggestions, and semantic suggestions).
We also conducted a 15-day field deployment to evaluate emoji usage in real-world settings. We found that emoji suggestion systems increased emoji usage overall, with users picking more emojis via semantic suggestions versus lexical suggestions or no suggestions. Users also felt that semantic emoji suggestions were more relevant than lexical emoji suggestions. Although semantic suggestions received more positive feedback in terms of usability than lexical suggestions, neither had a significant effect on the participants' perceived chat experience overall. The contributions of this work are: results from an online study comparing the perceived relevance of lexical and semantic suggestions; results from an in-lab study comparing emoji suggestion mechanisms within the mobile chat experience; results from a longitudinal field deployment that tracked realistic usage of emoji suggestion systems; and design guidelines for emoji suggestion systems based on the findings from our studies. Emoji-related research has become more prominent as emojis have grown in number and popularity. In this section, we review related work from three different areas: emoji usage and its effects in online communication, emoji entry techniques, and the use of machine learning for producing semantic emoji suggestions. As Unicode character pictographs, emojis are treated similarly to other characters in text-based applications. In fact, emojis can even be used in text-only locales such as URLs. That being said, emojis represent richer information than plain text and are easier to share than images, giving emojis certain advantages over other forms of communication. Emoji usage has steadily increased since emojis were introduced to the Unicode Standard in 2009. According to a report by Swiftkey in 2015, their users inputted over one billion emojis in a four-month period. Although over 800 emojis were available to users during that time, traditional "face" emojis comprised nearly 60% of all emojis sent. Roughly 70% of the messages containing emojis expressed a positive emotion, and only 15% of the messages expressed a negative emotion. Jain et al. found that emojis are used to convey all kinds of emotions, and the number of emojis used in a message could indicate the arousal of the sender. They also found that emoji combinations could be used to convey more complex expressions (e.g., a combination meaning "I'm relaxing and playing soccer"). Cramer et al. conducted an online survey with 228 respondents, finding three major reasons why people use emojis: to provide additional emotional or situational information, to change the tone of a message, and to engage the recipient and maintain their relationship. Although every emoji has an intended definition, people also use emojis in highly personalized and contextualized ways to create "shared and secret uniqueness". Wiseman and Gould cited an example where a couple used the pizza emoji to express love because they both loved eating pizza. There is no doubt that emojis extend and enrich the way people express themselves in text-based CMC. Our current work focuses on the role that suggestion mechanisms play in facilitating such expressions. Pohl et al. provide a thorough review of emoji entry techniques. The most common entry method on current commercial keyboards is grouped enumeration, wherein users can scroll through different categories to select their emojis. However, as there are over 2,800 emojis (https://emojipedia.org/stats/), visually searching and selecting emojis is a tedious process.
EmojiZoom displays all emojis at once, requiring users to zoom to select one. However, this method still fails to scale as the number of emojis increases. Querying techniques, such as text search or sketching, are implemented in many keyboards like Gboard. Users can search for emojis by sketching them or typing their intended meaning (e.g., "happy birthday" for a cake emoji). Such techniques require users to have a target emoji in mind, and the process is slow. Suggestion-based input methods have become popular in recent years. Lexical suggestions are offered by keyboards like the Apple iOS 11 keyboard. However, the suggestions do not work for all possible keywords, since keywords must be defined beforehand. For example, the pear emoji appears in Gboard's suggestion list when "pear" is typed, but it disappears if "pears" is typed.
Figure 1. The semantic emoji suggestion application Dango. When text is typed, Dango pops up a suggested emoji based on semantic message content. The user can tap on an icon to see more options.
A relatively new emoji suggestion technique that appears in products like Dango uses semantic information. Instead of relying on keywords, semantic suggestion offers emojis based on the sentiment of the whole message. This mechanism often provides affective emojis like faces. Google deployed a similar system called Smart Reply; rather than focusing on suggestions based on input, Smart Reply auto-generates replies with emojis based on the context of the conversation. To suggest emojis using semantics, emojis must be linked with the meanings of typed messages. Our keyboard implementation relies on a method from prior work called DeepMoji by Felbo et al. The implementation of their model used in this paper can be found on GitHub. There were several limitations to Felbo et al.'s approach. First, the model was built for sentiment classification tasks, so most of the emojis it predicted were related to the messages' emotions rather than their meanings. If a user typed "happy birthday," for example, the predicted emojis would be happy face emojis rather than the birthday cake emoji. Second, the model was trained to handle only 64 emojis. Although those emojis were the most common on Twitter, many of them were faces rather than objects. Nevertheless, our goal is to examine the effects of different types of suggestion mechanisms rather than to improve the mechanisms themselves, so we did not extend the output emoji set beyond the original 64. We built our Android keyboard using the open-source project AnySoftKeyboard. The keyboard interface is shown in Figure 3. The keyboard uses the default auto-correction mechanism, but the word-suggestion feature is replaced with emoji suggestions. Users can enter special characters or numbers by tapping the upper-left button; they can enter emojis by tapping the lower-left button. The overall text entry interaction of the keyboard is shown in Figure 4. As a user types in the text box, the keyboard provides word suggestions in the candidate list. When the user finishes typing a word, the keyboard suggests emojis instead of words in the candidate list. If the user picks an emoji from the list, it is inserted at the end of the message. The suggestions vary based on the mechanism in use. With semantic suggestion, the keyboard always presents five emojis after the user finishes typing a word. Suggestions are generated using the DeepMoji model running on a remote server.
The keyboard sends an HTTP POST request to the server each time the user finishes typing a word, and the server returns the top-five related emojis. (The amount of information transmitted is small, and there were no latency concerns in our implementation or studies.) With lexical suggestion, the keyboard suggests emojis only if the keyword list contains the last-typed word. If no emoji matches the last-typed word, the keyboard presents the most recent suggestions. For example, if the user types "football field," the keyboard will continue to suggest the football emoji because there is no lexical match with the word "field." If no word anywhere in the message has a match in the emoji keyword list, the keyboard provides no suggestions. Lexical suggestion is implemented using the open-source emoji library emojilib (https://github.com/muan/emojilib). The library provides a *.json file containing 1,502 emojis and their corresponding keywords. For example, the clapping emoji has the keywords "hands," "praise," "applause," "congrats," and "yay." We did not force the frequency of emoji suggestion updates to be the same for both lexical and semantic suggestion mechanisms, as these mechanisms are fundamentally different in nature. For lexical suggestion, the opportune moment for updating the emoji suggestions is straightforward: whenever a keyword has been typed. For semantic suggestion, however, it is unclear when the suggestions should be updated because it is not obvious when the user is finished typing. Thus, our keyboard updates the emoji suggestions after the user finishes typing each word, not just keywords. We also did not force an equal number of emoji suggestions across the two keyboards. Using a machine learning model for semantic suggestion returns a fixed number of emojis (five in DeepMoji), but lexical suggestion can produce a variable number of emojis. Adding extra emojis when lexical suggestion produces too few would confuse users with unrelated emojis, and conversely, trimming potentially relevant emojis from the semantic suggestions would make for a keyboard unrepresentative of its full potential. Our keyboard logs input statistics related to text and emoji entry: the number of typed characters, the number of deleted characters, the number of emojis manually added from the traditional emoji enumeration interface, and the number of emojis selected from the two suggestion lists. To respect participants' privacy, our keyboard did not log the content of any typed messages. Before having people use our keyboard, we conducted a preliminary crowd-sourced experiment to evaluate the performance of the lexical and semantic emoji suggestion systems. Specifically, we were interested in quantifying the mechanisms' human-perceived accuracy. We randomly sampled 50 tweets from the Sentiment140 dataset, of which 25 contained positive sentiments and 25 contained negative sentiments. For each tweet, lexical and semantic emoji suggestions were generated and shuffled into a list. The tweets and their corresponding emoji suggestions were added to a single survey task on Amazon Mechanical Turk. Respondents were asked to "choose as many emoji options that made sense to be added at the end of each sentence." If a respondent felt that none of the emoji suggestions were relevant, they were allowed to select "none of the above." Respondents were not told that different suggestion mechanisms were in use. We collected responses from 20 English-speaking Mechanical Turkers. The results are shown in Table 2.
Overall Relevance refers to the number of selected emojis divided by the total number of emojis shown in the survey. There were 5,000 total emojis shown by the semantic suggestion system (50 tweets × 5 emojis per tweet × 20 people); respondents felt that 52.5% of them aligned with their corresponding tweet. For the lexical suggestion system, 2,520 emojis were shown, of which 21.6% were deemed relevant, less than half the rate of the semantic suggestion system. Pick-1 Relevance examines whether any of the predicted emojis were selected for each tweet; in other words, as long as any emoji from a particular suggestion mechanism was deemed relevant for a tweet, the Pick-1 Relevance of that mechanism was 100% for that tweet. Out of 1,000 tweets (50 tweets × 20 users), semantic suggestion provided a relevant emoji 94.6% of the time. Lexical suggestion, on the other hand, only produced a relevant emoji 32.6% of the time. Top-1 Percentage is a head-to-head comparison that captures whether the most commonly selected emoji for each tweet came from the lexical or the semantic suggestion system. Semantic suggestion was the overwhelming winner, providing the more relevant emoji for 47 of the 50 tweets. These results show that Turkers generally perceived the emojis based on semantic content as more relevant than those based on keywords. This verified our hypothesis that the two suggestion mechanisms would be perceived differently. However, this study alone does not demonstrate how the different suggestion mechanisms affect the chatting experience, which leads us to our laboratory experiment. Although in-lab experiments are not generally representative of realistic conditions, they are useful for studying conversations because they allow data to be gathered from both senders and receivers. An in-lab study enabled us to explore how the emoji suggestion mechanisms might affect the two conversational sides differently. If a person uses more emojis because they find it easier to do so, a recipient might react in two ways: they might enjoy the conversation more and reciprocate, or they might enjoy the conversation less due to annoyance, distraction, or confusion. Twenty-six participants (15 females, 11 males) between 18 and 34 years old (M=28.9, SD=4.2) were recruited via emails, word-of-mouth, and convenience sampling in a large university setting. The participants were randomly divided into 13 pairs. The pairs were constructed such that the participants did not know each other and did not meet face-to-face until the end of the study. Each participant was given $8 USD as compensation for the 30-minute study. Participants were provided with Nexus 6P smartphones running Google Android 7.0. Our keyboard was installed on each phone. Wechat (https://www.wechat.com/en/) was used as the instant messaging application because Wechat provides a function to export the chat history. We used the chat history to verify the data logged by our mobile keyboard. Participants were instructed to avoid using Wechat's built-in button for emoji entry since it bypassed our keyboard's logging functionality. Participants were told that they would take part in an online chat experiment using our mobile keyboard. They chatted with another participant for three 10-minute sessions, each of which was assigned to one of three emoji suggestion conditions: no-suggestion, lexical, or semantic. The order of the conditions was fully counterbalanced across participants.
The participants were told that they could steer the conversation towards any topic of their choosing, but that a "recent activity" could be used as a starting point. The participants were also told that the only difference between the sessions would be the keyboard's emoji suggestion mechanism, but they were not told anything about the suggestion mechanisms themselves, what they were, or how they worked.
Q1. Do you communicate online with your phone (SMS/IM/Email, etc.) a lot?
Q2. Do you use emojis in online conversations a lot?
Table 3. The questionnaire about online chatting and emoji use behavior.
Q1. The chatting experience was interesting.
Q2. My attention was focused on the conversation.
Q3. I could express my emotion clearly using the keyboard.
Q4. I felt constrained in the types of expressions I could make.
Q5. I was able to get an impression of my partner.
Q6. The chatting experience excites my curiosity.
Table 4. The survey questions about the chat experience. Answers were provided via Likert scales ranging from 1 (strongly disagree) to 7 (strongly agree).
Q1. I used the emoji suggestion a lot in my typing, and it was useful.
Q2. I would like to use this system frequently.
Q3. I thought the system was easy to use.
Q4. The system did well on proposing relevant emojis.
Q5. I like the emoji suggestion system better than the no-suggestion system.
Table 5. The usability survey for the suggestion keyboards. Answers were provided via Likert scales ranging from 1 (strongly disagree) to 7 (strongly agree).
Before the conversation began, the participants were asked to fill out a questionnaire about their online chat and emoji use behaviors (Table 3). After each session, the participants filled out another questionnaire asking about their chat experience (Table 4). This questionnaire probed their engagement (Q1, Q2, and Q6) and perceived expressiveness and clarity (Q3, Q4, and Q5) regarding the chat experience. Both questionnaires were derived from prior work on CMC. When participants used lexical or semantic suggestions in a session, they also completed the usability questionnaire shown in Table 5, which was adapted from the SUS survey. At the end of the 30-minute session, participants were interviewed with two open-ended questions: "How do you like the suggestion keyboards? Do you find they affect you (in negative or positive ways) in online chatting?" and "Do you find any problems with the keyboard suggestion mechanism, or do you have any suggestions?" The study was a single-factor, three-level, within-subjects design with the suggestion mechanism as the independent variable: no-suggestion, lexical, and semantic. We utilized multiple statistical analyses according to the nature of the dependent variables: character count measures were analyzed using the aligned rank transform procedure; emoji count measures fit a Poisson distribution and were therefore analyzed with mixed model Poisson regression; Likert-scale responses were treated as ordinal measures and were therefore analyzed with mixed model ordinal logistic regression. Further specifics are given with each analysis in the results. In this section, we describe the results of the study comparing the three levels of the Suggestion factor: no suggestions, lexical suggestions, and semantic suggestions. During the study, one pair of participants did not conduct what we considered a realistic conversation. In one of their sessions, they sent only nonsensical numbers and capital letters to each other.
This participant pair was therefore excluded from our analyses, and another pair was recruited in their place. Thus, our dataset included 12 valid participant pairs, with two pairs per Suggestion order due to full counterbalancing (3! = 6 orders). We collected 12 × 3 = 36 data logs of valid sessions, together with 72 surveys regarding the chat experience and 48 usability surveys for emoji suggestion. We conducted formal analysis with open coding, in which research team members identified any themes or codes they discovered from the 48 responses to the open-ended questions. Among the 24 participants, 22 stated that they always communicate with their phone, while the other two stated that they only used their phone sometimes. Nine participants stated that they always use emojis in online conversations, 14 sometimes, and one seldom. As for how our participants normally enter emojis, 14 participants manually selected emojis from a list, one participant used lexical suggestions from the keyboard, and nine used both methods. The descriptive statistics of the logged data are shown in Table 6. Total Characters is the number of characters excluding emojis sent in the conversation; Total Emojis is the number of emojis used in the conversation, however they might have been inputted; and Selected Emojis is the number of emojis picked from the suggestion list. There was a significant effect of Suggestion on Selected Emojis (7.76, p<.05), with semantic suggestion resulting in more selected emojis than lexical suggestion. (Note that the no-suggestion condition was excluded from this analysis since it did not produce emoji suggestions.) This indicates that although the total number of emojis participants used across conditions was similar, participants selected more semantic-generated emojis than lexical-generated ones. Participants responded to the questionnaires along a 7-point Likert scale (1=strongly disagree, 7=strongly agree), so the data were analyzed using mixed model ordinal logistic regression. Surprisingly, there were no significant differences across the different Suggestion levels for any question regarding either the chat experience (Table 4) or usability (Table 5). Based on the analysis of emoji counts in the study, we found that although different suggestion levels resulted in similar amounts of inputted emojis, participants tended to pick more from semantic suggestions than from lexical suggestions. One surprising finding was that although the usage of emojis indeed affected the senders' chat experience, the suggestion type did not affect the chat experience significantly. One explanation is that different suggestion mechanisms only affect how the user inputs emojis, rather than what they input. As long as they can input the expected emojis, the chat experience is not affected. Looking at participants' interview answers, we found that participants did notice the difference between the suggestion mechanisms, and provided more positive feedback on semantic suggestions than the other conditions. Five participants mentioned that semantic suggestions were convenient and time-saving. The convenience might come from the relevance of the semantic suggestions. P13 pointed out, "The first one [semantic] is better than the second one [lexical], showing more emotion-related emojis. The second one is related to the word itself and it makes no sense to use the emoji in the conversation." Although P19 did not use many emojis during the study, she stated that "their [emojis'] appearance in suggestion bars makes me feel good."
This feedback supports our finding that people chose more emojis from semantic suggestion than from lexical suggestion. We also conducted a 15-day field deployment to explore the longitudinal effects of the different emoji suggestion systems. This study focused on the usability of the emoji suggestion systems and on their effects on emoji usage during everyday conversations. Eighteen participants (8 females, 10 males) between 18 and 43 years old (M=24.0, SD=6.4) were recruited via emails, flyers, and word-of-mouth. Inclusion criteria required that participants were able to use English as their primary language and owned a smartphone with Android 6.0 that they used on a daily basis. Those who were in the laboratory study were not allowed to participate in the field deployment due to prior exposure. The 15-day study contained three five-day periods. Participants were compensated $20 USD for each of the first two periods and $40 for the third, for a total of $80. The study was conducted as a partial within-subjects design with the suggestion mechanism as the independent variable. All of the participants used the no-suggestion keyboard in the first five-day period as a baseline (however, they could still input emojis from the emoji selection panel). During the second period, half of the participants used the lexical suggestion keyboard while the other half used the semantic suggestion keyboard. Everyone returned to the no-suggestion keyboard during the last period to determine whether they returned to their baseline behavior. In psychology terms, the study compared an ABA condition sequence to an ACA condition sequence. When participants were enrolled, they were asked to fill out the same questionnaire about online chatting and emoji usage as in the laboratory study. Participants were told that they would be using an emoji suggestion system during the field study, but that they were free to use or ignore the suggestions as they pleased. Participants were instructed to use the keyboard whenever they were typing in English and to keep their phone's network connected so they could retrieve emoji suggestions. The same usage information was logged as before (Total Characters, Total Emojis, and Selected Emojis). After participants signed the consent form, the keyboard was installed on their phone. The keyboard was configured to participants' personal preferences, including its aesthetic theme and vibration behavior. Participants met with a researcher after each five-day period to have their keyboards reconfigured to another condition and to fill out a short questionnaire about the experience (Table 7). After the second period, when emoji suggestions were provided, participants also completed the same usability survey as in the first study (see Table 5).
Survey After Period 1
1. Do you find yourself using emojis more or less often than before the study? Why?
Survey After Period 2
1. How do you like or dislike the suggestion keyboard? Do you find it affecting you (in negative or positive ways) in online communication?
2. Do you find yourself using emojis more often than before the study? Why?
3. Do you have any comments about the keyboard emoji suggestions?
Survey After Period 3
1. What do you think of the current keyboard for this period?
2. Do you find yourself using emojis more or less often than before the study? Why?
3. After the whole period, do you have any comments about the keyboard emoji suggestions?
Table 7. The survey questions after each period.
The emoji suggestions were offered only during period 2, which is why the questions are different for that period. We collected 54 data logs (18 participants × 3 periods), 18 surveys about the usability of emoji suggestions, and 54 open-ended responses that were analyzed using inductive analysis. As before, Suggestion was the independent variable with three levels: no-suggestion, lexical, and semantic. Among the 18 participants, 14 stated that they always communicate with their phone, three sometimes, and one seldom. Four participants stated that they always use emojis in online conversations, 11 sometimes, and three seldom. As for the participants' typical emoji entry method, 10 participants manually selected emojis from a list, one participant used lexical suggestions from the keyboard, and seven used both methods. The descriptive statistics for Total Characters, Total Emojis, and Selected Emojis per day show that, unsurprisingly, participants used more emojis with lexical and semantic suggestions than with no suggestions. On average, participants who used lexical suggestions in the second period increased their emoji usage by 31.5% over their baseline, while participants who used semantic suggestions increased their usage by 125.1%. We note that the average daily emoji usage seems low (fewer than five emojis per day). After looking into the data, we found that some participants used over 10 emojis per day, while other participants used fewer than one emoji per day. Furthermore, we analyzed the difference in Total Emojis between the different periods by Suggestion using Mann-Whitney U tests. Results showed that emoji usage increased significantly more with semantic suggestions than with lexical suggestions from the first to the second period (p<.001). The change between the first and third periods was not significantly different, indicating that the change in emoji usage was due to the emoji suggestions and not just time. The Likert-scale responses from the usability survey during the second period were analyzed using mixed model ordinal logistic regression. No statistically significant differences were found between the semantic and lexical suggestions for any of the questions. The quantitative results are similar to those of the in-lab study: the total emoji inputs were similar between the different suggestion levels in period 2, and users chose more semantic suggestions than lexical suggestions. Again, based on the survey results, suggestion mechanisms did not influence the online conversation experience significantly. This also provides supporting evidence for why people picked more semantic emojis in our online study. Our goal was to examine the impact of emoji suggestion on online conversations. In particular, we sought to answer two primary questions: How do emoji suggestion systems affect the chat experience? Do lexical and semantic suggestion systems affect daily emoji usage differently? We first conducted an online study to evaluate the performance of the two systems, finding that semantic emoji suggestions were perceived as more relevant than lexical emoji suggestions. We then conducted two experiments, finding that emoji usage had a stronger effect on senders than on receivers, but the suggestion system in use did not affect the overall chat experience. A possible explanation is that the suggestion levels only affect the ease of inputting an emoji.
Although participants picked more from the semantic suggestions, they could still manually pick their desired emojis if those emojis were not suggested, leading to similar numbers of total emojis inputted with the different suggestion systems. However, both our in-lab study and our field deployment revealed that the suggestion systems influenced how users reflected on their own experiences. Participants were clearly most excited about semantic suggestions. Even without knowing the details of the different suggestion systems, the participants were pleasantly surprised that the predicted emojis were related to the sentiment of their messages. During the field deployment, participants used more emojis in their daily conversations from semantic suggestions than from lexical suggestions. This finding shows that the semantic suggestions provided more relevant emojis than did the lexical suggestions. Based on feedback from the user studies, we created several design guidelines for future emoji suggestion systems:
• Suggestion Diversity. Emoji suggestion systems should suggest various types of emojis, ranging from emojis that portray objects to emojis that portray emotions. Although semantic suggestions were preferred in our study, many participants wanted the system to provide more suggestions than just face emojis. Some participants also appreciated that the lexical suggestion system would sometimes suggest rare emojis. Suggestions from multiple systems could be combined to provide more diverse emojis. Lexical suggestion could provide emojis as the user is typing a sentence, and once the user has finished the sentence, semantic suggestion could provide emojis that reflect the message's overall meaning (a sketch of such a combination appears below). Combining the two suggestion schemes could be useful because not all messages contain strong semantic information, and people also use emojis to provide additional information for certain words, such as changing the tone.
• Personalization. Beyond providing the most common emoji suggestions, emoji suggestion systems should be aware of the user's personal favorites and usage behaviors. Usage behaviors could be based on categories (e.g., faces, hearts) or the emotions that the user prefers to express. In addition, it would be useful if the suggestion keyboard could recognize the recipient or the usage scenario. For example, a user might want heart emojis when chatting with a family member on a messaging app, but object emojis when composing an email.
• Avoiding Intrusion. Emoji suggestion keyboards should only predict emojis when necessary. Some participants only wanted suggestions at the end of messages, as they found the always-on style of semantic suggestions to be distracting.
One limitation of our study is that the suggestion frequency of the two emoji systems was not the same. The semantic suggestion system updated with each new word typed, while the lexical suggestion system updated only after each predefined keyword. Thus, participants were exposed to more suggestions in the semantic condition than in the lexical condition. We used our online study to measure the relevance of emoji suggestions independent of frequency. Collecting a similar measure could have been done in our other studies by counting the number of selected emojis and dividing by the total number of emoji suggestions; however, such a metric would neglect many other factors that affect selection rate (e.g., time duration, ordering of emojis).
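Returning to the combined scheme proposed in the Suggestion Diversity guideline above, the following is a minimal Python sketch of such a hybrid suggester. The function, the emojilib-style keyword table, and the semantic model's top_emojis interface are hypothetical illustrations under stated assumptions, not the keyboard we actually built.

def hybrid_suggest(message, emoji_keywords, semantic_model, sentence_done, top_k=5):
    """Hypothetical hybrid of the two suggestion schemes discussed above.

    emoji_keywords: dict mapping an emoji to its keyword list (emojilib-style).
    semantic_model: object with a top_emojis(text, k) method (assumed interface).
    """
    if sentence_done:
        # The sentence is complete: suggest emojis for its overall meaning.
        return semantic_model.top_emojis(message, k=top_k)
    words = message.split()
    last_word = words[-1].lower() if words else ""
    # Still typing: fall back to keyword matching on the last typed word.
    matches = [e for e, keywords in emoji_keywords.items() if last_word in keywords]
    return matches[:top_k]

One design consideration such a hybrid would have to resolve is when a sentence counts as "done" (e.g., on punctuation or on send), which is exactly the ambiguity our semantic keyboard sidestepped by updating after every word.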
Another limitation is in our keyboard implementation, namely that the existing semantic-level suggestion model we used contains only 64 possible emojis, thus limiting the diversity of possible suggestions. The DeepMoji model could be extended to more emojis, but we chose to stay with the original set to align with the findings from Felbo et al.'s prior work, since there are no available conversation datasets with emojis for fine-tuning the model. In this work, we compared two emoji suggestion systems: lexical and semantic. Specifically, we explored whether the suggestion type affected the online chat experience and how people perceive the two suggestion types. Our online crowdsourced study revealed that people perceived semantic suggestions as more relevant. Our laboratory study showed that semantic emoji suggestions were used about 1.5 times more than lexical emoji suggestions. Our longitudinal field deployment showed that semantic suggestions led to an increase in emoji usage and were preferred because of their relevance to emotions. As other research in this area has found, we can conclude that emojis themselves, rather than the type of suggestion system, affect the chat experience most profoundly. Based on our study results, we offered design guidelines for emoji suggestion systems. We believe that by incorporating semantic information in emoji suggestion, researchers can provide better experiences in text-based computer-mediated communication.
We built an Android keyboard with both lexical (word-based) and semantic (meaning-based) emoji suggestion capabilities and compared their effects in two different chat studies.
Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples. To address these issues, we propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation. In contrast to standard GANs that use a binary classifier as the discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator, a pairwise classifier for comparing the text quality between a pair of samples. During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples. This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse. Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity of generated samples, and yields more stable performance compared to previous GANs for text generation.
Generative Adversarial Networks (GANs) have achieved tremendous success for image generation and received much attention in computer vision. For text generation, however, the performance of GANs is severely limited due to reward sparsity and mode collapse: reward sparsity refers to the difficulty for the generator to receive reward signals when its generated samples can hardly fool the discriminator, which is much easier to train; mode collapse refers to the phenomenon that the generator only learns limited patterns from the real data. As a result, both the quality and diversity of generated text samples are limited. To address the above issues, we propose a novel self-adversarial learning (SAL) paradigm for improving adversarial text generation. In contrast to standard GANs (Figure 1(a)) that use a binary classifier as the discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator, a pairwise classifier assessing whether the currently generated sample is better than a previously generated one, as shown in Figure 1(b). During training, SAL rewards the generator when its currently generated samples are found to be better than its previously generated samples. In the earlier training stage, when the quality of generated samples is far below that of the real data, this self-improvement reward mechanism makes it easier for the generator to receive non-sparse rewards with informative learning signals, effectively alleviating the reward sparsity issue; in the later training stage, SAL can prevent a sample from continually receiving a high reward, as self-improvement for a popular mode becomes more and more difficult, and therefore helps the generator avoid collapsing toward the limited patterns of the real data. We comprehensively evaluate the proposed self-adversarial learning paradigm on both synthetic data and real data using the text generation benchmark platform Texygen. Compared to previous approaches for adversarial text generation, our approach shows a substantial improvement in terms of both the quality and the diversity of generated samples as well as better performance stability in adversarial learning.
Figure 1: (a) Conventional adversarial learning that uses a binary real/fake classifier as its discriminator; (b) Self-adversarial learning that employs a comparative discriminator to compare the currently generated sample to its previously generated samples for obtaining rewards through self-improvement.
Adversarial text generation has drawn much attention in recent years due to its advantages (e.g., sequence-level guidance without the exposure bias issue) over maximum likelihood estimation (MLE) for natural language generation. It formulates the learning process as a minimax game between a generator G_θ parameterized by θ and a discriminator D_φ parameterized by φ: the discriminator is trained to distinguish between the samples drawn from the real data distribution p_data and the samples generated by the generator, while the generator is trained to generate samples that can "fool" the discriminator. The adversarial learning objective of the generator and the discriminator can be formulated as min_{G_θ} max_{D_φ} E_{x∼p_data}[log D_φ(x)] + E_{z∼p_z}[log(1 − D_φ(G_θ(z)))], where x is a sample from the real data and G_θ(z) is a sample generated by the generator with the initialization z, which is drawn from the noise distribution p_z (e.g., a standard normal distribution). While GANs have shown some promising results, there are two fundamental issues that impede their progress in text generation: (i) reward sparsity, which is due to the fact that the discriminator tends to learn much better than the generator and thus easily recognizes generated samples as fake; in such cases, it will be difficult for the generator to receive rewards; and (ii) mode collapse, which arises from the intrinsic nature of GANs and leads the adversarial models to learn only limited patterns from the real samples. These two issues, which have not been well addressed yet, limit the ability of GANs to generate high-quality and diverse text samples. To address the aforementioned issues, we propose a novel self-adversarial learning (SAL) paradigm. Inspired by self-play in reinforcement learning, the core idea of SAL is to reward the generator if its currently generated sample is found to be better than its previously generated ones. Like AlphaGo, the generator in SAL strives to generate better samples than its previously generated ones in order to pass the "self-improvement" test given by a comparative discriminator, which is a pairwise classifier trained to compare the quality of two samples, as Figure 1(b) shows. Compared to conventional GANs (Figure 1(a)), SAL has the following advantages. First, in the earlier training stage, when the quality of generated samples is far below that of the real data, the self-improvement reward mechanism of SAL allows the generator to receive informative learning signals more easily, as it makes the assessment of sample quality better adapted to the current capability of the generator, making it less likely to suffer from the issue of reward sparsity. Second, in the later training stage, when the generated samples' quality is high, SAL can prevent a sample from continually receiving a high reward, as it will become more and more difficult to pass the "self-improvement" test, thus reducing the risk of the generator collapsing towards limited patterns. The self-improvement mechanism and the "tie" option in the comparative discriminator also provide a reasonable baseline, which corresponds to cases where newly generated samples are found to be indistinguishable from previous ones, thus improving training stability.
We provide a more detailed qualitative analysis of why the proposed self-adversarial learning paradigm can alleviate these problems in the Appendix. As introduced above, the core component of SAL is the comparative discriminator, which is a pairwise classifier comparing the quality of two samples. It learns a total order of sample quality and encodes the inductive bias that one sample is better (>), worse (<), or indistinguishable (≈) in terms of its quality compared to the other. For a (text) sample, the comparative discriminator can offer more informative learning signals than the conventional binary (i.e., real/fake) classification-based discriminator, because the sample can receive multiple feedback signals from the comparative discriminator by being compared with multiple other samples. For training the comparative discriminator, we construct pairwise training examples from the real and generated samples, as Figure 2 shows. For a real sample s+ and a generated sample s−, we assign the label "better (>)" to the pair (s+, s−) and "worse (<)" to (s−, s+). For two samples both drawn from the real data or both from the generated samples, we assign the label "indistinguishable (≈)" to such pairs. For a training set with n real samples and n generated samples, the comparative discriminator can thus construct 2n^2 pairwise training examples, enhancing its generalization ability. Moreover, to improve the model's ability to distinguish between good generated samples and bad generated samples for self-play learning, we additionally select the samples generated during the later stage of MLE training as pseudo-real samples, and select those generated in the earlier epochs, when the generator has not fully converged, as fake samples. We then pair the pseudo-real samples with the fake samples to construct training instances that supervise the model to compare their qualities. In this way, the comparative discriminator is prevented from being taught to always recognize two generated samples as equally bad and assigning zero reward to the generator. As a result, it can become more sensitive to the quality difference in a pair of text samples and thus allow the generator to receive rewards more easily. Before we formally introduce the training procedure for SAL, we first define the learning objectives for the comparative discriminator D_φ and the generator G_θ. In these objectives, M is the set of samples previously generated by the generator, Q(x1, x2) ∈ {>, <, ≈} is the true label for the pair (x1, x2), D_φ(q | x1, x2) is the probability of the comparative discriminator's prediction being q (q ∈ {>, <, ≈}) for the pair (x1, x2), and w_q is the reward weight for the case q, which is a hyperparameter of SAL. If the generator generates a sample G_θ(z) that is better (>) than its previously generated sample x_r, it receives a positive reward; if G_θ(z) is worse (<) than x_r, it receives a negative reward; and if the quality of G_θ(z) is classified as similar (≈) to x_r, it receives zero credit. Therefore, we have w(>) > 0, w(<) < 0, and w(≈) = 0. Since L_G can only be directly optimized in standard continuous GAN training, we instead employ the policy gradient algorithm to train the generator, as in previous approaches for adversarial text generation.
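As a minimal illustration of the pairwise training-data construction described above, the following Python sketch builds (x1, x2, label) triples from plain lists of text samples; the label symbols, the random pairing of same-source samples, and the treatment of the pseudo-real/pseudo-fake pools from MLE checkpoints are simplifying assumptions rather than the exact implementation.

import random

def build_comparison_pairs(real, fake, pseudo_real=None, pseudo_fake=None):
    """Construct (x1, x2, label) triples for the comparative discriminator.

    Labels: ">" means x1 is better than x2, "<" means worse, "~" means
    indistinguishable. pseudo_real / pseudo_fake stand for samples taken from
    late- and early-stage MLE checkpoints, as described in the text.
    """
    pairs = []
    for s_pos in real:
        for s_neg in fake:
            pairs.append((s_pos, s_neg, ">"))
            pairs.append((s_neg, s_pos, "<"))
    # Pairs drawn from the same source are labelled indistinguishable.
    for pool in (real, fake):
        shuffled = random.sample(pool, len(pool))
        for a, b in zip(pool, shuffled):
            pairs.append((a, b, "~"))
    # Extra pairs from MLE checkpoints keep the discriminator sensitive to
    # quality differences between generated samples.
    if pseudo_real and pseudo_fake:
        for a, b in zip(pseudo_real, pseudo_fake):
            pairs.append((a, b, ">"))
            pairs.append((b, a, "<"))
    random.shuffle(pairs)
    return pairs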
For SAL, we define the reward for a generated sample x_g, compared with a reference sample x_r (a sample previously generated by the generator), as the weighted sum over the probability distribution predicted by the comparative discriminator: r(x_g, x_r) = Σ_{q ∈ {>, <, ≈}} w_q · D_φ(q | x_g, x_r). In text generation, the generator G_θ obtains the reward only when a sample has been completely generated, which means no intermediate reward is gained. To relieve this problem, following the practice in SeqGAN, we utilize the Monte Carlo rollout method to approximate intermediate rewards by sampling the unknown tokens following the generated prefix Y_{1:t} with the generator policy G_θ until the sample is complete. Empirically, we found that the Monte Carlo rollout also helps to reduce the variance of the reference sample utilized for comparison. We calculate the expected reward by averaging over the Monte Carlo rollouts. The objective of the generator is to generate a sequence that maximizes its expected final reward. With the likelihood-ratio trick, we can formulate the policy gradient of the objective function for the generator G_θ. To improve the training of self-adversarial learning, we borrow ideas from the field of deep reinforcement learning and propose two training techniques.
Scheduled rewarding. Similar to the exploitation-exploration trade-off in reinforcement learning, the positive reward assigned for actions generating better samples encourages exploration, while the penalty for generating worse samples makes the generator more conservative. Intuitively, in the earlier stage of self-adversarial learning, the generator should explore a better policy by receiving higher rewards for relative progress; in the later stage, the generator should be more conservative, penalizing worse samples more heavily to avoid performance degradation. We simply decrease w(>) and increase w(<) linearly with the training iteration and refer to this technique as scheduled rewarding.
Memory replay. Continuously comparing the generator with its most recent stage may suffer from the correlation between generated samples and reference samples, which makes the training process unstable. Inspired by experience replay, we construct a memory buffer that contains samples generated in the last K training steps. Reference samples are drawn from the memory buffer rather than from the most recent stage of the generator, which empirically helps stabilize the training process.
The training process of SAL is summarized in the algorithm listing below. Self-adversarial learning with the proposed comparative discriminator achieves a Nash equilibrium when the generator models the distribution of real samples perfectly. In this case, the comparative discriminator cannot successfully distinguish generated samples from real samples and tends to recognize two samples as "indistinguishable". The reward received by the generator is thus zero and training converges. However, how a non-Bernoulli GAN converges to such an equilibrium is still an open problem and is beyond the scope of this work.
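As a rough sketch of how the pieces above (the weighted comparative reward, scheduled rewarding, and memory replay) could fit together, consider the following Python illustration; the buffer size, the linear schedules, and the discriminator interface returning a probability dict over {>, <, ≈} are assumptions for illustration, not the paper's exact settings.

import random
from collections import deque

class SALRewarder:
    """Sketch of the SAL reward with scheduled rewarding and memory replay."""

    def __init__(self, buffer_size=5000, total_iters=10000):
        self.memory = deque(maxlen=buffer_size)  # previously generated samples
        self.total_iters = total_iters

    def weights(self, iteration):
        # Scheduled rewarding: shrink the bonus for "better" and grow the penalty
        # for "worse" as training proceeds (illustrative schedule following the
        # stated intuition of penalizing worse samples more in later stages).
        frac = min(iteration / self.total_iters, 1.0)
        return {">": 1.0 - 0.5 * frac, "<": -(0.5 + 0.5 * frac), "~": 0.0}

    def reward(self, disc_probs, iteration):
        # disc_probs: {">": p1, "<": p2, "~": p3}, i.e. D_phi(q | x_g, x_r).
        w = self.weights(iteration)
        return sum(w[q] * p for q, p in disc_probs.items())

    def sample_reference(self):
        # Memory replay: compare against a buffered sample rather than only the
        # generator's most recent outputs.
        return random.choice(self.memory)

    def update_memory(self, samples):
        self.memory.extend(samples)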
Algorithm: Self-adversarial learning (SAL)
Require: Generator G_θ; comparative discriminator D_φ; samples of real sentences S+; self-adversarial learning steps g; discriminator steps k; memory buffer M for previously generated samples
1: Pretrain G_θ using MLE on S+
2: Generate samples with G_θ and store them into M
3: repeat
4:   for k steps do
5:     Collect a mini-batch of balanced sample pairs (x1, x2) from M ∪ S+
6:     Update D_φ via its learning objective
7:   end for
8:   for g steps do
9:     Generate a mini-batch of samples x_g ∼ G_θ
10:    Collect a mini-batch of reference samples x_r from M
11:    Update G_θ via the policy gradient
12:   end for
13:   Update M with G_θ
14: until convergence
Following the experimental settings in previous work, we evaluate our approach on both synthetic and real datasets based on Texygen, which is a benchmark platform for evaluating adversarial text generation models. Table 1 presents the basic information of the datasets used for evaluation. As in SeqGAN, our generator is a single-layer LSTM, and our discriminator is largely based on TextCNN, except that it concatenates the feature representations of the two compared samples and outputs the probabilities of their comparative relations (i.e., >, <, ≈). We keep most of the hyperparameters the same as in SeqGAN, except for the hyperparameters introduced by our model (i.e., w(>), w(<), w(≈)), which are tuned on the synthetic experiment and kept the same for the real-data experiments. We evaluate the adversarial text generation models in terms of both quality and diversity. Following prior work, on the synthetic dataset we use the oracle LSTM to evaluate the negative log-likelihood of our generated samples (denoted as NLL_oracle) as the metric reflecting quality; for the diversity metric, we use the negative log-likelihood of the synthetic dataset (denoted as NLL_gen) evaluated by the generator with the best quality (i.e., the best NLL_oracle score) during training. We also use the best NLL_oracle + NLL_gen obtained during training to evaluate the quality-diversity trade-off. For the real-data experiments, we follow previous work and apply the commonly used BLEU scores (BLEU(F)) and the perplexity of generated samples, evaluated by an open-source pretrained language model, as quality metrics, since NLL_oracle cannot be evaluated without an oracle language model. For evaluating diversity, we employ both backward BLEU (BLEU(B)), which evaluates the test data using generated samples as the reference, and NLL_gen as the metrics. To evaluate the generated samples in more aspects, we calculate the Fréchet distance (FD) between generated samples and real data using sentence representations obtained from InferSent, a pretrained sentence embedding model. We compare our approach to previous well-known adversarial text generation models, including SeqGAN, RankGAN, and MaliGAN. LeakGAN and RelGAN focus on architecture-level modifications, which are orthogonal to our work, and the proposed self-adversarial learning paradigm can be applied to them as well; we provide results of combining LeakGAN with SAL in the Appendix. In the following sections, we denote our proposed self-adversarial learning approach as SAL. Since adversarial training is very sensitive to random initialization and suffers from high variance, we conduct five individual runs with different random seeds for each model and report the mean and the standard deviation of the obtained results. Table 2 shows the results on the synthetic dataset. We can observe that SAL largely outperforms the previous GANs in all metrics for both sequence lengths 20 and 40.
Although its performance in NLL_gen is worse than that of MLE, as MLE directly optimizes this metric, SAL yields a better quality-diversity trade-off than MLE training, which has not been achieved by the previous GANs: the NLL_oracle + NLL_gen for SAL is lower than that yielded by MLE, while the other GANs have the same sum score as MLE, indicating that they fail to improve the quality-diversity trade-off after pretraining. This demonstrates SAL's advantage in alleviating the mode collapse problem. We also find that the training of SAL is more stable compared with other text GANs. In addition, we find that SAL learns faster and better than the other GANs by comparing their performance curves of NLL_oracle during training in Figure 3, which is consistent with our intuition that the self-improvement reward mechanism in SAL can alleviate the reward sparsity issue in the earlier training stage and help the generator learn better. The results for the COCO image caption dataset are presented in Table 3, and the performance curve of the perplexity during training is shown in Figure 4. As with the synthetic data, we observe that our SAL consistently yields better results in all the metrics with stable performance (i.e., low variance) compared to the previous GANs. According to Table 3 and Figure 4, SeqGAN and our SAL can improve over MLE in the quality metrics (i.e., BLEU(F) and Perplexity), while MaliGAN and RankGAN perform comparably to MLE. However, in the WMT NEWS dataset, where text samples tend to be longer, we observe something different from Table 4: all the previous GANs fail to improve over MLE. This is because long text generation makes the discrepancy between generated samples and real samples very large even after MLE pre-training. As a result, previous GANs fail to stably enhance the sample quality due to the reward sparsity issue. In contrast, our SAL consistently performs well and largely improves the quality metrics over MLE.
Table 4: Performance comparison of different models in the EMNLP2017 WMT news generation task. Metrics from top to bottom represent, respectively, the generation quality, the generation diversity, and the divergence between real and generated data. For all the BLEU metrics, the higher, the better. For NLL_gen and FD, the lower, the better.
In addition, we observe that the diversity of samples generated by SAL is much better than that of previous GANs and only marginally worse than that of MLE, indicating that SAL is helpful in reducing the risk of mode collapse. In addition to the automatic metrics, we also conduct a human evaluation of the generated samples. As in previous work, we randomly sample 20 sentences from each model and pool them, anonymizing the models' identities. We invite 20 graduate students with good English proficiency to score each sentence on a scale of 1-5 in terms of quality. According to Table 5, our SAL is consistently well rated in the human evaluation and outperforms all the baselines on both the COCO and WMT NEWS datasets. We perform the Wilcoxon rank-sum test on the human evaluation scores and find that samples generated by the baseline models can be distinguished from samples generated by SAL with p < 0.01. Details of the human evaluation procedure and samples generated by the compared methods on the two real-world datasets are presented in the Appendix. To better understand SAL, we perform multiple ablation tests on both the synthetic and the real data. We employ the NLL_oracle + NLL_gen score with sequence length 20 as the evaluation metric for the synthetic data, denoted as NLL.
For the real data, we use the perplexity of samples generated by models trained on the COCO dataset as the evaluation metric. We compare SAL with the following reduced models:
• CAL: replacing the comparison between generated samples (i.e., self-play) with a comparison between real and generated samples.
• w/o comparative: using the binary discrimination scores of other generated samples as the baseline for the policy gradient algorithm, which can be considered a combination of self-critic training with RL-based text GANs.
• w/o "≈": replacing the three-class comparative discriminator with a binary comparative discriminator by removing the "≈" class.
• w/o scheduled rewarding and w/o memory replay.
The results of the ablation tests are shown in Table 6. By observing the improvement of SAL over CAL, we confirm the importance of the self-play paradigm in SAL. It is notable that the proposed comparative discriminator alone (i.e., CAL) can yield good performance, demonstrating the effectiveness of learning by comparison. When replacing the comparative discriminator with the naive combination of a self-critic baseline with text GANs, the performance decreases substantially, because the reward sparsity issue is intensified when subtracting two already sparse rewards; this motivates the proposed pairwise comparative discriminator, which makes self-comparison possible. In addition, we find that the "≈" option plays a critical role in improving the results, without which the performance degrades significantly, because it makes the task less trivial and provides a baseline for the policy gradient algorithm. Moreover, the training techniques borrowed from deep reinforcement learning (i.e., scheduled rewarding and memory replay) are also shown to be useful in improving the results, but are not as important as the core components (e.g., self-play and the comparative discriminator). Several adversarial text generation models (including LeakGAN and RelGAN) have been proposed as adversarial training has received increasing attention in recent years. Typically, they address the non-differentiability issue through continuous approximation or reinforcement learning. These approaches introduce several different architectures and optimization objectives for both the generator and the discriminator in adversarial text generation. Among the previous studies of adversarial text generation, the work most related to ours is RankGAN, which proposes a ranker to replace the conventional binary classifier as the discriminator, allowing the discrimination process to involve richer information. Another work whose idea is similar to ours is the relativistic discriminator (RGAN). It compares the binary scores assigned to generated and real samples by subtraction as the learning signal, to implicitly represent the inductive bias that half of the samples received by the discriminator are fake. In contrast, our comparative discriminator directly encodes this inductive bias and assesses generated sentences by comparison with a pairwise classifier, which provides more informative learning signals than the subtraction in RGAN and the normalized feature similarity in RankGAN. We present a self-adversarial learning (SAL) paradigm for adversarial text generation. SAL rewards the generator when its comparative discriminator finds that the generator has become better than before.
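As an illustration of this self-improvement reward, here is a minimal sketch of a single generator update under the self-play scheme. This is a simplification built on our own assumptions rather than the authors' implementation: the generator, its policy-gradient update and the discriminator are abstracted as callables, and the reward weights are only illustrative.

import random
from typing import Callable, List, Sequence

def sal_generator_step(
    generate: Callable[[int], List[str]],          # samples a batch of sentences
    update_generator: Callable[[List[str], List[float]], None],  # e.g., a REINFORCE step
    compare: Callable[[str, str], Sequence[float]], # P(better), P(worse), P(tie) vs. a reference
    memory: List[str],                              # buffer M of previously generated samples
    batch_size: int = 8,
    w_better: float = 1.0, w_worse: float = -0.1, w_tie: float = 0.0,
) -> None:
    """One self-adversarial generator update: each new sample is rewarded by
    whether the comparative discriminator judges it better than a previously
    generated reference drawn from the memory buffer."""
    new_batch = generate(batch_size)
    references = [random.choice(memory) for _ in new_batch]
    rewards = []
    for sample, ref in zip(new_batch, references):
        p_better, p_worse, p_tie = compare(sample, ref)
        rewards.append(w_better * p_better + w_worse * p_worse + w_tie * p_tie)
    update_generator(new_batch, rewards)   # policy gradient with these rewards
    memory.extend(new_batch)               # refresh M with the generator's new outputs

# toy usage with stub components
memory = ["an old sample ."]
generate = lambda n: ["a new sample ."] * n
compare = lambda s, r: (0.5, 0.2, 0.3)              # stub discriminator output
update_generator = lambda batch, rewards: None       # stand-in for the real update
sal_generator_step(generate, update_generator, compare, memory)
print(len(memory))                                    # buffer grew by one batch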
Through the self-improvement reward mechanism, the problem of reward sparsity and mode collapse can be alleviated and training of text GANs are more stable, which in a better performance in the text generation benchmarks in terms of both quality, diversity, and lower variance. In the future, we plan to generalize our approach to other domains and modals to explore the potential of SAL for adversarial learning. Generated samples are presented in the Appendix together with other details, including human evaluation details and qualitative analysis of the proposed SAL. A GENERATED SAMPLES We present sentences generated by our proposed model and compared models to provide qualitative evaluation of different adversarial text generation models. From the presented generated samples, we can observe that samples generated by MLE training are less realistic compared with other samples. SeqGAN yield slightly better sample quality but the loss of diversity is observable even within randomly sampled 15 sentences. Adversarial training with proposed comparator, when trained by comparing with real samples, yield better quality but still lack of diversity. Finally, with the proposed self-adversarial learning paradigm, both quality and diversity of generated samples are improved. A.1 GENERATED SAMPLES IN IMAGE COCO DATASET Table 7: Samples generated by SAL in Image COCO dataset a picture of a person's umbrella in a cell phone. a man stands in a green field. a young boy riding a truck. a man on a motorcycle is flying on a grassy field. a girl on a motorcycle parked on a city street. a motorcycle parked in a city street. a group of bikers riding bikes on a city street. a kitchen with a cat on the hood and a street. a bathroom containing a toilet and a sink. a young woman in a kitchen with a smiley face. a jet plane on the side of a street. a dish is sitting on a sidewalk next to a baby giraffe. a dog on a large green bike parked outside of the motor bike. a person on a kawasaki bike on a race track. a commercial aircraft is parked in front of a kitchen. Table 8: Samples generated by CAL in Image COCO dataset a man is on a towel on a table outside of a real kitchen. a group of lambs at a tall building. a young boy riding a truck. a man on a motorcycle is flying on a grassy field. a man with a computer desk next to a white car. a cat is on the walls of a cat. a plane on a runway with a plane. an elegant, dilapidated plane are standing in front of a parking bag. the woman is riding a bike on their way. a man wearing an old bathroom with a banana. a plane is taking off from the ground. a man holding a man in front of herself. a woman is walking across the road. a kitchen with an island in green tiles. a clean kitchen with two small appliances. Table 9: Samples generated by SeqGAN in Image COCO dataset a large image of a herd of racing train. man and woman on horse. a plane on a runway with a plane. a man preparing a table with wood lid. a view, tiled floors and a man prepares food. a man wearing an old bathroom with a banana. a man is is with a camera. two people are parked on a street. a white and white black kitten eating on a table. a toilet is lit on the walls. a kitchen is taking off from a window. a man is wearing glasses wearing scarf. a kitchen with graffiti hanging off from an open plain. two women playing with the orange. a kitchen with an island in a clear glass. Table 10: Samples generated by MLE in Image COCO dataset a jet airplane flies flying through front from an airplane. 
a furry tub and overhead pot. a man in a kitchen filled with dark lights green side,.. a cross baby field dressed making cardboard a bathroom with a small tub and oven. a man above a bathroom with an oven room. a jet airliner flying through the sky. a kitchen with a dishwasher, and plenty of pots, pans. a person holding onto two red era arena sits on the street. a bathroom with a toilet and a bath tub. a cat perched on the phone and a baseball cap. the view of the street filled with really parked at the gates on the road. a large hairy dog on a high bike with a cake. a man is riding a white back bench. a narrow bed and white spotted dark tiled walls. A.2 GENERATED SAMPLES IN EMNLP2017 WMT DATASET Table 11: Samples generated by SAL in EMNLP2017 WMT dataset it' s likely to be egyptian and many of the canadian refugees, but for a decade. the ministry spokesperson also said it now significant connected to the mountain. it is the time they can more competitive, where we have another $ 99. 100 per cent, and completely on the alternative, and that' s being affected. we expect $ 200 and 0. 3 percent for all you form other, and, which then well, it' s done. so we wouldn' t feel very large in the game, but you fail to fund, and and the paper that' s like its start. other countries made a playoff cut with pages by mrs. trump' s eighth consecutive season as a president. Table 12: Samples generated by CAL in EMNLP2017 WMT dataset i didn' t put relatively quiet, we have,' his work right in the particular heat rate, take steps traditionally clean. why the u. s. then the table is our cabinet to do getting an vital company for the correct review. those had trained for that, but no thin percentage of the nhs about being warned about the palestinian election before obama is not connected in israel. in course, voters -obama said: " torture is the outcome, the most powerful tradepopularity is happening in it as a success. " in 2012, it is nice to remain -no trump actor established this night -scoring three films. we kind of not listen to knowing my most one, only, for a really good vote, and where things fun, you know. Table 13: Samples generated by SeqGAN in EMNLP2017 WMT dataset his missed 4, 000 the first 95 really 69 -year -olds. but just things, you want to thank it as my playing side has begun meeting with " and " the score had to train up, so he was tied for 11 years. and when he got back doing fresh ties with his election, he will now step in january, back. when you' t know if i saw her task to find himself more responsibility ago. his hold over -up to a nine hike in 2015, 13 percent of recently under suspects dead day, 24, and to the city. " i look up on by the city' s vehicle on the day in a meeting in november. Table 14: Samples generated by MLE in EMNLP2017 WMT dataset you know that that is great for our ability to make thinking about how you know and you? when it' s a real thing possible, is if you the first time in a time here and get. u. s, now government spending at the second half of four years, a country where the law will join the region to leave japan in germany. deputy president, the issue of government and geneva probe threats and not -backed trump, but well -changing violence for their islamic state militants were innocent people. he suggested in a presidential primary source and comment on its size following protests conducted by 18, some in 2012 will be looked at tech energy hub. " it' s growing heavy hard, " mr. romney said, he says matters that can' t again become the asian player. 
In this section, we present several qualitative case study examples to illustrate why comparative discrimination and self-adversarial learning helps mitigate the problem of reward sparsity and mode collapse. We extract a typical sentence generated during the initial stage of adversarial learning: "a man holding a suitcase holding a phones." We see that this sentence is of limited quality and is easily recognized by binary discriminator in SeqGAN with high confidence, this makes the credit received by the generator very sparse and makes training difficult. Comparative adversarial learning (CAL) where we use the comparative discriminator to assess the quality of this sample by comparing it with a real sample helps as comparative discrimination have three catagories, which is less trivial. The improvement is not so much as the discrepancy of generated samples and real samples is fairly large. However, with proposed self-adversarial learning paradigm, the comparative discriminator assesses this sentence by comparing it with a previous generated sentence which is also of poor quality. The self-improvement is easier to be recognized by the comparative discriminator and makes this sample get good rewards. As the comparative discriminator has to learn a total order of sample quality which is more challenging than standard binary discrimination, the chance of the comparative discriminator to be over-trained is reduced, which makes the model easier to achieve self-improvement, thus help to alleviate the reward sparsity problem. We also extract a sentence generated in the late stages which is fairly good and fools the binary discriminator: "a woman sitting on a bench on a park." In standard adversarial text generation models, a sentence like this would keep receiving large rewards and in mode collapse. In self-adversarial learning paradigm, this sentence is not much better than other sentences generated by the generator itself, so its reward is limited, which reduces the risk of mode collapse. For the ablated model variants, SAL w/o self-play and w/o comparative discriminator is trained with the following algorithms. Specifically, the difference between SAL and CAL is that the reference sample which is compared with the currently generated sample is replaced by a real sample instead of a previously generated one. For the variant without the comparative discriminator, we employ a binary discriminator trained in the same way with the vanilla GAN, as for the reward of generating x g, we first sample a previously generated sample x r as reference and calculate the reward as that the self-BLEU scores are always 1 when evaluating all the models. This problem is also discussed in the openreview of the RelGAN paper. Based on this consideration, we decide to employ backward-BLEU which is introduced in and can also evaluate the diversity of generated samples. The forward and backward BLEU resemble precision and recall of generated samples with respect to the test data, which measures the generation quality and the generation diversity respectively. For BLEU metric, as there is no sentence level alignment for unconditional generation, BLEU is evaluated by using the entire test set treated as a single reference, which contains 10000 sentences. We then generate the same number of sentences as the prediction, and then compute n-gram overlap between the reference and the prediction. We did not apply brevity penalty following previous works. 
But we found the number of tokens generated are roughly the same across different compared models. We briefly explain why NLL gen is able to measure the diversity of the generator: NLL gen measures the negative log-likelihood of the synthetic dataset evaluated by the generator. Intuitively, if the generator is diverse and captures more patterns in the synthetic dataset, the NLL gen score will be lower. In contrast, if the generator suffers from severe mode collapse and is of low diversity, the NLL gen will be higher as the generator fails to cover the diverse patterns in the training data. As for the metric: NLL gen +NLL oracle, our motivation is that NLL oracle measures the best quality of the generator throughout training, while NLL gen measures the best diversity attained by the generator during training. However, the best quality and diversity are generally not achieved at the same time as GAN-training generally sacrifices the diversity for better quality. Therefore, we report NLL gen +NLL oracle which can measure the quality-diversity trade-off, as the previous work demonstrated, as an additional reference. We follow most of the hyperparameters used in the benchmark platform. Specifically, we choose batch size to be 64, dropout keep prob to be 0.75, l2 regularization to be 0.2. We pretrain all model for 120 epochs and fine-tune them until convergence. The proposed self-adversarial learning paradigm introduces the relative weights for credit assignment when a generated sample is found to be better, indistinguishable or worse compared with another sample generated by the generator itself. We tuned it based on the performance in synthetic experiment and find w 0: w 2 = 1: −0.1 to be a good choice for the initial weights 1 and fixed w 1 to be 0. We empirically find that the performance of SAL is not very sensitive to the choice of reward weights as long as the absolute value of w 1 is larger enough than w 2, which guarantees the training stability. Following the human evaluation setting in , The text quality evaluation is based on grammatical correctness and meaningfulness (i.e. if a sentence makes sense), text formatting problems (e.g., capitalization, punctuation, spelling errors, extra spaces between words and punctuations) are ignored. Detailed criteria is provided as follows: It has some small grammatical errors and mostly makes sense. For example, "two women is in a cafe look outside." 3-Fair It has major grammatical errors but the whole still conveys some meanings. For example, "a man riding on on motor scooter and window." 2-Poor It has severe grammatical errors and the whole doesn't make sense, but some parts are meaningful. For example, "a blue bike on on on a dirt bike." 1-Unacceptable It is basically a random collection of words. For example, "a motorcycle close close on it and." C.5 ADDITIONAL ON CAL AND LEAKGAN In this section, we present the performance comparison on SAL vs CAL (comparative adversarial learning, which uses comparative discriminator but does not train the model with self-play.) and the application of SAL on LeakGAN. We find that SAL significantly outperforms CAL on all dataset. In addition, we find that the proposed self-adversarial learning paradigm can also be applied on other text GAN architectures, e.g. LeakGAN, and help improve its performance.
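The forward/backward BLEU protocol described above (the entire test set treated as one reference pool for the generated sentences, and the generated pool used as the reference for the test sentences) can be sketched roughly as follows. This is our own approximation using NLTK, not the authors' evaluation code; note that NLTK's corpus_bleu applies a brevity penalty by default, whereas the text states it was not used, so the numbers are only indicative.

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def forward_backward_bleu(generated, test, max_n=4):
    """BLEU(F): score generated sentences against the test set used as one
    big reference pool (quality). BLEU(B): score the test sentences against
    the generated pool (diversity). Sentences are lists of tokens."""
    weights = tuple(1.0 / max_n for _ in range(max_n))
    smooth = SmoothingFunction().method1
    bleu_f = corpus_bleu([test] * len(generated), generated,
                         weights=weights, smoothing_function=smooth)
    bleu_b = corpus_bleu([generated] * len(test), test,
                         weights=weights, smoothing_function=smooth)
    return bleu_f, bleu_b

# toy usage with tokenized sentences
gen = [["a", "man", "rides", "a", "bike"], ["a", "dog", "on", "a", "street"]]
ref = [["a", "man", "rides", "a", "red", "bike"], ["a", "cat", "on", "a", "mat"]]
print(forward_backward_bleu(gen, ref))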
We propose a self-adversarial learning (SAL) paradigm that trains the generator in a self-play fashion, improving GANs' performance in text generation.
1,301
scitldr
Determining the number of latent dimensions is a ubiquitous problem in machine learning. In this study, we introduce a novel method that relies on SVD to discover the number of latent dimensions. The general principle behind the method is to compare the curve of singular values of the SVD decomposition of a data set with the curve obtained from a randomized version of that data set. The inferred number of latent dimensions corresponds to the crossing point of the two curves. To evaluate our methodology, we compare it with competing methods such as Kaiser's eigenvalue-greater-than-one rule (K1), Parallel Analysis (PA), and Velicer's MAP test (Minimum Average Partial). We also compare our method with the Silhouette Width (SW) technique, which is used in different clustering methods to determine the optimal number of clusters. The results on synthetic data show that Parallel Analysis and our method have similar accuracy, both more accurate than the other methods, and that our method is slightly better than Parallel Analysis on sparse data sets. The problem of determining the number of latent dimensions, or latent factors, is ubiquitous in a number of unsupervised learning approaches. Matrix factorization techniques are good examples where we need to determine the number of latent dimensions prior to the learning phase. Nonlinear models such as LDA BID1 and neural networks also face the issue of stating the number of topics and nodes to include in the model before running an analysis over a data set, a problem that is akin to finding the number of latent factors. We propose a new method to estimate the number of latent dimensions that relies on the Singular Value Decomposition (SVD) and on a comparison of the singular values of the original data matrix with those of bootstrapped samples of this matrix, whence the name given to the method, Bootstrap SVD (BSVD). We compare the method to mainstream latent dimension estimation techniques over a space of dense vs. sparse matrices for Normal and non-Normal distributions. This paper is organized as follows. First, we outline some of the best known methods and related work in the next section. Then we explain our algorithm BSVD in section 3. The experiments are presented in section 4 and the results and discussion are reported in section 5. Finally, the conclusion of the study is given in section 6. The problem of finding the number of latent factors in a data set dates back to early work by BID4. It extends to a large array of fields including psychology and social science BID2, bioinformatics BID8, information retrieval BID25, and of course statistical learning (, chapter 14). A systematic review of latent variables and their applications can be found in BID15; BID12 BID21. In the following, we briefly explain some of the best known factor analysis techniques for deciding the number of factors to retain, as mentioned in BID14, such as Kaiser's eigenvalue-greater-than-one rule (K1), Parallel Analysis, Cattell's Scree test, and Velicer's MAP test (Minimum Average Partial). Moreover, we outline the SW technique, which is used in different clustering methods to determine the optimal number of clusters. The K1-Kaiser method was first introduced by BID4 and later extended and popularized by BID11. It is an early strategy for detecting the number of factors to retain. It relies on the eigenvalues of the correlation matrix of the observed factors and stipulates that the number of eigenvalues greater than one corresponds to the number of latent factors to retain.
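As a concrete illustration of the K1 rule just described, a short numpy sketch (our own illustration, not code from the cited works; the data shape is arbitrary):

import numpy as np

def kaiser_k1(data: np.ndarray) -> int:
    """Kaiser's eigenvalue-greater-than-one rule: count eigenvalues of the
    correlation matrix of the observed variables that exceed 1.
    `data` has shape (n_samples, n_variables)."""
    corr = np.corrcoef(data, rowvar=False)      # variables are in columns
    eigvals = np.linalg.eigvalsh(corr)          # correlation matrix is symmetric
    return int(np.sum(eigvals > 1.0))

# toy usage: 150 samples of 10 observed variables
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 10))
print(kaiser_k1(X))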
Despite its simplicity, some researchers consider it unreliable BID14 BID17 BID3. We will nevertheless consider it in our comparison experiments, given that it is a classic method and the first that introduced the use of eigenvectors of the correlation matrix for determining the number of latent factors. Note that a variant of this method was introduced by BID26 that showed improvements but still lagged behind PA (§2.2) and MAP (§2.4), and will therefore not be included in the experiment. Parallel Analysis is also based on the correlation matrix between the observed factors. It uses bootstrapping on the correlation matrix and then averages the eigenvalues over the bootstrap runs. Eigenvalues greater than the average data set eigenvalue are kept BID17; BID14. This strategy was proposed by BID9. BID26 showed that PA improves over the eigenvalue-greater-than-one rule. Several researchers found this method appropriate and more accurate in determining the number of factors to retain BID16 BID17. We will see that PA has a close relationship with BSVD, and this is corroborated by the closeness of their results. Cattell's Scree test, also known as the "elbow" approach, is a graphical method to display the number of components to retain. The Scree test is a subjective method which sorts the eigenvalues in decreasing order and shows them on the vertical axis, with the number of components on the horizontal axis BID14. In this strategy, we need to find where the scree occurs, and the components on the left side of the slope should be retained. BID5 used the Scree test plot and mention that the method fails when an elbow cannot be found. Moreover, BID23 mention the Scree test as one of the graphical tests for finding the number of latent variables; they specify that researchers should use a ruler to plot a line across the elbow and then keep all the components above it. Despite the various criticisms of this method by BID27 and BID10, it is one of the most popular methods to find the number of important factors to retain. The MAP approach is based on PCA and relies on a series of partial correlation matrices to define the number of significant factors to retain BID17 BID14 BID27 BID18. This approach was introduced by BID24. In general, statisticians agree that MAP and PA are the two techniques which are a reliable solution for extracting the number of factors to retain with reasonable results (Ledesma). Clustering is generally thought of as a means to reduce the number of data points, but it can also be considered a dimensionality reduction technique, namely by using the clusters' centroids and each point's distance from them as a means to define a new space, which was shown to be more effective than SVD under certain conditions. Therefore, a method to determine the optimal number of clusters can provide another means to determine the dimensionality of a data set. The PAM method is one of the popular techniques to automatically determine the optimal number of clusters with the Silhouette (SW) technique BID13. The number of clusters computed by SW is associated with the number of latent dimensions in the dataset. SVD is a well known matrix factorization technique that decomposes the original matrix, R, into the product of two orthogonal eigenvector matrices (the eigenvectors of the cross-products of the rows and of the columns) and the diagonal matrix of their common singular values: R = U Σ V^T, where U and V are orthogonal, and Σ is a diagonal matrix with positive real values.
The singular values represent the importance of the eigenvectors, ordered by decreasing values. The BSVD method determines the number of dimensions as the point where the singular values of Σ cross the singular values Σ B of a randomized (through bootstrap sampling) matrix R B. An interpretation of this crossing point where Σ and Σ B meet is that the remaining singular values are no more due to influential factors, at least in a linear framework. An example of this can be seen in figure 2 for a data set that was generated using 9 dimensions (vertical line) with uniform distributed values. In this case, it is also easy to tell the number from the elbow at dimensions= 9.The bootstrapped samples R B are simply generated through random sampling with replacement of the values of R.In the next section, we look at the details of generating data sets and the experiments. We evaluate the ability of BSVD to identify the number of latent dimensions by using synthetic data. Although the use of synthetic data limits the generalizability of our to the real world by making strong assumptions on the data, it remains the best validation methodology given that we know the ground truth behind the synthetic data and we can control the sparsity and the underlying distributions of each of the observed variables in order to explore this space of conditions. 4.1 DATA SETS 4.1.1 SYNTHETIC DATA The synthetic data is generated by sampling from distributions to create two matrices, P and Q. Then, R is obtained by the product P · Q plus Gaussian noise. We use two types of distributions, the normal (Gaussian) with mean= 0 and standard deviation= 1, and the uniform distribution with mean= 2.5 to generate the columns of P and rows of Q. The choice of 2.5 is inspired from rating-type of data found in recommender systems. The Gaussian noise added to P · Q corresponds to one standard deviation of R with mean= 0.All R matrices are of size 150 × 240 and we explore the latent dimensions from 2 to 24.In figure 1, we illustrate an example of the generated non-normal data set with size of 5 × 6 and latent dimension 3.Data sets of different density are generated, since sparsity is a constraint that we often have to deal with in fields such as recommender systems (rating matrices) and natural language processing (term-document matrices). Sparsity is created by randomly selecting the missing value cells. In order to capture the behavior of our method when we face a sparse matrix, we employ the algorithm with different percentages of sparseness to the data set (see algorithm 1). Then, we compare the with the existing mentioned approaches in Table 2. To do so, we follow the next steps for each iteration of latent dimension (j): 1) We apply a different percentage of sparseness (j) from 10: 90 to our data set with random selection.2) We impute each missing value by the average of the mean of corresponding row and column.3) Apply BSVD. And for each latent dimension (j); record the in each iteration of k. Compute the average accuracy of each method, when (j) terminate. Figure 5 displays the accuracy of all the methods in the non-normal sparse data set with latent dimension (j) equal to 2. We repeat the previous experiments on the generated simulated random data set with a normal distribution. The BSVD algorithm is compared with Horn's PA and K1 implementations from BID7. Moreover, we used of Very Simple Structure(VSS) and PAM packages of R to have the outcome of MAP and SW methods respectively. 
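A minimal numpy sketch of the crossing-point idea described above (our own illustration, not the authors' code; averaging over several bootstrap repetitions and the toy data generation are assumptions consistent with, but not identical to, the experimental setup described in this section):

import numpy as np

def bsvd_num_dimensions(R: np.ndarray, n_boot: int = 20, seed: int = 0) -> int:
    """Estimate the number of latent dimensions as the point where the
    singular-value curve of R crosses the (averaged) singular-value curve of
    matrices obtained by resampling the entries of R with replacement."""
    rng = np.random.default_rng(seed)
    sv = np.linalg.svd(R, compute_uv=False)
    boot_sv = np.zeros_like(sv)
    for _ in range(n_boot):
        # bootstrap: sample the entries of R with replacement, keep R's shape
        resampled = rng.choice(R.ravel(), size=R.size, replace=True).reshape(R.shape)
        boot_sv += np.linalg.svd(resampled, compute_uv=False)
    boot_sv /= n_boot
    # first index where the original curve falls below the bootstrap curve
    crossing = np.argmax(sv <= boot_sv)
    return int(crossing) if sv[crossing] <= boot_sv[crossing] else len(sv)

# toy usage: a 150 x 240 matrix with 9 latent dimensions plus Gaussian noise
rng = np.random.default_rng(1)
R = rng.normal(size=(150, 9)) @ rng.normal(size=(9, 240))
R += rng.normal(scale=R.std(), size=R.shape)
print(bsvd_num_dimensions(R))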
According to the of provided experiments in the tables 1 and 2, we could show that our method has a better performance than those mentioned especially in the sparse data sets. Our empirical experiments demonstrate that on the dense data sets; the accuracy of BSVD and PA is equal and better than the other approaches. But when we apply a different percentage of sparseness to our data sets, our method is more precise. In the figures 3 and 4, we display the behavior of each method in the dense and sparse data sets. Figure 3 depicts the average accuracy of all methods in the dense data sets with normal and nonnormal distribution. It shows that MAP method in the dense data set with normal or non-normal distribution has the same accuracy. Additionally, SW technique performs better with the face of the dense data set with non-normal distribution, while K1 has an extreme behavior in the nonnormal data set. Moreover, BSVD, PA and K1 are more precise in the dense data set with normal distribution. Figure 4 shows the sparse data sets with normal and non-normal distribution. It demonstrates that BSVD, PA, and K1 have better accuracy in the sparse data set with normal distribution but MAP and SW are on the contrary. Figure 5 shows the average accuracy of all the methods in in different level of sparsity over the non normal sparse data set with latent dimensions (j) equal to 2. The error bars shows the variance of the observations after repeating the algorithm 25 times. Based on the of these experiments we can conclude that our approach (BSVD) is better than the presented methods especially in the sparse data sets. To show if the outcome is statistically significant and is not by chance, we apply t-test between our method and PA. We considered the p values less than or equal to 0.05 as a significant . To do so, we consider a sample of latent dimensions (j = {2, 3, 5, 8, 15}) and we repeat twenty-five times the mentioned experiments on the sparse data sets with normal and non-normal distribution, and record the . Then we apply t-test between BSVD and PA. In this evaluation the null hypothesis (H0) state that µ SV D = µ P A and if the H0 is rejected, we could conclude that the obtained are not by chance and our method is better than PA. TAB1 contain p values of the sparse and dense data sets with normal and non-normal distribution respectively. The first row of each table with 0% of sparsity indicate to the dense data sets. TAB1 shows more constant behavior, and implies that by increasing sparsity in the sparse data set with normal distribution, BSVD yeilds a significantly better . But table 4 that shows the of non-normal sparse data set is hard to interpret. Because the green cells are not affected by increasing sparsity. We can sum up with that the seems to be significant with increasing the sparsity. In general, according to the tables 3 and 4, the difference between our method and PA seems to be statistically significant by increasing the percentage of sparsity. The objective of our study was to introduce a new method to find the number of latent dimensions using SVD which we inspired from PA. We employ our method on simulated data sets with normal and non-normal distribution whereas are dense or sparse and compared with the present methods such as PA, MAP, K1, and SW. According to the mentioned experiments and the reported in the table 1, BSVD and PA have the same accuracy and better than the other presented methods in the dense data sets. 
But our method yields better results on the sparse data sets, as shown in Table 2. We applied a t-test on the sample of latent dimensions (j) between BSVD and PA to determine whether the difference is statistically significant. The results in Tables 3 and 4 demonstrate that on the sparse data sets, as sparsity increases, our method appears to be significantly better than the other methods. Our method's performance is limited to the presented experiments and data sets. If we want to generalize the method, we need to see the behavior of the algorithm on more complex data sets. Step a: generating the matrices x and y with sizes 6 × 3 and 5 × 3.
In this study, we introduce a novel method that relies on SVD to discover the number of latent dimensions.
1,302
scitldr
Deep learning models are often sensitive to adversarial attacks, where carefully-designed input samples can cause the system to produce incorrect decisions. Here we focus on the problem of detecting attacks, rather than robust classification, since detecting that an attack occurs may be even more important than avoiding misclassification. We build on advances in explainability, where activity-map-like explanations are used to justify and validate decisions, by highlighting features that are involved with a classification decision. The key observation is that it is hard to create explanations for incorrect decisions. We propose EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class. Specifically, we use SHAP, which uses Shapley values in the space of the input image, to identify which input features contribute to a class decision. Interestingly, this approach does not require to modify the attacked model, and it can be applied without modelling a specific attack. It can therefore be applied successfully to detect unfamiliar attacks, that were unknown at the time the detection model was designed. We evaluate EXAID on two benchmark datasets CIFAR-10 and SVHN, and against three leading attack techniques, FGSM, PGD and C&W. We find that EXAID improves over the SoTA detection methods by a large margin across a wide range of noise levels, improving detection from 70% to over 90% for small perturbations. Machine learning systems can be tricked to make incorrect decisions, when presented with samples that were slightly perturbed, but in special, adversarial ways . This sensitivity, by now widely studied, can hurt networks regardless of the application domain, and can be applied without knowledge of the model . Detecting such adversarial attacks is currently a key problem in machine learning. To motivate our approach, consider how most conferences decide on which papers get accepted for publication. Human classifiers, known as reviewers, make classification decisions, but unfortunately these are notoriously noisy. To verify that their decision are sensible, reviewers are also asked to explain and justify their decision. Then, a second classifier, known as an area-chair or an editor, examines the classification, together with the explanation and the paper itself, to verify that the explanation supports the decision. If the justification is not valid, the review may be discounted or ignored. In this paper, we build on a similar intuition: Explaining a decision can reduce misclassification. Clearly, the analogy is not perfect, since unlike human reviewers, for deep models we do not have trustworthy methods to provide high level semantic explanation of decisions. Instead, we study below the effect of using the wider concept of explanation on detecting incorrect decisions, and in particular given adversarial samples that are designed to confuse a classifier. The key idea is that different classes have different explaining features, and that by probing explanations, one can detect classification decisions that are inconsistent with the explanation. For example, if an image is classified as a dog, but has an explanation that gives high weight to a striped pattern, it is more likely that the classification is incorrect. We focus here on the problem of detecting adversarial samples, rather than developing a system that provides robust classifications under adversarial attacks. 
This is because in many cases we are interested to detect that an attack occurs, even if we cannot automatically correct the decision. The key idea in detecting adversarial attacks, is to identify cases where the network behaves differently than when presented with untainted inputs, and previous methods focused on various different aspects of the network to recognize such different behaviours;;;;;;. To detect these differences, here we build on recent work in explainability Lundberg & Lee (2017b). The key intuition is that explainability algorithms are designed to point to input features that are the reason for making a decision. Even though leading explainability methods are still mostly based on high-order correlations and not necessarily identify purely causal features, they often yield features that people identify as causal (a). Explainability therefore operates directly against the aim of adversarial methods, which perturb images in directions that are not causal for a class. The is that detection methods based on explainability holds the promise to work particularly well with adversarial perturbations that lead to nonsensical classification decisions. There is second major reason why using explainable features for adversarial detection is promising. Explainable features are designed to explain the classification decision of a classifier trained on non-modified (normal) data. As a , they are independent of any specific adversarial attack. Some previous methods are based on learning the statistical abnormalities of the added perturbation. This makes them sensitive to the specific perturbation characteristics, which change from one attack method to another, or with change of hyperparameters. Instead, explainability models can be agnostic of the particular perturbation method. The challenge in detecting adversarial attacks becomes more severe when the perturbations of the input samples are small. Techniques like C&W Carlini & Wagner (2017b) can adaptively select the noise level for a given input, to reach the smallest perturbation that causes incorrect classification. It is therefore particularly important to design detection methods that can operate in the regime of small perturbations. Explanation-based detection is inherently less sensitive to the magnitude of the perturbation, because it focuses on those input features that explain a decision for a given class. In this paper we describe an EXAID (EXplAIn-then-Detect), an explanation-based method to detect adversarial attacks. It is designed to capture low-noise perturbations from unknown attacks, by building an explanation model per-class that can be trained without access to any adversarial samples. Our novel contributions are as follows: We describe a new approach to detect adversarial attacks using explainability techniques. We study the effect of negative sampling techniques to train such detectors. We also study the robustness of this approach in the regime of low-noise (small perturbations). Finally, we show that the new detection provides state-of-the-art defense against the three leading attacks (FGSM, PGD, CW) both for known attacks and in the setting of detecting unfamiliar attacks. Explainable AI. Several methods have been recently proposed to address black box decisions of AI systems. LIME is based on locally approximating the model around a given prediction with a simple interpretable model (e.g. a decision tree). 
DeepLIFT uses a modified version of back propagation to compute the contribution of each input feature to the output. SHAP (a) approximates (a) (b) Figure 1: Illustration of EXAID. First, an image is classified by a standard image classification system like ResNet. Then, an explanation is created based on the image, the network activations and the network output. Finally, a detector checks if the generated explanation is consistent with the predicted label. (a) An image of an owl is correctly classified, and the produced explanation is consistent with the label "owl". (b) An image of peacock is perturbed and used as an attack. It is falsely classified as an owl, and is detected as adversarial because its explanation is inconsistent with the predicted label. the Shapley values of input features, which were derived in cooperative game theory to distribute the total gains to the players. Specifically, for explaining deep neural networks, SHAP use a variant of DeepLIFT as a approximation for Shapley values. The literature on adversarial attacks is vast. We focus here on three high-performing adversarial attacks which are relevant for the experiments. Each of the three represents a group of attacks that share the same main idea. This attack by creates a perturbation by "moving" an example one step in the direction of the gradient. Let c be the true class of x and J(C, x, c) be the loss function used to train our deep neural network C. The perturbation is computed as a sign of the model's loss function gradient ∆x = * sign(∇ x J(C, x, c)), where ranging from 0.0 to 1.0. The parameter controls the magnitude perturbation and can be thought as the noise-level of the adversarial sample. Projected Gradient Decent (PGD). suggested to improve FGSM, in the following way. One can interpret FGSM as a one-step scheme for maximizing the inner part of the saddle point formulation. A more powerful adversary will be a multi-step variant, which essentially applies projected gradient descent on the negative loss function Carlini and Wagner (C&W). Carlini & Wagner (2017b) employed an optimization algorithm to seek the smallest perturbation that enables an adversarial example to fool the classifier. As showed in (a), this attack is considered to be one of the most powerful attacks, and therefore is a common baseline. When designing attacks, previous studies took into account various factors: the probability that the attack is successful, the effect on the appearance of a perturbed image, and the time it takes to run the attack. The above three methods prioritize these aspects differently, reaching different tradeoff operating points. Specifically, FGSM is usually faster and the C&W attack yields less-visible perturbation of the input images. Several previous techniques have been proposed to detect adversarial examples. measured the effect of quantization and smoothing of the image on the network classification, both parameterized as a function of image entropy. suggested to reduce the degrees of freedom of the input space by applying transformations like quantization and smoothing, and then compare model predication before and after each transformation. measures the characteristics of the region surrounding a reference example. models the distribution of ac-tivation at the hidden layers of the classifier, using a Gaussian mixture model, and feeds the likelihood to a classifier. models the changes in the labels of a k-nn for each activation layer in the base model. 
propose using a new loss in training, which encourages the neural network to learn latent representations that better distinguish adversarial examples from normal ones. models the statistical robustness of log-odds to perturbations, for normal and adversarial examples. Generally speaking, these methods assume that adversarial examples differ intrinsically from natural images, either in the sample space or because the perturbation affects propagation of activity in the neural network. Some of those methods require modifying the base model. Very recently, described an explanation-based approach to detection, related to the current paper. 3 EXAID: EXplAIn then Detect EXAID consists of two components: Explain. Create per-class explanations for both correct predictions and incorrect ones. Detect. Train a binary classifier to decide if an explanation is consistent with the class decision. These two components are schematically shown in Figure 1. The first step in EXAID implements an explanation model. Given a pretrained classifier that may be attacked, we used an explainability model to extract explanations for every sample classified by the model. The explanation model can take as input the raw input image, as well as the whole base model architecture and weights, and produces an explanation in the terms of the input features. Formally it is a function that maps a sample and a classifier, and its prediction into explanation space E: (x, f theta (x)) → R n, where f theta is a classification model producing a predicted label y = output(f (x)). Since our goal is to learn which explanations are typical for each class, we collected both positive explanations -applying an explanation model to a correct prediction of the network, and negative explanations -corresponding to incorrect predictions of the model. Creating positive explanations E (x i, y i) is usually straight forward, as one simply applies the explanation model on each sample that was correctly classified f (x i) = y i. More care should be given to creating negative explanations. We consider three types of negative explanations: wrong negatives, adversarial negatives and other-class negatives. First, one may collect samples (x i, y i) where the model made an incorrect decision f (x i) = y i, and collect their explanations E (x i, f (y i)). We name these wrong negatives. For models that are well trained, the number of these explanations is small. Furthermore, not all classes are confused by other classes, and only some classes may lead to explanations of some other classes. Second, one can employ an adversarial attack on the training data and collect negative explanations of adversarially perturbed samples. We name them adversarial negatives. As with wrong negatives, these explanations correspond to cases where the model made an incorrect decision, but unlike wrong negatives the explanations may have a different distribution, because the input was designed to confuse the network. Even if the specific type of adversarial attack is not known, these samples may be useful because they are based on fooled decisions and may reflect typical patterns of adversarial examples. However, training against an incorrect attack may cause overfitting to a specific type of attack and hurt detection accuracy. Third, for every labeled sample (x i, y i), we produce explanations E (x i, y) for all incorrect classes y ∈ Y, y = y i. For example, for a car image correctly classified as a car, we produce explanations for classes like dogs and cats. 
These explanations are used as other-class negatives for the correct class y i. As an explainable AI approach we used SHAP deep explainer. As shown in (b) SHAP is considered a leading explainer, providing explanations that have stronger agreement with human explanations than other methods. We therefore believe it is likely to capture the "correct" features by which people make labeling decisions. In addition, Lundberg & Lee (2017b) has shown that SHAP is the only explainer that has both local accuracy and consistency, which are desirable properties. Given a set of positive and negative explanations per class, we train a deep binary classifier per class, to detect explanations that are inconsistent with model predictions. Note that in this settings, it is natural to train a detector as a binary multiclass multi-label classifier, and not as a multiclass classifier, because we wish to condition the decision on the prediction of the image classifier. When training the detector, one may consider two learning setups, aiming to protect against unknown-attacks, or against familiar attacks. It appears as if defending against a known attack would be an easier task, because one may learn the properties of the attack. Unfortunately, since new attacks can be easily designed, it is highly desirable to devise generic defenses. We address this topic by controlling the data that is used for training the detector. Specifically, we consider two variants of EXAID. EXAID familiar. During training, the binary detector is presented with adversarial negatives. It can therefore learn a distribution of explanations ing from a specific adversarial attack. Specifically, we trained using high-noise FGSM. EXAID unknown. The binary detector is not presented with any adversarial negatives during training. The only negative explanation the classifier trained on are other-class negatives and wrong negatives. Below we tested both variants on the known attack (FGSM) and on unfamiliar attacks (PGD, C&W). We evaluated EXAID on two benchmark data sets, in the task of attack detection. Our code will be available at https://github.com/[anonymous-author]/EXAID. We evaluated EXAID on two standard benchmarks: CIFAR10 (Krizhevsky et al.) and SVHN . As Carlini & Wagner (2017a) showed, MNIST is not a good dataset for evaluating adversarial defences. This is probably due to the fact that it is a low-dimension dataset, making it easier to detect changes an attacker made to the image. (a) show their on CIFAR-10. In order to show the validity of our on more than one dataset, we also used SVHN that has similar complexity. The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. The SVHN dataset is obtained from house numbers in Google Street View images. It consists of more than 600,000 32x32 colour images in 10 classes. While similar in flavor to MNIST, it comes from a significantly more diverse distribution. We used the 73,257 digits provided for training and the 26,032 digits for testing. For both CIFAR-10 and SVHN we used a pretrained Resnet34 as a base model. To train the EXAID detector we extract positive explanation, wrong negative and other-class negative from natural images as described in algorithm 1. The EXAID-unknown model was trained on those explanations. To train EXAID-familiar we extracted adversarial negative using a FGSM attack with a noise level of = 0.1. 
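To make the construction of training data and the per-class detector described above concrete, the following is a rough sketch of such a pipeline. It is our own illustration, not the authors' released code: the explanation function is abstracted as a callable (in practice it could wrap a SHAP deep explainer over the base classifier), the detector architecture is an arbitrary scikit-learn MLP, and the exact assignment of negative explanations to detectors is our reading of the text.

import numpy as np
from sklearn.neural_network import MLPClassifier

def build_detector_for_class(class_idx, images, labels, predictions, explain):
    """Train one binary detector that decides whether an explanation is
    consistent with the prediction `class_idx`.
    Positives: explanations of samples correctly predicted as class_idx.
    Negatives: 'wrong' explanations (misclassified as class_idx) and
    'other-class' explanations (explaining class_idx on images of other
    classes). `explain(image, class_idx)` returns a flat explanation vector."""
    pos, neg = [], []
    for img, y, y_hat in zip(images, labels, predictions):
        e = explain(img, class_idx)
        if y == class_idx and y_hat == class_idx:
            pos.append(e)                     # positive explanation
        elif y != class_idx and y_hat == class_idx:
            neg.append(e)                     # wrong negative
        elif y != class_idx:
            neg.append(e)                     # other-class negative
    X = np.vstack(pos + neg)
    y_bin = np.array([1] * len(pos) + [0] * len(neg))
    detector = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200)
    detector.fit(X, y_bin)
    return detector

def is_adversarial(image, predicted_class, explain, detectors, threshold=0.5):
    """Flag the input if its explanation is inconsistent with the prediction."""
    e = explain(image, predicted_class).reshape(1, -1)
    p_consistent = detectors[predicted_class].predict_proba(e)[0, 1]
    return p_consistent < threshold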
As described above, we used SHAP as the explainability model (a). The original implementation of SHAP runs on CPU and is therefore suitable only for a small number of samples, rather than for entire datasets. To enable us to run SHAP on a large number of examples, we modified the original implementation to run on GPU. The new version is available in our Git repository. We compared EXAID with three recently-published adversarial detection baselines, and two new variants of these baselines. ANR. A method based on measuring the effect of quantization and smoothing of the image on the network classification, both parameterized as a function of the image entropy. We used the implementation provided by the authors. Since ANR was not tested in the original paper on CIFAR-10 and SVHN as done here, we tuned the hyperparameters of their method using hyperopt. Mahalanobis. This approach models the distribution of activations in the hidden layers of the classifier, as obtained in response to natural (unperturbed) samples, using a Gaussian mixture model. Given a set of likelihood scores from the GMM, a classifier is trained to determine if a set of activations is obtained in response to an adversarial example or a natural one. That classifier is trained on adversarial examples. We used the implementation provided by the authors, and as in the original paper, we trained the classifier with adversarial examples crafted by FGSM. Mahalanobis Unsupervised. We modified the Mahalanobis method to obtain an attack-agnostic baseline as follows: instead of training an attack-dependent discriminator on adversarial samples, we estimated the likelihood of a set of network activations as the product of the likelihoods of all layers. LID. LID measures the characteristics of the region surrounding a reference example and gives it a likelihood score. This is done separately for each representation of the example in the classifier's hidden layers. As in Mahalanobis, a classifier is trained to determine if a set of activations is obtained in response to an adversarial example or a natural one. We used the implementation from the original work, and trained the classifier with adversarial examples crafted by FGSM. We test the detection models against oblivious adversaries, an attack scenario in which the attacker has full knowledge of the model (white box attack), but is not aware of the existence of the defense model. We believe this is a relevant scenario, since in the real world most attackers will not have direct access to the attacked model and its defense. In this case, the attacker will be forced to use a black box attack. However, as prior work showed, adversarial examples are transferable between models. Given transferability, attacking a black box model is not substantially harder than attacking a white box model. Because of that, we benchmark our model against white box attacks, which are harder to detect. This is not the case when the model is defended, since prior work shows that adversarial examples transfer well between vanilla neural networks, but fail to transfer between defended neural networks.
We therefore repeated all experiments for a wide range of noise levels and report performance across that wide range. We used three attack methods to test EXAID: One step gradient attack (FGSM) , Iterative projected gradient (PGD) and the Carlini and Wagner attack, which uses optimization to add as small as possible perturbation (C&W) (b). All attacks were implemented using Advertorch . Opposed to other defense methods benchmarks, we examined the effect of noise-level on a range of three orders of magnitude. The for all detection methods are shown in figure 2. EXAID significantly outperforms the other methods when the noise level is small (small perturbations), and with attack methods that use adaptive noise levels (C&W). Typically, the AUC is increased from 70% to over 90%. LID and Mahalanobis both perform well in high noise scenarios, and slightly outperform EXAID on SVHN in these scenarios. However, when the noise level decreases LID and Mahalanobis performance suffers drastically, while EXAID's remains high. Interestingly, our unsupervised variant of LID, performs at least as well, and sometimes better, than the original LID. This may be because LID was trained with FGSM samples and may deteriorate in cross-attack scenarios. These findings show the importance of benchmarking defense models against a wide range of noise levels. In this paper we proposed EXAID, a novel attack-detection approach, which uses model explainability to identify images whose explanations are inconsistent with the predicted class. Our method outperforms previous state-of-the-art methods, for three attack methods, and many noise-levels. We demonstrated that the attack noise level has a major impact on previous defense methods. We hope this will encourage the research community to evaluate future defense methods on a large range of noise-levels.
A novel adversarial detection approach, which uses explainability methods to identify images whose explanations are inconsistent with the predicted class.
1,303
scitldr
We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a “latent” space, amounting to a reparameterization. This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training. Classification accuracy and model compressibility is maximized jointly, with the bitrate--accuracy trade-off specified by a hyperparameter. We evaluate the method on the MNIST, CIFAR-10 and ImageNet classification benchmarks using six distinct model architectures. Our show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training. Artificial neural networks (ANNs) have proven to be highly successful on a variety of tasks, and as a , there is an increasing interest in their practical deployment. However, ANN parameters tend to require a large amount of space compared to manually designed algorithms. This can be problematic, for instance, when deploying models onto devices over the air, where the bottleneck is often network speed, or onto devices holding many stored models, with only few used at a time. To make these models more practical, several authors have proposed to compress model parameters (; ; ;). While other desiderata often exist, such as minimizing the number of layers or filters of the network, we focus here simply on model compression algorithms that 1. minimize compressed size while maintaining an acceptable classification accuracy, 2. are conceptually simple and easy to implement, and 3. can be scaled easily to large models. Classic data compression in a Shannon sense requires discrete-valued data (i.e., the data can only take on a countable number of states) and a probability model on that data known to both sender and receiver. Practical compression algorithms are often lossy, and consist of two steps. First, the data is subjected to (re-)quantization. Then, a Shannon-style entropy coding method such as arithmetic coding is applied to the discrete values, bringing them into a binary representation which can be easily stored or transmitted. Shannon's source coding theorem establishes the entropy of the discrete representation as a lower bound on the average length of this binary sequence (the bit rate), and arithmetic coding achieves this bound asymptotically. Thus, entropy is an excellent proxy for the expected model size. The type of quantization scheme affects both the fidelity of the representation (in this case, the precision of the model parameters, which in turn affects the prediction accuracy) as well as the bit rate, since a reduced number of states coincides with reduced entropy. ANN parameters are typically represented as floating point numbers. While these technically have a finite (but large) number of states, the best in terms of both accuracy and bit rate are typically achieved for a significantly reduced number of states. Existing approaches to model compression often acknowledge this by quantizing each individual linear filter coefficient in an ANN to a small number of pre-determined values (; ; F.). This is known as scalar quantization (SQ). Other methods explore vector quantization (VQ), which is closely related to k-means clustering, in which each vector of filter coefficients is quantized jointly (Chen, J. ;). 
This is equivalent to enumerating a finite set of representers (representable vectors), while in SQ the set of representers is given by the Kronecker product of representable scalar elements.
Figure 1: Visualization of representers in scalar quantization vs. reparameterized quantization. The axes represent two different model parameters (e.g., linear filter coefficients). Small black dots are samples of the model parameters, red and blue discs are the representers. Left: in scalar quantization, the representers must be given by a Kronecker product of scalar representers along the cardinal axes, even though the distribution of samples may be skewed. Right: in reparameterized scalar quantization, the representers are still given by a Kronecker product, but in a transformed (here, rotated) space. This allows a better adaptation of the representers to the parameter distribution.
VQ is much more general than SQ, in the sense that representers can be placed arbitrarily: if the set of useful filter vectors all live in a subset of the entire space, there is no benefit in having representers outside of that subset, which may be unavoidable with SQ (Figure 1, left). Thus, VQ has the potential to yield better results, but it also suffers from the "curse of dimensionality": the number of necessary states grows exponentially with the number of dimensions, making it computationally infeasible to perform VQ for much more than a handful of dimensions. One of the key insights leading to this paper is that the strengths of SQ and VQ can be combined by representing the data in a "latent" space. This space can be an arbitrary rescaling, rotation, or otherwise warping of the original data space. SQ in this space, while making quantization computationally feasible, can provide substantially more flexibility in the choice of representers compared to the SQ in the data space (Figure 1, right). This is in analogy to recent image compression methods based on autoencoders (Ballé, ;). The contribution of this paper is two-fold. First, we propose a novel end-to-end trainable model compression method that uses scalar quantization and entropy penalization in a reparameterized space of model parameters. The reparameterization allows us to use efficient SQ, while achieving flexibility in representing the model parameters. Second, we provide state-of-the-art results on a variety of network architectures on several datasets. This demonstrates that more complicated strategies involving pretraining, multi-stage training, sparsification, adaptive coding, etc., as employed by many previous methods, are not necessary to achieve good performance. Our method scales to modern large image datasets and neural network architectures such as ResNet-50 on ImageNet. We consider the classification setup, where we are given a dataset D = {(x_1, y_1), ..., (x_N, y_N)} consisting of pairs of examples x_i and corresponding labels y_i. We wish to minimize the expected negative log-likelihood on D, or cross-entropy classification loss, over the set of model parameters Θ: L_cls(Θ) = E_{(x,y)∈D}[−log p(y | x; Θ)], where p(y | x; Θ) is the likelihood our model assigns to a dataset sample (x, y). The likelihood function is implemented using an ANN with parameters Θ = {W_1, b_1, ..., W_K, b_K}, where W_k and b_k denote the weight (including convolutional) and bias terms at layer k, respectively. Compressing the model amounts to compressing each parameter in the set Θ. Instead of compressing each parameter directly, we compress reparameterized forms of them.
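To make the "SQ in a transformed space" intuition of Figure 1 concrete, here is a small numpy sketch we added (not from the paper): it compares per-dimension entropy of scalar-quantized parameters in the original space versus in a rotated space. The rotation here is simply the PCA basis of synthetic, correlated samples; the paper instead learns the transform jointly with the task.

import numpy as np

def sum_marginal_entropy_bits(indices):
    # Sum of per-dimension empirical entropies, matching a factorized entropy model.
    total = 0.0
    for col in indices.T:
        _, counts = np.unique(col, return_counts=True)
        p = counts / counts.sum()
        total += float(-(p * np.log2(p)).sum())
    return total

rng = np.random.default_rng(0)
# Correlated 2-D "model parameter" samples (a skewed cloud, as in Figure 1).
theta = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.9], [0.0, 0.3]])
step = 0.25

# SQ in the original parameter space.
idx_sq = np.round(theta / step).astype(int)

# SQ in a rotated latent space (here the PCA basis; a learned transform in the paper).
_, vecs = np.linalg.eigh(np.cov(theta, rowvar=False))
latent = theta @ vecs
idx_rsq = np.round(latent / step).astype(int)

print("bits/sample, SQ in parameter space:", sum_marginal_entropy_bits(idx_sq))
print("bits/sample, SQ in rotated space:  ", sum_marginal_entropy_bits(idx_rsq))

With the same step size, quantizing in the decorrelated basis needs noticeably fewer bits per sample, which is the rate advantage that reparameterized SQ is after.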
To be precise, we introduce reparameterizations Φ of the model parameters and corresponding parameter decoders f_conv, f_dense, and f_bias, such that each weight and bias tensor in Θ is obtained by applying the appropriate decoder to its reparameterization.
Figure 2: Classifier architecture. The Φ tensors (annotated with a tilde) are stored in their compressed form. During inference, they are read from storage, uncompressed, and transformed via f into Θ, the usual parameters of a convolutional or dense layer (denoted without a tilde).
Figure 3: The internals of f_conv and f_dense in our experiments for layer k, annotated with the dimensionalities. In f_conv, H, W, I, O refer to the convolutional height, width, input channel, output channel, respectively. For f_dense, I and O refer to the number of input and output activations. For f_conv, we use an affine transform, while for f_dense we use a scalar shift and scale, whose parameters are captured in Ψ. Note that in both cases, the number of parameters of f itself (labeled as ψ) is significantly smaller than the size of the model parameters it decodes.
We can think of each parameter decoder f as a mapping from reparameterization space to parameter space. For ease of notation, we write F = {f_conv, f_dense, f_bias} and Θ = F(Φ). The parameter decoders themselves may have learnable parameters, which we denote Ψ. Our method is visually summarized in figures 2 and 3. A central component of our approach is partitioning the set of model parameters into groups. For the purpose of creating a model compression method, we interpret entire groups of model parameters as samples from the same learned distribution. We define a fully factorized distribution q(Φ) = ∏_{φ∈Φ} q_φ(φ), and introduce parameter sharing within the factors q_φ of the distribution that correspond to the same group, as well as within the corresponding decoders. These group assignments are fixed a priori. For instance, in figure 2, W_1 and W_2 can be assumed to be samples of the same distribution, that is q_{W_1}(·) = q_{W_2}(·). To be consistent, we also use the same parameter decoder f_conv to decode them. Further, each of the reparameterizations φ is defined as a rank-2 tensor (a matrix), where each row corresponds to a "sample" from the learned distribution. The operations in f apply the same transformation to each row (figure 3). As an example, in f_conv, each spatial H × W matrix of filter coefficients is assumed to be a sample from the same distribution. We describe how this aids in compression in the following section. Our method can be applied analogously to various model partitionings. In fact, in our experiments, we vary the size of the groups, i.e., the number of parameters assumed i.i.d., depending on the total number of parameters of the model (Θ). The size of the groups parameterizes a trade-off between compressibility and overhead: if groups consisted of just one scalar parameter each, compressibility would be maximal, since q would degenerate (i.e., would capture the value of the parameter with certainty). However, the overhead would be maximal, since F and q would have a large number of parameters that would need to be included in the model size (defeating the purpose of compression). On the other hand, encoding all parameters of the model with one and the same decoder and scalar distribution would minimize overhead, but may be overly restrictive by failing to capture distributional differences amongst all the parameters, and hence lead to suboptimal compressibility. We describe the group structure of each network that we use in more detail in the experiments section.
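A minimal sketch of what such parameter decoders could look like in PyTorch follows; the class names, initializations, and exact tensor layouts are our assumptions, not the paper's implementation, but the structure mirrors the description above (a scalar shift/scale for f_dense, a row-wise affine transform for f_conv, and one decoder shared by a whole parameter group).

import torch
import torch.nn as nn

class DenseDecoder(nn.Module):
    # f_dense: a scalar shift and scale mapping an integer-valued latent matrix
    # to the weights of a fully connected layer (decoder parameters live in Psi).
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(0.01))
        self.shift = nn.Parameter(torch.tensor(0.0))

    def forward(self, phi):                    # phi: (I, O) latent matrix
        return phi * self.scale + self.shift

class ConvDecoder(nn.Module):
    # f_conv: an affine transform applied row-wise; each row of phi is one
    # H*W spatial kernel slice, decoded by the same small matrix A and bias c.
    def __init__(self, ell):                   # ell = H * W
        super().__init__()
        self.A = nn.Parameter(torch.eye(ell) * 0.01)
        self.c = nn.Parameter(torch.zeros(ell))

    def forward(self, phi, H, W, I, O):        # phi: (I*O, H*W)
        kernels = phi @ self.A + self.c
        return kernels.view(I, O, H, W).permute(2, 3, 0, 1)   # -> (H, W, I, O)

# One decoder shared by a whole parameter group, e.g. all 3x3 conv layers.
f_conv = ConvDecoder(ell=9)
phi_w1 = torch.randn(16 * 32, 9).round()       # stand-in integer latent for one layer
W1 = f_conv(phi_w1, H=3, W=3, I=16, O=32)
print(W1.shape)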
In order to apply a Shannon-style entropy coder efficiently to the reparameterizations Φ, we need a discrete alphabet of representers and associated probabilities for each representer. Rather than handling an expressive set of representers, as in VQ, we choose to fix them to the integers, and achieve expressivity via the parameter decoders F instead. Each φ ∈ Z^{d×ℓ} is a matrix interpreted as consisting of d samples from a discrete probability distribution producing vectors of dimension ℓ. We fit a factorized probability model to each column i of φ, using different probability models q_i for each corresponding parameter decoder (the form of q_i is described in the next section). Fitting of probability models is typically done by minimizing the negative log-likelihood. Assuming φ follows the distribution q, Shannon's source coding theorem states that the minimal length of a bit sequence encoding φ is the self-information of φ under q, I(φ) = −∑_{j,i} log_2 q_i(φ_{j,i}), which is identical to Shannon cross entropy up to an expectation operator, and identical to the negative log likelihood up to a constant factor. By minimizing I over q and φ during training, we thus achieve two goals: 1) we fit q to the model parameters in a maximum likelihood sense, and 2) we directly optimize the parameters for compressibility. After training, we design an arithmetic code for q, and use it to compress the model parameters. This method incurs only a small overhead over the theoretical bound due to the finite length of the bit sequence (arithmetic coding is asymptotically optimal). Practically, the overhead amounts to less than 1% of the size of the bit sequence; thus, self-information is an excellent proxy for model size. Further overhead results from including a description of Ψ, the parameters of the parameter decoders, as well as of q itself (in the form of a table) in the model size. However, these can be considered constant and small compared to the total model size, and thus do not need to be explicitly optimized for. The overall loss function is simply the additive combination of the original cross-entropy classification loss under reparameterization with the self-information of all reparameterizations: L = L_cls(F(Φ)) + λ ∑_{φ∈Φ} I(φ). We refer to the second term (excluding the constant λ) as the rate loss. By varying λ across different experiments, we can explore the Pareto frontier of compressed model size vs. model accuracy. To compare our method to other work, we varied λ such that our method produced similar accuracy, and then compared the resulting model size. Since Φ is discrete-valued, we need to make some further approximations in order to optimize L over it using stochastic gradient descent. To get around this, we maintain continuous surrogates Φ̃. For optimizing the classification loss, we use the "straight-through" gradient estimator, which provides a biased gradient estimate but has shown good results in practice. This consists of rounding the continuous surrogate to the nearest integer during training, and ignoring the rounding for purposes of backpropagation. After training, we only keep the discretized values. In order to obtain good estimates for both the rate term and its gradient during training, we adopt a relaxation approach previously described in (Ballé,); the code is provided as an open source library. In a nutshell, the method replaces the probability mass functions q_i with a set of non-parametric continuous density functions, which are based on small ANNs. These density models are fitted to φ̃_{j,i} + n_{j,i}, where n_{j,i} ∼ U(−1/2, 1/2) is i.i.d.
uniformly distributed additive noise. This turns out to work well in practice, because the negative log likelihood of these noise-affected variates under the continuous densities approximates the self-information I, i.e., I(φ) ≈ −∑_{j,i} log_2 q̃_i(φ̃_{j,i} + n_{j,i}), where the q̃_i denote the density functions. Once the density models are trained, the values of the probability mass functions modeling φ are derived from the substitutes q̃_i and stored in a table, which is included in the model description. The parameters of q̃_i are no longer needed after training. For our MNIST and CIFAR-10 experiments, we evaluate our method by applying it to four distinct image classification networks: LeNet300-100 and LeNet-5-Caffe on MNIST, and VGG-16 and ResNet-20 (b;) with width multiplier 4 (ResNet-20-4) on CIFAR-10. For our ImageNet experiments, we evaluate our method on the ResNet-18 and ResNet-50 (a) networks. We train all our models from scratch and compare them with recent state-of-the-art methods by quoting performance from their respective papers. Compared to many previous approaches, we do not initialize the network with pre-trained or pre-sparsified weights. We found it useful to use two separate optimizers: one to optimize the variables of the probability model and one to optimize the variables of the network. The optimizer for the probability model is always Adam with a learning rate of 0.0001. We chose to always use Adam because the parameter updates used by Adam are independent of any scaling of the objective (when its ε hyper-parameter is sufficiently small). In our method, the probability model variables only get gradients from the entropy loss, which is scaled by the rate penalty λ. Adam normalizes out this scale and makes the learning rate of the probability model independent of λ and of other hyperparameters such as the model partitioning. We apply our method to two LeNet variants: LeNet300-100 and LeNet5-Caffe, and report results in Table 1. We train the networks using Adam with a constant learning rate of 0.001 for 200,000 iterations. To remedy some of the training noise from quantization, we maintain an exponential moving average (EMA) of the weights and evaluate using those. Note that this does not affect the quantization, as quantization is performed after the EMA variables are restored. LeNet300-100 consists of 3 fully connected layers. We partitioned this network into three parameter groups: one for the first two fully connected layers, one for the classifier layer, and one for biases. LeNet5-Caffe consists of two 5×5 convolutional layers followed by two fully connected layers, with max pooling following each convolutional layer. We partitioned this network into four parameter groups: one for both of the convolutional layers, one for the penultimate fully connected layer, one for the final classifier layer, and one for the biases. As evident from Table 1, for the larger LeNet300-100 model, our method outperforms all the baselines while maintaining a comparable error rate. For the smaller LeNet5-Caffe model, our method is second only to Minimal Random Code Learning. Note that in both of the MNIST models, the number of probability distributions ℓ = 1 in every parameter group, including in the convolutional layers. To be precise, the reparameterization for the convolutional weights W_k is then of shape H · W · I · O × 1. We found that this gives a better trade-off, since the model is small to begin with, and having ℓ = 5 · 5 = 25 scalar probability models for 5 × 5 convolutional layers would have too much overhead.
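To make the training recipe above concrete — straight-through rounding of the continuous surrogates, plus a rate penalty on noise-perturbed latents scaled by λ — here is a minimal PyTorch sketch we added. The Gaussian density and the placeholder task loss are purely illustrative stand-ins for the learned ANN-based density models and the actual classifier.

import torch

def ste_round(x):
    # Straight-through estimator: round in the forward pass, identity gradient
    # in the backward pass (the detach() blocks the gradient of the rounding).
    return x + (torch.round(x) - x).detach()

# Continuous surrogates of the integer reparameterization.
phi_tilde = torch.randn(128, 9, requires_grad=True)

# Stand-in density model q~ (the paper fits small ANN-based densities instead).
q_tilde = torch.distributions.Normal(0.0, 2.0)

# Classification branch: decode the rounded latents (decoder omitted here).
phi_hat = ste_round(phi_tilde)
classification_loss = phi_hat.pow(2).mean()        # placeholder for the task loss

# Rate branch: negative log2-likelihood of noise-perturbed latents.
noisy = phi_tilde + (torch.rand_like(phi_tilde) - 0.5)
rate_loss = -q_tilde.log_prob(noisy).sum() / torch.log(torch.tensor(2.0))

lam = 0.01
loss = classification_loss + lam * rate_loss
loss.backward()                                    # gradients reach phi_tilde from both terms
print(float(loss))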
For both of the MNIST models, we found that letting each subcomponent of F be a simple dimension-wise scalar affine transform (similar to f_dense in figure 3) was sufficient. Since each φ is quantized to integers, having a flexible scale and shift leads to flexible SQ, similar to prior work. Due to the small size of the networks, more complex transformation functions lead to too much overhead. We apply our method to VGG-16 and ResNet-20-4 (b;) and report the results in Table 1. For both VGG-16 and ResNet-20-4, we use momentum of 0.9 with an initial learning rate of 0.1, and decay by 0.2 at iterations 256,000, 384,000, and 448,000 for a total of 512,000 iterations. This learning rate schedule was fixed from the beginning and was not tuned in any way other than verifying that our models' training loss had converged. VGG-16 consists of 13 convolutional layers of size 3 × 3 followed by 3 fully connected layers. We split this network into four parameter groups: one for all convolutional layers and one for each of the fully connected layers. We do not compress biases. We found that our biases in float32 format add up to about 20 KB, and we add that to our reported numbers. ResNet-20-4 consists of 3 ResNet groups with 3 residual blocks each. There is also an initial convolution layer and a final fully connected classification layer. We partition this network into two parameter groups: one for all convolutional layers and one for the final classification layer. We also do not compress biases but include them in our results; they add up to about 11 KB. For VGG-16 and ResNet-20-4 convolutions, each reparameterization has d = O × I rows of dimension ℓ = H × W = 9; f_conv and f_dense are exactly as pictured in figure 3. To speed up training, we fixed ψ_W. We found that the inverse real-valued discrete Fourier transform (DFT) performs much better than SQ, or any random orthogonal matrix (Figure 4). From the error vs. rate plots, the benefit of reparameterization in the high compression regime is evident. VGG-16 and ResNet-20-4 both contain batch normalization layers that include a moving average for the mean and variance. Following prior work, we do not include the moving averages in our reported numbers. We do, however, include the batch normalization bias term β and let it function as the bias for each layer (γ is set to a constant 1). For the ImageNet dataset, we reproduce the training setup and hyperparameters from He et al. (2016a). All 3x3 convolutional layers belong to a single parameter group, similar to our CIFAR experiments; 1x1 convolutional layers belong to a single group (applicable to ResNet-50), and all the remaining layers are in their own groups. This gives a total of 4 parameter groups for ResNet-50 and 3 groups for ResNet-18. Analogously to the CIFAR experiments, we compare SQ to using random orthogonal or DFT matrices for reparameterizing the convolution kernels (figure 4a). Existing model compression methods are typically built on a combination of pruning, quantization, or coding. Pruning involves sparsifying the network either by removing individual parameters or higher level structures such as convolutional filters, layers, activations, etc. Various strategies for pruning weights include looking at the Hessian or just their p-norm. Some works focus on pruning individual units, and others prune convolutional filters.
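As an aside on the fixed reparameterization transforms compared above for the CIFAR models (SQ vs. random orthogonal vs. DFT matrices), a rough numpy sketch of decoding 3×3 kernels through a fixed orthogonal Fourier-type basis could look as follows. The DCT-style basis used here, the latent value range, and the scalar scale are our assumptions, not the paper's exact "inverse real DFT" construction.

import numpy as np

def orthonormal_dct_basis(n):
    # An orthonormal DCT-II basis, used as a fixed Fourier-type orthogonal
    # decoding matrix; the paper's exact construction may differ.
    idx = np.arange(n)
    B = np.cos(np.pi * (idx[:, None] + 0.5) * idx[None, :] / n) * np.sqrt(2.0 / n)
    B[:, 0] /= np.sqrt(2.0)
    return B

ell = 9                                     # 3x3 spatial kernels, ell = H * W
B = orthonormal_dct_basis(ell)              # fixed, i.e. psi_W is not trained
rng = np.random.default_rng(0)
phi = rng.integers(-4, 5, size=(64 * 128, ell)).astype(float)   # integer latents
scale = 0.05                                # a learned scalar in a real setup
kernels = (phi * scale) @ B.T               # decode each row into a spatial kernel
W = kernels.reshape(64, 128, 3, 3)          # (I, O, H, W) convolution weights
print(np.allclose(B.T @ B, np.eye(ell)), W.shape)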
Table 1 (excerpt; each row gives the method, the compressed model size with the compression ratio in parentheses, and the error rate):
LeNet300-100 (MNIST): — 18.2 KB (58x), 1.8%; Bayesian Compression (GHS) 18.0 KB (59x), 2.0%; Sparse Variational Dropout 9.38 KB (113x), 1.8%; Our Method (SQ) 8.56 KB (124x), 1.9%.
LeNet5-Caffe (MNIST): Uncompressed 1.72 MB, 0.7%; Sparse Variational Dropout 4.71 KB (365x), 1.0%; Bayesian Compression (GHS) 2.23 KB (771x), 1.0%; Minimal Random Code Learning 1.52 KB (1110x), 1.0%; Our Method (SQ) 2.84 KB (606x), 0.9%.
VGG-16 (CIFAR-10): Uncompressed 60 MB, 6.6%; Bayesian Compression 525 KB (116x), 9.2%; DeepCABAC 960 KB (62.5x), 9.0%; Minimal Random Code Learning 417 KB (159x), 6.6%; Minimal Random Code Learning 168 KB (452x), 10.0%; Our Method (DFT) 101 KB (590x), 10.0%.
ResNet-20-4 (CIFAR-10): — 6.46 MB (16x), 26.0%; DeepCABAC 6.06 MB (17x), —.
Some of the methods that we compare to in our compression experiments also prune parts of the network. Other work describes a dimensionality reduction technique specialized for CNN architectures. Pruning is a simple approach to reduce memory requirements as well as computational complexity, but doesn't inherently tackle the problem of efficiently representing the parameters that are left. Here, we primarily focus on the latter: given a model architecture and a task, we're interested in finding a set of parameters which can be described in a compact form and yield good prediction accuracy. Our work is largely orthogonal to the pruning literature, and could be combined if reducing the number of units is desired. Quantization involves restricting the parameters to a small set of unique values. There is work in binarizing or ternarizing networks (; F. ;) via either straight-through gradient approximation or stochastic rounding. Other work introduced a new differentiable quantization procedure that relaxes quantization. We use the straight-through heuristic, but could possibly use other stochastic approaches to improve our methods. While most of these works focus on uniform quantization, some also extend to non-uniform quantization, which our generalized transformation function amounts to. Some methods share weights and quantize by clustering; Chen, J. randomly enforce weight sharing, and thus effectively perform VQ with a pre-determined assignment of parameters to representers. Other works also make the observation that representing weights in the frequency domain helps compression; Chen, J. T. randomly enforce weight sharing in the frequency domain and use K-means clustering in the frequency domain. Coding (entropy coding, or Shannon-style compression) methods produce a bit sequence that can allow convenient storage or transmission of a trained model. This generally involves quantization as a first step, followed by methods such as Huffman coding, arithmetic coding, etc. Entropy coding methods exploit a known probabilistic structure of the data to produce optimized binary sequences whose length ideally closely approximates the cross entropy of the data under the probability model. In many cases, authors represent the quantized values directly as binary numbers with few digits (; F. ;), which effectively leaves the probability distribution over the values unexploited for minimizing model size; others do exploit it. Some authors formulate model compression with an entropy constraint, but use (non-reparameterized) scalar quantization. Their model significantly underperforms all the state-of-the-art models that we compare with (Table 1). Some recent work has claimed improved compression performance by skipping quantization altogether. Our work focuses on coding with quantization. One well-known method was defined using a four-stage training process: 1. training the original network, 2. pruning and re-training, 3.
quantization and re-training, and 4. entropy coding. This approach has influenced many follow-up publications. In the same vein, many current high-performing methods have significant complexity in implementation or require a multi-stage training process. requires several stages of training and retraining while keeping parts of the network fixed. require pre-sparsification of the network, which is computationally expensive, and use a more complex (context-adaptive) variant of arithmetic coding which may be affected by MPEG patents. These complexities can prevent methods from scaling to larger architectures or decrease their practical usability. In contrast, our method requires only a single training stage followed by a royalty-free version of arithmetic coding. In addition, we commit to releasing the source code of our method for easy reproducibility (upon publication). Our method has parallels to recent work in learned image compression (Ballé, ;) that uses end-to-end trained deep models for significant performance improvements in lossy image compression. These models operate in an autoencoder framework, where scalar quantization is applied in the latent space. Our method can be viewed as having just a decoder that is used to transform the latent representation into the model parameters, but no encoder. We describe a simple model compression method built on two ingredients: joint (i.e., end-to-end) optimization of compressibility and classification performance in only a single training stage, and reparameterization of model parameters, which increases the flexibility of the representation over scalar quantization, and is applicable to arbitrary network architectures. We demonstrate that stateof-the-art model compression performance can be achieved with this simple framework, outperforming methods that rely on complex, multi-stage training procedures. Due to its simplicity, the approach is particularly suitable for larger models, such as VGG and especially ResNets. In future work, we may consider the potential benefits of even more flexible (deeper) parameter decoders.
An end-to-end trainable model compression method optimizing accuracy jointly with the expected model size.
1,304
scitldr
Neural networks can converge faster with help from a smarter batch selection strategy. In this regard, we propose Ada-Boundary, a novel adaptive-batch selection algorithm that constructs an effective mini-batch according to the learning progress of the model. Our key idea is to present confusing samples, for which the model is uncertain about the true label. Thus, the samples near the current decision boundary are considered as the most effective to expedite convergence. Taking advantage of our design, Ada-Boundary maintains its dominance in various degrees of training difficulty. We demonstrate the advantage of Ada-Boundary by extensive experiments using two convolutional neural networks for three benchmark data sets. The experiment results show that Ada-Boundary improves the training time by up to 31.7% compared with the state-of-the-art strategy and by up to 33.5% compared with the baseline strategy. Deep neural networks (DNNs) have achieved remarkable performance in many fields, especially in computer vision and natural language processing BID13 BID5. Nevertheless, as the size of data grows very rapidly, the training step via stochastic gradient descent (SGD) based on mini-batches suffers from extremely high computational cost, which is mainly due to slow convergence. The common approaches for expediting convergence include some SGD variants BID28 BID11 that maintain individual learning rates for parameters and batch normalization BID9 that stabilizes gradient variance. Recently, in favor of the fact that not all samples have an equal impact on training, many studies have attempted to design sampling schemes based on the sample importance (BID25; BID3; et al., 2017; BID10). FIG1 illustrates the issue of sample hardness at the training accuracy of 60%: an easy data set (MNIST) does not have "too hard" samples but only "moderately hard" samples colored in gray, whereas a relatively hard data set (CIFAR-10) has many "too hard" samples colored in black; FIG1(b) shows the result of SGD on a hard batch, where the moderately hard samples are informative to update a model, but the too hard samples make the model overfit to themselves. Curriculum learning (BID0), inspired by human learning, is one of the representative methods to speed up the training step by gradually increasing the difficulty level of training samples. In contrast, deep learning studies focus on giving higher weights to harder samples during the entire training process. When the model requires a lot of epochs for convergence, it is known to converge faster with the batches of hard samples rather than randomly selected batches BID21 BID17 BID4. There are various criteria for judging the hardness of a sample, e.g., the rank of the loss computed from previous epochs BID17. Here, a natural question arises: Does the "hard" batch selection always speed up DNN training? Our answer is partially yes: it is helpful only when training an easy data set. According to our in-depth analysis, as demonstrated in FIG1(a), the hardest samples in a hard data set (e.g., CIFAR-10) were too hard to learn. They are highly likely to make the decision boundary biased towards themselves, as shown in FIG1(b). On the other hand, in an easy data set (e.g., MNIST), the hardest samples, though they are just moderately hard, provide useful information for training. In practice, it was reported that hard batch selection succeeded in speeding up training only on the easy MNIST data set BID17 BID4, and our experiments in Section 4.4 also confirmed the previous findings. This limitation calls for a new sampling scheme that supports both easy and hard data sets.
In this paper, we propose a novel adaptive batch selection strategy, called Ada-Boundary, that accelerates training and is better generalized to hard data sets. As opposed to existing hard batch selection, Ada-Boundary picks up the samples with the most appropriate difficulty, considering the learning progress of the model. The samples near the current decision boundary are selected with high probability, as shown in FIG3(a). Intuitively speaking, the samples far from the decision boundary are not that helpful since they are either too hard or too easy: those on the incorrect (or correct) side are too hard (or easy). This is the reason why we regard the samples around the decision boundary, which are moderately hard, as having the appropriate difficulty at the moment. Overall, the key idea of Ada-Boundary is to use the distance of a sample to the decision boundary as the hardness of the sample. The beauty of this design is that it does not require human intervention. The current decision boundary should be directly influenced by the learning progress of the model. The decision boundary of a DNN moves towards eliminating the incorrect samples as the training step progresses, so the difficulty of the samples near the decision boundary gradually increases as the model is learned. Then, the decision boundary keeps being updated to identify the confusing samples in the middle of SGD, as illustrated in FIG3(b). This approach is able to accelerate the convergence speed by providing the samples suited to the model at every SGD iteration, while it is less prone to incur an overfitting issue. We have conducted extensive experiments to demonstrate the superiority of Ada-Boundary. Two popular convolutional neural network (CNN) models are trained using three benchmark data sets. Compared to random batch selection, Ada-Boundary significantly reduces the execution time by 14.0-33.5%. At the same time, it provides a relative improvement of test error by 7.34-14.8% in the final epoch. Moreover, compared to the state-of-the-art hard batch selection BID17, Ada-Boundary reduces the execution time by 18.0% and the test error by 13.7% in the CIFAR-10 data set.
2 Ada-Boundary COMPONENTS
The main challenge for Ada-Boundary is to evaluate how close a sample is to the decision boundary. In this section, we introduce a novel distance measure and present a method of computing the sampling probability based on the measure. To evaluate the sample's distance to the decision boundary, we note that the softmax distribution, which is the output of the softmax layer in neural networks, clearly distinguishes how confidently the learner predicts and whether the prediction is right or wrong, as demonstrated in FIG5. If the prediction probability of the true label is the highest, the prediction is correct; otherwise, incorrect. If the highest probability dominates the distribution, the model's confidence is strong; otherwise, weak. Let h(y|x_i; θ_t) be the softmax distribution of a given sample x_i over y ∈ {1, 2, ..., k} labels, where θ_t is the parameter of a neural network at time t. Then, the distance from a sample x_i with the true label y_i to the decision boundary of the neural network with θ_t is defined by the directional distance function given below. More specifically, the function consists of two terms related to the direction and magnitude of the distance, determined by the model's correctness and confidence, respectively.
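As a small illustration of the directional distance just described (a sketch we added; the exact equation form in the paper may differ in notation), the sign encodes correctness and the magnitude is the standard deviation of the softmax output:

import numpy as np

def directional_distance(softmax_probs, true_label):
    # Directional distance to the decision boundary: sign from correctness
    # (argmax equals the true label or not), magnitude from the softmax std.
    correct = 1.0 if int(np.argmax(softmax_probs)) == true_label else -1.0
    return correct * float(np.std(softmax_probs))

# Confident & correct -> large positive; confused -> near zero; confident & wrong -> large negative.
print(directional_distance(np.array([0.96, 0.02, 0.01, 0.01]), 0))
print(directional_distance(np.array([0.27, 0.26, 0.24, 0.23]), 0))
print(directional_distance(np.array([0.02, 0.96, 0.01, 0.01]), 0))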
The correctness is determined by verifying whether the label with the highest probability matches the true label y_i, and the confidence is computed by the standard deviation of the softmax distribution. Intuitively, the standard deviation is a nice indicator of the confidence because the value gets closer to zero when the learner is confused. That is, dist(x_i, y_i; θ_t) = c_i · std(h(y|x_i; θ_t)), where the direction term c_i is +1 if argmax_y h(y|x_i; θ_t) equals y_i and −1 otherwise. One might argue that the cross-entropy loss, H(p, q) = −∑_y p(y|x_i) log q(y|x_i), where p(·|x_i) and q(·|x_i) are the true and softmax distributions for x_i, can be adopted for the distance function. However, because p(·|x_i) is formulated as a one-hot true label vector, the cross-entropy loss cannot capture the prediction probability for false labels, which is an important factor of confusing samples. Another advantage is that our distance function is bounded as opposed to the loss. For k labels, the maximum value of std(h(y|x_i; θ_t)) is √(k−1)/k, so |dist(x_i, y_i; θ_t)| ≤ √(k−1)/k. The rank-based approach introduced by BID17 is a common way to assign the sampling probability of being selected for the next mini-batch. This approach sorts the samples by a certain importance measure in descending order, and exponentially decays the sampling probability of a given sample according to its rank. Let N denote the total number of samples. Then, each r-th ranked sample is selected with the probability p(r), which drops by a factor of exp(log(s_e)/N). Here, s_e is the selection pressure parameter that affects the probability gap between the most and the least important samples. When normalized to sum up to 1.0, the probability of the r-th ranked sample's being selected is p(r) = s_e^(−r/N) / ∑_{j=1..N} s_e^(−j/N). In the existing rank-based approach, the rank of a sample is determined by |dist(x_i, y_i; θ_t)| in ascending order, because it is inversely proportional to the sample importance. However, if the mass of the true sample distribution is skewed to one side (e.g., the easy side) as shown in FIG6, the mini-batch samples are selected with high probability from the skewed side rather than around the decision boundary where |dist(x_i, y_i; θ_t)| is very small. This problem is attributed to the probability being unconditionally fixed for a given rank. In other words, the samples with similar ranks are selected with similar probabilities regardless of the magnitude of the distance values. To incorporate the impact of the distance into batch selection, we adopt the quantization method (BID6; BID2) and use the quantization index q instead of the rank r. Let Δ be the quantization step size and d be the output of the function dist(x_i, y_i; θ_t) of a given sample x_i. Then, the index q is obtained by the quantizer Q(d) = ⌈|d|/Δ⌉. The quantization index gets larger as a sample moves away from the decision boundary. In addition, the difference between two indexes reflects the difference in the actual distances. We set Δ to be √(k−1)/(kN) such that the index q is bounded by N (the total number of samples). The sampling probability of a given sample x_i with the true label y_i is then obtained by using its quantization index in place of the rank, i.e., p(x_i) = s_e^(−Q(d_i)/N) / ∑_{j=1..N} s_e^(−Q(d_j)/N). As shown in FIG6, our quantization-based method provides a well-balanced distribution, even if the true sample distribution is skewed.
3 Ada-Boundary ALGORITHM
3.1 MAIN PROPOSED ALGORITHM
Algorithm 1 describes the overall procedure of Ada-Boundary. The input to the algorithm consists of the samples of size N (i.e., the training data set), the mini-batch size b, the selection pressure s_e, and the threshold γ used to decide the warm-up period.
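Before walking through Algorithm 1, here is a short numpy sketch of the quantization-based sampling probabilities discussed above; the ceiling-based quantizer and the normalization are our reconstruction of the equations, consistent with the text but not copied from the paper.

import numpy as np

def sampling_probabilities(distances, k, N, s_e=100.0):
    # Map distances to quantization indexes, then to normalized probabilities.
    delta = np.sqrt(k - 1) / (k * N)              # step size bounding q by N
    q = np.ceil(np.abs(distances) / delta)        # index grows away from the boundary
    w = np.power(s_e, -q / N)                     # smaller index -> larger weight
    return w / w.sum()

rng = np.random.default_rng(0)
N, k = 1000, 10
distances = rng.uniform(-np.sqrt(k - 1) / k, np.sqrt(k - 1) / k, size=N)
p = sampling_probabilities(distances, k, N)
batch_idx = rng.choice(N, size=128, replace=False, p=p)   # draw one mini-batch
print(p.min(), p.max(), batch_idx[:5])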
In the early stages of training, since the quantization index for each sample is not confirmed yet, the algorithm requires a warm-up period of γ epochs. Randomly selected mini-batch samples are used to warm up (Lines 6-7), and their quantization indexes are updated (Lines 11-16). After the warm-up epochs, the algorithm computes the sampling probability of each sample and selects mini-batch samples based on that probability (Lines 8-10). Then, the quantization indexes are updated in the same way (Lines 11-16). Here, we compute the indexes using the model with θ_{t+1} after every SGD step rather than every epoch, in order to reflect the latest state of the model; besides, we asynchronously update the indexes of only the samples included in the mini-batch, to avoid forward propagation over the entire training set, which induces a high computational cost. For a more sophisticated analysis of sampling strategies, we modify a few lines of Algorithm 1 to present three heuristic sampling strategies, which are detailed in Appendix A. (i) Ada-Easy is designed to show the effect of easy samples on training, so it focuses on the samples far from the decision boundary in the positive direction. (ii) Ada-Hard is similar to the existing hard batch strategy BID17, but it uses our distance function instead of the loss. That is, Ada-Hard focuses on the samples far from the decision boundary in the negative direction, which is the opposite of Ada-Easy. (iii) Ada-Uniform is designed to select the samples for a wide range of difficulty, so it samples uniformly over the distance range regardless of the sample distribution. FIG7 shows the distributions of mini-batch samples drawn by these three variants. The distribution of Ada-Easy is skewed to the easy side, that of Ada-Hard is skewed to the hard side, and that of Ada-Uniform tends to be uniform. To avoid the additional inference steps of Ada-Boundary (Line 14 in Algorithm 1), we present a history-based variant, called Ada-Boundary(History). It updates the quantization indexes using the previous model with θ_t. See Appendix B for the detailed algorithm and experiment results. In this section, all the experiments were performed on three benchmark data sets: MNIST of handwritten digits with 60,000 training and 10,000 testing images; Fashion-MNIST of various clothing BID26 with 60,000 training and 10,000 testing images; and CIFAR-10 of a subset of 80 million categorical images BID12 with 50,000 training and 10,000 testing images. We did not apply any data augmentation or pre-processing procedures. A simple model, LeNet-5, was used for the two easy data sets, MNIST and Fashion-MNIST. A complex model, WideResNet-16-8, was used for a relatively difficult data set, CIFAR-10. Batch normalization BID9 was applied to both models. As for hyper-parameters, we used a learning rate of 0.01 and a batch size of 128; the number of training epochs was set to 50 for LeNet-5 and 70 for WideResNet-16-8, which amounts to early stopping to clearly show the difference in convergence speed. Regarding the hyper-parameters specific to our algorithm, we set the selection pressure s_e to 100, which is the best value found from s_e = {10, 100, 1000} on the three data sets, and set the warm-up threshold γ to 10. Technically, a small γ was enough to warm up, but to reduce the performance variance caused by randomly initialized parameters, we used the larger γ and shared model parameters for all strategies during the warm-up period.
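A schematic paraphrase of the training loop described above (not the authors' code) is sketched below; the model interface (sgd_step, softmax), the optimizer details, and the index bookkeeping are our assumptions.

import numpy as np

def ada_boundary_training(X, y, model, num_epochs, b=128, s_e=100.0, gamma=10, k=10):
    # Warm up with random batches, then sample by boundary-based probabilities,
    # updating quantization indexes only for samples seen in each mini-batch.
    N = len(X)
    rng = np.random.default_rng(0)
    q = np.full(N, N, dtype=float)                 # unknown indexes start at the maximum
    delta = np.sqrt(k - 1) / (k * N)

    for epoch in range(num_epochs):
        for _ in range(N // b):
            if epoch < gamma:                      # warm-up: uniform random batch
                idx = rng.choice(N, size=b, replace=False)
            else:                                  # boundary-based selection
                w = np.power(s_e, -q / N)
                idx = rng.choice(N, size=b, replace=False, p=w / w.sum())

            model.sgd_step(X[idx], y[idx])         # assumed interface of the model

            # Asynchronous index update for the sampled examples only.
            probs = model.softmax(X[idx])          # (b, k), assumed interface
            correct = probs.argmax(axis=1) == y[idx]
            dist = np.where(correct, 1.0, -1.0) * probs.std(axis=1)
            q[idx] = np.ceil(np.abs(dist) / delta)
    return model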
Due to the lack of space, the experimental results using DenseNet (L = 25, k = 12) BID8 on two hard data sets, CIFAR-100 and Tiny-ImageNet, are discussed in Appendix C together with the impact of the selection pressure s_e. We compared Ada-Boundary with not only random batch selection but also four different adaptive batch selections. Random batch selection selects the next batch uniformly at random from the entire data set. One of the four adaptive selections is the state-of-the-art strategy that selects hard samples based on the loss rank, which is called online batch selection BID17, and the remaining three, Ada-Easy, Ada-Hard, and Ada-Uniform, are the variants introduced in Section 3.2. All the algorithms were implemented using TensorFlow and executed using a single NVIDIA Tesla V100 GPU on DGX-1. For reproducibility, we provide the source code at https://github.com/anonymized. To measure the performance gain over the baseline (random batch selection) as well as the state-of-the-art (online batch selection), we used the following three metrics. We repeated every test five times for robustness and reported the average. The wall-clock training time is discussed in Appendix D. (i) Gain err: Relative reduction in test error at the final epoch (%). (ii) Gain epo: Reduction in the number of epochs to obtain the same error (%). In FIG9(a), the test error of 1.014 · 10^-2 achieved at the 50th epoch by random batch selection can be achieved after only 29 epochs by Ada-Boundary. Thus, Gain epo was (50 − 29)/50 × 100 = 42.0%. (iii) Gain tim: Reduction in running time to obtain the same error (%). FIG9 shows the convergence curves of training loss and test error for five batch selection strategies on three data sets, when we used the SGD optimizer for training. In order to improve legibility, only the curves for the baseline and proposed strategies are dark colored; thus, the three metrics in the figure were calculated against the baseline strategy, random batch selection. Owing to the lack of space, we discuss the results with the momentum optimizer in Appendix E. Ada-Easy was excluded in FIG9 because its convergence speed was much slower than that of the other strategies. That is, easy samples did not contribute to expediting training. We conduct convergence analysis of the five batch selection strategies for the same number of epochs, as follows:
• MNIST (FIG9(a)): All adaptive batch selections achieved faster convergence speed compared with random batch selection. Ada-Boundary, Ada-Hard, and online batch selection showed similar performance. Ada-Uniform was the fastest at the beginning, but its training loss and test error increased sharply in the middle of the training or testing procedures.
• Fashion-MNIST (FIG9): Ada-Boundary showed the fastest convergence speed in both training loss and test error. In contrast, after the warm-up epochs, the training loss of the other adaptive batch selections increased temporarily, and their test error at the final epoch became similar to that of random batch selection.
• CIFAR-10 (FIG9): Ada-Boundary and Ada-Hard showed the fastest convergence on training loss, but in test error, the convergence speed of Ada-Hard was much slower than that of Ada-Boundary. This means that focusing on hard samples results in overfitting to "too hard" samples, which is indicated by a larger difference between the converged training loss (error) and the converged test error. Also, the slow convergence speed of online batch selection in test error is explained by the same reason.
In summary, in the easiest MNIST data set, all adaptive batch selections accelerated their convergence speed compared with random batch selection. However, as the training difficulty (complexity) increased from MNIST to Fashion-MNIST and further to CIFAR-10, only Ada-Boundary converged significantly (by Gain err) faster than random batch selection. We clarify the quantitative performance gains of Ada-Boundary over random batch and online batch selections in TAB3. Ada-Boundary significantly outperforms both strategies, as already shown in FIG9. There is only one exception in MNIST, because online batch selection is known to work well with an easy data set BID17. The noticeable advantage of AdaBoundary is to reduce the training time significantly by up to around 30%, which is really important for huge, complex data sets. There have been numerous attempts to understand which samples contribute the most during training. Curriculum learning BID0, inspired by the perceived way that humans and animals learn, first takes easy samples and then gradually increases the difficulty of samples in a manual manner. Self-paced learning BID14 uses the prediction error to determine the easiness of samples in order to alleviate the limitation of curriculum learning. They regard that the importance is determined by how easy the samples are. However, easiness is not sufficient to decide when a sample should be introduced to a learner BID4.Recently, BID24 used Bayesian optimization to optimize a curriculum for training dense, distributed word representations. BID20 emphasized that the right curriculum not only has to arrange data samples in the order of difficulty, but also introduces a small number of samples that are dissimilar to the previously seen samples. BID23 proposed a hard-example mining algorithm to eliminate several heuristics and hyper-parameters commonly used to select hard examples. However, these algorithms are designed to support only a designated task, such as natural language processing or region-based object detection. The neural data filter proposed by BID3 is orthogonal to our work because it aims at filtering the redundant samples from streaming data. As mentioned earlier, Ada-Boundary in general follows the philosophy of curriculum learning. More closely related to the adaptive batch selection, BID17 keep the history of losses for previously seen samples, and compute the sampling probability based on the loss rank. The sample probability to be selected for the next mini-batch is exponentially decayed with its rank. This allows the samples with low ranks (i.e., high losses) are considered more frequently for the next mini-batch.'s work is similar to BID17's work except that gradient norms are used instead of losses to compute the probability. In contrast to curriculum learning, both methods focus on only hard samples for training. Also, they ignore the difference in actual losses or gradient norms by transforming the values to ranks. We have empirically verified that Ada-Boundary outperforms online batch selection BID17, which is regarded as the state-of-the-art of this category. Similar to our work, BID1 claimed that the uncertain samples should be preferred during training, but their main contribution lies on training more accurate and robust model by choosing samples with high prediction variances. In contrast, our main contribution lies on training faster using confusing samples near the decision boundary. 
For the completeness of the survey, we mention the work to accelerate the optimization process of conventional algorithms based on importance sampling. BID19 re-weight the obtained gradients by the inverses of their sampling probabilities to reduce the variance. BID22 biased the sampling to the Lipschitz constant to quickly find the solution of a strongly-convex optimization problem arising from the training of conditional random fields. In this paper, we proposed a novel adaptive batch selection algorithm, Ada-Boundary, that presents the most appropriate samples according to the learning progress of the model. Toward this goal, we defined the distance from a sample to the decision boundary and introduced a quantization method for selecting the samples near the boundary with high probability. We performed extensive experiments using two CNN models for three benchmark data sets. The showed that Ada-Boundary significantly accelerated the training process as well as was better generalized in hard data sets. When training an easy data set, Ada-Boundary showed a fast convergence comparable to that of the state-of-the-art algorithm; when training relatively hard data sets, only Ada-Boundary converged significantly faster than random batch selection. The most exciting benefit of Ada-Boundary is to save the time needed for the training of a DNN. It becomes more important as the size and complexity of data becomes higher, and can be boosted with recent advance of hardware technologies. Our immediate future work is to apply Ada-Boundary to other types of DNNs such as the recurrent neural networks (RNN) BID18 and the long short-term memory (LSTM) BID7, which have a neural structure completely different from the CNN. In addition, we plan to investigate the relationship between the power of a DNN and the improvement of Ada-Boundary. For Ada-Easy which prefers easy samples to hard samples, q should be small for the sample located deep in the positive direction. For Ada-Hard, q should be small for the sample located deep in the negative direction. Thus, Ada-Easy and Ada-Hard can be implemented by modifying the quantizers Q(d) in Line 16 of Algorithm 1. When we set Δ = k −1 √ k − 1/N to make the index q bound to N, the quantizers of Ada-Easy and Ada-Hard are defined as Eqs. and, respectively. DISPLAYFORM0 Ada-Uniform can be implemented by using F −1 (x) to compute the sampling probability in Line 9 of Algorithm 1, where F (x) is the empirical sample distribution according to the sample's distance to the decision boundary. C Ada-Boundary ON TWO HARD DATA SETS As a practical paper, we include the experimental on two more challenging data sets: CIFAR-100 composed of 100 image classes with 50, 000 training and 10, 000 testing images; Tiny-ImageNet composed of 200 image classes with 100, 000 training and 10, 000 testing images. All images in Tiny-ImageNet were resized to 32 × 32 images. One of the state-of-the-art model DenseNet (L=25, k=12) BID8 was used for two hard data sets with momentum optimizer. Regarding algorithm parameters, we used a learning rate of 0.1 and a batch size of 128; The training epoch and warm-up threshold γ were set to be 90 and 10, respectively. We repeated every test five times for robustness and reported the average. The selection pressure s e determines how strongly the boundary samples are selected. The greater the s e, the greater the sampling probability of the boundary sample, so more boundary samples were chosen for the next mini-batch. 
On the other hand, the less s e makes Ada-Boundary closer to random batch selection. FIG14 shows the convergence curves of Ada-Boundary with varying s e on two hard data sets. To clearly analyze the impact of the selection pressure, we plotted the minimum of training loss and test error with a given epochs. Overall, the convergence speed of training loss was accelerated as the s e increased from 2 to 16, but that of test error was faster only when the s e was less than a certain value. The convergence speed of test error was faster than random batch selection, when s e was less than or equal to 4 (CIFAR-100) and 2 (Tiny-ImageNet). Surprisingly, the overexposure to the boundary samples using the large s e incurred the overfitting issue in hard data sets, whereas the large s e = 100 worked well for our easy or relatively hard data sets as discussed in Section 4. That is, the selection pressure s e should be chosen more carefully considering the difficulty of the given data set. We leave this challenge as our future work. TAB4 shows the performance gains of Ada-Boundary over random batch selection on two hard data sets. We only quantify the gains of Ada-Boundary(s e = 2) because its performance was the best as shown in FIG14. Ada-Boundary(s e = 2) always outperforms random batch selection. Especially, it reduces the training time significantly by up to around 20%. Table 3 shows the wall-clock training time for the same number of parameter updates on two hard data sets FIG14 ). Ada-Boundary(s e = 2) with momentum was 15.2%-16.0% slower than random batch selection. However, it reduced the running time by 18.0%-21.4% (by Gain tim) to obtain the same test error of random batch selection. D WALL-CLOCK TRAINING TIME The procedures for recomputing sampling probabilities and updating quantization indexes make Ada-Boundary slower than random batch selection. Table 4 shows the wall-clock training time for the same number of parameter updates (i.e., the same number of epochs) with SGD (FIG9) and momentum (FIG1). Ada-Boundary with SGD was 12.8%-14.7% and 6.06%-12.2% slower than random batch and online batch selections, respectively. Ada-Boundary with momentum was 13.1%-14.7% and 6.67%-12.2% slower than random batch and online batch selections, respectively. Although Ada-Boundary took longer for the same number of updates, Ada-Boundary achieved significant reduction in running time by 7.96%-33.5% (by Gain tim) to obtain the same test error of random batch selection due to the fast convergence. TAB3 E EXPERIMENT USING MOMENTUM OPTIMIZER E.1 CONVERGENCE ANALYSIS FIG1 shows the convergence curves of training loss and test error for five batch selection strategies on three data sets, when we used the momentum optimizer with setting the momentum to be 0.9. In the MNIST data set, we limited the number of epochs to be 30 because both training loss and test error were fully converged after 30 epochs. We repeat the convergence analysis, as follows:• MNIST FIG1 ): Except Ada-Uniform, all adaptive batch selections converged faster than random batch selection. Online batch selection showed much faster convergence speed than other adaptive batch selections in training loss, but converged similarly with the others in test error owing to the overfitting to hard samples.• Fashion-MNIST FIG1 ): Ada-Boundary showed the fastest convergence speed in test error, although it did not converge faster than online batch selection in training loss. 
In contrast, online batch selection was the fastest in training loss, but its convergence in test error was slightly slower than that of random batch selection. This emphasizes the need to consider the samples with appropriate difficulty rather than hard samples. The convergence speeds of Ada-Hard and Ada-Uniform in test error were slower than that of random batch selection.• CIFAR-10 FIG1 ): In both training loss and test error, Ada-Boundary and Ada-Hard showed slightly faster convergence speed than random batch selection. On the other hand, online batch selection converged slightly slower than random batch selection in both cases. In summary, in the easiest MNIST data set, most of adaptive batch selections accelerated their convergence speed compared with random batch selection. However, in Fashion-MNIST data set, only Ada-Boundary converged faster than random batch selection. In a relatively difficult CIFAR-10 data set, Ada-Boundary and Ada-Hard showed comparable convergence speed and then converged faster than random batch selection. We quantify the performance gains of Ada-Boundary over random batch and online batch selections in TAB7. Ada-Boundary always outperforms both strategies, as already shown in FIG1. Compared with TAB3, Gain tim over random batch selection tends to become smaller, whereas Gain tim over online batch selection tends to become larger.
We suggest a smart batch selection technique called Ada-Boundary.
1,305
scitldr
State-of-the-art sound event classification relies on neural networks to learn the associations between class labels and audio recordings within a dataset. These datasets typically define an ontology to create a structure that relates these sound classes with more abstract super classes. Hence, the ontology serves as a source of domain knowledge representation of sounds. However, the ontology information is rarely considered, and is especially underexplored for modeling neural network architectures. We propose two ontology-based neural network architectures for sound event classification. We defined a framework to design simple network architectures that preserve an ontological structure. The networks are trained and evaluated using two of the most common sound event classification datasets. Results show an improvement in classification performance, demonstrating the benefits of including the ontological information. Humans can identify a large number of sounds in their environments, e.g., a baby crying, a wailing ambulance siren, a microwave bell. These sounds can be related to more abstract categories that aid interpretation, e.g., humans, emergency vehicles, home. These relations and structures can be represented by ontologies BID0, which are defined for most of the available datasets for sound event classification (SEC). However, sound event classification rarely exploits this additional available information. Moreover, although neural networks are the state of the art for SEC BID1 BID2 BID3, they are rarely designed considering such ontologies. An ontology is a formal representation of domain knowledge through categories and relationships that can provide structure to the training data and the neural network architecture. The most common type of ontologies are based on abstraction hierarchies defined by linguistics, where a super category represents its subcategories. Generally, the taxonomies are defined by either nouns or verbs, e.g., animal contains dog and cat, and dog contains dog barking and dog howling. Examples of such datasets are ESC-50 BID4, UrbanSounds BID5, DCASE BID6, and AudioSet BID7. Another type of taxonomy can be defined by interactions between objects and materials, actions, and descriptors, e.g., a category contains Scraping, which in turn contains Scraping Rapidly and Scraping a Board BID8 BID9 BID10. Another example of this type is given by physical properties, such as frequency and time patterns BID11 BID12 BID13. There are multiple benefits of considering hierarchical relations in sound event classifiers. They can allow the classifier to back off to more general categories when encountering ambiguity among subcategories. They can disambiguate classes that are acoustically similar, but not semantically. They can be used to penalize classification differently, where misclassifying sounds from different super classes is worse than within the same super class. Lastly, they can be used as domain knowledge to model neural networks. In fact, ontological information has been evaluated in computer vision BID14 and music BID15, but has rarely been used for sound event classification. Ontology-based network architectures have shown improvements in performance along with other benefits. Authors in BID16 proposed an ontology-based deep restricted Boltzmann machine for textual topic classification. The architecture replicates the tree-like structure, adding intermediate layers to model the transformation from a super class to its sub classes. The authors showed improved performance and reduced overfitting in training data.
Another example used a perceptron for each node of the hierarchy, which classified whether an image corresponded to that class or not BID17. The authors showed an improvement in performance due to the ability to disambiguate classes by comparing predictions of classes and sub classes. Motivated by these approaches and by the flexibility to adapt structures in a deep learning model, we propose our ontology-based networks detailed in the following section. In this section we present a framework to deal with ontological information using deep learning architectures. First, we describe a set of assumptions we consider along this paper. In particular, we describe the type of ontologies we work with and some of their implications. Later, we present a Feed-forward model that includes the discussed constraints, defining our proposed ontological layer. Second, in order to preserve an embedding space consistent with the ontological structure, we extended the learning model to compute ontology-based embeddings using Siamese Neural Networks. The framework is defined to make use of the ontology structure and to model the neural network architectures. It should be noted that we considered ontologies with two levels, which are the most common in sound event datasets. Nevertheless, the presented framework can be easily generalized to more levels. In our framework, we considered the training data {(x_1, y_1), ..., (x_n, y_n)}, where x_i ∈ X is an audio representation, which is associated with a set of labels (y_1, ..., y_k) given by the ontology, with y_i ∈ C_i. In this case, C_i is the set of possible classes at the i-th level. Assuming a hierarchical relation, we can consider that each possible class in C_i is mapped to one element in C_{i+1}. The higher the value of i, the higher the level in the ontology. For example, consider the illustration of an ontology in FIG0. In this case k = 2, C_1 = {cat, dog, breathing, eating, sneezing, violin, drums, piano, beep, boing, train, siren} and C_2 = {nature, human, music, effects, urban}. As the figure shows, every element in C_1 is related to one element in C_2; e.g., cat belongs to nature, or drums belongs to music. Furthermore, for a given representation x ∈ X, if we know the corresponding label y_1 in C_1, we can infer its label in C_2. This intuition can be formalized using a probabilistic formulation, where it is straightforward to see that, assuming p(y_2 | y_1, x) = p(y_2 | y_1), the following is satisfied: p(y_2 | x) = ∑_{y_1 ∈ children(y_2)} p(y_1 | x) (Equation 3). Therefore, if we want to estimate p(y_2 | x) using a model, we just need to compute the estimation of p(y_1 | x) and sum the values corresponding to the children of y_2. This case is valid at inference time; however, it is not clear that using the representation and label (x, y_1) should be enough to train the model. If at training time we can make use of knowledge to relate the different classes in y_1, it should improve the performance of the model, especially at making predictions for classes y_2.
Figure 2: Architecture of the Feed-forward Network with Ontological Layer (the figure shows the base network Net followed by a linear + softmax layer). The blue column represents the acoustic feature vector, the red columns are the output probabilities for both levels.
In the following sections we take our proposed framework and use it to design ontology-based neural network architectures. In this section, we describe how we use our proposed framework to design the architecture. Also, we introduce the ontological layer, which makes use of the ontology structure.
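As a quick illustration of Equation 3, the following numpy sketch (an example we added; it wires up only a subset of the classes from FIG0) builds an incidence matrix from a parent map and sums the fine-grained probabilities over the children of each super class:

import numpy as np

# Toy parent map taken from the example ontology above (only a subset shown).
parent = {"cat": "nature", "dog": "nature", "breathing": "human",
          "violin": "music", "drums": "music", "train": "urban"}
c1 = sorted(parent)                       # fine-grained classes (level 1)
c2 = sorted(set(parent.values()))         # super classes (level 2)

# Incidence matrix M (|C2| x |C1|): M[j, i] = 1 if c1[i] is a child of c2[j].
M = np.zeros((len(c2), len(c1)))
for i, c in enumerate(c1):
    M[c2.index(parent[c]), i] = 1.0

# Equation 3: p(y2 | x) is the sum of p(y1 | x) over the children of y2.
p_y1 = np.array([0.05, 0.10, 0.40, 0.05, 0.10, 0.30])   # some softmax output over c1
p_y2 = M @ p_y1
print(dict(zip(c2, np.round(p_y2, 2))))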
The Feed-forward Network (FFN) with Ontological Layer consists of a base network (Net), an intermediate vector z, and two outputs, one for each ontology level. The base network weights are learned at every parameter update. The base network utilizes an input vector of audio features x and generates a vector z. This vector is used to generate two outputs, p(y 1 |x) a probability vector for C 1 and p(y 2 |x) a probability vector for C 2. First, the vector z is passed to a softmax layer of the size of C 1. Then, this output is multiplied by the ontological layer M and generates a layer of size of C 2. Once the FFN is trained, it can be used to predict any class in C 1 and C 2 for any input x. The ontological layer reflects the relation between super classes and sub classes given by the ontology. To describe how we used this layer, we refer to Equation 3, where p(y 2 |x) is the sum of all the values of p(y 1 |x) corresponding to the children of y 2. If we consider this equation as a directed graph where M is the |C 2 | × |C 1 | incidence matrix, then, it is clear that Equation 3 can be rewritten as, DISPLAYFORM0 Note that the ontological layer M defines the weights of a standard layer connection. Although we do not consider that these weights are trainable, they are part of our training data. In order to train this model, we simply propose to apply gradient-based method to minimize the loss function L, which is a convex combination between two categorical cross-entropy functions; L 1 the categorical cross entropy corresponding to p(y 1 |x) and L 2 corresponding to p(y 2 |x). Formally, DISPLAYFORM1 Hence, we consider λ ∈ as a hyper parameter to be tuned. Note that, when λ = 1, we are reducing the problem to train a standard classifier just using the information from the first level of the ontology. In this section, we describe how we learned the ontology-based embeddings. Our goal is to create embeddings that preserved the ontological structure. We used a Siamese neural network (SNN), which enforces samples of the same class to be closer, while separating samples of different classes. If two samples belong to different subclasses, but they belong to the same super class, they are closer than two samples that belong to different super classes. The architecture of the SNN with the Feed-forward Network with Ontological Layer is shown in FIG1. The blue rows represent the acoustic feature vectors of two different samples; they can be from the same subclass, different subclass but same super class, or different super class. Then, the twin networks have the same base architecture (Net) with shared weights. The weights are learned simultaneously at every parameter update. The white rows represent the ontological embeddings used to compute a Similarity metric (Euclidean Distance), where the distance of the embeddings z 1 and z 2 should indicate how different x 1 and x 2 are with respect to the ontology. For this work, we imposed that the distance between z 1 and z 2 is close to 0 if the samples are from the same subclass, close to 5 if they are from different sub classes, but the same super class, and close to 10 if they are from different super classes. 
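Stepping back to the Feed-forward Network with Ontological Layer, the sketch below is a minimal PyTorch rendering of the fixed (non-trainable) ontological layer M and the convex combination of the two cross-entropy losses described above. The base network, layer sizes, and class names are placeholders; only the M-multiplication and the λ-weighted loss follow the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OntologyFFN(nn.Module):
    """Base network -> z -> softmax over C_1 -> fixed ontological layer M -> p(y_2|x)."""
    def __init__(self, net, z_dim, M):
        super().__init__()
        self.net = net                               # base network (Net) producing the vector z
        self.level1 = nn.Linear(z_dim, M.shape[1])   # logits for the |C_1| sub classes
        self.register_buffer("M", M)                 # |C_2| x |C_1| incidence matrix, not trainable

    def forward(self, x):
        z = self.net(x)
        p_y1 = F.softmax(self.level1(z), dim=-1)     # p(y_1|x)
        p_y2 = p_y1 @ self.M.t()                     # p(y_2|x) = M p(y_1|x)
        return p_y1, p_y2

def ontology_loss(p_y1, p_y2, y1, y2, lam=0.8):
    """L = lam * L_1 + (1 - lam) * L_2; lam is a tuned hyper-parameter (0.8 worked best on MSoS)."""
    l1 = F.nll_loss(torch.log(p_y1 + 1e-8), y1)
    l2 = F.nll_loss(torch.log(p_y2 + 1e-8), y2)
    return lam * l1 + (1 - lam) * l2
```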
Finally, the red rows are the output probabilities for both levels, p(y 1 |x 1), p(y 1 |x 2), p(y 2 |x 1) and p(y 2 |x 2).To train the Feed-forward Model with Ontological layer using Ontology-based embeddings, we provided the three types of pairs of audio examples and applied a gradient-based method to minimize DISPLAYFORM0 In this section, we evaluate the sound event classification performance of the ontological-based neural network architectures. We present the datasets and its ontologies, the baseline and proposed architectures, and the classification performance at different levels of the hierarchy. Making Sense of Sounds Challenge 2 -MSoS: The dataset is designed for a challenge which objective is to classify the most abstract classes or highest level in its taxonomy. The ontology, illustrated in FIG0 has two levels, the lowest level 1, has 97 classes and the highest level 2, has 5 classes. The audio files were taken from Freesound data base, the ESC-50 dataset and the Cambridge-MT Multitrack Download Library. The development dataset consists of 1500 audio files divided into the five categories, each containing 300 files. The number of different sound types within each category is not balanced. The evaluation dataset consists of 500 audio files, 100 files per category. All files have an identical format: single-channel 44.1 kHz, 16-bit.wav files. We randomly partitioned the set in 80% for training and tuning parameters and 10% for testing. All files are exactly 5 seconds long, but may feature periods of silence. The official blind evaluation set of the challenge consisted on 500 files distributed among the 5 classes. Urban Sounds -US8K: The dataset is designed to evaluate classification of urban sounds, which are organized using a taxonomy with more nodes than the annotated number of classes. Due to this reason, we adjusted the taxonomy to avoid redundant levels with only one annotated child. The ing ontology is illustrated in FIG2, with two levels, the lowest level 1, has 10 classes and the highest level 2, has 4 classes. The audio files were taken from Freesound data base and corresponded to real field recordings. All files have an identical format: single-channel 44.1 kHz, 16-bit.wav files. The dataset contains 8,732 audio files divided into 10 stratified subsets. We used 9 folds to train and tune parameters and one fold for testing. We used state-of-the-art Walnet features BID1 to represent audio recordings. For each audio, we computed a 128-dimensional logmel-spectrogram vector and transformed it via a convolutional neural network (CNN) that was trained separately on the balanced set of AudioSet. The architecture of the base network (Net) considered in this experiment, shown in Fig. 2, is a feed-forward multi-layer perceptron network. It consists of 4 layers: the input layer of dimensionality 1024, which takes audio feature vectors, 2 dense layers of dimensionality 512 and 256, respectively, and the output layer of dimensionality 128, which is the dimensionality of the vector z. The dense layers utilize Batch Normalization, a dropout rate of 0.5 and the ReLU activation function; max(0, x), where x is input to the function. We tuned the parameters in the Net box as well as the parameters that transformed z into p(y 1 |x). We considered baseline models in both level 1 and 2 for the different data sets. 
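The base network (Net) just described, which maps a 1024-dimensional Walnet feature vector to the 128-dimensional vector z through dense layers of 512 and 256 units with batch normalization, ReLU, and a dropout rate of 0.5, can be sketched as follows. The exact ordering of batch normalization, activation, and dropout within each dense block is an assumption.

```python
import torch.nn as nn

def base_net(in_dim=1024, hidden=(512, 256), z_dim=128, p_drop=0.5):
    """Feed-forward base network (Net): 1024 -> 512 -> 256 -> 128."""
    layers, prev = [], in_dim
    for h in hidden:
        layers += [nn.Linear(prev, h), nn.BatchNorm1d(h), nn.ReLU(), nn.Dropout(p_drop)]
        prev = h
    layers.append(nn.Linear(prev, z_dim))   # output layer producing the embedding z
    return nn.Sequential(*layers)
```

In the Siamese variant, pairs of such embeddings z_1, z_2 are pushed toward Euclidean distances of roughly 0, 5, or 10 depending on whether the two samples share a sub class, share only a super class, or belong to different super classes.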
In this case, the baseline models did not consider any ontological information, hence the models consist of the Base Network Architecture with the addition of an output layer that was either for level 1 or level 2.Note that for level 1 this is equivalent to training the Feed-forward model with Ontological Layer using λ = 1. Indeed, with λ = 1 the loss function associated to level 2 is not considered. For level 2, the baseline model is different from the Feed-forward model with λ = 0, because in the baseline model there is no layer corresponding to the prediction of y 1. Table 1 shows the of baseline models for both, MSoS and US8K data set in level 1 and level 2.The baseline performance of the development set in the MSoS challenge was reported to be 0.81 for level 2 and no baseline was provided for level 1. To validate the architecture presented in Section 2.2 and analyze the utility of the ontological layer, we trained models taking different values of λ. FIG3 shows the effect of λ in both data sets. In general, we observe that considering values different from 0 and 1 helps to increase the performance. Note that the classification in both level is affected by the ontological layer. In the case of MSoS data set, the best performance was obtained using λ = 0.8, getting 0.74 and 0.913 of accuracy in level 1 and 2 respectively. Thus, using the ontological structure we can get an absolute improvement of 5.4% and 6% respect baseline models. Running the same experiment on the US8K data set, we observe a smaller improvement. The best performance was obtained using λ = 0.7, being the accuracy of 0.82 and 0.86 for level 1 and 2 respectively. This means an improvement of 2.5% and 0.2% only, respect baseline models. Best is achieved using λ = 0.8 (Right) Results in US8K data set. Best is achieved using λ = 0.7 Figure 6: The MSoS t-SNE plots of the samples in classes from level 2 (1st and 3rd) and level 1 (2nd and 4th). The first two boxes are from the base network vectors and the second two boxes are the ontology-based embeddings. We observe in 1st and 3rd, the groups of classes in level 2 and in 2nd and 4th the same level 2 groups, but using the level 1 class samples. The ontology-based embeddings in tighter and better defined clusters. We tested the architecture described in Section 2.3 to evaluate the performance of the ontology-based embeddings for sound event classification. Additionally, we include t-SNE plots, to illustrate how the embeddings cluster at different levels. We processed the Walnet audio features and chose different super and sub class pairs to train the Siamese neural network to produce the ontology-based embeddings. The embeddings are passed to the architecture of the base network (Net), which is the same as the one used in the previous section. We trained the SNN for 50 epochs using the Adam algorithm. We also tuned the hyper-parameters of the SNN to achieve good performance with the input features that are described in the next section. We also tried different number of pairs for the input training data, from 100 to 1,000,000 pairs and found that 100,000 yielded the best performance. For the loss function we used the values derived in the previous experiment. We used the value of 0.8 for the lambda of the classifiers of level 2 and 0.2 for the classifiers in level 1, and 0.2 for the similarity metric. Modifying the lambdas in the loss function affected the overall performance. 
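A sketch of the combined objective used to train the Siamese variant: the distance between the twin embeddings is pushed toward its target value (0, 5, or 10 depending on the pair type) while the level-1 and level-2 cross entropies are weighted roughly as reported above. The exact way the three terms are combined is an assumption where the text is ambiguous.

```python
import torch
import torch.nn.functional as F

def siamese_ontology_loss(z1, z2, pair_type, ce_level1, ce_level2,
                          w1=0.2, w2=0.8, w_sim=0.2):
    """pair_type: 0 = same sub class, 1 = same super class only, 2 = different super classes."""
    target_dist = {0: 0.0, 1: 5.0, 2: 10.0}[pair_type]
    dist = torch.norm(z1 - z2, dim=-1)                          # Euclidean distance of embeddings
    sim_loss = F.mse_loss(dist, torch.full_like(dist, target_dist))
    return w1 * ce_level1 + w2 * ce_level2 + w_sim * sim_loss   # ce_* are precomputed cross entropies
```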
The in Table 1 show that the accuracy performance of MSoS and US8K were respectively as follows, in level 1 0.736 and 0.818, and in level 2 0.886 and 0.856. Based on these we made the following . The performance of this architecture is better than the baseline, but slightly under performed the method without the embeddings. Nevertheless, the ontology-based embeddings have the benefit of better grouping as illustrated in Figure6. We took the MSoS data and created the t-SNE plots (perplexity=30) of the classes in level 2 and level 1. We observed that the FF + Ontology vectors and the ontology-based embeddings provided clustered groups of level 2 classes. However, the ontology-based embeddings have tighter and better defined clusters. In the case of the US8K data set performance was limited. We think this was because the number of sub classes was similar to the number of super classes. We had 10 sub classes for 4 classes unlike the MSoS data set, where we had 97 sub classes and 5 classes. It seems when the ratio between the number of sub classes and the number of classes is not large, the contribution of the ontology is negligible. Both approaches were used to compete in the Making Sense of Sounds Challenge. The baseline for the blind evaluation set was 0.80 accuracy for level 2. The Feed-forward Network with Ontological Layer achieved 0.88 while using the ontological-embeddings achieved 0.89. Again, both architectures outperformed significantly the baseline. In this paper we proposed a framework to design neural networks for sound event classification using hierarchical ontologies. We have shown two methods to add such structure into deep learning models in a simple manner without adding more learnable parameters. We used a Feed-forward Network with an ontological layer to relate predictions of different levels in the hierarchy. Additionally, we proposed a Siamese neural Network to compute ontology-based embeddings to preserve the ontology in an embedding space. The embeddings plots showed clusters of super classes containing different sub classes. Our in the datasets and MSoS challenge improved over the baselines. We expect that our pave the path to further explore ontologies and other relations, which is fundamental for sound event classification due to wide acoustic diversity and limited lexicalized terms to describe sounds.
We present ontology-based neural network architectures for sound event classification.
1,306
scitldr
In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters with a finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image. Information concerning absolute position is inherently useful, and it is reasonable to assume that deep CNNs may implicitly learn to encode this information if there is a means to do so. In this paper, we test this hypothesis revealing the surprising degree of absolute position information that is encoded in commonly used neural networks. A comprehensive set of experiments show the validity of this hypothesis and shed light on how and where this information is represented while offering clues to where positional information is derived from in deep CNNs. Convolutional Neural Networks (CNNs) have achieved state-of-the-art in many computer vision tasks, e.g. object classification (; and detection (;, face recognition , semantic segmentation (; ;) and saliency detection . However, CNNs have faced some criticism in the context of deep learning for the lack of interpretability . The classic CNN model is considered to be spatially-agnostic and therefore capsule or recurrent networks have been utilized to model relative spatial relationships within learned feature layers. It is unclear if CNNs capture any absolute spatial information which is important in position-dependent tasks (e.g. semantic segmentation and salient object detection). As shown in Fig. 1, the regions determined to be most salient tend to be near the center of an image. While detecting saliency on a cropped version of the images, the most salient region shifts even though the visual features have not been changed. This is somewhat surprising, given the limited spatial extent of CNN filters through which the image is interpreted. In this paper, we examine the role of absolute position information by performing a series of randomization tests with the hypothesis that CNNs might indeed learn to encode position information as a cue for decision making. Our experiments reveal that position information is implicitly learned from the commonly used padding operation (zero-padding). Zero-padding is widely used for keeping the same dimensionality when applying convolution. However, its hidden effect in representation learning has been long omitted. This work helps to better understand the nature of the learned features in CNNs and highlights an important observation and fruitful direction for future investigation. Previous works try to visualize learned feature maps to demystify how CNNs work. A simple idea is to compute losses and pass these backwards to the input space to generate a pattern image that can maximize the activation of a given unit . However, it is very difficult to model such relationships when the number of layers grows. Recent work presents a non-parametric method for visualization. A deconvolutional network is leveraged to map learned features back to the input space and their reveal what types of patterns a feature map actually learns. Another work proposes to combine pixel-level gradients with weighted class activation mapping to locate the region which maximizes class-specific activation. As an alternative to visualization strategies, an empirical study has shown that a simple network can achieve zero training Cropping in a shift in position rightward of features relative to the centre. 
It is notable that this has a significant impact on output and decision of regions deemed salient despite no explicit position encoding and a modest change to position in the input. loss on noisy labels. We share the similar idea of applying a randomization test to study the CNN learned features. However, our work differs from existing approaches in that these techniques only present interesting visualizations or understanding, but fail to shed any light on spatial relationships encoded by a CNN model. In summary, CNNs have emerged as a way of dealing with the prohibitive number of weights that would come with a fully connected end-to-end network. A trade-off ing from this is that kernels and their learned weights only have visibility of a small subset of the image. This would seem to imply solutions where networks rely more on cues such as texture and color rather than shape . Nevertheless, position information provides a powerful cue for where objects might appear in an image (e.g. birds in the sky). It is conceivable that networks might rely sufficiently on such cues that they implicitly encode spatial position along with the features they represent. It is our hypothesis that deep neural networks succeed in part by learning both what and where things are. This paper tests this hypothesis, and provides convincing evidence that CNNs do indeed rely on and learn information about spatial positioning in the image to a much greater extent than one might expect. CNNs naturally try to extract fine-level high spatial-frequency details (e.g. edges, texture, lines) in the early convolutional stages while at the deepest layers of encoding the network produces the richest possible category specific features representation;;. In this paper, we propose a hypothesis that position information is implicitly encoded within the extracted feature maps and plays an important role in classifying, detecting or segmenting objects from a visual scene. We therefore aim to prove this hypothesis by predicting position information from different CNN archetypes in an end-to-end manner. In the following sections, we first introduce the problem definition followed by a brief discussion of our proposed position encoding network. Problem Formulation: Given an input image I m ∈ R h×w×3, our goal is to predict a gradient-like position information maskf p ∈ R h×w where each pixel value defines the absolute coordinates of an pixel from left→right or top→bottom. We generate gradient-like masks G pos ∈ R h×w (Sec. 2.2) for supervision in our experiments, with weights of the base CNN archetypes being fixed. Our Position Encoding Network (PosENet) (See Fig. 2) consists of two key components: a feedforward convolutional encoder network f enc and a simple position encoding module, denoted as f pem. The encoder network extracts features at different levels of abstraction, from shallower to deeper layers. The position encoding module takes multi-scale features from the encoder network as input and predicts the absolute position information at the end. Encoder: We use ResNet and VGG based architectures to build encoder networks (f enc) by removing the average pooling layer and the layer that assigns categories. As shown in Fig. 2, the encoder module consists of five feature extractor blocks denoted by (f where W a denotes weights that are frozen. * denotes the convolution operation. 
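A minimal sketch of the probing setup just described: a pretrained encoder is frozen (the weights W_a above) and only a small readout, the position encoding module described next, is trained on top of its feature maps. The use of torchvision's VGG16 and reading out only the deepest block (the paper concatenates features from all five blocks) are simplifications for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

encoder = models.vgg16(pretrained=True).features    # f_enc, kept frozen
for p in encoder.parameters():
    p.requires_grad = False                          # W_a: frozen encoder weights

# Trainable readout: a single 3x3 convolution producing a 1-channel position map.
readout = nn.Conv2d(512, 1, kernel_size=3, padding=0)

x = torch.randn(1, 3, 224, 224)                      # input image
with torch.no_grad():
    feats = encoder(x)                               # deepest feature block only, e.g. f^5
pos_map = readout(feats)                             # predicted position map (before upsampling)
```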
Note that in probing the encoding network, only the position encoding module f pem is trained to focus on extracting position information while the encoder network is forced to maintain their existing weights. Once we have the same spatial dimension for multi-scale features, we concatenate them together followed by a sequence of k × k convolution operations. In our experiments, we vary the value of k between {1, 3, 5, 7} and most experiments are carried out with a single convolutional layer in the position encoding module f pem. The key operations can be summarized as follows: where W c pos is the trainable weights attached with the transformation function T pos. The main objective of the encoding module is to validate whether position information is implicitly learned when trained on categorical labels. Additionally, the position encoding module models the relationship between hidden position information and the gradient like ground-truth mask. The output is expected to be random if there is no position information encoded in the features maps and vice versa (ignoring any guidance from image content). To validate the existence of position information in a network, we implement a randomization test by assigning a normalized gradient-like 1 position map as ground-truth shown in Fig. 3. We first generate gradient-like masks in Horizontal (H) and vertical (V) directions. Similarly, we apply a Gaussian filter to design another type of ground-truth map, Gaussian distribution (G). The key motivation of generating these three patterns is to validate if the model can learn absolute position on one or two axes. Additionally, We also create two types of repeated patterns, horizontal and vertical stripes, (HS, VS). Regardless of the direction, the position information in the multi-level features is likely to be modelled through a transformation by the encoding module f pem. Our design of gradient ground-truth can be considered as a type of random label because there is no correlation between the input image and the ground-truth with respect to position. Since the extraction of position information is independent of the content of images, we can choose any image datasets. Meanwhile, we also build synthetic images, e.g. black, white and Gaussian noise to validate our hypothesis. As we implicitly aim to encode the position information from a pretrained network, we freeze the encoder network f enc in all of our experiments. Our position encoding module f pem generates the position mapf p of interest. During training, for a given input image I m ∈ R h×w×3 and associated ground-truth position map G h pos, we apply the supervisory signal onf p by upsampling it to the size of G h pos. Then, we define a pixel-wise mean squared error loss to measure the difference between predicted and ground-truth position maps. The overall objective function of our network can be written as: where x ∈ IR n and y ∈ IR n (n denotes the spatial resolution) are the vectorized predicted position and ground-truth map respectively. x i and y i refer to a pixel off p and G dataset. The synthetic images (white, black and Gaussian noise) are also used as described in Section 2.2. Note that we follow the common setting used in saliency detection just to make sure that there is no overlap between the training and test sets. However, any images can be used in our experiments given that the position information is relatively content independent. 
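The gradient-like ground-truth maps are straightforward to construct; the sketch below generates the horizontal (H), vertical (V), and Gaussian (G) patterns at a given resolution and the pixel-wise mean squared error used as the training objective. The normalization and the Gaussian width are assumptions; the striped patterns (HS, VS) would repeat the linear ramps periodically.

```python
import numpy as np

def position_targets(h=224, w=224, sigma=0.3):
    """Normalized gradient-like ground-truth position maps: H, V, and Gaussian (G)."""
    H = np.tile(np.linspace(0.0, 1.0, w), (h, 1))             # left -> right gradient
    V = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))    # top -> bottom gradient
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    G = np.exp(-(((yy - cy) / (sigma * h)) ** 2 + ((xx - cx) / (sigma * w)) ** 2) / 2.0)
    return {"H": H, "V": V, "G": G / G.max()}

def mse_loss(pred, target):
    """Pixel-wise mean squared error between predicted and ground-truth position maps."""
    return np.mean((pred - target) ** 2)
```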
Evaluation Metrics: As position encoding measurement is a new direction, there is no universal metric. We use two different natural choices for metrics (Spearmen Correlation (SPC) and Mean Absoute Error (MAE)) to measure the position encoding performance. The SPC is defined as the Spearman's correlation between the ground-truth and the predicted position map. For ease of interpretation, we keep the SPC score within range [-1 1]. MAE is the average pixel-wise difference between the predicted position map and the ground-truth gradient position map. We initialize the architecture with a network pretrained for the ImageNet classification task. The new layers in the position encoding branch are initialized with xavier initialization. We train the networks using stochastic gradient descent for 15 epochs with momentum of 0.9, and weight decay of 1e−4. We resize each image to a fixed size of 224×224 during training and inference. Since the spatial extent of multi-level features are different, we align all the feature maps to a size of 28 × 28. We report experimental for the following baselines that are described as follows: VGG indicates PosENet is based on the features extracted from the VGG16 model. Similarly, ResNet represents the combination of ResNet-152 and PosENet. PosENet alone denotes only the PosENet model is applied to learn position information directly from the input image. H, V, G, HS and VS represent the five different ground-truth patterns, horizontal and vertical gradients, 2D Gaussian distribution, horizontal and vertical stripes respectively. Position Information in Pretrained Models: We first conduct experiments to validate the existence of position information encoded in a pretrained model. Following the same protocol, we train the VGG and ResNet based networks on each type of the ground-truth and report the experimental in Table 1. We also report when we only train PosENet without using any pretrained model to justify that the position information is not driven from prior knowledge of objects. Our experiments do not focus on achieving higher performance on the metrics but instead validate how much position information a CNN model encodes or how easily PosENet can extract this information. Note that, we only use one convolutional layer with a kernel size of 3 × 3 without any padding in the PosENet for this experiment. As shown in PosENet can extract position information consistent with the ground-truth position map only when coupled with a deep encoder network. As mentioned prior, the generated ground-truth map can be considered as a type of randomization test given that the correlation with input has been ignored. Nevertheless, the high performance on the test sets across different groundtruth patterns reveals that the model is not blindly overfitting to the noise and instead is extracting true position information. However, we observe low performance on the repeated patterns (HS and VS) compared to other patterns due to the model complexity and specifically the lack of correlation between ground-truth and absolute position (last two rows of Table 1). The H pattern can be seen as one quarter of a sine wave whereas the striped patterns (HS and VS) can be considered as repeated periods of a sine wave which requires a deeper comprehension. The qualitative for several architectures across different patterns are shown in Fig. 4. 
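For reference, the two scores reported in these tables can be computed directly from the flattened predicted and ground-truth maps; a short sketch using scipy's Spearman correlation:

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_position_map(pred, target):
    """SPC: Spearman correlation between maps; MAE: mean absolute pixel-wise error."""
    spc = spearmanr(pred.ravel(), target.ravel()).correlation
    mae = np.mean(np.abs(pred.ravel() - target.ravel()))
    return spc, mae
```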
We can see the correlation between the predicted and the ground-truth position maps corresponding to H, G and HS patterns, which further reveals the existence of position information in these networks. The quantitative and qualitative strongly validate our hypothesis that position information is implicitly encoded in every architecture without any explicit supervision towards this objective. Moreover, PosENet alone shows no capacity to output a gradient map based on the synthetic data. We further explore the effect of image semantics in Sec. 4.1. It is interesting to note the performance gap among different architectures specifically the ResNet based models achieve higher performance than the VGG16 based models. The reason behind this could be the use of different convolutional kernels in the architecture or the degree of prior knowledge of the semantic content. We show an ablation study in the next experiment for further investigation. For the rest of this paper, we only focus on the natural images, PASCAL-S dataset, and three representative patterns, H, G and HS. In this section, we conduct ablation studies to examine the role of the proposed position encoding network by highlighting two key design choices. the role of varying kernel size in the position encoding module and stack length of convolutional layers we add to extract position information from the multi-level features. Table 1 show the existence of position information learned from an object classification task. In this experiment, we change the design of PosENet to examine if it is possible to extract hidden position information more accurately. The PosENet used in the prior experiment (Table 1) has only one convolutional layer with a kernel size of 3 × 3. Here, we apply a stack of convolutional layers of varying length to the PosENet and report the experimental in Table 2 (a). Even though the stack size is varied, we aim to retain a relatively simple PosENet to only allow efficient readout of positional information. As shown in Table 2, we keep the kernel size fixed at 3 × 3 while stacking multiple layers. Applying more layers in the PosENet can improve the readout of position information for all the networks. One reason could be that stacking multiple convolutional filters allows the network to have a larger effective receptive field, for example two 3 × 3 convolution layers are spatially equal to one 5 × 5 convolution layer. An alternative possibility is that positional information may be represented in a manner that requires more than first order inference (e.g. a linear readout). Our previous experiments reveal that the position information is encoded in a pretrained CNN model. It is also interesting to see whether position information is equally distributed across the layers. In this experiment, we train PosENet on each of the extracted features, f separately using VGG16 to examine which layer encodes more position information. Similar to Sec. 3.3, we only apply one 3 × 3 kernel in F pem to obtain the position map. As shown in Table 3, the VGG based PosENet with top f 5 pos features achieves higher performance compared to the bottom f 1 pos features. This may partially a of more feature maps being extracted from deeper as opposed to shallower layers, 512 vs 64 respectively. However, it is likely indicative of stronger encoding of the positional information in the deepest layers of the network where this information is shared by high-level semantics. 
We further investigate this effect for VGG16 where the top two layers (f We believe that the padding near the border delivers position information to learn. Zero-padding is widely used in convolutional layers to maintain the same spatial dimensions for the input and output, with a number of zeros added at the beginning and at the end of both axes, horizontal and vertical. To validate this, we remove all the padding mechanisms implemented within VGG16 but still initialize the model with the ImageNet pretrained weights. Note that we perform this experiment only using VGG based PosENet since removing padding on ResNet models will lead to inconsistent sizes of Table 4 : Quantitative comparison subject to padding in the convolution layers used in PosENet and VGG (w/o and with zero padding) on natural images. skip connections. We first test the effect of zero-padding used in VGG, no padding used in PosENet. As we can see from Table 4, the VGG16 model without zero-padding achieves much lower performance than the default setting(padding=1) on the natural images. Similarly, we introduce position information to the PosENet by applying zero-padding. PosENet with padding=1 (concatenating one zero around the frame) achieves higher performance than the original (padding=0). When we set padding=2, the role of position information is more obvious. This also validates our experiment in Section 3.3, that shows PosENet is unable to extract noticeable position information because no padding was applied, and the information is encoded from a pretrained CNN model. This is why we did not apply zero-padding in PosENet in our previous experiments. Moreover, we aim to explore how much position information is encoded in the pretrained model instead of directly combining with the PosENet. Fig. 6 illustrates the impact of zero-padding on encoding position information subject to padding using a Gaussian pattern. Recall that the position information is considered to be content independent but our in Table 1 show that semantics within an image may affect the position map. To visualize the impact of semantics, we compute the content loss heat map using the following equation: As shown in Figure 7, the heatmaps of PosENet have larger content loss around the corners. While the loss maps of VGG and ResNet correlate more with the semantic content. Especially for ResNet, the deeper understanding of semantic content leads to a stronger interference in generating a smooth gradient. The highest losses are from the face, person, cat, airplane and vase respectively (from left to right). This visualization can be an alternative method to show which regions a model focuses on, especially in the case of ResNet. Saliency Detection: We further validate our findings in the position-dependent tasks (semantic segmentation and salient object detection (SOD)). First, we train the VGG network with and without zero-padding from scratch to validate if the position information delivered by zero-padding is critical for detecting salient regions. For these experiments, we use the publicly available MSRA dataset as our SOD training set and evaluate on three other datasets (ECSSD, PASCAL-S, and DUT-OMRON). From Table 5 (a), we can see that VGG without padding achieves much worse on both of the metrics (F-measure and MAE) which further validates our findings that zero-padding is the key source of position information. Semantic Segmentation: We also validate the impact of zero-padding on the semantic segmentation task. 
We train the VGG16 network with and without zero padding on the training set of PASCAL VOC 2012 dataset and evaluate on the validation set. Similar to SOD, the model with zero padding significantly outperforms the model with no padding. We believe that CNN models pretrained on these two tasks can learn more position information than classification task. To validate this hypothesis, we take the VGG model pretrained on ImageNet as our baseline. Meanwhile, we train two VGG models on the tasks of semantic segmentation and saliency detection from scratch, denoted as VGG-SS and VGG-SOD respectively. Then we finetune these three VGG models following the protocol used in Section 3.3. From Table 6, we can see that the VGG-SS and VGG-SOD models outperform VGG by a large margin. These experiments further reveal that the zero-padding strategy plays an important role in a position-dependent task, an observation that has been long-ignored in neural network solutions to vision problems. Table 6: Comparison of VGG models pretrained for classification, SOD, and semantic segmentation. In this paper we explore the hypothesis that absolute position information is implicitly encoded in convolutional neural networks. Experiments reveal that positional information is available to a strong degree. More detailed experiments show that larger receptive fields or non-linear readout of positional information further augments the readout of absolute position, which is already very strong from a trivial single layer 3 × 3 PosENet. Experiments also reveal that this recovery is possible when no semantic cues are present and interference from semantic information suggests joint encoding of what (semantic features) and where (absolute position). Results point to zero padding and borders as an anchor from which spatial information is derived and eventually propagated over the whole image as spatial abstraction occurs. These demonstrate a fundamental property of CNNs that was unknown to date, and for which much further exploration is warranted.
Our work shows that positional information is implicitly encoded in a network. This information is important for position-dependent tasks, e.g., semantic segmentation and saliency detection.
1,307
scitldr
Semantic parsing which maps a natural language sentence into a formal machine-readable representation of its meaning, is highly constrained by the limited annotated training data. Inspired by the idea of coarse-to-fine, we propose a general-to-detailed neural network(GDNN) by incorporating cross-domain sketch(CDS) among utterances and their logic forms. For utterances in different domains, the General Network will extract CDS using an encoder-decoder model in a multi-task learning setup. Then for some utterances in a specific domain, the Detailed Network will generate the detailed target parts using sequence-to-sequence architecture with advanced attention to both utterance and generated CDS. Our experiments show that compared to direct multi-task learning, CDS has improved the performance in semantic parsing task which converts users' requests into meaning representation language(MRL). We also use experiments to illustrate that CDS works by adding some constraints to the target decoding process, which further proves the effectiveness and rationality of CDS. Recently many natural language processing (NLP) tasks based on the neural network have shown promising and gained much attention because these studies are purely data-driven without linguistic prior knowledge. Semantic parsing task which maps a natural language sentence into a machine-readable representation BID6 ), as a particular translation task, can be treated as a sequence-to-sequence problem BID3 ). Lately, a compositional graph-based semantic meaning representation language (MRL) has been introduced BID14 ), which converts utterance into logic form (action-object-attribute), increasing the ability to represent complex requests. This work is based on MRL format for semantic parsing task. Semantic parsing highly depends on the amount of annotated data and it is hard to annotate the data in logic forms such as Alexa MRL. Several researchers have focused on the area of multi-task learning and transfer learning BID10, BID6, BID15 ) with the observation that while these tasks differ in their domains and forms, the structure of language composition repeats across domains BID12 ). Compared to the model trained on a single domain only, a multi-task model that shares information across domains can improve both performance and generalization. However, there is still a lack of interpretations why the multi-task learning setting works BID26 ) and what the tasks have shared. Some NLP studies around language modeling BID18, BID29, BID2 ) indicate that implicit commonalities of the sentences including syntax and morphology exist and can share among domains, but these commonalities have not been fully discussed and quantified. To address this problem, in this work, compared to multi-task learning mentioned above which directly use neural networks to learn shared features in an implicit way, we try to define these cross-domain commonalities explicitly as cross-domain sketch (CDS). E.g., Search weather in 10 days in domain Weather and Find schedule for films at night in domain ScreeningEvent both have action SearchAction and Attribute time, so that they share a same MRL structure like SearchAction(Type(time@?)), where Type indicates domain and? indicates attribute value which is copying from the original utterance. We extract this domain general MRL structure as CDS. Inspired by the research of coarse-to-fine BID4 ), we construct a two-level encoder-decoder by using CDS as a middle coarse layer. 
We firstly use General Network to get the CDS for every utterance in all domains. Then for a single specific domain, based on both utterance and extracted CDS, we decode the final target with advanced attention while CDS can be seen as adding some constraints to this process. The first utterance-CDS process can be regarded as a multi-task learning setup since it is suitable for all utterances across the domains. This work mainly introducing CDS using multi-task learning has some contributions listed below: 1) We make an assumption that there exist cross-domain commonalities including syntactic and phrasal similarity for utterances and extract these commonalities as cross-domain sketch (CDS) which for our knowledge is the first time. We then define CDS on two different levels (action-level and attribute-level) trying to seek the most appropriate definition of CDS.2) We propose a general-to-detailed neural network by incorporating CDS as a middle coarse layer. CDS is not only a high-level extraction of commonalities across all the domains, but also a prior information for fine process helping the final decoding.3) Since CDS is cross-domain, our first-level network General Network which encodes the utterance and decodes CDS can be seen as a multi-task learning setup, capturing the commonalities among utterances expressions from different domains which is exactly the goal of multi-task learning. Traditional spoken language understanding (SLU) factors language understanding into domain classification, intent prediction, and slot filling, which proves to be effective in some domains BID9 ). Representations of SLU use pre-defined fixed and flat structures, which limit its expression skills like that it is hard to capture the similarity among utterances when the utterances are from different domains BID15 ). Due to SLU's limited representation skills, meaning representation language (MRL) has been introduced which is a compositional graph-based semantic representation, increasing the ability to represent more complex requests BID14 ). There are several different logic forms including lambda-calculus expression BID17 ), SQL BID33 ), Alexa MRL BID14 ). Compared to fixed and flat SLU representations, MRL BID14 ) based on a large-scale ontology, is much stronger in expression in several aspects like cross-domain and complex utterances. Mapping a natural language utterance into machine interpreted logic form (such as MRL) can be regarded as a special translation task, which is treated as a sequence-to-sequence problem BID28 ). Then BID1 and BID22 advance the sequence-tosequence network with attention mechanism learning the alignments between target and input words, making great progress in the performance. BID23 explore the attention mechanism with some improvements by replacing attention function with attention sparsity. Besides, to deal with the rare words, BID8 incorporate the copy mechanism into the encoder-decoder model by directly copying words from inputs. Lately, many researchers have been around improving sequence-to-sequence model itself, in interpreting the sentence syntax information. BID5 encode the input sentence recursively in a bottom-up fashion. BID30 generate the target sequence and syntax tree through actions simultaneously. Another aspect which has caught much attention is constrained decoding. BID16 and BID25 add some constraints into decoding process, making it more controllable. 
BID3 use the recurrent network as encoder which proves effective in sequence representation, and respectively use the recurrent network as decoder and tree-decoder. BID16 employ the grammar to constrain the decoding process. BID4, believe utterance understanding is from high-level to low-level and by employing sketch, improve the performance. For semantic parsing task especially in MRL format, it is expensive and time-consuming to annotate the data, and it is challenging to train semantic parsing neural models. Multi-task learning aims to use other related tasks to improve target task performance. BID20 deal with traditional SLU piper-line network by jointly detecting intent and doing slot filling. BID27 share parameters among various tasks, according to the low-level and high-level difference. BID11 divide the representation network into task-specific and general which is shared during multi-task learning. BID6 and BID12 directly share the encoder or decoder neural layers (model params) through different semantic parsing tasks. In BID15, multi-task learning also mainly acts sharing the params of the network. For human language expressions, especially task-oriented requests, there exist commonalities across sentences including linguistic syntax and phrase similarity. They can be seen with general sentence templates. TAB1 Since sentence templates are too many, we try to leverage these common regularities more abstractly. We extract these invariant commonalities which are implicit across domains, and call them as crossdomain sketch (CDS) in a canonical way. We define CDS in meaning representation language (MRL) format (action-object-attribute) and on two levels (action-level and attribute-level). Action-level CDS means to acquire the same action for utterances from different domains while the attribute-level CDS means to extract more detailed information. See examples in TAB1. Instead of extracting CDS from utterance directly, we try converting from logic form into CDS reversely, because it is easier to deal with structural logic form than utterance in natural language form. We analyze the dataset Snips and use a rule-based method to obtain CDS. We strip logic forms from domain-specific components and preserve domainindependent parts including general actions and attributes. We do some statistics on the dataset Snips BID7 ) used in this paper. We convert attributes [object type, movie type, restaurant type] into {object type}, [object name, restaurant name, movie name] into {object name}, [year, timeRange] into {time}, [location name, current location] into {object location}, [country, city] into {place}. All those attributes account for 55% of all attributes which indicate the existence and feasibility of CDS. Figure 1 shows our overall network, which contains a two-level encoder-decoder. The General Network encodes utterance and decodes cross-domain sketch (CDS). Since this process is domaingeneral, it can be done to all domains, which is a multi-task setup. The Detailed Network firstly encodes the CDS and the utterance, then it decodes the target based on both utterance and CDS. This process is domain-dependent, so that it is a fine-tuning process in a specific domain. For an input utterance u = u 1, u 2,...u |u|, and its middle cross-domain sketch (CDS) c = c 1, c 2,...c |c|, and its final logic form y = y 1, y 2,...y |y|, the conditional probability is: Figure 1: Overall Network. 
General Network (red dashed box below) encodes the utterance with bi-directional LSTM and decodes cross-domain sketch (CDS) using unidirectional LSTM with attention to utterance in all domains. For identical encoding, general utterance encoding and specific utterance encoding share the same encoder while for separate encoding, they are not (see Section 3.2.2). Then Detailed Network, in one specific domain, encodes CDS and utterance using bi-directional LSTM, decodes the final target with advanced attention to both utterance and CDS. DISPLAYFORM0 where y <t = y 1, y 2,...y |t−1|, and c <t = c 1, c 2,...c |t−1|. The neural encoder of our model is similar to neural machine translation (NMT) model, which uses a bi-directional recurrent neural network. Firstly each word of utterance is mapped into a vector u t ∈ R d via embedding layer and we get a word sequence u = (u 1, ..., u |u|). Then we use a bi-directional recurrent neural network with long short-term memory units (LSTM) BID13 ) to learn the representation of word sequence. We generate forward hidden state DISPLAYFORM0 The t-th word will be h DISPLAYFORM1 We construct two kinds of utterance encoders, general utterance encoder for General Network and specific utterance encoder for Detailed Network (see in Figure 1), so as to extract different information for different purposes. The general utterance encoder, meant to pay more attention to cross-domain commonalities of utterances, is used by all utterances from all domains. The specific utterance encoder, which is domain-dependent, belongs to one specific domain and is more sensitive to details. We call encoder outputs h ug t from general utterance encoder and h us t from specific utterance encoder. When the two encoders share the same parameters that is h ug t = h us t, we call it identical encoding and when they are not, we call it separate encoding, inspired by BID27; BID16; BID21; BID0 ) which explore the sharing mechanisms of multi-task learning and propose some improvements. The General Network is meant to obtain cross-domain sketch (CDS) c conditioned on utterance u, using an encoder-decoder network. After encoding utterance by general utterance encoder for all domains, we obtain h ug t (see Section 3.2.2). Then we start to decode CDS.The decoder is based on a unidirectional recurrent neural network, and the output vector is used to predict the word. The cd represents CDS decoder. DISPLAYFORM0 where c t is the previously predicted word embedding. The LuongAttention BID22 ) between decoding hidden state d t and encoding sequence e i (i = 1, 2, ...|e|) at time step t is computed as: DISPLAYFORM1 DISPLAYFORM2 Based on equations FORMULA4 and FORMULA5, we compute the attention a The t-th predicted output token will be: DISPLAYFORM0 DISPLAYFORM1 where W, b are parameters. After decoding CDS words c = (c 1, ..., c |c|), we use an encoder to represent its meaning and due to words' relation with forward and backward contexts, we choose to use a bi-directional LSTM. We generate forward hidden state − → h Through specific utterance encoder and cross-domain sketch (CDS) encoder, we acquired t-th word representation h us t and h ce t. Finally with advanced attention to both encoded utterance u and CDS c, we decode the final target y. The decoder is based on a unidirectional recurrent neural network, and the output vector is used to predict the word. The y represents target decoder. DISPLAYFORM0 where y t is the previously predicted word embedding. 
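The Luong-style attention used between a decoder state and an encoder sequence can be sketched as below; the "general" bilinear score is one of the variants in Luong et al., and choosing that particular variant here is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LuongAttention(nn.Module):
    def __init__(self, dec_dim, enc_dim):
        super().__init__()
        self.W = nn.Linear(enc_dim, dec_dim, bias=False)   # "general" score: d_t^T W e_i

    def forward(self, d_t, enc_outputs):
        # d_t: (batch, dec_dim); enc_outputs: (batch, src_len, enc_dim)
        scores = torch.bmm(self.W(enc_outputs), d_t.unsqueeze(-1)).squeeze(-1)  # (batch, src_len)
        weights = F.softmax(scores, dim=-1)                                     # attention weights
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)       # (batch, enc_dim)
        return context, weights
```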
During target decoding process and at time step t, we not only compute the attention to utterance encoding outputs h us but also compute the attention to CDS encoding outputs h ce. The attention between target hidden state and utterance is a 1, 2,...|c|) in the same way. Then the t-th predicted output token will be based on the advanced two-aspect attention: DISPLAYFORM1 For training process, the objective is: DISPLAYFORM0 T is the training corpus. For inference process, we firstly obtain cross-domain sketch (CDS) via c = argmax p(c|u) then we get the final target logic form via y = argmax p(y|u, c). For both decoding processes, we use greedy search to generate words one by one. Existed semantic parsing datasets, e.g., GEO BID32 ), ATIS BID31 ), collect data only from one domain and have a very limited amount, which can not fully interpret the effectiveness of cross-domain sketch (CDS) since it needs large dataset among different domains. In this case, we mainly consider the semantic parsing task Snips (Goo et al. FORMULA0) based on MRL format (action-object-attribute). Snips collects users' requests from a personal voice assistant. The original dataset is annotated in spoken language understanding (SLU) format (intent-slot). It has 7 intent types and 72 slot labels, and more statistics are shown in TAB4. Based on the format (intent-slot), we pre-process this dataset into MRL format by some pre-defined rules, then we regard the intent as domain/task and share CDS among them. The details are shown in Target SearchAction (ScreeningEventType ( object type @ 2, movie type @ 4, timeRange @ 6, location name @ 8 9 10) ) TAB3: Several examples of Snips. Utterance is the user's request which is a natural language expression. Intent and slots are in formats from original dataset. Cross-domain sketch (CDS) has two levels (action-level and attribute-level). Target is the final logic form with numbers indicating copying words from utterance (index starting from 0). We use Tensorflow in all our experiments, with LuongAttention BID22 ) and copy mechanism. The embedding dimension is set to 100 and initialized with GloVe embeddings (Pennington et al. FORMULA0). The encoder and decoder both use one-layer LSTM with hidden size 50. We apply the dropout selected in {0.3,0.5}. Learning rate is initialized with 0.001 and is decaying during training. Early stopping is applied. The mini-batch size is set to 16. We use the logic form accuracy as the evaluation metric. Firstly, in order to prove the role of the cross-domain sketch (CDS) in helping to guide decoding process with multi-tasking learning setup, we do several experiments, and the are shown in Table 4. For joint learning, we apply several multi-task architectures from BID6 ), including one-to-one, one-to-many and one-to-shareMany. One-to-one architecture applies a single sequence-to-sequence model across all the tasks. One-to-many only shares the encoder across all the tasks while the decoder including the attention parameters is not shared. In one-to-shareMany model, tasks share encoder and decoder (including attention) params, but the output layer of decoder is task-independent. From the Table 4, in general, joint learning performs better than single task learning. In joint learning, one-to-one is the best and performs way better than one-to-many and one-to-shareMany, probably limited by the dataset's size and similarity among tasks. 
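Before turning to the experiments, one decoding step of the Detailed Network can be sketched as combining the two attention contexts, one over the utterance encodings h^us and one over the CDS encodings h^ce, before the output projection. The sketch reuses the LuongAttention module from the previous example; the dimensions and the concatenation of the contexts are assumptions.

```python
import torch
import torch.nn as nn

class DetailedDecoderStep(nn.Module):
    def __init__(self, emb_dim, hid_dim, utt_dim, cds_dim, vocab_size):
        super().__init__()
        self.cell = nn.LSTMCell(emb_dim, hid_dim)
        self.attn_utt = LuongAttention(hid_dim, utt_dim)   # attention over utterance encodings h^us
        self.attn_cds = LuongAttention(hid_dim, cds_dim)   # attention over CDS encodings h^ce
        self.out = nn.Linear(hid_dim + utt_dim + cds_dim, vocab_size)

    def forward(self, y_prev_emb, state, utt_enc, cds_enc):
        h, c = self.cell(y_prev_emb, state)                # y_prev_emb: previously predicted word embedding
        ctx_u, _ = self.attn_utt(h, utt_enc)
        ctx_c, _ = self.attn_cds(h, cds_enc)
        logits = self.out(torch.cat([h, ctx_u, ctx_c], dim=-1))   # two-aspect attention
        return logits, (h, c)
```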
By incorporating CDS, our GDNN (general-to-detailed neural network) models have all improved the performance to different degrees. The CDS is defined on two levels (action-level and attribute-level, see examples in TAB3) and attribute-level CDS improves greater than action-level CDS, which is in our expectation since it offers more information for tasks to share. We also experiment on different utterance encoding setups with identical encoding and separate encoding (see Section 3.2.2). The separate encoding setup performs better than sharing the same encoder for utterance, which integrates the fact that different encoders pay different attention to the utterances due to different purposes which means one is more general and the other is more specific detailed. Method Snips Accuracy Single Seq2Seq BID28 62.3 Joint Seq2Seq (one-to-many) BID6 62.0 Joint Seq2Seq (one-to-shareMany) BID6 64.2 Joint Seq2Seq (one-to-one) BID6 71.4 GDNN with Action-level CDS (identical encoding) 74.9 GDNN with Action-level CDS (separate encoding) 75.1 GDNN with Attribute-level CDS (identical encoding) 76.7 GDNN with Attribute-level CDS (separate encoding) 78.1 Table 4: Multi-task Results. Single Seq2Seq means each task has a sequenece-to-sequence model. Joint Seq2Seq show with three multi-task mechanisms. Our include GDNN (generalto-detailed neural network) models with different levels of CDS (action-level/attribute level) and different utterance encoding mechanisms (identical encoding/separate encoding).We also list the full of GDNN in TAB6 Moreover, we compare our experiments with traditional models which regard the task as intent classification and slot filling (IC SF). The are shown in TAB7 below. From TAB7, we can see compared to IC SF models (based on sequence labeling format), Seq2Seq perform worse (71.4% compared to 73.2%) due to its fewer assumptions and larger decode size as well as its difficulty of training, which is usual in comparing IC SF models and sequence-tosequence models. Through using CDS, the performance has significantly improved. On the one Method Snips Accuracy Joint Seq. BID10 73.2 Atten.-Based BID19 74.1 Slot.-Gated (Intent Atten.) BID7 74.6 Slot.-Gated (Full Atten.) BID7 75.5 Joint Seq2Seq BID6 71.4 GDNN with Action-level CDS 73.2 GDNN with Attribute-level CDS 74.6 hand, CDS extract the cross-domain commonalities among tasks helping to make the multi-task learning more specific, which can be seen as an advance to multi-task learning. On the other hand, CDS can be seen adding some constraints to the final target decoding process which has offered more information for the decoding process, compared to direct joint Seq2Seq. To better prove and explain this idea, we do some experiments according to constraint decoding aspect. We try to compare the sub-process of converting utterance to CDS through different models, e.g., IC SF, Seq2Seq. From the Table 7, we can see that Seq2Seq achieve the comparable (87.7%) to IC SF model (84.9%) for generating CDS from utterance, which further explains that, the fact joint seq2seq performs worse (71.4%, see TAB7) than IC SF model (73.2%) is owing to the lack of guidance and constraints during the follow-up decoding process. By incorporating CDS, we add some constraints to this decoding process thus obtaining better performance. Table 7: Results of CDS generation in dataset Snips by two methods. IC SF is using intent classification and slot filling with evaluation metric (intent accuracy, slot labelling accuracy and final accuracy). 
Seq2Seq generates CDS based on utterance using an encoder-decoder. In this paper, we propose the concept of cross-domain sketch (CDS) which extracts some shared information across domains, trying to fully utilize the cross-domain commonalities such as syntactic and phrasal similarity in human expressions. We try to define CDS on two levels and give some examples to illustrate our idea. We also present a general-to-detailed neural network (GDNN) for converting an utterance into a logic form based on meaning representation language (MRL) form. The general network, which is meant to extract cross-domain commonalities, uses an encoderdecoder model to obtain CDS in a multi-task setup. Then the detailed network generates the final domain-specific target by exploiting utterance and CDS simultaneously via attention mechanism. Our experiments demonstrate the effectiveness of CDS and multi-task learning. CDS is able to generalize over a wide range of tasks since it is an extraction to language expressions. Therefore, in the future, we would like to perfect the CDS definition and extend its' ontology to other domains and tasks. Besides, in this paper, we use attention mechanism to make use of CDS which is still a indirect way. We would like to explore more effective ways such as constraint decoding to further enhance the role of CDS.
General-to-detailed neural network (GDNN) with Multi-Task Learning by incorporating a cross-domain sketch (CDS) for semantic parsing
1,308
scitldr
The learnability of different neural architectures can be characterized directly by computable measures of data complexity. In this paper, we reframe the problem of architecture selection as understanding how data determines the most expressive and generalizable architectures suited to that data, beyond inductive bias. After suggesting algebraic topology as a measure for data complexity, we show that the power of a network to express the topological complexity of a dataset in its decision boundary is a strictly limiting factor in its ability to generalize. We then provide the first empirical characterization of the topological capacity of neural networks. Our empirical analysis shows that at every level of dataset complexity, neural networks exhibit topological phase transitions and stratification. This observation allowed us to connect existing theory to empirically driven conjectures on the choice of architectures for a single hidden layer neural networks. Deep learning has rapidly become one of the most pervasively applied techniques in machine learning. From computer vision BID15 ) and reinforcement learning BID18 ) to natural language processing BID27 ) and speech recognition ), the core principles of hierarchical representation and optimization central to deep learning have revolutionized the state of the art; see BID10. In each domain, a major difficulty lies in selecting the architectures of models that most optimally take advantage of structure in the data. In computer vision, for example, a large body of work BID24, BID25, BID12, etc.) focuses on improving the initial architectural choices of BID15 by developing novel network topologies and optimization schemes specific to vision tasks. Despite the success of this approach, there are still not general principles for choosing architectures in arbitrary settings, and in order for deep learning to scale efficiently to new problems and domains without expert architecture designers, the problem of architecture selection must be better understood. Theoretically, substantial analysis has explored how various properties of neural networks, (eg. the depth, width, and connectivity) relate to their expressivity and generalization capability, BID6, BID11 ). However, the foregoing theory can only be used to determine an architecture in practice if it is understood how expressive a model need be in order to solve a problem. On the other hand, neural architecture search (NAS) views architecture selection as a compositional hyperparameter search BID23, BID9, BID31 ). As a NAS ideally yields expressive and powerful architectures, but it is often difficult to interperate the ing architectures beyond justifying their use from their emperical optimality. We propose a third alternative to the foregoing: data-first architecture selection. In practice, experts design architectures with some inductive bias about the data, and more generally, like any hyperparameter selection problem, the most expressive neural architectures for learning on a particular dataset are solely determined by the nature of the true data distribution. Therefore, architecture selection can be rephrased as follows: given a learning problem (some dataset), which architectures are suitably regularized and expressive enough to learn and generalize on that problem?A natural approach to this question is to develop some objective measure of data complexity, and then characterize neural architectures by their ability to learn subject to that complexity. 
Then given some new dataset, the problem of architecture selection is distilled to computing the data complexity and chosing the appropriate architecture. For example, take the two datasets D 1 and D 2 given in FIG0 (ab) and FIG0 (cd) respectively. The first dataset, D 1, consists of positive examples sampled from two disks and negative examples from their compliment. On the right, dataset D 2 consists of positive points sampled from two disks and two rings with hollow centers. Under some geometric measure of complexity D 2 appears more'complicated' than D 1 because it contains more holes and clusters. As one trains single layer neural networks of increasing hidden dimension on both datasets, the minimum number of hidden units required to achieve zero testing error is ordered according to this geometric complexity. Visually in FIG0, regardless of initialization no single hidden layer neural network with ≤ 12 units, denoted h ≤12, can express the two holes and clusters in D 2. Whereas on the simpler D 1, both h 12 and h 26 can express the decision boundary perfectly. Returning to architecture selection, one wonders if this characterization can be extrapolated; that is, is it true that for datasets with'similar' geometric complexity to D 1, any architecture with ≥ 12 hidden learns perfectly, and likewise for those datasets similar in complexity to D 2, architectures with ≤ 12 hidden units can never learn to completion? In this paper, we formalize the above of geometric complexity in the language of algebraic topology. We show that questions of architecture selection can be answered by understanding the'topological capacity' of different neural networks. In particular, a geometric complexity measure, called persistent homology, characterizes the capacity of neural architectures in direct relation to their ability to generalize on data. Using persistent homology, we develop a method which gives the first empirical insight into the learnability of different architectures as data complexity increases. In addition, our method allows us to generate conjectures which tighten known theoretical bounds on the expressivity of neural networks. Finally, we show that topological characterizations of architectures are possible and useful for architecture selection in practice by computing the persistent homology of CIFAR-10 and several UCI datasets. In order to more formally describe notions of geometric complexity in datasets, we will turn to the language of topology. Broadly speaking, topology is a branch of mathematics that deals with characterizing shapes, spaces, and sets by their connectivity. In the context of characterizing neural networks, we will work towards defining the topological complexity of a dataset in terms of how that dataset is'connected', and then group neural networks by their capacity to produce decision regions of the same connectivity. BID14 ).In topology, one understands the relationships between two different spaces of points by the continuous maps between them. Informally, we say that two topological spaces A and B are equivalent (A ∼ = B) if there is a continuous function f: A → B that has an inverse f −1 that is also continuous. When f exists, we say that A and B are homeomorphic and f is their homeomorphism; for a more detailed treatment of general topology see BID2. Take for example, the classic example of the coffee cup and the donut in FIG1. 
They are homeomorphic because one can define a continuous deformation of one into the other which shrinks, twists, and morphs without tearing or gluing, as in FIG1. Note that if the donut had two holes, it would no longer be equivalent to the mug. Likewise, in an informal way, FIG0 since if there were a homeomorphism f: D 1 → D 2 at least one of the clusters in D 1 would need to be split in order to produce the four different regions in D 2. DISPLAYFORM0 The power of topology lies in its capacity to differentiate sets (topological spaces) in a meaningful geometric way that discards certain irrelevant properties such as rotation, translation, curvature, etc. For the purposes of defining geometric complexity, non-topological properties 1 like curvature would further fine-tune architecture selection-say if D 2 had the same regions but with squigly (differentially complex) boundaries, certain architectures might not converge-but as we will show, grouping neural networks by'topological capacity' provides a powerful minimality condition. That is, we will show that if a certain architecture is incapable of expressing a decision region that is equivalent in topology to training data, then there is no hope of it ever generalizing to the true data. Algebraic topology provides the tools necessary to not only build the foregoing notion of topological equivalence into a measure of geometric complexity, but also to compute that measure on real data (Betti, BID7 BID2 ). At its core, algebraic topology takes topological spaces (shapes and sets with certain properties) and assigns them algebraic objects such as groups, chains, and other more exotic constructs. In doing so, two spaces can be shown to be topologically equivalent (or distinct) if the algebraic objects to which they are assigned are isomorphic (or not). Thus algebraic topology will allow us to compare the complexity of decision boundaries and datasets by the objects to which they are assigned. Although there are many flavors of algebraic topology, a powerful and computationally realizable tool is homology. Definition 2.1 . If X is a topological space, then H n (X) = Z βn is called the nth homology group of X if the power β n is the number of'holes' of dimension n in X. Note that β 0 is the number of separate connected components. We call β n (X) the nth Betti number of X. Finally, the homology 2 of X is defined as DISPLAYFORM0 Immediately homology brings us closer to defining the complexity of D 1 and D 2. If we assume that D 1 is not actually a collection of N datapoints, but really the union of 2 solid balls, and likewise that D 2 is the union of 2 solid balls and 2 rings, then we can compute the homology directly. In this case H 0 (D 1) = Z 2 since there are two connected components 3; H 1 (D 1) = {0} since there are no circles (one-dimensional holes); and clearly, H n (D 1) = {0} for n ≥ 2. Performing the same computation in the second case, we get H 0 (D 2) = Z 4 and H 1 (D 2) = Z 2 as there are 4 seperate clusters and 2 rings/holes. With respect to any reasonable ordering on homology, D 2 is more complex than D 1. The measure yields non-trivial differentiation of spaces in higher dimension. For example, the homology of a hollow donut is DISPLAYFORM1 Surprisingly, the homology of a space contains a great deal of information about its topological complexity 1. The following theorem suggests the absolute power of homology to group topologically similar spaces, and therefore neural networks with topologically similar decision regions. 
DISPLAYFORM2 Intuitively, Theorem 2.2 states that number of'holes' (and in the case of H 0 (X), connected components) are topologically invariant, and can be used to show that two shapes (or decision regions) are different. In order to compute the homology of both D 1 and D 2 we needed to assume that they were actually the geometric shapes from which they were sampled. Without such assumptions, for any dataset DISPLAYFORM0 where N is the number of data points. This is because, at small enough scales each data point can be isolated as its own connected component; that is, as sets each pair of different positive points d 1, d 2 ∈ D are disjoint. To properly utilize homological complexity in better understanding architecture selection, we need to be able to compute the homology of the data directly and still capture meaningful topological information. Persistent homology, introduced in BID30, avoids the trivialization of computation of dataset homology by providing an algorithm to calculate the homology of a filtration of a space. Specifically, a filtration is a topological space X equipped with a sequence of subspaces X 0 ⊂ X 1 ⊂ · · · ⊂ X. In FIG2 one such particular filtration is given by growing balls of size centered at each point, and then letting X be the ing subspace in the filtration. Define β n (X) to be the nth Betti number of the homology H(X) of X. Then for example at = 1.5, β 0 (X) = 19 and β 1 (X) = 0 as every ball is disjoint. At = 5.0 some connected components merge and β 0 (X) = 12 and β 1 (X) = 0. Finally at = 7, the union of the balls forms a hole towards the center of the dataset and β 1 (X) > 0 with β 0 (X) = 4.All together the change in homology and therefore Betti numbers for X as changes can be summarized succinctly in the persistence barcode diagram given in FIG2. Each bar in the section β n (X) denotes a'hole' of dimension n. The left endpoint of the bar is the point at which homology detects that particular component, and the right endpoint is when that component becomes indistinguishable in the filtration. When calculating the persistent homology of datasets we will frequently use these diagrams. With the foregoing algorithms established, we are now equipped with the tools to study the capacity of neural networks in the language of algebraic topology. In the forthcoming section, we will apply persistent homology to emperically characterize the power of certain neural architectures. To understand why homological complexity is a powerful measure for differentiating architectures, we present the following principle. Suppose that D is some dataset drawn from a joint distribution F with continuous CDF on some topological space X × {0, 1}. Let X + denote the support of the distribution of points with positive labels, and X − denote that of the points with negative labels. Then let H S (f):= H[f −1 ((0, ∞))] denote the support homology of some function f: X → {0, 1}. Essentially H S (f) is homology of the set of x such that f (x) > 0. For a binary classifier, f, H S (f) is roughly a characterization of how many'holes' are in the positive decision region of f. We will sometimes use β n (f) to denote the nth Betti number of this support homology. Finally let F = {f : X → {0, 1}} be some family of binary classifiers on X. Theorem 3.1 (The Homological Principle of Generalization). 
If X = X − X + and for all f ∈ F with H S (f) = H(X +), then for all f ∈ F there exists A ⊂ X + so f misclassifies every x ∈ A.Essential Theorem 3.1 says that if an architecture (a family of models F) is incapable of producing a certain homological complexity, then for any model using that architecture there will always be a set A of true data points on which the model will fail. Note that the above principle holds regardless of how f ∈ F is attained, learned or otherwise. However, the principle does imply that no matter how well some F learns to correctly classify D there will always be a counter examples in the true data. In the context of architecture selection, the foregoing minimality condition significantly reduces the size of the search space by eliminating smaller architectures which cannot even express the'holes' (persistent homology) of the data H(D). This allows us to return to our original question of finding suitably expressive and generalizeable architectures but in the very computable language of homological complexity: Let F A the set of all neural networks with'architecture' A, then Given a dataset D, for which architectures A does there exist a neural network f ∈ F A such that H S (f) = H(D)? We will resurface a contemporary theoretical view on this question, and thereafter make the first steps towards an emperical characterization of the capacity of neural architectures in the view of topology. Theoretically, the homological complexity of neural network can be framed in terms of the sum of the number of holes expressible by certain architectures. In particular, BID1 gives an analysis of how the maximum sum of Betti numbers grows as F A changes. The , summarized in TAB0, show that the width, depth, and activation of a fully connected architecture effect its topological expressivity to varying polynomial and exponential degrees. What is unclear from this analysis is how these bounds describe expressivity in terms of individual Betti numbers. For example, with a tanh activation function, n inputs, layers, and h hidden units, there is no description of what the number of connected components max f ∈F A β 0 (f) or 1-dimensional holes max f ∈F A β 1 (f) actually is. With regards to tighter bounds BID1 stipulate that improvements to their are deeply tied to several unsolved problems in algebraic topology. BID1.) DISPLAYFORM0 Figure 4: Topological phase transitions in low dimensional neural networks as the homological complexity of the data increases. The upper right corner of each plot is a dataset on which the neural networks of increasing hidden dimension are trained. To understand how the homology of data determines expressive architectures we turn to an empirical characterization of neural networks. In this setting, we can tighten the bounds given in TAB0 by training different architectures on datasets with known homologies and then recording the decision regions observed over the course of training. In the most basic case, one is interested in studying how the number of hidden units in a single hidden layer neural network affect its homological capacity. The of BID1 say for certain activation functions we should expect a polynomial dependence on the sum of Betti numbers β n, but is this true of individual numbers? Having an individual characterization would allow for architecture selection by computing the homology of the dataset, and then finding which architectures meet the minimal criterion for each Betti number β n. 
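To make the dataset homology H(D) referenced by this principle concrete, the sketch below samples a small point cloud with known topology (a noisy circle plus a separate cluster) and estimates its Betti numbers from the long bars of a Vietoris-Rips persistence diagram. It assumes the third-party ripser package (the paper itself reports using Dionysus), and the 0.5 persistence cutoff separating long bars from noise is an arbitrary illustrative choice.

import numpy as np
from ripser import ripser  # assumed available; any persistent-homology backend would do

rng = np.random.default_rng(0)

# A noisy circle (one component, one 1-dimensional hole) plus a small cluster:
theta = rng.uniform(0, 2 * np.pi, 200)
circle = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (200, 2))
blob = rng.normal(loc=(3.0, 0.0), scale=0.1, size=(80, 2))
X = np.vstack([circle, blob])  # expected: beta_0 = 2, beta_1 = 1

# Persistence diagrams of the Vietoris-Rips filtration up to dimension 1.
dgms = ripser(X, maxdim=1)["dgms"]
for dim, dgm in enumerate(dgms):
    lifetimes = dgm[:, 1] - dgm[:, 0]      # death minus birth of each bar
    long_bars = int(np.sum(lifetimes > 0.5))  # crude cutoff for "persistent" features
    print(f"estimated beta_{dim}: {long_bars}")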
Restricting 5 our analysis to the case of two inputs, n = 2, we characterize the capacities of architectures with an increasing number of hidden units to learn on datasets with homological complexities ranging from {Z 1, 0} to {Z 20, Z 20}. In our experiment, we generate datasets of each particular homological complexity by sampling different combinations of balls, rings, and scaling, twisting, and gluing them at random. After generating the foregoing datasets with N ≈ 90000 samples we train 100 randomly (truncated normal) initialized single hidden layer architectures with hidden units h ∈ {1, . . ., 255} and tanh activation functions for 10 6 minibatches of size 128. During training, every 2500 batches we sample the decision boundary of each neural network over a grid of 500 × 500 samples, producing 1.02 × 10 6 recorded decision boundaries. Using the ing data, we not only characterize different architectures but observed interesting topological phenomena during learning. First, neural networks exhibit a statistically significant topological phase transition in their convergence which depends directly on the homological complexity of the data. For any dataset in the experiment and any random homeomorphism applied thereto, the best test error of architectures with h hidden units is strictly ordered in magnitude and convergence time for h < h phase where h phase is a number of hidden units required to express the homology of the data. In Figure 4 we plot the best performing test error of architectures h ∈ {1, . . ., 15} on some example datasets DISPLAYFORM0 In this example h phase (D 0) = 4, h phase (D 1) = 6, and h phase (D 2) = 10. Surprisingly, leading up to the phase transition point, each different architecture falls into its own band of optimal convergence. This suggests that additional hidden units do in fact add to the topological capacity of an architecture in a consistent way. Using topological phase transitions we now return to the original question of existence of expressive architectures. In FIG3, we accumulate the probabilities that neural networks of varying hidden dimension train to zero-error on datasets of different homological complexities. The table gives different views into how expressive an architecture need be in order to converge, and therefore we are able to conjecture tighter bounds on the capacity of hidden units. Extrapolating from the first view, if H 0 (D) = Z m then there exists a single hidden layer neural network with h = m + 2 that converges to zero error on D. Likewise we claim that if H 0 (D) = Z m and H 1 (D) = 1 then the same holds with h ≥ 3m − 1. Further empirical analysis of convergence probabilities yields additional conjectures. However, claiming converse conjectures about a failure to generalize in the view of Theorem 3.1 requires exhaustive computation of decision boundary homologies. By applying persistent homology to the decision boundaries of certain networks during training, we observe that given sufficient data, neural networks exhibit topological stratification. For example, consider the homologies of different architecture decision regions as training progresses in FIG4. At the beginning of training every model captures the global topological information of the dataset and is homologically correlated with one another. However as training continues, the architectures stratify into two groups with homological complexities ordered by the capacities of the models. 
In this example, h 3, h 4, and h 5 are unable to express as many holes as the other architectures and so never specialize to more complex and local topological properties of the data. FIG4 (b) depicts topological stratification in terms of the correlation between Betti numbers. Topologically speaking, networks with less than 6 hidden units are distinct from those with more for most of training. Furthermore, this correlative view shows that stratification is consistent with topological phase transition; that is, across all decision boundary homologies recorded during the experiment stratification occurs just when the number of hidden units is slightly less than h phase. We have thus far demonstrated the discriminatory power of homological complexity in determining the expressivity of architectures. However, for homological complexity to have any practical use in architecture selection, it must be computable on real data, and more generally real data must have non-trivial homology; if all data were topologically simple our characterization would have no predictive power. In the following section we will compute the persistent homologies up to dimension 2 of different real world datasets. CIFAR-10. We compute the persistent homology of several classes of CIFAR-10 using the Python library Dionysus. Currently algorithms for persistent homology do not deal well with high dimensional data, so we embed the entire dataset in R 3 using local linear embedding (LLE;) with K = 120 neighbors. After embedding the dataset, we take a sample of 1000 points from example class'car' and build a persistent filtration by constructing a Vietoris-Rips complex on the data. The ing complex has 20833750 simplices and took 4.3 min. to generate. UCI Datasets. We further compute the homology of three low dimensional UCI datasets and attempt to assert the of non-trivial, h phase. Specifically, we compute the persistent homology of the majority classes in the Yeast Protein Localization Sites, UCI Ecoli Protein Localization Sites, and HTRU2 datasets. For these datasets no dimensionality reduction was used. In FIG5 (left), the persistence barcode exhibits two seperate significant loops (holes) at ∈ [0.19, 0.31] and ∈ [0.76, 0.85], as well as two major connected components in β 0 (D). The Other persistence diagrams are relegated to the appendix. Existing Data. Outside of the primary machine learning literature, topological data analysis yields non-trivial computations in wide variety of fields and datasets. Of particular interest is the work of BID3, which computes the homological complexity of collections of n × n patches of natural images. Even in these simple collections of images, the authors found topologies of Klein Bottles (H(·) = {Z, Z 2 /2Z, 0 . . .}) and other exotic topological objects. Other authors have calculated non-trivial dataset homologies in biological BID26 ), natural language BID17 ), and other domains BID28, BID29 ). We will place this work in the context of deep learning theory as it relates to expressivity. Since the seminal work of BID5 which established standard universal approximation for neural networks, many theorists have endeavored to understand the expressivity of certain neural architectures. BID19 BID16 provided the first analysis relating the depth and width of architectures to the complexitity of the sublevel sets they can express. Motivated therefrom, BID1 expressed this theme in the language of Pfefferian functions, thereby bounding the sum of Betti numbers expressed by sublevel sets. 
gave an account of how topological assumptions on the input data lead to optimally expressive architectures. In parallel, BID8 presented the first analytical minimality in expressivity theory; that is, the authors show that there are simple functions that cannot be expressed by two layer neural networks with out exponential dependence on input dimension. This work spurred the work ofPoole et al., which reframed expressivity in a differential geometric lense. Our work presents the first method to derive expressivity empirically. Our topological viewpoint sits dually with its differential geometric counterpart, and in conjunction with the work of and BID1, this duallity implies that when topological expression is not possible, exponential differential expressivity allows networks to bypass homological constaints at the cost of adversarial sets. Furthermore, our work opens a practical connectio nbetween the foregoing theory on neural expressivity and architecture selection, with the potential to drastically improve neural architecture search BID31 ) by directly computing the capacities of different architectures. Architectural power is deeply related to the algebraic topology of decision boundaries. In this work we distilled neural network expressivity into an empirical question of the generalization capabilities of architectures with respect to the homological complexity of learning problems. This view allowed us to provide an empirical method for developing tighter characterizations on the the capacity of different architectures in addition to a principled approach to guiding architecture selection by computation of persistent homology on real data. There are several potential avenues of future research in using homological complexity to better understand neural architectures. First, a full characterization of neural networks with many layers or convolutional linearities is a crucial next step. Our empirical suggest that the their are exact formulas describing the of power of neural networks to express decision boundaries with certain properties. Future theoretical work in determining these forms would significantly increase the efficiency and power of neural architecture search, constraining the search space by the persistent homology of the data. Additionally, we intend on studying how the topological complexity of data changes as it is propagated through deeper architectures. A.1 HOMOLOGY Homology is naturally described using the language of category theory. Let T op 2 denote the category of topological spaces and Ab the category of abelian groups. A → X and j: X → (X, A) the sequence sequence of inclusions and connecting homomorphisms are exact.3. Given the pair (X, A) and an open set U ⊂ X such that cl(U) ⊂ int(A) then the inclusion k: (X − U, A − U) → (X, A) induces an isomorphism k *: H * (X − U, A − U) → H * (X, A)4. For a one point space P, H i (P) = 0 for all i = 0.5. For a topological sum X = + α X α the homomorphism DISPLAYFORM0 is an isomorphism, where i α: X α → X is the inclusion. For related definitions and requisite notions we refer the reader to.A.2 PROOF OF THEOREM 3.1 Theorem A.2. Let X be a topological space and X + be some open subspace. If F ⊂ 2 X such that f ∈ F implies H S (f) = H(X +), then for all f ∈ F there exists A ⊂ X so that f (A ∩ X +) = {0} and f (A ∩ (X \ X +)) = {1}.Proof. Suppose the for the sake of contraiction that for all f ∈ F, H S (f) = H(X +) and yet there exists an f such that for all A ⊂ X, there exists an x ∈ A such that f (x) = 1. 
Then take A = {x} for each x ∈ X, and note that f maps each singleton into its proper partition of X. We have that for any open subset V ⊂ X+, f(V) = {1}, and for any closed subset W ⊂ X \ X+, f(W) = {0}. Therefore X+ = ∪_{A ∈ τ_{X+}} A ⊂ supp(f), since the subspace topology satisfies τ_{X+ ∩ X} = τ_{X+} ∩ τ_X, where τ_{X+} = {A ∈ τ_X | A ⊂ X+} and τ_X denotes the topology of X. Likewise, int(X−) ⊂ X \ supp(f) by the same logic. Therefore supp(f) has the exact same topology as X+, and so by Theorem 2.2, H(X+) = H(supp(f)), which is a contradiction. This completes the proof.
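The support homology H_S(f) used throughout the paper can be approximated cheaply in two dimensions by rasterizing a trained classifier's positive decision region and counting its connected components and enclosed holes. The sketch below does this for a small MLP trained on a synthetic annulus; scikit-learn and SciPy are assumed, and the grid-based hole count is only a discrete approximation of the decision-region homology discussed above.

import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic labels: the positive region is an annulus (beta_0 = 1, beta_1 = 1).
X = rng.uniform(-2, 2, size=(8000, 2))
r = np.hypot(X[:, 0], X[:, 1])
y = ((r > 0.7) & (r < 1.5)).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), activation="tanh",
                    max_iter=3000, random_state=0).fit(X, y)

# Rasterize the positive decision region on a grid.
xs = np.linspace(-2, 2, 200)
gx, gy = np.meshgrid(xs, xs)
grid = np.c_[gx.ravel(), gy.ravel()]
mask = clf.predict(grid).reshape(200, 200).astype(bool)

# beta_0: connected components of the positive region.
_, beta0 = ndimage.label(mask)

# beta_1: bounded components of the complement, i.e. holes enclosed by the region.
comp, n_comp = ndimage.label(~mask)
border = np.unique(np.r_[comp[0, :], comp[-1, :], comp[:, 0], comp[:, -1]])
beta1 = n_comp - len(border[border != 0])

print("decision-region Betti numbers:", beta0, beta1)  # ideally 1 and 1 for the annulus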
We show that the learnability of different neural architectures can be characterized directly by computable measures of data complexity.
1,309
scitldr
Challenges in natural sciences can often be phrased as optimization problems. Machine learning techniques have recently been applied to solve such problems. One example in chemistry is the design of tailor-made organic materials and molecules, which requires efficient methods to explore the chemical space. We present a genetic algorithm (GA) that is enhanced with a neural network (DNN) based discriminator model to improve the diversity of generated molecules and at the same time steer the GA. We show that our algorithm outperforms other generative models in optimization tasks. We furthermore present a way to increase interpretability of genetic algorithms, which helped us to derive design principles The design of optimal structures under constraints is an important problem spanning multiple domains in the physical sciences. Specifically, in chemistry, the design of tailor-made organic materials and molecules requires efficient methods to explore the chemical space. Purely experimental approaches are often time consuming and expensive. Reliable computational tools can accelerate and guide experimental efforts to find new materials faster. We present a genetic algorithm (GA) (; ; ;) for molecular design that is enhanced with two features: Inverse design is the systematic development of structures with desired properties. In chemistry , the challenge of inverse design has been tackled as an optimization problem, among others in the form of variational autoencoders (VAEs), generative adversarial networks (GANs) and genetic algorithms. Variational autoencoders and generative adversarial networks VAEs are a widely used method for direct generation of molecular string or graph representations (Gómez-). They encode discrete representations into a continuous (latent) space. Molecules resembling a known structure can be found by searching around the region of the encoded point. Making using of the continuous latent representation, it is possible to search via gradients or Bayesian Optimization (BO). However, the generation of semantically and syntactically valid molecules is a challenging task. Thus, several follow up works to the VAE have been proposed for inverse design in chemistry. Among them, CVAE (Gómez-), GVAE and SD-VAE work directly on string molecular representations. Alternatively, JT-VAE (a) as well as work on molecular graphs. Unlike latent space property optimization, the policy network (PN) based GCPN model proposes a reinforcement learning (RL)-based method for direct optimization on molecular graphs. ORGAN demonstrate training string-based generative adversarial networks (GANs) via RL. Another method based on adversarial training is VJTNN (b). first introduced molecule generating models based on language models and reinforcement learning, where actions in an environment are taken to construct a molecule, receiving reward from an external scoring function. This model has also shown strong performance in the GuacaMol benchmark . In all the approaches mentioned above, generative models are trained to mimic the reference data set distributions, thus limiting the exploration ability of VAEs and GANs. There exist several examples of GA based molecule optimization algorithms in literature (; ; ; ; ;) ). While some of these examples pre-define mutations on a SMILES level to ensure validity of the molecules, other approaches use fragmentbased assembly of molecules. GAs are likely to get trapped in regions of local optima . 
Thus, for selection of the best molecules, report multiple restarts upon stagnation. We include the aforementioned models as baselines for numerical comparison. 3 GA-D ARCHITECTURE 3.1 OVERVIEW Our approach is illustrated in Figure 1. Our generator is a genetic algorithm with a population of molecules m. In each generation, the fitness of all molecules is evaluated as a linear combination of molecular properties J(m) and the discriminator score D(m): Random mutations of high fitness (best performing) molecules replace inferior members, while bestperforming molecules continue to a subsequent generation. The probability of replacing a molecule is evaluated using a smooth logistic function based on a ranking of fitness among the molecules of a generation. At the end of each generation, a neural network based discriminator is trained jointly on molecules generated by the GA and a reference data set. The fitness evaluation accounts for the discriminator predictions for each molecule. Therefore, the discriminator plays a role in the selection of the subsequent population. Mutation of molecules to populate subsequent generations is an important element of the GA. A low degree of mutation can lead to a slow exploration of the chemical space, causing stagnation Figure 1: Overview of our hybrid structure, which augments genetic algorithms with ML based neural networks. of the fitness function. Robustness of SELFIES allows us to do random mutations to molecular strings while preserving their validity. Thus, our mutations only include 50% insertions or 50% replacements of single SELFIES characters. To accelerate the exploration of the GA, we add one domain-specific mutation rule -the direct addition of phenyl groups in approximately 4% of cases. Character deletion is implicitly taken into account in SELFIES mutations, and we do not use crossover rules. The fundamental role of the discriminator is to increase molecular diversity by removing longsurviving molecules. Consider the realistic scenario in which a GA has found a molecule close to a local optimum, where all mutations are lowering the fitness. As a , this molecule survives for multiple generations while occupying a large fraction of the population. During such periods, the GA has limited view of the chemical space as it repeatedly explores mutations of the same high-fitness molecule. A straightforward solution would be the addition of a linear penalty in the fitness function that accounts for the number of successive generations a molecule survives. However, this method assigns independent scores to similar-looking molecules, which again in less variety. Our solution is the addition of an adaptive penalty (in our case a neural network based discriminator), thus resolving the problem of stagnation. Molecules with similar representation receive similar classification scores. Furthermore, long-surviving molecules are trained longer and receive weaker scores, ing in decreasing fitness -reducing the chance of long periods of stagnation (illustrated in Figure 2). The task of the discriminator thus is to memorize families of high performing molecules and penalize their fitness to force the GA to explore different regions in chemical space. Comparing to the literature standard, we aim at maximizing the penalized logP objective J(m) proposed by (Gómez-). 
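Before the objective itself is defined, a skeleton of the generation loop just described may help: fitness combines the property score J(m) and the discriminator score D(m), and replacement of inferior molecules follows a smooth rank-based probability. The helper name evolve, the logistic schedule, the elite fraction and all constants are illustrative assumptions; the property score, discriminator and SELFIES mutation are left as pluggable callables.

import math
import random

def evolve(population, score_property, score_discriminator, mutate,
           beta=10.0, generations=100, k=0.5):
    # population: list of molecule representations (e.g. SELFIES strings).
    n = len(population)
    for _ in range(generations):
        # Fitness combines the molecular property score and the discriminator score.
        fitness = [score_property(m) + beta * score_discriminator(m) for m in population]
        ranked = [population[i] for i in
                  sorted(range(n), key=lambda i: fitness[i], reverse=True)]
        new_pop = []
        for rank, mol in enumerate(ranked):
            # Smooth, rank-based replacement probability: the worse the rank,
            # the more likely the slot is refilled by a mutated elite molecule.
            p_replace = 1.0 / (1.0 + math.exp(-k * (rank - n / 2)))
            if random.random() < p_replace:
                parent = random.choice(ranked[: max(1, n // 10)])
                new_pop.append(mutate(parent))
            else:
                new_pop.append(mol)
        population = new_pop
        # The discriminator would be retrained here on the new population plus
        # reference molecules before the next generation.
    return population

# Toy usage with stand-ins for the real scoring and mutation functions.
pop = ["C" * random.randint(1, 10) for _ in range(20)]
pop = evolve(pop, score_property=len, score_discriminator=lambda m: 0.0,
             mutate=lambda m: m + "C", generations=5)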
For molecule m, the penalized logP function is defined as where logP indicates the water-octanol partition coefficient, SA represents the synthetic accessibility and prevents the formation of chemically unfeasible molecules, and RingPenalty linearly penalizes the presence of rings of size larger than 6. Our reference data set consists of 250,000 commercially available molecules extracted from the ZINC database . All three quantities in the above equation are normalized based on this data set. 5.6 ± 0.5 RNN GCPN 12.61 ± 0.81 GA GA + D (here) 13.31 ± 0.63 20.72 ± 3.14) GA + DNN 1 average of three best molecules after performing Bayesian optimization on the latent representation of a trained VAE 2 two best molecules of a single run 3 averaged over 10 runs with 20 molecules per generation with a molecular weight (excl. hydrogen) smaller than 39.15 ± 3.50 g/mol run for 50 generations 4 averaged over 10 runs with 500 molecules up to 81 characters per generation and 100 generations 5 averaged over 5 runs with 1000 generations each We define our fitness function according to Eq. 1 and 2 with β = 0 and 10. The algorithm is run for 100 generations with a population size of 500. All generated molecules are constrained to a canonical smile length of 81 characters (as in ). We train the discriminator (fully connected neural network with ReLU activation and sigmoid output layer, the input is a vector of chemical and geometrical properties characterizing the molecules) at the end of each generation for 10 epochs on 500 molecules proposed by the GA and 500 randomly drawn molecules from the ZINC data set. We report maximum J(m) scores, averaged over 10 independent runs. The highest J(m) achieved by our approach are 13.31 ± 0.63 (β = 10) and 12.61 ± 0.81 (β = 0), respectively, which is almost twice as high es the highest literature value of 7.87±0.07 (See Table 1). Furthermore, we compare to 50,000 random valid SELFIES strings, which surprisingly outperforms some existing generative models. The GA-D(t) are explained in the next section. In Figure 3, we show the of runs where we use a time-dependent adaptive penalty. During periods of saturation, the weight of the discriminator predictions is switched from 0 to 1000 until stagnation is overcome. The genetic algorithm is hence forced to propose new families of molecules to increase J(m). As it can be observed in Figure 3, even after steep decreases in max J(m), the scores recover and potentially reach values higher than in previous plateaus. We observe that this approach significantly outperforms all previous methods in maximizing the objective J(m). Visual inspection of the highest performing molecules shows that the GA is exploiting deficiencies of the (penalized) logP metric by generating chemically irrelevant motifs such as sulfur chains. While being of limited relevance for application, this nonetheless shows the that the GA is very efficient in exploring the chemical space and finding solutions for a given task. Analysis of these solutions will help us to better understand and eventually improve objective functions. At the same time, surprising solutions found by an unbiased algorithm can lead to unexpected discovery or boost human creativity . 
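The penalized logP objective described above, J(m) = logP(m) − SA(m) − RingPenalty(m) with each term standardized against ZINC statistics in the paper, can be approximated with RDKit roughly as follows. The reliance on RDKit's contrib sascorer module and the omission of the normalization step are assumptions of this sketch.

import os
import sys
from rdkit import Chem
from rdkit.Chem import Descriptors, RDConfig

# The synthetic-accessibility scorer ships with RDKit as a contrib module.
sys.path.append(os.path.join(RDConfig.RDContribDir, "SA_Score"))
import sascorer

def penalized_logp(smiles):
    """Unnormalized penalized logP: logP - SA - ring penalty. The paper
    additionally standardizes each term with ZINC-250k mean and std."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    log_p = Descriptors.MolLogP(mol)
    sa = sascorer.calculateScore(mol)
    ring_sizes = [len(ring) for ring in mol.GetRingInfo().AtomRings()]
    largest = max(ring_sizes) if ring_sizes else 0
    ring_penalty = max(0, largest - 6)  # linearly penalize rings larger than 6 atoms
    return log_p - sa - ring_penalty

print(penalized_logp("CCCCCCCCCC"))  # decane: high logP, trivial synthesis
print(penalized_logp("c1ccccc1O"))   # phenol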
While exploitative tasks that follow some reference database are significant for questions involving drug design, more explorative behaviour could be beneficial in domains such as solar cell design , flow battery design , and in particular for questions where datas sets are not available, such as design of targets in molecular beam interferometry . Figure 4 shows classes of molecules explored by the GA in a trajectory with 1000 generations and a generation size of 500 (see trajectories in Figure 3). A K-means clustering analysis based in the RDKit fingerprint bit-vectors with 20 clusters was used to automatically generate labels for the 50 best performing molecules (in terms of their properties J(m)) in each generation. We find that the algorithm starts with a class of relatively small molecules (≈ 40 generations, see class 10), after which the algorithm explores several different classes of large molecules with e.g. aromatic rings and conjugates carbon chains (see class 15) and long linear sulfur chains (see class 1). Class 1 includes the molecules with the highest J(m) scores found in this work, exceeding values of 20. The clustering analysis furthermore helps to analyze the large number of generated molecules. This allows us to learn from the data and derive design rules to achieve high target properties. In case of the penalized logP score, we find that representative examples (short distance to the cluster center) of high performing classes contain aromatic rings, conjugated carbon chains and linear sulfur chains, which can act as design rules for molecules with high penalized logP scores. Using these design rules to construct molecules of the maximally allowed length consisting of only sulfur chains, conjugated carbon chains and chains of benzene rings with sulfur bridges yields J(m) scores of 31.79, 8.58 and 10.02, respectively. The J(m) score reached by the linear sulfur chain outperforms the best scores found by the GA and other generative models. To visualize the explorative behaviour of the GA, we did a two-dimensional principal component analysis (PCA) of the molecular fingerprint vectors of all molecules generated by the GA-D(t) in the trajectory shown in Figure 4. Five snapshots of the trajectory are shown in Figure 5, colored according to their chemical family. The property score J(m) of the trajectory is visualized in the images. We find that the GA sequentially explores different parts of the 2D projection of the chemical space, finding different families of high performing molecules, while the discriminator prevents the algorithm from searching in the same space repeatedly. In the previous sections, we aimed to maximize the penalized logP objective. Here, we consider two different tasks: firstly, generating molecules of specific chemical interest and secondly, modifying existing molecules to increase penalized logP scores. We used β = 0 throughout Section 4.4. We run 250 instances of randomly selected sets of target properties (logP in the interval from -5 to 10, SA scores in the range from 1 to 5 and ring penalty for 0 to 3). The fitness function is modified to minimize the summed squared difference between the actual and desired properties. We compute the number of experiments that successfully proposed molecules with a squared difference less than 1.0. Each run is constrained to run for 100 generations with a maximum canonical SMILES length of 81 characters. In 90.0% of the cases, our approach proposes the desired molecules. 
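The clustering analysis described above can be reproduced in outline with RDKit fingerprint bit-vectors and scikit-learn's k-means. The choice of 20 clusters mirrors the text; the short molecule list below is a placeholder for the top-scoring GA molecules of each generation, and the fingerprint size is an arbitrary choice.

import numpy as np
from rdkit import Chem, DataStructs
from sklearn.cluster import KMeans

smiles_list = ["CCO", "CCN", "c1ccccc1", "c1ccccc1O", "CCCCCC", "CC(=O)O",
               "c1ccc2ccccc2c1", "CCOC(=O)C", "CCS", "CSSC"]  # stand-in molecules

def rdkit_fp(smiles, n_bits=2048):
    # RDKit topological fingerprint as a dense numpy vector.
    mol = Chem.MolFromSmiles(smiles)
    fp = Chem.RDKFingerprint(mol, fpSize=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

X = np.array([rdkit_fp(s) for s in smiles_list])
n_clusters = min(20, len(smiles_list))  # the paper uses k = 20 on its 50 best molecules
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
for smi, lab in zip(smiles_list, labels):
    print(lab, smi)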
Improvement Success Improvement Success JT-VAE (a) 0.84 ± 1.45 83.6% 0.21 ± 0.71 46.4% GCPN 2.49 ± 1.30 100.0% 0.79 ± 0.63 100.0% MMPA (b) 3.29 ± 1.12 -1.65 ± 1.44 -DEFactor 3.41 ± 1.67 85.9% 1.55 ± 1.19 72.6% VJTNN (b) 3.55 ± 1.67 -2.33 ± 1.17 -GA (here) 5.93 ± 1.41 100.0% 3.44 ± 1.09 99.8% Figure 6: Distributions of logP and QED for a) ZINC and b) GuacaMol data set compared to molecules generated using the GA. c) Examples of molecules generated by the GA with high logP and QED scores. In this experiment, we follow the experimental setup proposed by. We optimize the penalized logP score of 800 low-scoring molecules from the ZINC data set. Our genetic algorithm is initiated with a molecule from the data set, and we run each experiment for 20 generations and a population size of 500 without the discriminator. For each run, we report the molecule m that increases the penalized logP the greatest, while possessing a similarity sim(m, m) > δ with the respective reference molecules m. We calculate molecular similarity based on Morgan Fingerprints of radius 2. To ensure generation of molecules possessing a certain similarity, for molecule m we modify the fitness to: Here, SimilarityPenalty(m) is 0 if sim(m, m) > δ and −10 6 otherwise. In Table 2, we report the average improvement for each molecule. Success is determined when the GA successfully improve upon the penalized logP score, while not violating the similarity constraint. Figure S1 shows examples of improved molecular structures. To show the performance of the GA on a drug-discovery task , we modified the objective function to include the drug-likeness score QED . Solubility metric logP and drug-likeness cannot be maximized at the same time, which is shown in Figure 6 at the example of the ZINC and the GuacaMol data set . The experiment shows that the GA is able to efficiently and densely sample the edge of the property distributions of both data sets. Example molecules that simultaneously maximize logP and QED are shown in Figure 6c. The definition of the fitness function used in this work (see Equation 1) has a free parameter (β), that balances the relative importance of molecular target properties J(m) and discriminator score Figure 7 illustrates the of this test, where each curve is obtained from averaging five independent runs with equal settings. In Figure 7a, we show the average property score J(m) for 11 different values β and observe, as expected, that low values of β on average lead to higher J(m) score while very high values of β limit the algorithm to stay inside the reference data set distribution, which, per definition, has a mean J(m) score of 0. Figure 7b illustrates the low average discriminator scores encountered after a few generations in case of low values of β. The curves shown in Figure 7a and b suggest that β allows us to interpolate between high and low J(m) and high and low resemblance with the reference data set. However, as shown in Figure 7c, this is not the case. At values of β ≈ 20, there is a rather fast transition from molecules with high values of J(m) of approximately 12-13 to a J(m) distribution centered around 0 which is similar to the J(m) distribution of the reference data set. The same abrupt change is visible in the distributions of the discriminator scores. Only at β ≈ 18, we find a slightly wider distribution of intermediate J(m) values, which we assume is partially related to the fact that the β = 18 curve shown in Figure 7a is not converged yet. 
Figure 7d shows examples of molecules generated with β = 0 (upper panel) and β = 50 (lower panel). While in the first case, the GA finds large molecules with many aromatic rings and long linear chains, that do not resemble the molecules in the reference data set, the latter case shows molecules with structures and properties resembling the reference data set. In this case, the distribution of J(m) is comparable to that of the data set (with 0 mean and unit standard deviation), while the discriminator encounters difficulty in correctly classification of the GA molecules, indicated by a mean discriminator score varying around 0.5 across many generations. We presented a hybrid GA and ML-based generative model and demonstrated its application in molecular design. The model outperforms literature approaches in generating molecules with desired properties. A detailed analysis of the data generated by the genetic algorithm allowed us to interpret the model and learn rules for the design of high performing molecules. This human expert design inspired from GA molecules outperformed all molecules created by generative models. For computationally more expensive property evaluations, we will extend our approach by the introduction of an on-the-fly trained ML property evaluation method, which will open new ways of solving the inverse design challenge in chemistry and materials sciences. Our approach is independent of domain knowledge, thus applicable to design questions in other scientific disciplines beyond chemistry. We therefore plan to generalize the GA-D approach to make it a more general concept of generative modelling. 6 SUPPLEMENTARY INFORMATION Figure S1 shows examples of the molecules optimized in Section 4.4. Figure S1: Molecular modifications ing in increased penalized logP scores under similarity constraint sim(m, m) > 0.4, 0.6. We show the molecules that ed in largest score improvement. Figures S2-S4 show comparisons between the property distributions observed in molecule data sets such as the ZINC and the GuacaMol data set and property distributions of molecules generated using random SELFIES (Figure S2), GA generated molecules with the penalized logP objective (Figure S3) and GA generated molecules with an objective function which includes logP and QED (Figure S4). While the average logP scores of average SELFIES are low, the tail of the distribution reaches to high values, explaining the surprisingly high penalized logP scores shown in Table 1. The QED and weight distributions of molecules optimized using the penalized logP objective significantly differ from the distributions of the ZINC and the GuacaMol data set (see Figure S3). As soon as the QED score is simultaneously optimized, the distributions of GA generated molecules and molecules from the reference data sets become more similar (see Figure S4). Figure 6 shows that the GA can simultaneously optimize logP and QED. Figure S2: Distributions of a) logP, b) SA, c) QED and d) molecular weight for randomly generated SELFIES, molecules from the ZINC data set and molecules from the GuacaMol data set. Figure S3: Distributions of a) logP, b) SA, c) QED and d) molecular weight for GA generated SELFIES (penalized logP objective function), molecules from the ZINC data set and molecules from the GuacaMol data set. Figure S4: Distributions of a) logP, b) SA, c) QED and d) molecular weight for GA generated SELFIES (logP and QED as objective function), molecules from the ZINC data set and molecules from the GuacaMol data set.
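For the constrained-optimization experiment discussed above, the similarity term can be computed with Morgan fingerprints and Tanimoto similarity in RDKit. The hard −10^6 penalty follows the fitness modification quoted in the text; the helper names, the default δ = 0.4 and the example molecules are placeholders for illustration.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a, smiles_b, radius=2, n_bits=2048):
    fa = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles_a), radius, nBits=n_bits)
    fb = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles_b), radius, nBits=n_bits)
    return DataStructs.TanimotoSimilarity(fa, fb)

def constrained_fitness(candidate, reference, property_score, delta=0.4):
    """J(m) plus the SimilarityPenalty of the text: 0 if sim(m, m') > delta,
    otherwise -10^6, which effectively discards dissimilar candidates."""
    penalty = 0.0 if tanimoto(candidate, reference) > delta else -1e6
    return property_score(candidate) + penalty

# Toy usage: heavy-atom count stands in for the penalized logP property score.
heavy_atoms = lambda s: Chem.MolFromSmiles(s).GetNumHeavyAtoms()
print(constrained_fitness("c1ccccc1CCO", "c1ccccc1CO", heavy_atoms))
print(constrained_fitness("CCCCCCCCCC", "c1ccccc1CO", heavy_atoms))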
Tackling inverse design via genetic algorithms augmented with deep neural networks.
1,310
scitldr
Bidirectional Encoder Representations from Transformers (BERT) reach state-of-the-art in a variety of Natural Language Processing tasks. However, understanding of their internal functioning is still insufficient and unsatisfactory. In order to better understand BERT and other Transformer-based models, we present a layer-wise analysis of BERT's hidden states. Unlike previous research, which mainly focuses on explaining Transformer models by their attention weights, we argue that hidden states contain equally valuable information. Specifically, our analysis focuses on models fine-tuned on the task of Question Answering (QA) as an example of a complex downstream task. We inspect how QA models transform token vectors in order to find the correct answer. To this end, we apply a set of general and QA-specific probing tasks that reveal the information stored in each representation layer. Our qualitative analysis of hidden state visualizations provides additional insights into BERT's reasoning process. Our results show that the transformations within BERT go through phases that are related to traditional pipeline tasks. The system can therefore implicitly incorporate task-specific information into its token representations. Furthermore, our analysis reveals that fine-tuning has little impact on the models' semantic abilities and that prediction errors can be recognized in the vector representations of even early layers. In recent months, Transformer models have become more and more prevalent in the field of Natural Language Processing. Originally they became popular for their improvements over RNNs in Machine Translation BID36. Now however, with the advent of large models and an equally large amount of pre-training being done, they have proven adept at solving many of the standard Natural Language Processing tasks. The main subject of this paper is BERT BID8, arguably the most popular of the recent Transformer models and the first to display significant improvements over previous state-of-the-art models in a number of different benchmarks and tasks. Problem of black box models. Deep Learning models achieve increasingly impressive results across a number of different domains, whereas their application to real-world tasks has been moving somewhat more slowly. One major impediment lies in the lack of transparency, reliability and prediction guarantees in these largely black box models. While Transformers are commonly believed to be moderately interpretable through the inspection of their attention values, current research suggests that this may not always be the case BID15. This paper takes a different approach to the interpretation of said Transformer Networks.
Instead of evaluating attention values, our approach examines the hidden states between encoder layers directly. There are multiple questions this paper will address: Do Transformers answer questions decompositionally, in a similar manner to humans? Do specific layers in a multi-layer Transformer network solve different tasks? What influence does fine-tuning have on a network's inner state? Can an evaluation of network layers help come to a on why and how a network failed to predict a correct answer?We discuss these questions on the basis of fine-tuned models on standard QA datasets. We choose the task of Question Answering as an example of a complex downstream task that, as this paper will show, requires solving a multitude of other Natural Language Processing tasks. Additionally, it has been shown that other NLP tasks can be successfully framed as QA tasks BID22, therefore our analysis should translate to these tasks as well. While this work focuses on the BERT architecture, we perform preliminary tests on the small GPT-2 model BID28 as well, which yield similar . First, we propose a layer-wise visualisation of token representations that reveals information about the internal state of Transformer networks. This visualisation can be used to expose wrong predictions even in earlier layers or to show which parts of the context the model considered as Supporting Facts. Second, we apply a set of general NLP Probing Tasks and extend them by the QA-specific tasks of Question Type Classification and Supporting Fact Extraction. This way we can analyse the abilities within BERT's layers and how they are impacted by fine-tuning. Third, we show that BERT's transformations go through similar phases, even if fine-tuned on different tasks. Information about general language properties is encoded in earlier layers of BERT and implicitly used to solve the downstream task at hand in later layers. Transformer Models. Our analyses focus on BERT, which belongs to the group of Transformer networks, named after how representations are transformed throughout the network layers. We also partly include the more recent Transformer model GPT-2 BID28. This model represents OpenAI's improved version of GPT BID27 and while GPT-2 has not yet climbed leaderboards like BERT has, its larger versions have proven adept enough at the language modeling task, that Open-AI has decided not to release their pre-trained models. There are also other Transformer models of note, where a similar analysis might prove interesting in future work. Chief among them are the Universal Transformer BID7 and TransformerXL BID6, both of which aim to improve some of the flaws of the Transformer architecture by adding a recurrent inductive bias. Interpretability and Probing. Explainability and Interpretability of neural models have become an increasingly large field of research. While there are a multitude of ways to approach these topics BID9 BID12 BID19, we especially highlight relevant work in the area of research that builds and applies probing tasks and methodologies, post-hoc, to trained models. There have been a number of recent advances on this topic. While the majority of the current works aim to create or apply more general purpose probing tasks BID2 BID4 BID33, BERT specifically has also been probed in previous papers. Tenney et al. BID34 proposes a novel "edge-probing" framework consisting of nine different probing tasks and applies it to the contextualized word embeddings of ELMo, BERT and GPT-1. 
Both semantic and syntactic information is probed, but only pre-trained models are studied, and not specifically fine-tuned ones. A similar analysis BID11 adds more probing tasks and addresses only the BERT architecture. Qiao et al. BID26 focus specifically on analysing BERT in the context of a Ranking task. The authors probe attention values in different layers and measure performance for representations build from different BERT layers. Like BID34, they only discuss pre-trained models. There has also been work which studies models not through probing tasks but through qualitative visual analysis. Zhang and Zhu BID41 offer a survey of different approaches, though limited to CNNs. Nagamine et al. BID24 explore phoneme recognition in DNNs by studying single node activations in the task of speech recognition. Hupkes et al. BID14 go one step further, by not only doing a qualitative analysis, but also training diagnostic classifiers to support their hypotheses. Finally, Li et al. BID17 take a look at word vectors and the importance of some of their specific dimensions on both sequence tagging and classification tasks. The most closely related previous work is proposed by Liu et al. BID20. Here, the authors also perform a layer-wise analysis of BERT's token representations. However, their work solely focuses on probing pre-trained models and disregards models fine-tuned on downstream tasks. Furthermore, it limits the analysis to the general transferability of the network and does not analyze the specific phases that BERT goes through. Additionally, our work is motivated by Jain and Wallace BID15. In their paper, the authors argue that attention, at least in some cases, is not well suited to solve the issues of explainability and interpretability. They do so both by constructing adversarial examples and by a comparison with more traditional explainability methods. In supporting this claim, we propose revisiting evaluating hidden states and token representations instead. We focus our analysis on fine-tuned BERT models. In order to understand which transformations the models apply to input tokens, we take two approaches: First, we analyse the transforming token vectors qualitatively by examining their positions in vector space. Second, we probe their language abilities on QA-related tasks to examine our quantitatively. The architecture of BERT and Transformer networks in general allows us to follow the transformations of each token throughout the network. We use this characteristic for an analysis of the changes that are being made to the tokens' representations in every layer. We use the following approach for a qualitative analysis of these transformations: We randomly select both correctly and falsely predicted samples from the test set of the respective dataset. For these samples we collect the hidden states from each layer while removing any padding. This in the representation of each token throughout the model's layers. The model can transform the vector space freely throughout its layers and we do not have references for semantic meanings of positions within these vector spaces. Therefore we consider distances between token vectors as indication for semantic relations. Dimensionality reduction. BERT's pre-trained models use vector dimensions of 1024 (large model) and 512 (base model). In order to visualize relations between tokens, we apply dimensionality reduction and fit the vectors into two-dimensional space. 
To that end we apply T-distributed Stochastic Neighbor Embedding (t-SNE) BID35, Principal Component Analysis (PCA) BID10 and Independent Component Analysis (ICA) BID3 to the vectors in each layer. As the results of PCA reveal the most distinct clusters for our data, we use it to present our findings. K-means clustering. In order to verify that clusters in 2D space represent the actual distribution in high-dimensional vector space, we additionally apply k-means clustering BID21. We choose the number of clusters k with regard to the number of clusters observed in the PCA plots, which varies over layers. The resulting clusters correspond with our observations in 2D space. Our goal is to further understand the abilities of the model after each transformation. We therefore apply a set of semantic probing tasks to analyze which information is stored within the transformed tokens after each layer. We want to know whether specific layers are reserved for specific tasks and how language information is maintained or forgotten by the model. We use the principle of Edge Probing introduced by Tenney et al. BID34. Edge Probing translates core NLP tasks into classification tasks by focusing solely on their labeling part. This enables a standardized probing mechanism over a wide range of tasks. We adopt the tasks Named Entity Labeling, Coreference Resolution and Relation Classification from the original paper as they are prerequisites for language understanding and reasoning BID39. We add the tasks of Question Type Classification and Supporting Fact Identification due to their importance for Question Answering in particular. Named Entity Labeling. Given a span of tokens the model has to predict the correct entity category. This is based on Named Entity Recognition but formulated as a classification problem. The task was modeled by BID34; annotations are based on the OntoNotes 5.0 corpus BID38 and contain 18 entity categories. Coreference Resolution. The Coreference task requires the model to predict whether two mentions within a text refer to the same entity. The task was built from the OntoNotes corpus and enhanced with negative samples by BID34. Relation Classification. In Relation Classification the model has to predict which relation type connects two known entities. The task was constructed by BID34 with samples taken from the SemEval 2010 Task 8 dataset consisting of English web text and nine directional relation types. We will make the source code to all experiments publicly available. Question Type Classification. A fundamental part of answering a question is to correctly identify its question type. For this Edge Probing task we use the Question Classification dataset constructed by Li and Roth BID18 based on the TREC-10 QA dataset BID37. It includes 50 fine-grained question types within the larger groups of abbreviation, entity, description, human, location and numeric value. We use the whole question as input to the model with its question type as label. Supporting Facts. The extraction of Supporting Facts is a main prerequisite for Question Answering tasks, especially in the multihop case. We examine what BERT's token transformations can tell us about the mechanism behind distinguishing distracting from important context parts. To understand at which stage this distinction is done, we construct a probing task for identifying Supporting Facts.
The model has to predict whether a sentence contains supporting facts regarding a specific question or whether it is irrelevant. Through this task we test the hypothesis that token representations contain information about their significance to the question. Both HotpotQA and bAbI contain information about sentence-wise Supporting Facts for each question. SQuAD does not require multi-hop reasoning; we therefore consider the sentence containing the answer phrase the Supporting Fact. We also exclude all QA pairs that only contain one context sentence. We construct a different probing task for each dataset in order to check their task-specific ability to recognize relevant parts. All samples are labeled sentence-wise with true if they are a supporting fact or false otherwise. Probing Setup. Analogously to the authors of BID34, we embed input tokens for each probing task sample with our fine-tuned BERT model. Contrary to previous work, we do this for all layers (N = 12 for BERT-base and N = 24 for BERT-large), using only the output embedding from the n-th layer at step n. The concept of Edge Probing defines that only tokens of "labeled edges" (e.g. tokens of two related entities for Relation Classification) within a sample are considered for classification. These tokens are first pooled for a fixed-length representation and afterwards fed into a two-layer Multi-layer Perceptron (MLP) classifier, which predicts label-wise probability scores (e.g. for each type of relation). A schematic overview of this setting is shown in FIG0. We perform the same steps on pre-trained BERT-base and BERT-large models without any fine-tuning. This enables us to identify which abilities the model learns during pre-training or fine-tuning. Our aim is to understand how BERT works on complex downstream tasks. Question Answering (QA) is one such task, requiring a combination of multiple simpler tasks such as Coreference Resolution and Relation Modeling to arrive at the correct answer. We take three current Question Answering datasets into account, namely SQuAD BID30, bAbI BID39 and HotpotQA BID40. We intentionally choose three very different datasets to diversify the results of our analysis. TAB1 shows example QA pairs. Question: "What is a common punishment in the UK and Ireland?"; Answer: detention; Context: "Currently detention is one of the most common punishments in schools in the United States, the UK, Ireland, Singapore and other countries. It requires the pupil to remain in school at a given time in the school day (such as lunch, recess or after school); or even to attend school on a non-school day, e.g. "Saturday detention" held at some schools. During detention, students normally have to sit in a classroom and do work, write lines or a punishment essay, or sit quietly." Question: "What is Emily afraid of?"; Answer: cats. HotpotQA. This Multihop QA task contains 112,000 natural question-answer pairs. The questions are especially designed to combine information from multiple parts of a context. For our analysis we focus on the distractor-task of HotpotQA, in which the context is composed of both supporting and distracting facts with an average size of 900 words. As the pre-trained BERT model is restricted to an input size of 512 tokens, we reduce the amount of distracting facts by a factor of 2.7. We also leave out yes/no-questions (7% of questions) as they require an additional specific architecture, diluting our analysis. bAbI. The QA bAbI tasks are a set of artificial toy tasks developed to further understand the abilities of neural models.
The 20 tasks require reasoning over multiple sentences (Multihop QA) and are modeled to include Positional Reasoning, Argument Relation Extraction and Coreference Resolution. The tasks strongly differ from the other QA tasks in their simplicity (e.g. vocabulary size of 230 and short contexts) and the artificial nature of sentences. In this section we briefly discuss the models our analysis is based on, BERT BID8 and GPT-2 BID28. Both of these models are Transformers that extend and improve on a number of different recent ideas. These include previous Transformer models BID36, Semi-Supervised Sequence Learning BID5, ELMo BID25 and ULMFit BID13. Both have a similar architecture, and they each represent one half of the original encoder-decoder Transformer BID36. While GPT-2, like its predecessor, consists of only the decoder half, BERT uses the bidirectional encoder half. FIG0 depicts how these models integrate into our probing setup. We base our training code on a publicly available PyTorch implementation of BERT. We use the publicly available pre-trained BERT models for our experiments. In particular, we study the monolingual models bert-base-uncased and bert-large. For GPT-2 the small model (117M parameters) is used, as a larger model has not yet been released. However, we do not apply these models directly, and instead fine-tune them on each of our datasets. Training Modalities. Regarding hyperparameters, we tune the learning rate, batch size and learning rate scheduling according to a grid search and train each model for 5 epochs with evaluations on the development set every 1000 iterations. We then select the model with the best evaluation result for further analysis. The input length chosen is 384 tokens for the bAbI and SQuAD tasks and the maximum of 512 tokens permitted by the pre-trained models' positional embedding for the HotpotQA tasks. For bAbI we evaluate both models that are trained on a single bAbI task and a multitask model that was trained on the data of all 20 tasks. We further distinguish between two settings: Span prediction, which we include for better comparison with the other datasets, and Sequence Classification, which is the more common approach to bAbI. In order to make span prediction work, we append all possible answers to the end of the base context, since not all answers can be found in the context by default. For HotpotQA, we also distinguish between two tasks. In the HotpotQA Support Only (SP) task, we use only the sentences labeled as Supporting Facts as the question context. This simplifies the task, but more importantly it reduces context length and increases our ability to distinguish token vectors. Our HotpotQA Distractor task is closer to the original HotpotQA task. It includes distracting sentences in the context, but only enough to not exceed the 512 token limit. Training Results. TAB3 shows the evaluation results of our best models. Accuracy on the SQuAD task is close to human performance, indicating that the model can fulfill all sub-tasks required to answer SQuAD's questions. As expected, the tasks derived from HotpotQA prove much more challenging, with the distractor setting being the most difficult to solve. Unsurprisingly too, bAbI was easily solved by both BERT and GPT-2. While GPT-2 performs significantly worse in the more difficult tasks of SQuAD and HotpotQA, it does considerably better on bAbI, reducing the validation error to nearly 0.
Most of BERT's error in the bAbI multi-task setting comes from tasks 17 and 19. Both of these tasks require positional or geometric reasoning; thus it is reasonable to assume that this is a skill where GPT-2 improves on BERT's reasoning capabilities. Presentation of Analysis Results. The qualitative analysis of vector transformations reveals a range of recurring patterns. In the following, we present these patterns using two representative samples from the SQuAD and bAbI task datasets described in TAB1. Examples from HotpotQA can be found in the supplementary material as they require more space due to the larger context. Results from probing tasks are displayed in FIG1. We compare results in macro-averaged F1 over all network layers. FIG1 shows results from three models of BERT-base with twelve layers: fine-tuned on SQuAD, fine-tuned on bAbI tasks, and without fine-tuning. FIG2 reports results of two models based on BERT-large with 24 layers: fine-tuned on HotpotQA and without fine-tuning. The PCA representations of tokens in different layers suggest that the model is going through multiple phases while answering a question. We observe these phases in all three selected QA tasks despite their diversity. These findings are supported by the results of the applied probing tasks. We present the four phases in the following paragraphs and describe how our experimental results are linked. Semantic Clustering. Early layers within the BERT-based models group tokens into topical clusters. Figures 4a and 5a reveal this behaviour and show the second layer of each model. Resulting vector spaces are similar in nature to embedding spaces from e.g. Word2Vec BID23 and hold little task-specific information. Therefore, these initial layers reach low accuracy on semantic probing tasks, as shown in FIG1. BERT's early layers can be seen as an implicit replacement of embedding layers common in neural network architectures. Connecting Entities with Mentions and Attributes. In the middle layers of the observed neural networks, we see clusters of entities that are less connected by their topical similarity. Rather, they are connected by their relation within a certain input context. These task-specific clusters appear to already include a filtering of question-relevant entities. FIG4 shows a cluster with words like countries, schools, detention and country names, in which 'detention' is a common practice in schools. This cluster helps to solve the question "What is a common punishment in the UK and Ireland?". Another question-related cluster is shown in FIG6. The main challenge within this sample is to identify the two facts that Emily is a wolf and Wolves are afraid of cats. The highlighted cluster implies that Emily has been recognized as a relevant entity that holds a relation to the entity Wolf. The cluster also contains other mentions of these entities including the plural form Wolves. We observe similar clusters in the HotpotQA model, which includes more cases of coreferences. The probing results support these observations. The model's ability to recognize entities (Named Entity Labeling), to identify their mentions (Coreference Resolution) and to find relations (Relation Recognition) improves until higher network layers. FIG7 visualizes these abilities. Information about Named Entities is learned first, whereas recognizing coreferences or relations are more difficult tasks and require input from additional layers until the model's performance peaks. These patterns are equally observed in the results from BERT-base models and BERT-large models.
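The layer-wise token representations behind these visualisations can be collected with standard tooling. The following is a minimal, hedged sketch (assuming the HuggingFace transformers API and a placeholder checkpoint name; it is not the authors' code) of extracting per-layer hidden states for a QA sample and projecting one layer with PCA and k-means, as in the analysis above:

```python
# Hedged sketch: collect per-layer token vectors from a (fine-tuned) BERT QA
# model and project one layer with PCA / k-means. The checkpoint name and the
# chosen layer / k are placeholders.
import torch
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from transformers import BertForQuestionAnswering, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained(
    "bert-base-uncased",              # stand-in for a SQuAD-fine-tuned checkpoint
    output_hidden_states=True,
)
model.eval()

question = "What is a common punishment in the UK and Ireland?"
context = "Currently detention is one of the most common punishments in schools ..."
enc = tokenizer(question, context, return_tensors="pt", truncation=True)

with torch.no_grad():
    out = model(**enc)

# out.hidden_states: tuple containing the embedding output plus one tensor per
# layer, each of shape [batch, seq_len, hidden_dim].
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
layer = 2                                         # an early "semantic clustering" layer
X = out.hidden_states[layer][0].numpy()           # [seq_len, hidden_dim]

coords_2d = PCA(n_components=2).fit_transform(X)  # 2D projection for plotting
k = 5                                             # chosen per layer by visual inspection
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

for tok, (x, y), c in zip(tokens, coords_2d, labels):
    print(f"{tok:>12}  ({x:7.2f}, {y:7.2f})  cluster {c}")
```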
Matching Questions with Supporting Facts. Identifying relevant parts of the context is crucial for Question Answering and Information Retrieval in general. In traditional pipeline models this step is often achieved by filtering context parts based on their similarity to the question BID16. We observe that BERT models perform a corresponding step by transforming the tokens so that question tokens are matched onto relevant context tokens. FIG4 shows two examples in which the model transforms the token representations of the question and the Supporting Facts into the same area of the vector space. Some samples show this behaviour in lower layers. However, results from our probing tasks reveal that the models hold the strongest ability to distinguish relevant from irrelevant information with respect to the question in their higher layers. FIG1 demonstrates how the performance for this task increases over successive layers for SQuAD and bAbI. Performance of the fine-tuned HotpotQA model in FIG2 is less distinct from the model without fine-tuning and does not reach high accuracy. This inability indicates why the BERT model does not perform well on this dataset, as it is not able to identify the correct Supporting Facts. The vector representations enable us to tell which facts a model considered important (and therefore matched with the question). This helps retrace decisions and makes the model more transparent. Answer Extraction. In the last network layers we see that the model dissolves most of the previous clusters. Here, the model separates the correct answer tokens, and sometimes other possible candidates, from the rest of the tokens. The remaining tokens form one or multiple homogeneous clusters. The vector representation at this point is largely task-specific and learned during fine-tuning. This becomes visible through the performance drop in general NLP probing tasks, visualized in FIG7. We especially observe this loss of information in last-layer representations in the large BERT model fine-tuned on HotpotQA, as shown in FIG2. While the model without fine-tuning still performs well on tasks like NEL or COREF, the fine-tuned model loses this ability. Analogies to Human Reasoning. The phases of answering questions can be compared to the human reasoning process, including decomposition of input into parts BID0. The first phase of semantic clustering represents our basic knowledge of language, and the second phase resembles how a human reader builds relations between parts of the context to connect information needed for answering a question. Separation of important from irrelevant information (phase 3) and grouping of potential answer candidates (phase 4) are also known from human reasoning. However, the order of these steps might differ from the human abstraction. One major difference is that while humans read sequentially, BERT can see all parts of the input at once. Thereby it is able to run multiple processes and phases concurrently depending on the task at hand. FIG7 shows how the tasks overlap during the answering process. In this section we compare our insights from the BERT models to the GPT-2 model. We focus on the qualitative analysis of token representations and leave the application of probing tasks for future work. One major difference between GPT-2's and BERT's hidden states is that GPT-2 seems to give particular attention to the first token of a sequence. While in our QA setup this is often the question word, this also happens in cases where it is not.
During dimensionality reduction this results in a separation of two clusters, namely the first token and all the rest. This problem holds true for all layers of GPT-2 except for the Embedding Layer, the first Transformer block and the last one. For this reason we mask the first token during dimensionality reduction in further analysis. FIG8 shows an example of the last layer's hidden state for our bAbI example. Like BERT, GPT-2 also separates the relevant Supporting Facts and the question in the vector space. Additionally, GPT-2 extracts another sentence, which is not a Supporting Fact, but is similar in meaning and semantics. In contrast to BERT, the correct answer "cats" is not particularly separated and instead simply left as part of its sentence. These findings in GPT-2 suggest that our analysis extends beyond the BERT architecture and holds true for other Transformer networks as well. Our future work will include more probing tasks to confirm this initial observation. Observation of Failure States. One important aspect of explainable Neural Networks is to answer the questions of when, why, and how the network fails. Our visualizations are not only able to show such failure states, but even the rough difficulty of a specific task can be discerned by a glance at the hidden state representations. While for correct predictions the transformations run through the phases discussed in previous sections, for wrong predictions there exist two possibilities: If a candidate answer was found that the network has a reasonable amount of confidence in, the phases will look very similar to a correct prediction, but now centering on the wrong answer. Inspecting early layers in this case can give insights towards the reason why the wrong candidate was chosen, e.g. a wrong Supporting Fact was selected, coreferences were misresolved, etc. An example of this is shown in FIG9, where a wrong answer is based on the fact that the wrong Supporting Fact was matched with the question in early layers. If network confidence is low however, which is often the case when the predicted answer is far from the actual answer, the transformations do not go through the phases discussed earlier. The vector space is still transformed in each layer, but tokens are mostly kept in a single homogeneous cluster. If something is extracted it usually has little to do with the prediction. In some cases, especially when the confidence of the network is low in general, the network maintains Phase 1, 'Semantic Clustering', analogous to Word2Vec, even in later layers. An example is depicted in the supplementary material. Impact of Fine-tuning. FIG1 shows how little impact fine-tuning has on the core NLP abilities of the model. The pre-trained model already holds sufficient information about words and their relations, which is the reason it works well in multiple downstream tasks. Fine-tuning only applies small weight changes and forces the model to forget some information in order to fit specific tasks. However, the model does not forget much of the previously learned encoding when fitting the QA task, which indicates why the Transfer Learning approach proves successful. Maintained Positional Embedding. It is well known that the positional embedding is a very important factor in the performance of Transformer networks. It solves one major problem that Transformers have in comparison with RNNs, namely that they lack sequential information BID36.
Our visualizations support this importance and show that even though the positional embedding is only added once before the first layer, its effects are maintained even into very late layers depending on the task. FIG10 demonstrates this behavior on the SQuAD dataset. Abilities to resolve Question Type. The performance curves regarding the Question Type probing task illustrate another interesting result. FIG1 demonstrates that the model fine-tuned on SQuAD outperforms the base model from layer 5 onwards. This indicates the relevancy of resolving the question type for the SQuAD task, which leads to an improved ability after fine-tuning. The opposite is the case for the model fine-tuned on the bAbI tasks, which loses part of its ability to distinguish question types during fine-tuning. This is likely caused by the static structure of bAbI samples, in which the answer candidates can be recognized by sentence structure and occurring word patterns rather than by the question type. Surprisingly, we see that the model fine-tuned on HotpotQA does not outperform the model without fine-tuning in FIG2. Both models can solve the task in earlier layers, which suggests that the ability to recognize question types is pre-trained in BERT-large. Our work reveals important findings about the inner functioning of Transformer networks. The impact of these findings and how future work can build upon them is described in the following: Interpretability. The qualitative analysis of token vectors reveals that there is indeed interpretable information stored within the hidden states of Transformer models. This information can be used to identify misclassified examples and model weaknesses. It also provides clues about which parts of the context the model considered important for answering a question, a crucial part of decision legitimisation. We leave the development of methods to further process this information for future work. Transferability. We further show that lower layers might be more applicable to certain problems than later ones. For a Transfer Learning task, this means layer depth should be chosen individually depending on the task at hand. We also suggest further work regarding skip connections in Transformer layers to examine whether direct information transfer between non-adjacent layers (that solve different tasks) can be of advantage. Modularity. Our findings support the hypothesis that not only do different phases exist in Transformer networks, but that specific layers seem to solve different problems. This hints at a kind of modularity that can potentially be exploited in the training process. For example, it could be beneficial to fit parts of the network to specific tasks in pre-training, instead of using an end-to-end language model task. Our work aims towards revealing some of the internal processes within Transformer-based models. We suggest directing further research at thoroughly understanding state-of-the-art models and the way they solve downstream tasks, in order to improve on them.
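For reference, the probing classifier used throughout the analysis above can be sketched in a few lines. The following is a hedged outline (not the exact architecture of Tenney et al. or the authors' code) in which the token vectors of a labeled span from one BERT layer are mean-pooled and classified by a two-layer MLP; hidden sizes are illustrative:

```python
# Minimal edge-probing sketch: pool the token vectors of the "labeled edge"
# from a single BERT layer and classify them with a two-layer MLP.
import torch
import torch.nn as nn

class EdgeProbeMLP(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int, mlp_dim: int = 256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, mlp_dim),
            nn.ReLU(),
            nn.Linear(mlp_dim, num_labels),
        )

    def forward(self, layer_states: torch.Tensor, span_mask: torch.Tensor):
        # layer_states: [batch, seq_len, hidden_dim] from the n-th layer
        # span_mask:    [batch, seq_len], 1.0 on tokens of the labeled span
        mask = span_mask.unsqueeze(-1)
        pooled = (layer_states * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        return self.classifier(pooled)   # label-wise scores, e.g. 18 entity types

# Usage: probe = EdgeProbeMLP(hidden_dim=768, num_labels=18)
#        logits = probe(hidden_states_of_layer_n, span_mask)
```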
We investigate hidden state activations of Transformer Models in Question Answering Tasks.
We propose a general deep reinforcement learning method and apply it to robot manipulation tasks. Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved. We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities. Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither the state-of-the-art reinforcement nor imitation learning method can solve alone. We also illustrate that these policies achieved zero-shot sim2real transfer by training with large visual and dynamics variations. Recent advances in deep reinforcement learning (RL) have performed very well in several challenging domains such as video games BID25 and Go. For robotics, RL in combination with powerful function approximators provides a general framework for designing sophisticated controllers that would be hard to handcraft otherwise. Yet, despite significant leaps in other domains, the application of deep RL to control and robotic manipulation has proven challenging. While there have been successful demonstrations of deep RL for manipulation (e.g. BID26 BID31) and also noteworthy applications on real robotic hardware (e.g. BID48), there have been very few examples of learned controllers for sophisticated tasks even in simulation. Robotics exhibits several unique challenges. These include the need to rely on multi-modal and partial observations from noisy sensors, such as cameras. At the same time, realistic tasks often come with a large degree of variation (visual appearance, position, shapes, etc.) posing significant generalization challenges. Training on real robotics hardware can be daunting due to constraints on the amount of training data that can be collected in reasonable time. This is typically much less than the millions of frames needed by modern algorithms. Safety considerations also play an important role, as well as the difficulty of accessing information about the state of the environment (like the position of an object), e.g. to define a reward. Even in simulation, when perfect state information and large amounts of training data are available, exploration can be a significant challenge. This is partly due to the often high-dimensional and continuous action space, but also due to the difficulty of designing suitable reward functions. In this paper, we present a general deep reinforcement learning method that addresses these issues and that can solve a wide range of robot arm manipulation tasks directly from pixels, most of which have not been solved previously. Our key insight is 1) to reduce the difficulty of exploration in continuous domains by leveraging a handful of human demonstrations; 2) to use several techniques to stabilize the learning of complex manipulation policies from vision; and 3) to improve generalization by increasing the diversity of the training conditions. As a result, the trained policies work well under significant variations of system dynamics, object appearances, task lengths, etc. We ground these policies in the real world, demonstrating zero-shot transfer from simulation to real hardware. We develop a new method to combine imitation learning with reinforcement learning. Our method requires only a small number of human demonstrations to dramatically simplify the exploration problem.
It uses demonstration data in two ways: first, it uses a hybrid reward that combines a sparse environment reward with an imitation reward based on Generative Adversarial Imitation Learning (Ho & Ermon), which produces more robust controllers; second, it uses demonstrations as a curriculum to initiate training episodes along demonstration trajectories, which helps the agent reach new states and solve longer tasks. Figure 1: Our proposal of a principled robot learning pipeline. We used 3D motion controllers to collect human demonstrations of a task. Our reinforcement and imitation learning model leveraged these demonstrations to facilitate learning in a simulated physics engine. We then performed sim2real transfer to deploy the learned visuomotor policy to a real robot. As a result, our method solves dexterous manipulation tasks that neither the state-of-the-art reinforcement learning nor imitation learning method can solve alone. Previous RL-based robot manipulation policies BID26 BID31 largely rely on low-level states as input, or use severely limited action spaces that ignore the arm and instead learn Cartesian control of a simple gripper. This limits the ability of these methods to represent and solve more complex tasks (e.g., manipulating arbitrary 3D objects) and to deploy in real environments where the privileged state information is unavailable. Our method learns an end-to-end visuomotor policy that maps RGB camera observations to joint space control over the full 9-DoF arm (6 arm joints plus 3 actuated fingers). To sidestep the constraints of training on real hardware we embrace the sim2real paradigm, which has recently shown promising results BID14 BID35. Through the use of a physics engine and high-throughput RL algorithms, we can simulate parallel copies of a robot arm to perform millions of complex physical interactions in a contact-rich environment while eliminating the practical concerns of robot safety and system reset. Furthermore, we can, during training, exploit privileged information about the true system state with several new techniques, including learning policy and value in separate modalities, an object-centric GAIL discriminator, and auxiliary tasks for visual modules. These techniques stabilize and speed up policy learning from pixels. Finally, we diversify training conditions such as visual appearance as well as e.g. the size and shape of objects. This improves both generalization with respect to different task conditions as well as transfer from simulation to reality. To demonstrate our method, we use the same model and the same algorithm for visuomotor control of six diverse robot arm manipulation tasks. Combining reinforcement and imitation, our policies solve tasks that state-of-the-art reinforcement and imitation learning cannot solve and outperform human demonstrations. Our approach sheds light on a principled deep visuomotor learning pipeline illustrated in Fig. 1, from collecting real-world human demonstrations to learning in simulation, and back to real-world deployment via sim2real policy transfer. Reinforcement learning methods have been extensively used with low-dimensional policy representations such as movement primitives to solve a variety of control problems both in simulation and in reality.
Three classes of RL algorithms are currently dominant for continuous control problems: guided policy search methods (GPS; BID18), value-based methods such as the deterministic policy gradient (DPG; BID41 BID9) or the normalized advantage function (NAF; BID7) algorithm, and trust-region based policy gradient algorithms such as trust region policy optimization (TRPO) and proximal policy optimization (PPO). TRPO BID38 and PPO hold appeal due to their robustness to hyper-parameter settings as well as their scalability, but the lack of sample efficiency makes them unsuitable for training directly on robotics hardware. GPS BID18 has been used, e.g. by and BID48, to learn visuomotor policies directly on real robotics hardware after a network pretraining phase. and BID17 use GPS for learning controllers for robotic hand models. Value-based methods have been employed, e.g. by BID6, who use NAF to learn a door opening task directly on a robot, while BID31 demonstrate how to solve a stacking problem efficiently using a distributed variant of DPG. The idea of using large-scale data collection for training visuomotor controllers has been the focus of and BID29, who train a convolutional network to predict grasp success for diverse sets of objects using a large dataset with 10s or 100s of thousands of grasp attempts collected from multiple robots in a self-supervised setting. An alternative strategy for dealing with the data demand is to train in simulation and transfer the learned controller to real hardware, or to augment real-world training with synthetic data. BID36 learn simple visuomotor policies for a Jaco robot arm and transfer to reality using progressive networks BID35. BID46 minimize the reality gap by relying on depth. BID43 use visual variations to learn robust object detectors that can transfer to reality; BID14 combine randomization with supervised learning. BID1 augments the training with simulated data to learn grasp prediction of diverse shapes. Suitable cost functions and exploration strategies for control problems are challenging to design, so demonstrations have long played an important role. Demonstrations can be used to initialize policies, design cost functions, guide exploration, augment the training data, or a combination of these. Cost functions can be derived from demonstrations either via tracking objectives (e.g.), via inverse RL (e.g. BID0 BID3), or, as in our case, via adversarial learning BID12. When expert actions or expert policies are available, behavioral cloning or DAgger can be used BID32 BID14 BID2. Alternatively, expert trajectories can be used as additional training data for off-policy algorithms such as DPG (e.g.). Most of these methods require observation and/or action spaces to be aligned between robot and demonstrations. Recently, methods for third person imitation have been proposed (e.g. BID40 BID23 BID4). Concurrently with our work, several papers have presented results on manipulation tasks. BID33 and BID26 both use human demonstrations to aid exploration. BID26 extends the DDPGfD algorithm to learn a block stacking task on a position-controlled arm in simulation. BID33 use the demonstrations with a form of behavioral cloning and data augmentation to learn several complex manipulation tasks. In both cases, controllers observe a low-dimensional state space representation and the methods inherently require aligned state and action spaces with the demonstrations.
BID30 and BID28 address the transfer from simulation to reality, focusing on randomizing visual appearance and robot dynamics respectively. Peng et al. transfer a block-pushing policy operating from state features to a 7-DoF position controlled Fetch robotics arm. Pinto et al. consider different tasks using visual input with end-effector position control. Our goal is to learn a deep visuomotor policy for robot manipulation tasks. The policy takes both an RGB camera observation and a proprioceptive feature that describes the joint positions and angular velocities. These two sensory modalities are also available on the real robot, enabling us to perform zero-shot policy transfer once trained in simulation. Fig. 2 provides an overview of our model. The deep visuomotor policy encodes the pixel observation with a convolutional network (CNN) and the proprioceptive feature with a multilayer perceptron (MLP). The features from these two modules are concatenated and passed to a recurrent LSTM layer before producing the joint velocities. The whole network is trained end-to-end. We start with a brief review of the basics of generative adversarial imitation learning (GAIL) and proximal policy optimization (PPO). Our model extends upon these two methods for visuomotor skills. Figure 2: Model overview. The core of our model is the deep visuomotor policy, which takes the camera observation and the proprioceptive feature as input and produces the next joint velocities. Imitation learning (IL) is the problem of learning a behavior policy by mimicking a set of demonstrations. Here we assume that human demonstration is provided as a dataset of state-action pairs D = {(s_i, a_i)}_{i=1}^{N}. Traditional IL methods cast it as a supervised learning problem, i.e., behavior cloning. These methods use maximum likelihood to train a parameterized policy π_θ: S → A, where S is the state space and A is the action space, such that θ* = arg max_θ Σ_{i=1}^{N} log π_θ(a_i | s_i). The behavior cloning approach works effectively in cases when demonstrations abound BID34. However, as robot demonstrations can be costly and time-consuming, we are in favor of a method that can learn from a handful of demonstrations. GAIL BID12 makes more efficient use of demonstration data by allowing the agent to interact with the environment and learn from its own experiences. Similar to Generative Adversarial Networks BID5, GAIL has two networks, a policy network π_θ: S → A and a discriminator network D_ψ: S × A → (0, 1). GAIL uses a similar min-max objective function as in GANs: min_θ max_ψ E_{π_E}[log D_ψ(s, a)] + E_{π_θ}[log(1 − D_ψ(s, a))], where π_E denotes the expert policy that generated the demonstration trajectories. This learning objective encourages the policy π_θ to have an occupancy measure close to the expert policy π_E. In practice, we train π_θ with policy gradient methods to maximize the discounted sum of the reward function r_gail(s_t, a_t) = −log(1 − D_ψ(s_t, a_t)), clipped at a maximum value of 10. In continuous domains, trust region methods greatly stabilize policy training. The original GAIL model uses TRPO BID38 in the policy update steps. Recently, PPO was proposed as a simple and scalable approximation to TRPO. PPO only relies on first-order gradients and can be easily implemented with recurrent networks in a distributed setting. The key idea of PPO is to use the Kullback-Leibler (KL) divergence to dynamically change the coefficient of a regularization term, where the coefficient is adapted based on whether the previous policy update step violates the KL constraint.
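Before turning to the distributed PPO setup used for training, the imitation side of this objective can be made concrete. The following is a minimal, hedged PyTorch sketch (not the authors' implementation); the discriminator architecture, feature handling, and batch construction are placeholders:

```python
# Hedged sketch of the GAIL discriminator update and the clipped imitation
# reward r_gail(s, a) = -log(1 - D(s, a)) described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)       # logits

def discriminator_loss(disc, expert_feats, policy_feats):
    # Expert state-action features labeled 1, policy features labeled 0.
    logits_e = disc(expert_feats)
    logits_p = disc(policy_feats)
    return (F.binary_cross_entropy_with_logits(logits_e, torch.ones_like(logits_e)) +
            F.binary_cross_entropy_with_logits(logits_p, torch.zeros_like(logits_p)))

def gail_reward(disc, policy_feats, max_reward: float = 10.0):
    # r_gail = -log(1 - D(s, a)), clipped at a maximum value of 10.
    d = torch.sigmoid(disc(policy_feats))
    return torch.clamp(-torch.log(1.0 - d + 1e-8), max=max_reward)
```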
We use distributed PPO to perform data collection and synchronous gradient updates across many workers in parallel. We trained all our policies with 256 CPU workers, which brings a significant speedup in wall-clock time. A common approach to guiding exploration is to engineer a shaping reward. Although reward shaping sometimes provides informative guidance for policy search, it has the well-known drawback of producing suboptimal behaviors BID27. Hence, we use sparse piecewise constant rewards in this work. Training agents in continuous domains under sparse rewards is particularly challenging. Inspired by reward augmentation introduced in BID21, we design a hybrid reward function to mix the imitation reward r_gail with the sparse task reward r_task: r(s_t, a_t) = λ r_gail(s_t, a_t) + (1 − λ) r_task(s_t, a_t), with λ ∈ [0, 1]. Maximizing this hybrid reward can be interpreted as simultaneous reinforcement and imitation learning, where the imitation reward encourages the policy to generate trajectories closer to demonstration trajectories, and the task reward encourages the policy to achieve high returns in the task. Setting λ to either 0 or 1 reduces this method to the standard RL or GAIL setups. Our experiments suggest that with a balanced contribution of these two rewards, the agents can solve tasks that neither GAIL nor RL can solve alone. Further, the final agents achieved higher returns than the human demonstrations owing to the exposure to task rewards. The use of a simulated system provides us access to the underlying physical states. Even though such privileged information is unavailable on a real system, we can take advantage of it when training the policy in simulation. We propose four techniques to leverage the physical states in simulation to stabilize and accelerate learning, including demonstration curriculum, learning value from states, building an object-centric discriminator, and auxiliary tasks. Demonstration as a curriculum. The problem of exploration in continuous domains is exacerbated by the long duration of realistic tasks. Previous work indicates that shaping the distribution of start states towards states that the optimal policy tends to visit can greatly improve policy learning BID15 BID31. We alter the start state distribution with demonstration states. We build a curriculum that contains clusters of states in different stages of a task. For instance, we define three clusters for the pouring task, including reaching the mug, grasping the mug, and pouring. For a training episode, with probability ε we start it from a random initial state, and with probability 1 − ε we uniformly select a cluster and reset the episode to a demonstration state from the cluster. This is possible as our simulated system is fully characterized by the physical states. Learning value functions from states. PPO uses a learnable value function V_φ to estimate the advantage for the policy gradient. During training, each PPO worker executes the policy for K steps and uses the discounted sum of rewards and the value as an advantage function estimator Â_t = Σ_{i=0}^{K−1} γ^i r_{t+i} + γ^K V_φ(s_{t+K}) − V_φ(s_t), where γ is the discount factor. As the policy gradient relies on the value function to reduce variance, it is beneficial to accelerate learning of the value function. Rather than using pixel inputs as the policy network does, we take advantage of the low-level physical states (e.g., the position and velocity of the 3D objects and the robot arm) to train the value V_φ with a smaller multilayer perceptron.
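To make the hybrid reward and the demonstration-based episode resets described above concrete, here is a small sketch; the value of ε, the cluster construction, and the simulator reset API are assumptions, not the authors' code:

```python
# Hedged sketch of the hybrid reward and the demonstration-as-curriculum reset.
import random

def hybrid_reward(r_gail: float, r_task: float, lam: float = 0.5) -> float:
    # lam = 1 recovers pure GAIL, lam = 0 recovers pure RL on the task reward.
    return lam * r_gail + (1.0 - lam) * r_task

def reset_episode(env, demo_clusters, eps: float = 0.5):
    """With probability eps start from a random initial state; otherwise reset
    the simulator to a demonstration state drawn from a uniformly chosen stage
    cluster (e.g. reaching / grasping / pouring)."""
    if random.random() < eps:
        return env.reset()                      # ordinary random initial state
    cluster = random.choice(demo_clusters)      # list of lists of physical states
    state = random.choice(cluster)
    return env.reset_to_state(state)            # assumed simulator API
```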
We find that training the policy and value in two different modalities stabilizes training and reduces oscillation in the agent's performance. This technique has also been adopted by the concurrent work of BID30. Object-centric discriminator. Similar to the value function, the GAIL discriminator leverages the physical states to construct task-specific features as its input. In manipulation tasks, we find that object-centric representations (e.g., absolute and relative positions of the objects) provide the salient and relevant signals to the discriminator. The states of the robot arm, on the contrary, tend to make the discriminator too strong and stagnate the training of the policy. Inspired by information hiding strategies used in locomotion domains, our discriminator only takes the object-centric features as input while masking out arm-related information. State prediction auxiliary tasks. Auxiliary tasks have been shown to be effective in improving the learning efficiency and the final performance of deep RL methods BID13. To facilitate learning visuomotor policies, we add a state prediction layer on top of the CNN module to predict the locations of objects from the camera observation. We use a fully-connected layer to regress the 3D coordinates of objects in the task. We train this auxiliary task by minimizing the L2 loss between the predicted and ground-truth object locations. Policy transfer is demonstrated on a real-world Kinova Jaco robot arm. The simulation was manually aligned to generally match the visuals and dynamics: a Kinect camera was visually calibrated to match the position and orientation of the simulated camera, and the simulation's dynamics parameters were manually adjusted to match the dynamics of the real arm. Instead of using professional calibration equipment, our approach to sim2real policy transfer relies on domain randomization of camera position and orientation BID43 BID14. In contrast, we do not create intermediate position goals using object position information in reality, but rather train an end-to-end, pixels-to-velocities, feedback control policy. In addition, to alleviate the issues caused by latency on the real robot, we also fine-tune our policies while subjecting them to action dropping. Detailed descriptions are available in Appendix B. Here we demonstrate that our proposed approach offers a general framework for visuomotor policy learning. We evaluate the performance of our model on six manipulation tasks illustrated in Fig. 3. We provide more qualitative results in this video. We use a Kinova Jaco arm that has 9 degrees of freedom, including six arm joints and three actuated fingers. The robot arm interacts with a diverse set of objects on a tabletop. The visuomotor policy controls the robot using joint velocity commands. Our policy produces 9-dimensional continuous velocities in the range of [−1, 1] at 20Hz. The proprioceptive features consist of the positions and angular velocities of the arm joints and the fingers. We use a positioned camera to collect real-time RGB observations. The proprioceptive features and the camera observations are available in both simulation and real environments, which enables policy transfer. We use the MuJoCo physics simulator BID44 as our training platform. We use a large variety of objects, from basic geometric shapes to procedurally generated 3D objects built as ensembles of primitive shapes. We increase the diversity of objects by randomizing various physical properties, including dimension, color, mass, friction, etc.
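Before moving to the experiments, the state-prediction auxiliary task described above can be sketched as follows; the hidden sizes follow the architecture details given later in the appendix, while everything else is an assumption rather than the authors' code:

```python
# Hedged sketch of the state-prediction auxiliary head: an MLP on top of the
# CNN features regresses the 3D coordinates of task objects, trained with an
# L2 loss against the ground-truth positions available in simulation.
import torch
import torch.nn as nn

class AuxStatePredictor(nn.Module):
    def __init__(self, cnn_feat_dim: int, num_objects: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(cnn_feat_dim, 200), nn.Tanh(),
            nn.Linear(200, 100), nn.Tanh(),
            nn.Linear(100, 3 * num_objects),   # xyz per object
        )

    def forward(self, cnn_features: torch.Tensor) -> torch.Tensor:
        return self.head(cnn_features)

def auxiliary_loss(pred_xyz: torch.Tensor, true_xyz: torch.Tensor) -> torch.Tensor:
    # L2 regression loss between predicted and ground-truth object locations.
    return ((pred_xyz - true_xyz) ** 2).mean()
```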
We used a 3D motion controller called SpaceNavigator, which allows us to operate the robot arm with a position controller, to collect 30 episodes of demonstration for each task and recorded the observations, actions, and physical states into a dataset. As each episode takes less than a minute to complete, demonstrating each task can be done within half an hour. Fig. 3 shows visualizations of the six manipulation tasks in our experiments. The first column shows the six tasks in simulated environments, and the second column shows the real-world setup of the block lifting and stacking tasks. We see obvious visual discrepancies of the same task in simulation and reality. These six tasks exhibit learning challenges to varying degrees. The first three tasks use simple colored blocks, which allows us to easily construct the tasks for a real robot. We study sim2real policy transfer with the block lifting and stacking tasks in Sec. 4.4. Block lifting. The goal is to grasp and lift a randomized block, allowing us to evaluate the model's robustness. We vary several random factors, including the robot arm dynamics (friction and armature), lighting conditions, camera poses, colors, as well as the properties of the block. Each episode starts with a new configuration with these random factors uniformly drawn from a preset range. Block stacking. The goal is to stack one block on top of the other block. Together with the block lifting task, this is evaluated in sim2real transfer experiments. Clearing blocks. This task aims at clearing the tabletop that has two blocks. One strategy to do this task using a single arm is to stack the blocks and pick up both together. This task requires longer time and a more dexterous controller, introducing a significant challenge for exploration. The next three tasks involve a large variety of procedurally generated 3D shapes, making them difficult to recreate in real environments. We use them to examine the model's ability to generalize across object variations in long and complex tasks. Figure 3: Visualizations of the six manipulation tasks in our experiments. The left column shows RGB images of all six tasks in the simulated environments. These images correspond to the actual pixel observations as input to the visuomotor policies. The right column shows the two tasks with color blocks on the real robot. Clearing tabletop. In this task, the goal is to clear the tabletop that has a box and a toy car. One strategy is to grasp the toy, put it into the box, and lift the box. Both the box and the toy car are randomly generated for each episode. Pouring liquid. Modeling and reasoning about deformable objects and fluids is a long-standing challenge in the robotics community BID37. We design a pouring task where we use many small spheres to simulate liquid. The goal is to pour the "liquid" from one mug to the other container. This task is particularly challenging due to the dexterity required. Even trained humans struggled to demonstrate the task with our 3D motion controller. Order fulfillment. In this task, we randomly place a variable number of procedurally generated toy planes and cars on the table. The goal is to place all the planes into the green box and all the cars into the red box. This task requires the policy to generalize at an abstract level. It needs to recognize the object categories, perform successful grasps on diverse shapes, and handle tasks with variable lengths. 
Our full model can solve all six tasks, with only occasional failures, using the same policy network, the same training algorithm, and a fixed set of hyperparameters. On the contrary, neither reinforcement nor imitation alone can solve all tasks. We compare the full model with three baseline methods, where we evaluate degenerate versions of our model, which correspond to RL, GAIL, and RL without the demonstration curriculum. These baselines use the same setup as the full model, except that we set λ = 0 for RL and λ = 1 for GAIL, while our model uses a balanced contribution of the hybrid reward, with λ = 0.5. In the third baseline, all the training episodes start from random initial states rather than resetting to demonstration states. This corresponds to a standard RL setup. We report the mean episode returns as a function of the number of training iterations in Fig. 4. Our full model achieves the highest returns in all six tasks. The only case where a baseline model is on par with the full model is the block lifting task, in which both the RL baseline and the full model achieved similar levels of performance. We hypothesize that this is due to the short length of the lifting task, where random exploration in RL is likely to reach the goal states without the aid of GAIL. In the other five tasks, the full model outperforms both the reinforcement learning and imitation learning baselines by a large margin, demonstrating the effectiveness of combining reinforcement and imitation for learning complex tasks. Comparing the two variants of RL with and without using demonstration as a curriculum, we see a pronounced effect of altering the start state distribution. We see that RL from scratch leads to very slow learning progress, while initiating episodes along demonstration trajectories enables the agent to train on states from different stages of a task. As a result, it greatly reduces the burden of exploration and improves the learning efficiency. We also report the mean episode returns of human demonstrations in these figures. While demonstrations using the 3D motion controller are imperfect, especially for pouring (see video), the trained agents can surpass them by interacting with the environment. Two findings are noteworthy. First, the RL agent learns faster than the full model in the table clearing task, but the full model eventually outperforms it. This is because the full model discovers a novel strategy, different from the strategy demonstrated by human operators (see video). In this case, imitation gave contradictory signals, but eventually reinforcement learning guided the policy towards a better strategy. Second, pouring liquid is the only task where GAIL outperforms its RL counterpart. Imitation can effectively shape the agent's behaviors towards the demonstration trajectories. This is a viable solution for the pouring task, where a controller that generates similar-looking behaviors can complete the task. In contact-rich domains, however, a controller learned solely from dozens of demonstrations would struggle to handle complex object dynamics and to infer the true task goal. We hypothesize that this is why the baseline RL agent outperforms the GAIL agent in the other five tasks. We further perform an ablation study on the block stacking task to understand the impact of different components of our model. In Fig. 5a, we trained our agents with a number of configurations, each with a single modification to the full model.
We see that the final performances of the experiments cluster into two groups: agents that learn to stack (with average returns greater than 400) and agents that only learn to lift (with average returns between 200 and 300). These results indicate that the hybrid RL/IL reward, learning the value function from states, and the object-centric discriminator play an integral role in learning good policies. Using the RL or GAIL reward alone, learning the value function on pixels, or applying no information hiding to the discriminator input (no discriminator mask) all result in inferior performance. In contrast, the optional components include the recurrent policy core (LSTM), the use of state prediction auxiliary tasks, and whether to include actions in the discriminator input. We then examine the model's sensitivity to the λ values in Eq. 2. We see in Fig. 5b that our model works well with a broad range of λ values from 0.3 to 0.7 that provide a balanced mix of the RL and GAIL rewards. To assess the robustness of the simulation-trained policy, we evaluate zero-shot transfer (no additional training) on a real Jaco arm. Given a real-world setup that mirrored the simulated domain to a large extent, including camera positions and robot kinematics and approximate object size and color, we ran the trained network policy and counted the number of successful trials for both the lifting and stacking tasks. Although the sim and real domains were similar, there was still a sizable reality gap that made zero-shot transfer challenging. For example, the objects were non-rigid foam blocks which deformed and bounced unpredictably. The arm position was randomly initialized and the target block(s) placed in a number of repeatable start configurations for each task. The zero-shot transfer of the lifting policy had a success rate of 64% over 25 trials (split between 5 block configurations). The stacking policy had a success rate of 35% over 20 trials (split between 2 block configurations). 80% of the stacking trajectories, however, contain successful lifting behavior. Qualitatively, the policies are notably robust even on failed attempts: rather than exhibiting "open-loop" behaviors such as attempting to stack a non-existent block, the policy repeatedly chases the block to get a successful grasp before trying to stack (see video). For more detailed descriptions of the sim2real results, refer to Appendix B. The policy network takes the pixel observation and the proprioceptive feature as input. The pixel observation is an RGB image of size 64 × 64 × 3. We used the Kinect for Xbox One camera in the real environment. The proprioceptive feature describes the joint positions and velocities of the Kinova Jaco arm. Each joint position is represented as the sin and cos of the angle of the joint in joint coordinates. Each joint velocity is represented as the scalar angular velocity.
This results in a 24-dimensional proprioceptive feature that contains the positions (12-d) and velocities (6-d) of the six arm joints and the positions (6-d) of the three fingers. We exclude the finger velocities due to the noisy sensory readings on the real robot. We used Adam BID16 to train the neural network parameters. We set the learning rates of the policy and value networks to 10^−4 and 10^−3 respectively, and 10^−4 for both the discriminator and the auxiliary tasks. The pixel observation is encoded by a two-layer convolutional network followed by a fully-connected layer with 128 hidden units. The first convolutional layer has 16 8 × 8 filters with stride 4 and the second has 32 4 × 4 filters with stride 2. We add a recurrent layer of 100 LSTM units before the policy and value outputs. The policy output is the mean and the standard deviation of a conditional Gaussian distribution over the 9-dimensional joint velocities. The initial policy standard deviation is set to exp(−3) for the clearing table with blocks task and exp(−1) for the other five tasks. The auxiliary head of the policy contains a separate 3-layer MLP sitting on top of the convolutional network. The first two layers of the MLP have 200 and 100 hidden units respectively, while the third layer predicts the auxiliary outputs. Finally, the discriminator is a simple three-layer MLP with 100 and 64 hidden units in the first two layers and a third layer producing log probabilities. The networks use tanh nonlinearities. We trained the visuomotor policies using the distributed PPO algorithm with synchronous gradient updates from 256 CPU workers. Each worker runs the policy to complete an entire episode before the parameter updates are computed. We set a constant episode length for each task based on its difficulty, with the longest being 1000 time steps (50 seconds) for the clearing table with blocks and order fulfillment tasks. We set K = 50 as the number of time steps for computing K-step returns and truncated backpropagation through time to train the LSTM units. After a worker collects a batch of data points, it performs 50 parameter updates for the policy and value networks, 5 for the discriminator and 5 for the auxiliary prediction network. To better facilitate sim2real transfer, we lower the frequency at which we sample the observations. Pixel observations are only observed at a rate of 5Hz despite the fact that our controller runs at 20Hz. Similarly, the proprioceptive features are observed at a rate of 10Hz. In addition to observation delays, we also apply domain variations. Gaussian noise (with standard deviation 0.01) is added to the proprioceptive features. Uniform integer noise in the range [−5, 5] is added to each pixel independently. Pixel values outside the valid range are clipped. We also randomly vary the shade of grey on the Jaco arm, the color of the table top, and the location and orientation of the light source (see FIG3). In the case of block lifting, we additionally vary the dynamics of the arm. Specifically, we dynamically change the friction, damping, armature, and gain parameters of the robot arm in simulation to further robustify the agent's performance. Our analysis indicates that, on the real robot, there is often a delay in the execution of actions, and the amount of delay also varies significantly. This has an adverse effect on the performance of our agent on the physical robot, since our agents' performance depends on the timely execution of their actions.
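As a concrete reference for the architecture described above (64 × 64 × 3 RGB input, two convolutional layers, a 128-unit fully-connected layer, concatenation with the 24-d proprioceptive feature, a 100-unit LSTM core, and a Gaussian policy over 9 joint velocities), the following is a minimal PyTorch sketch; it is not the authors' code, and weight initialisation and other details are omitted or assumed:

```python
# Hedged sketch of the visuomotor policy network described in this appendix.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, proprio_dim=24, action_dim=9, init_log_std=-1.0):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.Tanh(),   # 64x64 -> 15x15
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.Tanh(),  # 15x15 -> 6x6
            nn.Flatten(),
            nn.Linear(32 * 6 * 6, 128), nn.Tanh(),
        )
        self.lstm = nn.LSTM(128 + proprio_dim, 100, batch_first=True)
        self.mean = nn.Linear(100, action_dim)
        self.log_std = nn.Parameter(torch.full((action_dim,), init_log_std))

    def forward(self, pixels, proprio, hidden=None):
        # pixels: [B, T, 3, 64, 64], proprio: [B, T, proprio_dim]
        B, T = pixels.shape[:2]
        feats = self.cnn(pixels.reshape(B * T, 3, 64, 64)).reshape(B, T, -1)
        x, hidden = self.lstm(torch.cat([feats, proprio], dim=-1), hidden)
        dist = torch.distributions.Normal(self.mean(x), self.log_std.exp())
        return dist, hidden   # sample joint velocities from dist, clip to [-1, 1]
```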
To better facilitate the transfer to the real robot, we fine-tune our trained agent in simulation while subjecting it to a random chance of dropping actions. Specifically, each action emitted by the agent has a 50% chance of being executed immediately, in which case the action is flagged as the last executed action. If the current action is not executed, the last executed action is executed instead. Using the above procedure, we fine-tune our agents on both block lifting and block stacking for a further 2 million iterations. To demonstrate the effectiveness of action dropping, we compare our agent on the real robot on the task of block lifting. Without action dropping, the baseline agent lifts 48% of the time. After fine-tuning with action dropping, our agent succeeds 64% of the time. For the complete set of results, please see the corresponding table. We use a fixed episode length for each task, which is determined by the amount of time a skilled human demonstrator needs to complete the task. An episode terminates when a maximum number of agent steps have been performed. The robot arm operates at a control frequency of 20Hz, which means each time step takes 0.05 seconds. We segment each task into a sequence of stages that represent an agent's progress in the task. For instance, the block stacking task can be characterized by three stages, including reaching the block, lifting the block and stacking the block. We define functions on the underlying physical state to determine the stage of a state. This way, we can cluster demonstration states according to their corresponding stages. These clusters are used to reset training episodes in our demonstration-as-a-curriculum technique proposed in Sec. 3.2.2. The definition of stages also gives rise to a convenient way of specifying the reward functions without hand-engineering a shaping reward. We define a piecewise constant reward function for each task, where we assign the same constant reward to all the states that belong to the same stage (a minimal code sketch of such a stage-based reward follows the task descriptions below). We detail the stages, reward functions, auxiliary tasks, and object-centric features for the six tasks in our experiments. Block lifting. Each episode lasts 100 time steps. We define three stages and their rewards (in parentheses) to be initial, reaching the block (0.125), and lifting the block (1.0). The auxiliary task is to predict the 3D coordinates of the colored block. The object-centric feature consists of the relative position between the gripper and the block. Block stacking. Each episode lasts 500 time steps. We define four stages and their rewards to be initial, reaching the orange block (0.125), lifting the orange block (0.25), and stacking the orange block onto the pink block (1.0). The auxiliary task is to predict the 3D coordinates of the two blocks. The object-centric feature consists of the relative positions between the gripper and the two blocks respectively. Clearing table with blocks. Each episode lasts 1000 time steps. We define five stages and their rewards to be initial, reaching the orange block (0.125), lifting the orange block (0.25), stacking the orange block onto the pink block (1.0), and lifting both blocks off the ground (2.0). The auxiliary task is to predict the 3D coordinates of the two blocks. The object-centric feature consists of the 3D positions of the two blocks as well as the relative positions between the gripper and the two blocks respectively. Clearing table with a box. Each episode lasts 500 time steps.
We define five stages and their rewards to be initial, reaching the toy (0.125), grasping the toy (0.25), putting the toy into the box (1.0), and lifting the box (2.0). The auxiliary task is to predict the 3D coordinates of the toy and the box. The object-centric feature consists of the 3D positions of the toy and the box as well as the relative positions between the gripper and these two objects respectively. Pouring liquid. Each episode lasts 500 time steps. We define three stages and their rewards to be initial, grasping the mug (0.05), pouring (0.1N), where N is the number of small spheres in the other container. The auxiliary task is to predict the 3D coordinates of the mug. The object-centric feature consists of the 3D positions of the mug, the relative position between the gripper and the mug, and the relative position between the mug and the container. Order fulfillment. Each episode lasts 1000 time steps. The number of objects varies from 1 to 4 across episodes. We define five stages that correspond to the number of toys in the boxes. The immediate reward corresponds to the number of toys placed in the correct boxes (number of toy planes in the green box and toy cars in the red box). To handle the variable number of objects, we only represent the objects nearest to the gripper for the auxiliary task and the object-centric feature. The auxiliary task is to predict the 3D coordinates of the nearest plane and the nearest car to the gripper. The object-centric feature consists of the relative positions from the gripper to these two nearest objects.
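As an illustration of the stage-based, piecewise-constant rewards above, here is a minimal sketch for the block-stacking task. The distance thresholds, block size, and the way positions are passed in are assumptions made for illustration; only the stage ordering and the constant reward values follow the text.

import numpy as np

# Illustrative thresholds only; the paper defines stages on the underlying physical
# state but does not list numeric tolerances.
REACH_DIST, LIFT_HEIGHT, STACK_TOL, BLOCK_SIZE = 0.05, 0.05, 0.02, 0.05
STAGE_REWARD = {"initial": 0.0, "reach": 0.125, "lift": 0.25, "stack": 1.0}

def stacking_stage(gripper_pos, orange_pos, pink_pos, table_height=0.0):
    """Return the furthest stage reached in the block-stacking task (inputs are 3D positions)."""
    on_top = pink_pos + np.array([0.0, 0.0, BLOCK_SIZE])
    if np.linalg.norm(orange_pos - on_top) < STACK_TOL:
        return "stack"
    if orange_pos[2] - table_height > LIFT_HEIGHT:
        return "lift"
    if np.linalg.norm(gripper_pos - orange_pos) < REACH_DIST:
        return "reach"
    return "initial"

def stacking_reward(gripper_pos, orange_pos, pink_pos):
    # Piecewise-constant reward: every state within a stage receives that stage's constant value.
    return STAGE_REWARD[stacking_stage(gripper_pos, orange_pos, pink_pos)]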
We combine reinforcement learning and imitation learning to solve complex robot manipulation tasks from pixels.
1,312
scitldr
Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks. However, an ensemble's cost for both training and testing increases linearly with the number of networks. In this paper, we propose BatchEnsemble, an ensemble method whose computational and memory costs are significantly lower than typical ensembles. BatchEnsemble achieves this by defining each weight matrix to be the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member. Unlike ensembles, BatchEnsemble is not only parallelizable across devices, where one device trains one member, but also parallelizable within a device, where multiple ensemble members are updated simultaneously for a given mini-batch. Across CIFAR-10, CIFAR-100, WMT14 EN-DE/EN-FR translation, and contextual bandits tasks, BatchEnsemble yields competitive accuracy and uncertainties as typical ensembles; the speedup at test time is 3X and memory reduction is 3X at an ensemble of size 4. We also apply BatchEnsemble to lifelong learning, where on Split-CIFAR-100, BatchEnsemble yields comparable performance to progressive neural networks while having a much lower computational and memory costs. We further show that BatchEnsemble can easily scale up to lifelong learning on Split-ImageNet which involves 100 sequential learning tasks. Ensembling is one of the oldest tricks in machine learning literature . By combining the outputs of several models, an ensemble can achieve better performance than any of its members. Many researchers demonstrate that a good ensemble is one where the ensemble's members are both accurate and make independent errors . In neural networks, SGD and its variants are the most common optimization algorithm. The random noise from sampling mini-batches of data in SGD-like algorithms and random initialization of the deep neural networks, combined with the fact that there is a wide variety of local minima solutions in high dimensional optimization problem , in the following observation: deep neural networks trained with different random seeds can converge to very different local minima although they share similar error rates. One of the consequence is that neural networks trained with different random seeds will usually not make all the same errors on the test set, i.e. they may disagree on a prediction given the same input even if the model has converged. Ensembles of neural networks benefit from the above observation to achieve better performance by averaging or majority voting on the output of each ensemble member . It is shown that ensembles of models perform at least as well as its individual members and diverse ensemble members lead to better performance . More recently, showed that deep ensembles give reliable predictive uncertainty estimates while remaining simple and scalable. A further study confirms that deep ensembles generally achieves the best performance on out-of-distribution uncertainty benchmarks compared to other methods such as MC-dropout . Despite their success on benchmarks, ensembles in practice are limited due to their expensive computational and memory costs, which increase linearly with the ensemble size in both training and testing. Computation-wise, each ensemble member requires a separate neural network forward pass of its inputs. 
Memory-wise, each ensemble member requires an independent copy of neural network weights, each up to millions (sometimes billions) of parameters. This memory requirement also makes many tasks beyond supervised learning prohibitive. For example, in lifelong learning, a natural idea is to use a separate ensemble member for each task, adaptively growing the total number of parameters by creating a new independent set of weights for each new task. No previous work achieves competitive performance on lifelong learning via ensemble methods, as memory is a major bottleneck. Our contribution: In this paper, we aim to address the computational and memory bottleneck by building a more parameter efficient ensemble model: BatchEnsemble. We achieve this goal by exploiting a novel ensemble weight generation mechanism: the weight of each ensemble member is generated by the Hadamard product between: a. one shared weight among all ensemble members. b. one rank-one matrix that varies among all members, which we refer to as fast weight in the following sections. Figure 1 compares testing and memory cost between BatchEnsemble and naive ensemble. Unlike typical ensembles, BatchEnsemble is mini-batch friendly, where it is not only parallelizable across devices like typical ensembles but also parallelizable within a device. Moreover, it incurs only minor memory overhead because a large number of weights are shared across ensemble members. Empirically, we show that BatchEnsemble has the best trade-off among accuracy, running time, and memory on several deep learning architectures and learning tasks: CIFAR-10/100 classification with ResNet32 and WMT14 EN-DE/EN-FR machine translation with Transformer . Additionally, we show that BatchEnsemble is also effective in uncertainty evaluation on contextual bandits. Finally, we show that BatchEnsemble can be successfully applied in lifelong learning and scale up to 100 sequential learning tasks without catastrophic forgetting and the need of memory buffer. In this section, we describe relevant about ensembles, uncertainty evaluation, and lifelong learning for our proposed method, BatchEnsemble. Bagging, also called boostrap aggregating, is an algorithm to improve the total generalization performance by combining several different models . The strategy to combine those models such as averaging and majority voting are known as ensemble methods. It is shown that ensembles of models perform at least as well as each of its ensemble member . Moreover, ensembles achieve the best performance when each of their members makes independent errors . Related work on ensembles: Ensembles have been studied extensively for improving model performance (; ; ;). One major direction in ensemble research is how to reduce their cost at test time. developed a method to compress large, complex ensembles into smaller and faster models which achieve faster test time prediction. developed the above approach further by distilling the knowledge in an ensemble of models into one single neural network. Another major direction in ensemble research is how to reduce their cost at training time. forms ensembles by combining the output of networks within a number of training checkpoints, named Horizontal Voting Vertical Voting and Horizontal Stacked Ensemble. Additionally, models trained with different regularization and augmentation can be used as ensemble to achieve better performance in semi-supervised learning . 
More recently, proposed Snapshot ensemble, in which a single model is trained by cyclic learning rates so that it is encouraged to visit multiple local minima. Those local minima solutions are then used as ensemble members. proposed fast geometric ensemble where it finds modes that can be connected by simple curves. Each mode can taken as one ensemble member. Explicit ensembles are expensive so another line of work lies on what so-called "implicit" ensembles. For example, Dropout can be interpreted as creating an exponential number of weight-sharing sub-networks, which are implicitly ensembled in test time prediction . MC-dropout can be used for uncertainty estimates . Although deep neural networks achieve state-of-the-art performance on a variety of benchmarks, their predictions are often poorly calibrated. Bayesian neural networks , which fit a distribution to the weights rather than a point estimate, are often used to model uncertainty. However, they requires modifications to the traditional neural network training scheme. Deep ensembles have been proposed as a simple and scalable alternative, and have been shown to make well-calibrated uncertainty estimates . Several metrics had been proposed to measure the quality of uncertainty estimates. In Section 4.4, we use the contextual bandits benchmark , where maximizing reward is of direct interest; this requires good uncertainty estimates in order to balance exploration and exploitation. Appendix D also uses Expected Calibrated Error (ECE) as an uncertainty metric. In lifelong learning, the model trains on a number of tasks in a sequential (online) order, without access to entire previous tasks' data . One core difficulty of lifelong learning is "catastrophic forgetting": neural networks tend to forget what it has learnt after training on the subsequent tasks . Previous work on alleviating catastrophic forgetting can be divided into two categories. In the first category, updates on the current task are regularized so that the neural network does not forget previous tasks. Elastic weight consolidation (EWC) applies a penalty on the parameter update based on the distance between the parameters for the new and the old task evaluated by Fisher information metric. Other methods maintain a memory buffer that stores a number of data points from previous tasks. For example, gradient episodic memory approach penalizes the gradient on the current task so that it does not increase the loss of examples in the memory buffer . Another approach focuses on combining existing experience replay algorithms with lifelong learning . In the second category, one increases model capacity as new tasks are added. For example, progressive neural networks (PNN) copy the entire network for the previous task and add new hidden units when adopting to a new task. This prevents forgetting on previous tasks by construction (the network on previous tasks remains the same). However, it leads to significant memory consumption when faced with a large number of lifelong learning tasks. Some following methods expand the model in a more parameter efficient way at the cost of introducing an extra learning task and not entirely preventing forgetting. applies group sparsity regularization to efficiently expand model capacity; Figure 2: An illustration on how to generate the ensemble weights for two ensemble members. 
Each training example in the mini-batch receives an ensemble weight W_i, obtained by element-wise multiplying W, which we refer to as the "slow weights", with a rank-one matrix F_i, which we refer to as the "fast weights": W_i = W ∘ F_i, where F_i = s_i r_i^T, s_i matches the layer's input dimension, and r_i matches its output dimension. The subscript i represents the selection of the ensemble member. Since W is shared across ensemble members, we term it the "shared weight" in the rest of the paper. Vectorization: We show how to make the above ensemble weight generation mechanism parallelizable within a device, i.e., how to compute a forward pass with respect to multiple ensemble members in parallel. This is achieved by manipulating the matrix computations for a mini-batch. Let x_n denote the activations of the incoming neurons in a neural network layer for the n-th example in the mini-batch. The next layer's activations for the i-th ensemble member are given by y_n = φ((W ∘ (s_i r_i^T))^T x_n) = φ((W^T (x_n ∘ s_i)) ∘ r_i), where φ denotes the activation function and the subscript n represents the index in the mini-batch. To vectorize these computations, we define matrices R and S whose rows consist of the vectors r_i and s_i for all examples in the mini-batch. The above equation is vectorized as Y = φ(((X ∘ S) W) ∘ R) (Eqn. 5), where X is the mini-batch input. By computing Eqn. 5, we can obtain the next layer's activations for each ensemble member in a mini-batch friendly way. This allows us to take full advantage of GPU parallelism to implement the ensemble efficiently. To match the inputs and the ensemble weights, we divide the input mini-batch into M sub-batches, and each sub-batch receives the ensemble weight of one member. Ensembling During Testing: In our experiments, we take the average of the predictions of the ensemble members. Suppose the test batch size is B and there are M ensemble members. To achieve an efficient implementation, one repeats the input mini-batch M times, which leads to an effective batch size B · M. This enables all ensemble members to compute the output of the same B input data points in a single forward pass. It eliminates the need to calculate the output of each ensemble member sequentially and therefore reduces the ensemble's computational cost. The only extra computation in BatchEnsemble over a single neural network is the Hadamard product, which is cheap compared to matrix multiplication. Thus, BatchEnsemble incurs almost no additional computational overhead (Figure 1). One limitation of BatchEnsemble is that if we keep the mini-batch size the same as in single-model training, each ensemble member gets only a portion of the input data. In practice, the above issue can be remedied by increasing the batch size so that each ensemble member receives the same amount of data as in ordinary single-model training. Since BatchEnsemble is parallelizable within a device, increasing the batch size incurs almost no computational overhead in both training and testing stages on hardware that can fully utilize a large batch size. Moreover, when increasing the batch size reaches its diminishing-return regime, BatchEnsemble can still take advantage of an even larger batch size by increasing the ensemble size. The only memory overhead in BatchEnsemble is the set of vectors {r_1, ..., r_M} and {s_1, ..., s_M}, which are cheap to store compared to the weight matrices. By eliminating the need to store full weight matrices for each ensemble member, BatchEnsemble has almost no additional memory cost. For example, a BatchEnsemble of ResNet-32 with size 4 incurs 10% more parameters while a naive ensemble incurs 4X more.
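As a concrete illustration of Eqn. 5, the following is a minimal sketch of a BatchEnsemble dense layer in PyTorch. The class name, initialization scheme, and sub-batch routing are our own choices; only the forward computation follows the vectorized equation above.

import torch
import torch.nn as nn

class BatchEnsembleDense(nn.Module):
    """y = phi(((x * s_i) @ W) * r_i): one shared slow weight W plus rank-one fast weights per member."""
    def __init__(self, d_in, d_out, ensemble_size, activation=torch.relu):
        super().__init__()
        self.M = ensemble_size
        self.W = nn.Parameter(torch.randn(d_in, d_out) * d_in ** -0.5)  # shared (slow) weight
        # Fast weights, initialized to random sign vectors to encourage member diversity.
        self.S = nn.Parameter(torch.randint(0, 2, (ensemble_size, d_in)).float() * 2 - 1)
        self.R = nn.Parameter(torch.randint(0, 2, (ensemble_size, d_out)).float() * 2 - 1)
        self.activation = activation

    def forward(self, x):
        # x: (B, d_in); the mini-batch is split into M consecutive sub-batches, and
        # sub-batch i is routed to ensemble member i by tiling that member's fast weights.
        B = x.shape[0]
        assert B % self.M == 0, "batch size must be divisible by the ensemble size"
        s = self.S.repeat_interleave(B // self.M, dim=0)  # (B, d_in)
        r = self.R.repeat_interleave(B // self.M, dim=0)  # (B, d_out)
        return self.activation(((x * s) @ self.W) * r)

# At test time, repeating a batch of size B once per member (effective batch B * M) evaluates
# all members on the same inputs in a single forward pass; their predictions are then averaged.

A convolutional variant would presumably follow the same pattern, with s_i applied across input channels and r_i across output channels, so that the shared convolution is computed only once per input.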
The significant memory cost of ensemble methods limits its application to many real world learning scenarios such as multi-task learning and lifelong learning, where one might apply an independent copy of the model for each task. This is not the case with BatchEnsemble. Specifically, consider a total of T tasks arriving in sequential order. Denote D t = (x i, y i, t) as the training data in task t where t ∈ {1, 2, . . ., T} and i is the index of the data point. Similarly, denote the test data set as T t = (x i, y i, t). At test time, we compute the average performance on T t across all tasks seen so far as the evaluation metric. To extend BatchEnsemble to lifelong learning, we compute the neural network prediction in task t with weight W t = W • (r t s t) in task t. In other words, each ensemble member is in charge of one lifelong learning task. For the training protocol, we train the shared weight W and two fast weights r 1, s 1 on the first task, min where L 1 is the objective function in the first task such as cross-entropy in image classification. On a subsequent task t, we only train the relevant fast weights r t, s t. min BatchEnsemble shares similar advantages as progressive neural networks (PNN): it entirely prevents catastrophic forgetting as the model for previously seen tasks remains the same. This removes the need of storing any data from previous task. In addition, BatchEnsemble has significantly less memory consumption than PNN as only fast weights are trained to adapt to a new task. Therefore, BatchEnsemble can easily scale to up to 100 tasks as we showed in Section 4.1 on split ImageNet. Another benefit of BatchEnsemble is that if future tasks arrive in parallel rather than sequential order, one can train on all the tasks at once (see Section 3.1). We are not aware of any other lifelong learning methods can achieve this. Limitations: BatchEnsemble is one step toward toward a full lifelong learning agent that is both immune to catastrophic forgetting and parameter-efficient. On existing benchmarks like split-CIFAR and split-ImageNet, Section 4.1 shows that BatchEnsemble's rank-1 perturbation per layer provides enough expressiveness for competitive state-of-the-art accuracies. However, one limitation of BatchEnsemble is that only rank-1 perturbations are fit to each lifelong learning task and thus the model's expressiveness is a valid concern when each task is significantly varied. Another limitation is that the shared weight is only trained on the first task. This implies that only information learnt for the first task can transfer to subsequent tasks. There is no explicit transfer, for example, between the second and third tasks. One solution is to enable lateral connections to features extracted by the weights of previously learned tasks, as done in PNN. However, we found that no lateral connections were needed for Split-CIFAR100 and Split-ImageNet. Therefore we leave the above solution to future work to further improve BatchEnsemble for lifelong learning. Section 4.1 firsts demonstrate the BatchEnsemble's effectiveness as an alternative approach to lifelong learning on Split-CIFAR and Split-ImageNet. We next evaluate BatchEnsemble on several benchmark datasets with common deep learning architectures in Section 4.2 and Section 4.3, including classification task with ResNet , neural machine translation with Transformer . Then, we demonstrate that BatchEnsemble can be used for uncertainty modelling in Section 4.4. Detailed description of datasets we used is in Appendix A.. 
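The lifelong-learning training protocol described above can be sketched as follows, reusing the BatchEnsembleDense layer from the previous snippet. The parameter-selection logic, the model(x, task=t) interface, and the per-task head naming are assumptions made for illustration, not the authors' implementation.

import torch

def train_lifelong(model, task_loaders, loss_fn, epochs=1, lr=1e-3):
    """Task 0 trains the shared slow weights together with its fast weights; every later
    task t trains only the fast weights (and, here, a hypothetical per-task head)."""
    for t, loader in enumerate(task_loaders):
        if t == 0:
            params = list(model.parameters())
        else:
            # Only fast-weight vectors and the head for task t receive updates; the shared
            # weight W still gets gradients but is simply never passed to the optimizer.
            params = [p for name, p in model.named_parameters()
                      if name.endswith((".S", ".R")) or f"heads.{t}." in name]
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x, task=t), y)  # member t is selected for task t
                loss.backward()
                opt.step()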
PNN: Progressive neural network. BN-Tuned: Fine tuning Batch Norm layer per subsequent tasks. BatchE: BatchEnsemble. Upperbound: Individual ResNet-50 per task. We showcase BatchEnsemble for lifelong learning on Split-CIFAR100 and Split-ImageNet. Split-CIFAR100 proposed in is a harder lifelong learning task than MNIST permutations and MNIST rotations, where one introduces a new set of classes upon the arrival of a new task. Each task consists of examples from a disjoint set of 100/T classes assuming T tasks in total. To show that BatchEnsemble is able to scale to 100 sequential tasks, we also build our own Split-ImageNet dataset which shares the same property as Split-CIFAR100 except more classes (and thus more tasks) and higher image resolutions are involved. More details about these two lifelong learning datasets are provided in Appendix A. We consider T = 20 tasks on Split-CIFAR100, following the setup of. We used ResNet-18 with slightly fewer number of filters across all convolutional layers. Noted that for the purpose of making use of the task descriptor, we build a different final dense layer per task. We compare BatchEnsemle to progressive neural networks (PNN), vanilla neural networks, and elastic weight consolidation (EWC) on Split-CIFAR100. reported similar accuracies among DEN , RCL and PNN. Therefore we compare accuracy only to PNN which has an official implementation and only compare computational and memory costs to DEN and RCL in Appendix C. Figure 3b displays on Split-CIFAR100 over three metrics including accuracy, forgetting, and cost. The accuracy measures the average validation accuracy over total 20 tasks after lifelong learning ends. Average forgetting over all tasks is also presented in Figure 3b. Forgetting on task t is measured by the difference between accuracy of task t right after training on it and at the end of lifelong learning. It measures the degree of catastrophic forgetting. As showed in Figure 3b, BatchEnsemble achieves comparable accuracy as PNN while has 4X speed-up and 50X less memory consumption. It also preserves the no-forgetting property of PNN. Therefore BatchEnsemble has the best trade-off among all compared methods. For Split-ImageNet, we consider T = 100 tasks and apply ResNet-50 followed by a final linear classifier per task. The parameter overhead of BatchEnsemble on Split-ImageNet over 100 sequential tasks is 20%: the total number of parameters is 30M v.s. 25M (vanilla ResNet-50). PNN is not capable of learning 100 sequential tasks due to the significant memory consumption; other methods noted above have also not shown at ImageNet scale. Therefore we adopt two of our baselines. The first baseline is "BN-Tuned", which fine-tunes batch normalization parameters per task and which has previously shown strong performance for multi-task learning . To make a fair comparison, we augment the number of filters in BN-Tuned so that both methods have the same number of parameters. The second baseline is a naive ensemble which trains an individual ResNet-50 per task. This provides a rough upper bound on the BatchEnsemble's expressiveness per task. Note BatchEnsemble and both baselines are immune to catastrophic forgetting. So we consider validation accuracy on each subsequent task as evaluation metric. Figure 3a shows that In this section, we evaluate BatchEnsemble on the Transformer and the large-scale machine translation tasks WMT14 EN-DE/EN-FR. We apply BatchEnsemble to all self-attention layers with an ensemble size of 4. 
The ensemble in a selfattention layer can be interpreted as each ensemble member keeps their own attention mechanism and makes independent decisions. We conduct our experiments on WMT16 English-German dataset and WMT14 English-French dataset with Transformer base (65M parameters) and Transformer big (213M parameters). We maintain exactly the same training scheme and hyper-parameters between single Transformer model and BatchEnsemble Transformer model. As the shown in Figure 4, BatchEnsemble achieves a much faster convergence than a single model. Big BatchEnsemble Transformer is roughly 1.5X faster than single big Transformer on WMT16 English-German. In addition, the BatchEnsemble Transformer also gives a lower validation perplexity than big Transformer (Table 1). This suggests that BatchEnsemble is promising for even larger Transformers. We also compared BatchEnsemble to dropout ensemble (MC-drop in Table 1). Transformer single model itself has dropout layer. We run multiple forward passes with dropout mask during testing. The sample size is 16 which is already 16X more expensive than BatchEnsmeble. As Table 1 showed, dropout ensemble doesn't give better performance than single model. However, note Appendix B shows that while BatchEnemble's test BLEU score increases faster over the course of training, BatchEnsemble which gives lower validation loss does not necessarily improve BLEU score over a single model which is trained for long enough timesteps. We evaluate BatchEnsemble on classification tasks with CIFAR-10/100 dataset . We run our evaluation on ResNet32 . To achieve 100% training accuracy on CIFAR100, we use 4X more filters than the standard ResNet-32. In this section, we compare to MC-dropout which is also a memory efficient ensemble method. We add one more dense layer followed by dropout before the final linear classifier so that the number of parameters of MC-dropout are the same as BatchEnsemble. Most hyper-parameters are shared across the single model, BatchEnsemble, and MC-dropout. More details about hyper-parameters are in Appendix B. Note we increase the training iterations for BatchEnsemble to reach its best performance because each ensemble member gets only a portion of input data. We train both BatchEnsemble model and MC-dropout with 375 epochs on CIFAR-10/100, which is 50% more iterations than single model. Although the training duration is longer, BatchEnsemble is still significantly faster than training individual model sequentially. Another implementation that leads to the same performance is to increase the mini-batch size. For example, if we use 4X large minibatch size then there is no need to increase the training iterations. Table 2 shows that BatchEnsemble reaches better accuracy than single model and MC-dropout. We also calculate the accuracy of naive ensemble, whose members consist of individually trained single models. Its accuracy can be viewed as the upper bound of Ensemble methods. We also compare BatchEnsemble to naive ensemble of small models in Appendix F. In this section, we conduct analysis beyond accuracy, where we show that BatchEnsemble can be used for uncertainty modelling in contextual bandits. Appendix D evaluates the predictive uncertainty of BatchEnsemble on out-of-distribution tasks and ECE loss. We also show that BatchEnsemble preserves diversity among ensemble members in predictive distribution just like naive ensemble in Appendix E. For uncertainty modelling, we evaluate our BatchEnsemble method on the recently proposed bandits benchmark . 
Bandit data comes from different empirical problems that highlight several aspects of decision making. No single algorithm can outperform every other algorithm on every bandit problem. Thus, average performance of the algorithm over different problems is used to evaluate the quality of uncertainty estimation. The key factor to achieve good performance in contextual bandits is to learn a reliable uncertainty model. In our experiment, Thompson sampling samples from the policy given by one of the ensemble members. The fact that Dropout which is an implicit ensemble method achieves competitive performance on bandits problem suggests that ensemble can be used as uncertainty modelling. Indeed, Table 3 shows that BatchEnsemble with an ensemble size 8 achieves the best mean value on the bandits task. Both BatchEnsemble with ensemble size 4 and 8 outperform Dropout in terms of average performance. We also evaluate BatchEnsemble on CIFAR-10 corrupted dataset in Appendix D. Figure 7 shows that BatchEnsemble achieves promising accuracy, uncertainty and cost trade-off among all methods we compared. Moreover, combining BatchEnsemble and dropout ensemble leads to better uncertainty prediction. We introduced BatchEnsemble, an efficient method for ensembling and lifelong learning. BatchEnsemble can be used to improve the accuracy and uncertainty of any neural network like typical ensemble methods. More importantly, BatchEnsemble removes the computation and memory bottleneck of typical ensemble methods, enabling its successful application to not only faster ensembles but also lifelong learning on up to 100 tasks. We believe BatchEnsemble has great potential to improve in lifelong learning. Our work may serve as a starting point for a new research area. CIFAR: We consider two CIFAR datasets, CIFAR-10 and CIFAR-100 . Each consists of a training set of size 50K and a test set of size 10K. They are natural images with 32x32 pixels. In our experiments, we follow the standard data pre-processing schemes including zero-padding with 4 pixels on each sise, random crop and horizon flip (; ; . In machine translation tasks, we consider the standard training datasets WMT16 EnglishGerman and WMT14 English-French. WMT16 English-German dataset consists of roughly 4.5M sentence pairs. We follow the same pre-processing schemes in .Source and target tokens are processed into 37K shared sub-word units based on byte-pair encoding (BPE) . Newstest2013 and Newstest2014 are used as validation set and test set respectively. WMT14 English-French consists of a much larger dataset sized at 36M sentences pairs. We split the tokens into a 32K word-piece vocabulary . The dataset has the same set of images as CIFAR-100 dataset . It randomly splits the entire dataset into T tasks so each task consists of 100/T classes of images. To leverage the task descriptor in the data, different final linear classifier is trained on top of feature extractor per task. This simplifies the task to be a 100/T class classification problem in each task. i.e. random prediction has accuracy T /100. Notice that since we are not under the setting of single epoch training, standard data pre-processing including padding, random crop and random horizontal flip are applied to the training set. The dataset has the same set of images as ImageNet dataset . It randomly splits the entire dataset into T tasks so each task consists of 1000/T classes of images. Same as Split-CIFAR100, each task has its own final linear classifier. 
Data preprocessing is applied to the training data. In this section, we discuss some implementation details of BatchEnsemble. Weight Decay: In the BatchEnsemble, the weight of each ensemble member is never explicitly calculated because we obtain the activations directly by computing Eqn. 5. To maintain the goal of no additional computational cost, we can instead regularize the mean weight W over ensemble members, which can be efficiently calculated as where W is the shared weight among ensemble members, S and R are the matrices in Eqn. 5. We can also only regularize the shared weight and leave the fast weights unregularized because it only accounts for a small portion of model parameters. In practice, we find the above two schemes work equally. Diversity Encouragement: Additional loss term such as KL divergence among ensemble members can be added to encourage diversity. However, we find it sufficient for BatchEnsemble to have desired diversity by initializing the fast weight (s i and r i in Eqn. 1) to be random sign vectors. Also note that the scheme that each ensemble member is trained with different sub-batch of input can encourage diversity as well. The diversity analysis is provided in Appendix E. Machine Translation: The Transformer base is trained for 100K steps and the Transformer big is trained for 180K steps. The training steps of big model are shorter than because we terminate the training when it reaches the targeted perplexity on validation set. Experiments are run on 4 NVIDIA P100 GPUs. The BLEU score of Big Transformer on English-German task is in Figure 5. Although BatchEnsemble has lower perplexity as we showed in Section 4.2, we didn't observe a better BLEU score. Noted that the BLEU score in Figure 5 is lower than what reported. It is because in order to correctly evaluate model performance at a given timestep, we didn't use the averaging checkpoint trick. The dropout rate of Transformer base is 0.1 and 0.3 for Transformer big on English-German while remaining 0.1 on English-French. For dropout ensemble, we ran a grid search between 0.05 and 0.3 in the testing time and report the best validation perplexity. Classification: We train the model with mini-batch size 128. We also keep the standard learning rate schedule for ResNet. The learning rate decreases from 0.1 to 0.01, from 0.01 to 0.001 at halfway of training and 75% of training. The weight decay coefficient is set to be 10 −4. We use an ensemble size of 4, which means each ensemble member receives 32 training examples if we maintain the mini-batch size of 128. It is because Batch Normalization requires at least 32 examples to be effective on CIFAR dataset. As for the training budget, we train the single model for 250 epochs. Dynamically expandable networks and Reinforced continual learning are two recently proposed lifelong learning methods that achieve competitive performance. As discussed in Section 4.1, these two methods can be seen as an improved version progressive neural network (PNN) in terms of memory efficiency. As shown in , all three methods to similar accuracy measure in Split-CIFAR100 task. Therefore, among three evaluation metrics (accuracy, forgetting and cost), we only compare the accuracy of BatchEnsemble to PNN in Section 4.1 and compare the cost in this section. We first compute the cost relative to PNN on Split-CIFAR100 on LeNet and then compute the rest of the numbers base on what were reported in. 
Notice that PNN has no much computational overhead on Split-CIFAR100 because the number of total tasks is limited to 10. Even on the simple setup above, BatchEnsemble gives the best computational and memory efficiency. The advantage on large lifelong learning task such as Split-ImageNet would be even obvious. MC-drop BatchE NaiveE Single C10 2.89% 2.37% 2.32% 3.27% C100 8.99% 8.89% 6.82% 9.28% Similar to , we first evaluate BatchEnsemble on out-of-distribution examples from unseen classes. It is known that deep neural network tends to make over-confident predictions even if the prediction is wrong or the input comes from unseen classes. Ensembles of models can give better uncertainty prediction when the test data is out of the distribution of training data. To measure the uncertainty on the prediction, we calculate the predictive entropy of Single neural network, naive ensemble and BatchEnsemble. The is presented in Figure 6a. As we expected, single model produces over-confident predictions on unseen examples, whereas ensemble methods exhibit higher uncertainty on unseen classes, including both BatchEnsemble and naive ensemble. It suggests our ensemble weight generation mechanism doesn't degrade uncertainty modelling. Additionally, we calculate the Expected Calibration Error (ECE) of single model, naive ensemble and BatchEnsemble on both CIFAR-10 and CIFAR-100 in Table 6b. To calculate ECE, we group model predictions into M interval where n is the number of samples. ECE as a criteria of model calibration, measures the difference in expectation between confidence and accuracy . It shows that BatchEnsemble makes more calibrated prediction compared to single neural networks. Additionally, we evaluate the calibration of different mehtods on recently proposed CIFAR-10 corruption dataset . The dataset consists of over 30 types of corruptions to the images. It is commonly used to benchmark a wide range of methods on calibrated prediction . To the best of our knowledge, dropout ensemble is the state-of-the-art memory efficient ensemble method. Thus, in our paper, we compare BatchEnsemble to dropout ensemble in this section. Naive ensemble is also plotted as an upper bound of our method. As showed in Figure 7, BatchEnsemble achieves better calibration than dropout as the skew intensity increases. Moreover, dropout ensemble requires multiple forward passes to get the best performance. used sample size 128 while we found no significant difference between sample size 128 and 8. Note that even the sample size is 8, it is 8X more expensive than BatchEnsemble in the testing time cost. Finally, we showed that combining BatchEnsemble and dropout ensemble leads to better calibration. It is competitive to naive ensemble while keeping memory consumption efficient. It is also an evidence that BatchEnsemble is an orthogonal method to dropout ensemble. As we discussed in Section 2, ensemble benefits from the diversity among its members. We focus on the set of test examples on CIFAR-10 where single model makes confident incorrect predictions while ensemble model predicts correctly. We used the final models we reported in Section 4.3. In Figure 8, we randomly select examples from the above set and plot the prediction map of single model, each ensemble member and mean ensemble. As we can see, although some of the ensemble members make mistakes on thoes examples, the mean prediction takes the advantage of the model averaging and achieves better accuracy on CIFAR-10 classification task. 
We notice that BatchEnsemble preserves the diversity among ensemble members just as the naive ensemble does. In this section, we compare BatchEnsemble to a naive ensemble of small models on the CIFAR-10/100 datasets. To maintain the same memory consumption as BatchEnsemble, we trained 4 independent ResNet14x4 models and evaluated the naive ensemble over these 4 models. This setup of the naive ensemble still has roughly 10% memory overhead over BatchEnsemble. The results are reported in Table 5. They show that the naive ensemble of small models achieves lower accuracy than BatchEnsemble, illustrating that, given the same memory budget, BatchEnsemble is a better choice than a naive ensemble.
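For reference, the Expected Calibration Error used as an uncertainty metric in Appendix D can be computed with a simple binning routine such as the following sketch; the bin count and names are our choices.

import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE = sum_m (|B_m| / n) * |acc(B_m) - conf(B_m)| over equal-width confidence bins."""
    confidences, predictions, labels = map(np.asarray, (confidences, predictions, labels))
    n = len(labels)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = (predictions[mask] == labels[mask]).mean()
            conf = confidences[mask].mean()
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece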
We introduced BatchEnsemble, an efficient method for ensembling and lifelong learning which can be used to improve the accuracy and uncertainty of any neural network like typical ensemble methods.
1,313
scitldr
Reinforcement learning typically requires carefully designed reward functions in order to learn the desired behavior. We present a novel reward estimation method that is based on a finite sample of optimal state trajectories from expert demon- strations and can be used for guiding an agent to mimic the expert behavior. The optimal state trajectories are used to learn a generative or predictive model of the “good” states distribution. The reward signal is computed by a function of the difference between the actual next state acquired by the agent and the predicted next state given by the learned generative or predictive model. With this inferred reward function, we perform standard reinforcement learning in the inner loop to guide the agent to learn the given task. Experimental evaluations across a range of tasks demonstrate that the proposed method produces superior performance compared to standard reinforcement learning with both complete or sparse hand engineered rewards. Furthermore, we show that our method successfully enables an agent to learn good actions directly from expert player video of games such as the Super Mario Bros and Flappy Bird. Reinforcement learning (RL) deals with learning the desired behavior of an agent to accomplish a given task. Typically, a scalar reward signal is used to guide the agent's behavior and the agent learns a control policy that maximizes the cumulative reward over a trajectory, based on observations. This type of learning is referred to as "model-free" RL since the agent does not know apriori or learn the dynamics of the environment. Although the ideas of RL have been around for a long time BID24 ), great achievements were obtained recently by successfully incorporating deep models into them with the recent success of deep reinforcement learning. Some notable breakthroughs amongst many recent works are, the work from BID12 who approximated a Q-value function using as a deep neural network and trained agents to play Atari games with discrete control; who successfully applied deep RL for continuous control agents achieving state of the art; and BID22 who formulated a method for optimizing control policies with guaranteed monotonic improvement. In most RL methods, it is very critical to choose a well-designed reward function to successfully learn a good action policy for performing the task. However, there are cases where the reward function required for RL algorithms is not well-defined or is not available. Even for a task for which a reward function initially seems to be easily defined, it is often the case that painful hand-tuning of the reward function has to be done to make the agent converge on an optimal behavior. This problem of RL defeats the benefits of automated learning. In contrast, humans often can imitate instructor's behaviors, at least to some extent, when accomplishing a certain task in the real world, and can guess what actions or states are good for the eventual accomplishment, without being provided with the detailed reward at each step. For example, children can learn how to write letters by imitating demonstrations provided by their teachers or other adults (experts). Taking inspiration from such scenarios, various methods collectively referred to as imitation learning or learning from experts' demonstrations have been proposed BID21 ) as a relevant technical branch of RL. Using these methods, expert demonstrations can be given as input to the learning algorithm. 
Inverse reinforcement learning BID15; BID1; BID28 ), behavior cloning BID20 ), imitation learning BID6; BID5 ), and curiosity-based exploration ) are examples of research in this direction. While most of the prior work using expert demonstrations assumes that the demonstration trajectories contain both the state and action information (τ = {(s t)}) to solve the imitation learning problem, we, however, believe that there are many cases among real world environments where action information is not readily available. For example, a human teacher cannot tell the student what amount of force to put on each of the fingers when writing a letter. As such, in this work, we propose a reward estimation method that can estimate the underlying reward based only on the expert demonstrations of state trajectories for accomplishing a given task. The estimated reward function can be used in RL algorithms in order to learn a suitable policy for the task. The proposed method has the advantage of training agents based only on visual observations of experts performing the task. For this purpose, it uses a model of the distribution of the expert state trajectories and defines the reward function in a way that it penalizes the agent's behavior for actions that cause it to deviate from the modeled distribution. We present two methods with this motivation; a generative model and a temporal sequence prediction model. The latter defines the reward function by the similarity between the state predicted by the temporal sequence model trained based on the expert's demonstrations and the currently observed state. We present experimental of the methods on multiple environments and with varied settings of input and output. The primary contribution of this paper is in the estimation of the reward function based on state similarity to expert demonstrations, that can be measured even from raw video input. Model-free Reinforcement Learning (RL) methods learn a policy π(a t |s t) that produces an action from the current observation. BID12 showed that a q-value function q(s t, a t) can be approximated with a deep neural network, which is trained using hand-engineered scalar reward signals given to the agent based on its behavior. Similarly, actor-critic networks in Deep Deterministic Policy Gradients (DDPG) can enable state of the art continuous control, e.g. in robotic manipulation by minimizing the distance between the end effector and the target position. Since the success with DDPG, other methods such as Trust Region Policy Optimization (TRPO) BID22 ) and Proximal Policy Optimization (PPO) BID23 ) have been proposed as further improvements for model-free RL in continuous control problems. Although RL enables an agent to learn an optimal policy without supervised training data, in the standard case, it requires a difficult task of hand-tuning good reward functions for each environment. This has been pointed out previously in the literature BID1 ). Several kinds of approaches have been proposed to workaround or tackle this problem. An approach that does not require reward hand-tuning is behavior cloning based on supervised learning instead of RL. It learns the conditional distribution of actions given states in a supervised manner. Although it has an advantage of fast convergence BID5 ) (as behavior cloning learns a single action from states in each step), it typically in compounding of errors in the future states. An alternate approach is Inverse Reinforcement Learning (IRL) proposed in the seminal work by BID15. 
In this work, the authors try to recover the optimal reward function as a best description behind the given expert demonstrations from humans or other expert agents, using linear programming methods. It is based on the assumption that expert demonstrations are solutions to a Markov Decision Process (MDP) defined with a hidden reward function BID15 ). It demonstrated successful estimation of the reward function in case of relatively simple environments such as the grid world and the mountain car problem. Another use of the expert demonstrations is initializing the value function; this was described by BID27. Extending the work by BID15, entropy-based methods that compute the suitable reward function by maximizing the entropy of the expert demonstrations have been proposed by BID29. In the work by BID1, a method was proposed for recovering the cost function based on expected feature matching between observed policy and the agent behavior. Furthermore, they showed this to be the necessary and sufficient condition for the agent to imitate the expert behavior. More recently, there was some work that extended this framework using deep neural networks as non-linear function approximator for both policy and the reward functions BID28 ). In other relevant work by BID6, the imitation learning problem was formulated as a two-players competitive game where a discriminator network tries to distinguish between expert trajectories and agent-generated trajectories. The discriminator is used as a surrogate cost function which guides the agent's behavior in each step to imitate the expert behavior by updating policy parameters based on Trust Region Policy Optimization (TRPO) BID22 ). Related recent works also include model-based imitation learning BID2 ) and robust imitation learning BID26 ) using generative adversarial networks. All the above-mentioned methods, however, rely on both state and action information provided by expert demonstrations. Contrarily, we learn only from expert state trajectories in this work. A recent line of work aims at learning useful policies for agents even in the absence of expert demonstrations. In this regard, trained an agent with a combination of reward inferred with intrinsic curiosity and a hand-engineered, complete or even very sparse scalar reward signal. The curiosity-based reward is designed to have a high value when the agent encounters unseen states and a low value when it is in a state similar to the previously explored states. The work reported successful navigation in games like Mario and Doom without any expert demonstrations. In this paper, we compare our proposed methods with the curiosity-based approach and show the advantage over it in terms of the learned behaviors. However, our methods assumed state demonstrations are available as expert data while the curiosity-based method did not use any demonstration data. We consider an incomplete Markov Decision Process (MDP), consisting of states S and action space A, where the reward signal r: S × A → R, is unknown. An agent can act in an environment defined by this MDP following a policy π(a t |s t). Here, we assume that we have knowledge of a finite set of optimal or expert state trajectories DISPLAYFORM0., n}. These trajectories can represent joints angles, raw images or any other information depicting the state of the environment. 
Since the reward signal is unknown, our primary goal is to find a reward signal that enables the agent to learn a policy π that maximizes the likelihood of this set of expert trajectories τ. In this paper, we assume that the reward signal can be inferred entirely from the current state and next state information, r: S × S → R. More formally, we would like to find a reward function r(s_t+1 | s_t), defined on the next state given the current state, that maximizes the likelihood of the expert trajectories under the transition probability p(s_t+1 | s_t) (Equation 1). We assume that maximizing the likelihood of the next-step prediction in Equation 1 also leads to maximizing the future reward when the task is deterministic: the likelihood is based on similarity to demonstrations, and those demonstrations were obtained while an expert agent was itself maximizing the future reward. The agent therefore maximizes its future reward by taking actions whose resulting next states are similar to those in the expert demonstration trajectories τ. Let τ = {s^i_t}, i = 1, ..., M, t = 1, ..., N, be the optimal states visited by the expert agent, where M is the number of demonstration episodes and N is the number of steps within each episode. We estimate an appropriate reward signal based on the expert state trajectories τ, which in turn is used to guide a reinforcement learning algorithm and learn a suitable policy. We evaluate two approaches to implement this idea. A straightforward approach is to first train a generative model using the expert trajectories τ. Rewards can then be estimated from similarity measures between a reconstructed state value and the state actually experienced by the agent. This method constrains exploration to the states that have been demonstrated by an expert and enables learning a policy that closely matches the expert. However, in this approach the temporal order of states is ignored, or at least not readily accounted for, and this temporal order is important for estimating the state transition probability function. As such, the second approach we take is a temporal sequence prediction model that is trained to predict the next state value given the current state, based on the expert trajectories. Once again the reward value can be estimated as a function of the similarity measure between the predicted next state and the one actually visited by the agent. The following sub-sections describe both these approaches in detail. We train a deep generative model (a three-layered fully connected auto-encoder) using the state values s^i_t for each step number t, sampled from the expert agent trajectories τ. The generative model is trained to minimize the reconstruction loss, i.e., to maximize the likelihood of the training data, θ*_g = argmax_θg Σ_i Σ_t log p(s^i_t ; θ_g), where θ*_g represents the optimum parameters of the generative model. Following typical settings, we assume p(s^i_t ; θ_g) to be a Gaussian distribution, so that this objective reduces to minimizing Σ_i Σ_t ||s^i_t − g(s^i_t ; θ_g)||^2, where g(· ; θ_g) denotes the auto-encoder's reconstruction. The reward value is estimated as a function of the difference between the actual next state value s_t+1 and the generated output g(s_t ; θ_g), r_t = ψ(s_t+1 − g(s_t ; θ_g)), where s_t is the current state value and ψ can be a linear or nonlinear function, typically a hyperbolic tangent or Gaussian function. In this formulation, if the current state is similar to the reconstructed state value, i.e. g(s_t ; θ_g), the estimated reward value will be higher; if it is not similar to the generated state, the reward value will be estimated to be low. Moreover, as a reward value is estimated at each time step, this approach can be used even in problems that originally had a highly sparse engineered reward structure.
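To make the generative-model reward concrete, here is a minimal sketch assuming a small fully-connected autoencoder in PyTorch. The layer sizes, the use of tanh squashing for ψ, and all names are illustrative assumptions rather than the paper's exact implementation.

import torch
import torch.nn as nn

class StateAutoencoder(nn.Module):
    """Three-layer fully-connected autoencoder over expert state vectors (sizes are assumptions)."""
    def __init__(self, state_dim, hidden=64, code=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, code), nn.ReLU(),
            nn.Linear(code, state_dim),
        )

    def forward(self, s):
        return self.net(s)

def train_generative_model(model, expert_states, epochs=100, lr=1e-3):
    """Fit the autoencoder to expert states by minimizing the reconstruction error."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon = model(expert_states)
        loss = ((recon - expert_states) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

def gm_reward(model, s_t, s_next, beta=1.0):
    """r_t = psi(s_{t+1} - g(s_t)); here psi squashes the squared error with tanh,
    so the reward is highest (close to 0) when the visited state matches the model."""
    with torch.no_grad():
        err = ((s_next - model(s_t)) ** 2).sum(dim=-1)
    return torch.tanh(-beta * err)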
In this approach, we learn a temporal sequence prediction model (the specific networks used are described in the corresponding experiment sections) that maximizes the likelihood of the next state given the current state. The network is trained with the objective θ*_h = argmax_θh Σ_i Σ_t log p(s^i_t+1 | s^i_t ; θ_h), where θ*_h represents the optimal parameters of the prediction model. We again assume the probability of the next state given the previous state value, p(s^i_t+1 | s^i_t ; θ_h), to be a Gaussian distribution, so that the objective can be seen as minimizing the mean squared error Σ_i Σ_t ||s^i_t+1 − h(s^i_t ; θ_h)||^2, with the reward estimated as r_t = ψ(s_t+1 − h(s_t ; θ_h)). The estimated reward here can be interpreted in the same way as in the generative model case. If the agent's policy takes an action that moves the environment towards states far away from the expert trajectories, the corresponding estimated reward value is low. If the agent's actions bring it close to the expert demonstrated trajectories, so that the predicted next state matches the actually visited state value, the reward is estimated to be high. This process of reward shaping or guidance can enable the agent to learn a policy that is optimized based on the expert demonstration trajectories. Algorithm 1 summarizes the step-by-step flow of the proposed methods:
procedure TRAIN REWARD MODEL
  given trajectories τ from the expert agent
  for each training iteration do
    update the reward model parameters (θ_g or θ_h) on states sampled from τ
  end for
end procedure
procedure REINFORCEMENT LEARNING
  for each time step do
    observe state s_t
    select and execute action a_t, and observe state s_t+1
    estimate the reward r_t with the trained reward model
    update the deep reinforcement learning network using the tuple (s_t, a_t, r_t, s_t+1)
  end for
end procedure
In order to evaluate our reward estimation methods, we conducted experiments across a range of environments. We consider five different tasks, namely: a robot arm reaching task (reacher) with a fixed target position, a robot arm reaching task with a random target position, controlling a point agent to reach a target while avoiding an obstacle, learning an agent for the longest duration of flight in the Flappy Bird video game, and learning an agent for maximizing the traveling distance in the Super Mario Bros video game. We consider a 2-DoF robot arm in the x-y plane that has to learn to reach a point target with its end-effector. The first arm of the robot is rigidly linked at a point, with the second arm linked to its edge. It has two joint values θ = (θ_1, θ_2), θ_1 ∈ (−∞, +∞), θ_2 ∈ [−π, +π], and the lengths of the arms are 0.1 and 0.11 units, respectively. The robot arm is initialized with random joint values at the first step of each episode. In the following experiments, we have two settings: a fixed point target and a random target. The applied continuous action values a_t are used to control the joint angles, such that Δθ = θ_t − θ_t−1 = 0.05 a_t. Each action value is clipped to the range [−1, 1]. The reacher task is built on the physics engine within the roboschool environment (BID3; BID16). Figure 1 describes the roboschool environment. The robot arms are in blue, the blue-green point is the end-effector, and the pink dot is the desired target location. In this experiment, the target point p_tgt is always fixed at (0.1, 0.1).
The state vector s_t consists of the following values: the absolute end position of the first arm (p_2), the joint value of the elbow (θ_2), the joint velocities (θ̇_1, θ̇_2), the absolute target position (p_tgt), and the end-effector position relative to the target (p_ee − p_tgt). We used DDPG for this task, with the number of steps per episode being 500 in this experiment. The reward functions used in this task were as follows:
Dense reward: DISPLAYFORM0;
Sparse reward: r_t = −tanh(α ‖p_ee − p_tgt‖²) + r_env_t, DISPLAYFORM1;
GM with r_env_t: DISPLAYFORM2;
GM (1000 episodes): DISPLAYFORM3;
GM with action: DISPLAYFORM4;
where r_env_t is the environment-specific reward, which is calculated from the cost of the current action, −‖a_t‖². This regularization is required for finding the shortest path to reach the target. As this cost is critical for fast convergence, we use it in all cases. The dense reward is the distance between the end-effector and the target, and the sparse reward is based on a bonus for reaching. The generative model parameters θ_2k are trained on τ_2k, trajectories that contain only the states of 2000 episodes from an agent trained for 1k episodes with the dense reward. The generative network has fully-connected layers with 400, 300, and 400 units, respectively, each with a ReLU activation function; the batch size is 16 and the number of epochs is 50. θ_1k is trained on τ_1k, trajectories consisting of 1000 episodes randomly picked from τ_2k. The GM with action uses a generative model θ_{2k,+a} that is trained on pairs of states and actions from 2000 episodes of the same agent as τ_2k. We use a tanh nonlinearity for the estimated reward in order to keep the value bounded. The parameters α and β, which control the sensitivity to distance and reward, were both set to 100. Here, we also compare our results with the behavior cloning (BC) method (BID20), where the trained actor network directly uses the obtained pairs of states and actions. Figure 1 caption: The environment of the reacher task. The reacher has two arms, and the objective of the agent is to bring the end-effector (green) to the target point (red). FIG3 shows the difference in performance when using the different reward functions. All methods are evaluated by a score (y-axis), which was normalized between a minimum and a maximum value. The proposed method, especially "GM with r_env_t", manages to achieve a score nearing that of the dense reward, with the performance being much better than in the sparse reward setting. Moreover, the learning curves based on the rewards estimated with the generative model show a faster convergence rate. However, the variant without the environment-specific reward, i.e. without the additional action regularization term, takes a longer time to converge. This is primarily because the GM reward reflects the distance between the target and the end-effector, and cannot directly account for the action regularization. The GM reward based on τ_1k underperforms compared with the GM reward based on τ_2k because of the lack of demonstration data. FIG4 shows the reward value at each end-effector position. The GM-estimated reward using τ_2k has a better reward map than the GM-estimated reward using τ_1k. However, these demonstration data (FIG11) are biased by the robot trajectories. Thus, a method of generating demonstration data that normalizes or avoids such bias would further improve the reward structure. If the demonstrations contain action information in addition to state information, behavior cloning achieves good performance.
Surprisingly, however, when using both state and action information in the generative model, "GM [state, action]", the performance of the agent is comparatively poor. In this experiment, the target point p_tgt is initialized from a uniform distribution over [−0.27, +0.27], which includes points outside the reaching range of the robot arms. Furthermore, we removed the relative position of the target from the input state information. This makes the task more difficult. The state vector s_t has the following values: p_2, θ_2, θ̇_1, θ̇_2, p_tgt. In the previous experiment, the distribution of states in the expert trajectories is expected to be similar to the reward structure due to the fixed target location. However, when the target position changes randomly, this distribution is not fixed. We therefore evaluate with the temporal sequence prediction model h(s_t; θ_h) in this experiment. The RL setting is the same as in the previous experiment; however, we changed the total number of steps within each episode to 400. The reward functions used in this experiment were calculated as follows:
Dense reward: DISPLAYFORM0;
Sparse reward: r_t = tanh(−α ‖p_ee − p_tgt‖²) + r_env_t, DISPLAYFORM1;
LSTM reward: r_t = tanh(−γ ‖s_{t+1} − h(s_{t:t−n}; θ_lstm)‖²) + r_env_t, DISPLAYFORM2.
The expert demonstrations τ were obtained using the states of 2000 episodes from a trained agent with the dense hand-engineered reward. The GM estimated reward uses the same setting as in the previous experiment. NS is a model that predicts the next state given the current state, and was trained using the demonstration data τ. The LSTM model uses long short-term memory (BID7) as the temporal sequence prediction model. The state in the reacher task does not contain time-sequence data, hence we use a finite state history as input. The LSTM model has three layers and one fully-connected layer with 40 ReLU activation units. The forward model (FM) based reward estimation is based on predicting the next state given both the current state and the action. Here, we also compared with the baseline behavior cloning method. In this experiment, α is 100, β is 1, and γ is 10. FIG3 shows the performance of the trained agent using the different reward functions. In all cases, using estimated rewards performs significantly better than the sparse reward case. The LSTM-based prediction method gives the best results, reaching close to the performance obtained with the dense hand-engineered reward function. As expected, the GM-based reward estimation fails to work well in this relatively complex experimental setting. The NS model estimated reward, which predicts the next state given only the current state information, has performance comparable to the LSTM-based prediction model during the initial episodes. The FM-based reward function also performs poorly in this experiment. Comparatively, direct BC works relatively well. This indicates that it is better to use behavior cloning than reward prediction when both state and action information are available from demonstration data. Starting with this experiment, we evaluate using only the temporal sequence prediction method. As such, here we use a finite history of the state values in order to predict the next state value. We assume that predicting the part of the state that is related to a given action allows the model to make a better estimate of the reward function. Former work predicts a function of the next state, φ(s_{t+1}), rather than predicting the raw value s_{t+1} as in this paper.
In this experiment, we also changed the non-linear function ψ in the proposed prediction method to a Gaussian function (as compared to the hyperbolic tangent function used in the previous experiments). This allows us to compare the robustness of our proposed method for reward estimation under different non-linear functions. We develop a new environment that adds an obstacle to the reaching task. The reacher here is a two-dimensional point (x, y) that uses position control. In FIG5 we show the modified environment setup. The agent's goal is to reach the target while avoiding the obstacle. The initial position of the agent, the target position, and the obstacle position were initialized randomly. The state value contains the agent's absolute position (p_t), the current velocity of the agent (ṗ_t), the target's absolute position (p_tgt), the obstacle's absolute position (p_obs), and the relative locations of the target and the obstacle with respect to the agent (p_t − p_tgt, p_t − p_obs). Once again, the RL algorithm used in this experiment was DDPG for continuous control. The number of steps per episode was set to 500. Here, we used the following reward functions:
Dense reward: DISPLAYFORM0;
LSTM reward: DISPLAYFORM1, DISPLAYFORM2;
where h'(s_{t:t−n}; θ_lstm) is a network that predicts a selected part of the state values given a finite history of state information. The dense reward is composed of both the target-distance cost and the obstacle-distance bonus. The optimal state trajectories τ contain 800 "human-guided" demonstrations. In this case, the LSTM network consisted of two layers, each with 256 ReLU units. In this experiment, σ_1 is 0.005 and σ_2 is 0.002. FIG6 shows the performance with the different estimated or hand-engineered reward settings. The LSTM-based prediction method learns to reach the target faster than the dense reward, while LSTM (s) has the best overall performance by learning with the human-guided demonstration data. FIG5 caption: The agent (green) moves to reach the target (red) while avoiding the obstacle (pink). We use a re-implementation (BID10) of the Android game "Flappy Bird" in Python (pygame). The objective of this game is to pass through the maximum number of pipes without collision. The control is a single discrete command: whether or not to flap the bird's wings. The state consists of four consecutive gray frames (4 x 80 x 80). The RL agent is trained with DQN (BID12), and the update frequency of the deep network is 100 steps. The rewards used are:
Dense reward: DISPLAYFORM0 if the bird passes through a pipe; −1 if it collides with a pipe;
LSTM reward: DISPLAYFORM1,
where s_{t+1} is the absolute position of the bird, which can be obtained from the simulator or extracted from raw images by pattern matching or a CNN, and h'(s_t; θ_lstm) is the predicted absolute position. Hence, the LSTM is trained to predict the absolute position of the bird given the images. The τ for this experiment consists of 10 episodes of data from a trained agent in the repository by BID10. We also compared with the baseline behavior cloning method. In this experiment, σ is 0.02. FIG8 shows that the results with the LSTM reward are better than with the normal "hand-crafted" reward. The reason is that the normal dense reward only describes the traveling distance, whereas our LSTM reward indicates which absolute transitions of the bird are good. Also, the LSTM converges better than BC; the reason is that the number of demonstrations is not sufficient for the behavior cloning method.
In the final task, we consider a more complex environment in order to evaluate our proposed reward estimation method using only state information. Here we use the Super Mario Bros classic Nintendo video game environment . Our proposed method estimates reward values based on expert gameplay video data (using only the state information in the form of image frames).In this experiment, we also benchmarked against the recently proposed curiosity-based method ) using the implementation provided by the same authors BID17 ). This was used as the baseline reinforcement learning technique. Unlike in the actual game, here we always initialize Mario to the starting position rather than a previously saved checkpoint. This is a discrete control setup, where, Mario can make 14 types of actions 10. The state information consists of sequential input of four 42 x 42 gray image frames 11. Here we used the A3C RL algorithm BID13 ). We used gameplay of stage "1-1" for this experiment, with the objective of the agent being to travel as far as possible and achieve as high a score as possible. The rewards functions used in this experiment were as follows: DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 Curiosity DISPLAYFORM3 DISPLAYFORM4 where position t is the current position of Mario at time t, score t is the current score value at time t, and s t are screen images from the Mario game at time t. The position and score information are obtained using the Mario game emulator. In this experiment, we use a three-dimensional convolutional neural network BID8 ) (3D-CNN) for our temporal sequence prediction method. In order to capture expert demonstration data, we took 15 game playing videos by five different people 12. In total, the demonstration data consisted of 25000 frames. The length of skipped frames in input to the temporal sequence prediction model was 36, as humans cannot play as fast as an RL agent; however, we did not change the skip frame rate for the RL agent. The 3D-CNN consisted of 4 layers 13 and a final layer to reconstruct the image. The agent was trained using 50 epochs with a batch size of 8. We implemented two prediction methods for reward estimation. In the naïve method the Mario agent will end up getting positive rewards if it sits in a fixed place without moving. This is because it can avoid dying by just not moving. However, this is clearly a trivial suboptimal policy. Hence, we implemented the alternate reward function based on the same temporal sequence prediction model, but we apply a threshold value that prevents the agent from converging onto such a trivial solution. Here, the value of ζ is 0.025, which was calculated based on the reward value obtained by just staying fixed at the initial position. FIG10 shows the performance with the different reward functions. Here, the graphs directly show the average over multiple trials. As observed, the agent was unable to reach large distances even while using "hand-crafted" dense rewards and did not converge to the goal every time 14; this behavior was also observed by for their reward case. As observed from the average curves of FIG10, the proposed 3D-CNN method learns relatively faster as compared to the curiosity-based agent ). As expected the 3D-CNN (naïve) method converged to a solution of remaining fixed at the initial state. As future work, we hope to improve the performance in this game setting using deeper RL networks, as well as large input image sizes. 
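A small illustration of the thresholded reward described above (assumptions: a trained frame `predictor`, a Gaussian-style similarity, and a zero-below-threshold rule; the exact thresholding used by the authors may differ, and the `scale` constant is hypothetical):

```python
import numpy as np

def video_prediction_reward(predictor, frames_t, frame_next, zeta=0.025, scale=1e-4, naive=False):
    """Reward from the predicted vs. observed next frame.

    frames_t: stacked recent gray frames (the history fed to the 3D-CNN predictor).
    frame_next: the frame actually observed after acting.
    naive=True returns the raw similarity; otherwise similarities below the
    threshold zeta (tuned from the reward of standing still) give zero reward.
    """
    pred = predictor(frames_t)                  # predicted next frame
    err = np.mean((pred - frame_next) ** 2)     # pixel-wise squared error
    similarity = np.exp(-err / scale)           # Gaussian-style similarity in [0, 1]
    if naive:
        return similarity
    return similarity if similarity > zeta else 0.0
```

The naive variant corresponds to the trivial stay-put policy discussed above; the threshold removes the incentive to remain fixed at the initial position.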
Overall, estimating the reward from φ(s_t) without the need for action data allows an agent to learn a suitable policy directly from raw video data. The abundance of visual data creates ample opportunity for this type of reward estimation method to be explored further in different video game settings. Footnotes: (10) A single action is repeated for six consecutive frames; please refer to ) for details. (11) Every next six frames were skipped. (12) All videos consisted of games where the player succeeded in clearing the stage. (13) Two layers with (2 x 5 x 5) kernels and two layers with (2 x 3 x 3) kernels, all with 32 filters, and a stride applied every two layers. (14) In our experiments, even when trained for many steps, such as 3.0M, it only reached around 600-700 on average. In this paper, we proposed two variations of a reward estimation method via state prediction using state-only trajectories of the expert: one based on an autoencoder-based generative model and one based on temporal sequence prediction using an LSTM. Both models were used to calculate similarities between actual states and predicted states. We compared the methods with conventional reinforcement learning methods in five different environments. As overall trends, we found that the proposed method converged faster than using hand-crafted rewards in many cases, especially when the expert trajectories were given by humans, and also that the temporal sequence prediction model had better results than the generative model. It was also shown that the method could be applied to the case where the demonstration was given by videos. However, detailed trends were different for the different environments depending on the complexity of the tasks. Neither model of our proposed method was versatile enough to be applicable to every environment without any changes to the reward definition. As we saw in the necessity of the energy term of the reward for the reacher task and in the necessity of special handling of the initial position of Mario, the proposed method has room for improvement, especially in modeling the global temporal characteristics of trajectories. We would like to tackle these problems as future work. The DDPG actor network has fully-connected (fc) layers with 400 and 300 units, the critic network also has 400 and 300 fully-connected layers, and each layer has a ReLU (Nair & Hinton) activation function. We put a tanh activation function at the final layer of the actor network; without this modification, normal RL takes a long time to converge. Also, initial weights are set from a uniform distribution U(−0.003, 0.003). The exploration policy is an Ornstein-Uhlenbeck process (BID25) (θ = 0.15, µ = 0, σ = 0.01), the size of the replay memory is 1M, and the optimizer is Adam (Kingma & Ba). We implemented these experiments with the Keras-rl, Keras (BID4), and Tensorflow (BID0) libraries. Figure caption (FIG11): Scatter plots of end-effector positions (blue) for each state of the captured demonstrations τ_500, τ_1k, and τ_2k; each point is drawn with α = 0.01, and the fixed target position is also plotted (red). Note that this only plots the end-effector position; there is more variation in the other state values. For example, even if the end-effector positions were the same, the arms' poses (joint values) might be different. Note that τ_500 is not used in the experiment. The DDPG actor network has fully-connected layers with 64 and 64 units, the critic network also has 64 and 64 fully-connected layers, and each layer has a ReLU activation function.
Initial weights are set from a uniform distribution U(−0.003, 0.003). The exploration policy is an Ornstein-Uhlenbeck process (BID25) (θ = 0.15, µ = 0, σ = 0.01), the size of the replay memory is 500k, and the optimizer is Adam.
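For reference, a rough PyTorch rendering of the actor/critic configuration described in this appendix (the 400/300 fully-connected layers with ReLU and a tanh output; concatenating state and action at the critic input and applying the uniform initialization to every layer are assumptions on our part):

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """400-300 fully-connected actor with a tanh output, as described above."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 400), nn.ReLU(),
            nn.Linear(400, 300), nn.ReLU(),
            nn.Linear(300, action_dim), nn.Tanh(),  # tanh at the final layer
        )

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """400-300 fully-connected critic; state-action concatenation is an assumption."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 400), nn.ReLU(),
            nn.Linear(400, 300), nn.ReLU(),
            nn.Linear(300, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def init_uniform(module, low=-0.003, high=0.003):
    # Uniform initialization U(-0.003, 0.003), as stated in the appendix.
    if isinstance(module, nn.Linear):
        nn.init.uniform_(module.weight, low, high)
        nn.init.uniform_(module.bias, low, high)
```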
Reward Estimation from Game Videos
1,314
scitldr
Deep neural networks have achieved impressive performance in handling complicated semantics in natural language, while mostly treated as black boxes. To explain how the model handles compositional semantics of words and phrases, we study the hierarchical explanation problem. We highlight the key challenge is to compute non-additive and context-independent importance for individual words and phrases. We show some prior efforts on hierarchical explanations, e.g. contextual decomposition, do not satisfy the desired properties mathematically, leading to inconsistent explanation quality in different models. In this paper, we propose a formal way to quantify the importance of each word or phrase to generate hierarchical explanations. We modify contextual decomposition algorithms according to our formulation, and propose a model-agnostic explanation algorithm with competitive performance. Human evaluation and automatic metrics evaluation on both LSTM models and fine-tuned BERT Transformer models on multiple datasets show that our algorithms robustly outperform prior works on hierarchical explanations. We show our algorithms help explain compositionality of semantics, extract classification rules, and improve human trust of models. Recent advances in deep neural networks have led to impressive on a range of natural language processing (NLP) tasks, by learning latent, compositional vector representations of text data (; ; b). However, interpretability of the predictions given by these complex, "black box" models has always been a limiting factor for use cases that require explanations of the features involved in modeling (e.g., words and phrases) . Prior efforts on enhancing model interpretability have focused on either constructing models with intrinsically interpretable structures (; a), or developing post-hoc explanation algorithms which can explain model predictions without elucidating the mechanisms by which model works . Among these work, post-hoc explanation has come to the fore as they can operate over a variety of trained models while not affecting predictive performance of models. Towards post-hoc explanation, a major line of work, additive feature attribution methods (; ; ;), explain a model prediction by assigning importance scores to individual input variables. However, these methods may not work for explaining compositional semantics in natural language (e.g., phrases or clauses), as the importance of a phrase often is non-linear combination of the importance of the words in the phrase. Contextual decomposition (CD) and its hierarchical extension go beyond the additive assumption and compute the contribution solely made by a word/phrase to the model prediction (i.e., individual contribution), by decomposing the output variables of the neural network at each layer. Using the individual contribution scores so derived, these algorithms generate hierarchical explanation on how the model captures compositional semantics (e.g., stress or negation) in making predictions (see Figure 1 for example). (a) Input occlusion assigns a negative score for the word "interesting", as the sentiment of the phrase becomes less negative after removing "interesting" from the original sentence. (b) Additive attributions assign importance scores for words "not" and "interesting" by linearly distributing contribution score of "not interesting", exemplified with Shapley Values . Intuitively, only (c) Hierarchical explanations highlight the negative compositional effect between the words "not" and "interesting". 
However, despite contextual decomposition methods have achieved good in practice, what reveals extra importance that emerge from combining two phrases has not been well studied. As a , prior lines of work on contextual decomposition have focused on exploring model-specific decompositions based on their performance on visualizations. We identify the extra importance from combining two phrases can be quantified by studying how the importance of the combined phrase differs from the sum of the importance of the two component phrases on its own. Similar strategies have been studied in game theory for quantifying the surplus from combining two groups of players . Following the definition above, the key challenge is to formulate the importance of a phrase on it own, i.e., context independent importance of a phrase. However, while contextual decomposition algorithms try to decompose the individual contributions from given phrases for explanation, we show neither of them satisfy this context independence property mathematically. To this end, we propose a formal way to quantify the importance of each individual word/phrase, and develop effective algorithms for generating hierarchical explanations based on the new formulation. To mathematically formalize and efficiently approximate context independent importance, we formulate N -context independent importance of a phrase, defined as the difference of model output after masking out the phrase, marginalized over all possible N words surrounding the phrase in the sentence. We propose two explanation algorithms according to our formulation, namely the Sampling and Contextual Decomposition algorithm (SCD), which overcomes the weakness of contextual decomposition algorithms, and the Sampling and OCclusion algorithm (SOC), which is simple, model-agnostic, and performs competitively against prior lines of algorithms. We experiment with both LSTM and fine-tuned Transformer models to evaluate the proposed methods. Quantitative studies involving automatic metrics and human evaluation on sentiment analysis and relation extraction tasks show that our algorithms consistently outperform competitors in the quality of explanations. Our algorithms manage to provide hierarchical visualization of compositional semantics captured by models, extract classification rules from models, and help users to trust neural networks predictions. In summary, our work makes the following contributions: we identify the key challenges in generating post-hoc hierarchical explanations and propose a mathematically sound way to quantify context independent importance of words and phrases for generating hierarchical explanations; we extend previous post-hoc explanation algorithm based on the new formulation of N -context independent importance and develop two effective hierarchical explanation algorithms; and both experiments using automatic evaluation metrics and human evaluation demonstrate that the proposed explanation algorithms consistently outperform the compared methods (with both LSTM and Transformer as base models) over several datasets. We consider a sequence of low-dimensional word embeddings x 1:T:= (x 1, x 2, ..., x T), or denoted as x for brevity, as the input to a neural sequence model, such as standard RNNs, LSTM and Transformers . 
These neural models extract latent, compositional representations h 1:T (i.e., hidden states) from the input sequence x, and feed these hidden state vectors to a prediction layer to generate output in the label space (e.g., sentiment polarity of a sentence). For LSTM, we use the last hidden state h T to give unnormalized prediction scores s(x) ∈ R dc over d c label classes as follows. where W l ∈ R dc×d h is a trainable weight matrix. For Transformers, the representation corresponding to the " [CLS] " token at the final layer is fed to the prediction layer to generate scores s(x). Towards post-hoc explanation of s(x), a notable line of work, additive feature attribution methods (; ;), measure word-level importance to the model prediction s(x) by attributing a importance score φ(x i, x) to each word in the input sequence x i ∈ x. Such additive attribution methods are related to Shapley Values and thus can be proven to enjoy good properties, including that it has unique solution of a "fair" attribution . However, the additive assumption hinders these methods from explaining the complex interactions between words and compositional semantics in a sentence (e.g., modeling negation, transition, and emphasis in sentiment classification), as shown in Figure 1. To caputure non-linear compositional semantics, the line of work on contextual decomposition (CD) designs non-additive measures of importance from individual words/phrases to the model predictions, and further extend to agglomerative contextual decomposition (ACD) algorithm for generating hierarchical explanations. Given a phrase p = x i:j in the input sequence x, contextual decomposition (CD) attributes a score φ(p, x) as the contribution solely from p to the model's prediction s(x). Note that φ(p, x) does not equal to the sum of the scores of each word in the phrase, i.e., φ(p, x) = xi∈p φ(x i, x). Starting from the input layer, CD iteratively decomposes each hidden state h of the model into the contribution solely made by p, denoted as β, and the contributions involving the words outside the phrase p, denoted as γ, with the relation h = β + γ holds. Note that the algorithm also keeps the contribution from the bias term, denoted as ζ, temporally before element-wise multiplication. For a linear layer h = W i x t + b i with input x t, the contribution solely from p to h is defined as β = W i x t when x t is part of the phrase (i.e., x t ∈ p), and the contribution involving other words in the sentences (denoted as x\p) is defined as γ = 0. The contribution of the bias term ζ is thus b i. When x t lies outside of the phrase in the sentence (i.e., x t ∈ p), γ is quantified as W i x t and β is 0. In the cases when CD encounters element-wise multiplication operations h = h a * h b (e.g., in LSTMs), it eliminates the multiplicative interaction terms which involve the information outside of the phrase p. Specifically, suppose that h a and h b have been decomposed as When dealing with non-linear activation h = σ(h), CD computes the contribution solely from the phrase p as the average activation differences caused by β supposing γ is present or absent, Following the three strategies introduced above, CD decomposes all the intermediate outputs starting from the input layer, until reaching the final output of the model h T = β + γ. The logit score W l β is treated as the contribution of the given phrase p to the final prediction s(x). 
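To make these decomposition rules concrete, here is a simplified numpy sketch of the CD treatment of a linear layer, an element-wise product, and a nonlinearity (a rough illustration of the rules quoted above, not the original implementation):

```python
import numpy as np

def cd_linear(W, b, x_t, in_phrase):
    """Decompose a linear layer W x_t + b into (beta, gamma, zeta).

    in_phrase: True if the input token x_t belongs to the phrase p, else False.
    """
    if in_phrase:
        beta, gamma = W @ x_t, np.zeros(W.shape[0])
    else:
        beta, gamma = np.zeros(W.shape[0]), W @ x_t
    zeta = b  # bias contribution kept separately until a multiplicative interaction
    return beta, gamma, zeta

def cd_multiply(beta_a, gamma_a, beta_b, gamma_b):
    """Keep only interactions made solely by the phrase; cross terms go to gamma."""
    beta = beta_a * beta_b
    gamma = beta_a * gamma_b + gamma_a * beta_b + gamma_a * gamma_b
    return beta, gamma

def cd_activation(sigma, beta, gamma):
    """Eq. 2: phrase contribution = average activation shift caused by beta,
    with gamma present or absent."""
    beta_out = 0.5 * ((sigma(beta + gamma) - sigma(gamma)) +
                      (sigma(beta) - sigma(np.zeros_like(beta))))
    gamma_out = sigma(beta + gamma) - beta_out
    return beta_out, gamma_out
```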
As a follow-up study, extends CD algorithm to other families of neural network architectures, and proposes agglomerative contextual decomposition algorithm (ACD). The decomposition of activation functions is modified as β = σ(β). For the linear layer h = W h + b with its decomposition h = β + γ, the bias term b is decomposed proportionally and merged into the β term of h, based on β = W β + |W β|/(|W β| + |W γ|) · b. In this section, we start by identifying desired properties of phrase-level importance score attribution for hierarchical explanations. We propose a measure of context-independent importance of a phrase and introduce two explanation algorithms instantiated from the proposed formulation. x ⇠ p(x |x) Despite the empirical success of CD and ACD, no prior works analyze what common properties a score attribution mechanism should satisfy to generate hierarchical explanations that reveal compositional semantics formed between phrases. Here we identify two properties that an attribution method should satisfy to generate informative hierarchical explanations. Non-additivity Importance of a phrase φ(p, x) should be quantified by a non-linear function over the importance scores of all the component words, in contrast to the family of additive feature attribution methods. Context Independence For deep neural networks, when two phrases combine, their importance to predicting a class may greatly change. The surplus by combining two phrases can be quantified by the difference between the importance of the combined phrase and the sum of the importance of two phrases on its own. It follows how the surplus of combining two groups of players can be quantified in the game theory . According to the definition, the importance of two component phrases should be evaluated independently of each other. Formally, if we are interested in how combining two phrases p 1 and p 2 contribute to a specific prediction for an input x, we expect for input sentencesx where only p 2 is replaced to another phrase, the importance attribution for p 1 remains the same, i.e., φ(p 1, x) = φ(p 1,x). In the hierarchical explanation setting, we are interested in how combining a phrase and any other contextual words or phrases in the input x changes the prediction for the input x. Therefore, we expect φ(p, x) = φ(p,x) given the phrase p in two different contexts x andx. Limitations of CD and ACD Unfortunately, while CD and ACD try to construct decomposition so that β terms represent the contributions solely from a given a phrase, the assigned importance scores by these algorithms do not satisfy the context independence property mathematically. For CD, we see the computation of β involves the γ term of a specific input sentence in Eq. 2 (see Figure 2 (a) for visualization). Similarly, for ACD, the decomposition of the bias term involves the γ terms of a specific input sentence. As a , the β terms computed by both algorithms depend on the context of the phrase p. Regarding the decomposition of activation functions, the decomposition β = σ(β) in ACD seems plausible, which does not involve γ for computing β terms. However, in case every activation is decomposed in this way and suppose the bias terms are merged into β, the algorithm is equivalent to feeding only the phrase p into the classifier with all other input masked as zero. Empirical show unreliable explanation quality of both algorithms in some models. 
Given the limitation of prior works, we start by formulating a importance measure of phrases that satisfies both non-additivity and context independence property. Given a phrase p:= x i:j appearing in a specific input x 1:T, we first relax our setting and define the importance of a phrase independent of all the possible N -word contexts adjacent to it. The N-context independent importance is defined as the output difference after masking out the phrase p, marginalized over all the possible N -word contexts, denoted as x δ, around p in the input x. For an intuitive example, to evaluate the context independent importance up to one word of very in the sentence The film is very interesting in a sentiment analysis model, we sample some possible adjacent words before and after the word very, and observe the prediction difference after some practice of masking the word very. In Figure 2 (Right), we illustrated an example for the sampling and masking steps. The process of evaluating context independent importance is formally written as, where x −δ denotes the ing sequence after masking out an N -word context surrounding the phrase p from the input x. Here, x δ is a N -word sequence sampled from a distribution p(x δ |x −δ), which is conditioned on the phrase p as well as other words in the sentence x. Details on the sampling process will be elaborated later in the section. Accordingly, we use s(x −δ ; x δ) to denote the model prediction score after replacing the masked-out context x −δ with a sampled N -word sequence x δ. We use x\p to denote the operation of masking out the phrase p from the input sentence x. The specific implementation of this masking out operation varies across different explanation algorithms and is instantiated from their formulation. Following the notion of N -context independent importance, we define context-independent importance of a phrase p by increasing the size of the context N to sufficiently large (e.g., length of the sentence). The context independent importance can be equivalently written as follows. Computing Eqs. 3 and 4 are intractable as it requires integrating over a large number of variants of x δ as replacements (i.e., number of variants for x δ is exponential to the size of N). While it is possible to approximate the expectations in Eqs. 3 and 4 by sampling from the training text corpus, we find it common that a phrase occurs sparsely in the corpus. Therefore, we approximate the expectation by sampling from a language model pre-trained using the training corpus. The language model helps model a smoothed distribution of p(x δ |x −δ). In practice, all our explanation algorithms implements N -context independent importance following Eq. 3, where the size of the neighborhood N is a parameter to be specified. In contextual decomposition algorithm, the desirable context independence property is compromised when computing decomposition of activation functions, as discussed in Section 3.1. Following the new formulation on context-independent importance introduced in Section 3.2, we present a simple modification of the contextual decomposition algorithm, and develop a new sampling and contextual decomposition (SCD) algorithm for effective generation of hierarchical explanations. SCD only modifies the way to decompose activation functions in CD. 
Specifically, given the output h = s (l) (x) at an intermediate layer l with the decomposition h = β + γ, we decompose the activation value σ(h) into β + γ, with the following definition: i.e., β is defined as the expected difference between the activation values when the β term is present or absent. h is computed for different input sequences x with the contexts of the phrase p sampled from the distribution p(x δ |x −δ). Eq. 5 is a layer wise application of Eq. 4, where the masking operation is implemented with calculating σ(h − β). Figure 2 (b) provides a visualization for the decomposition. To perform sampling, we first pretrain a LSTM language model from two directions on the training data. For sampling, we mask the words that are not conditioned in p(x δ |x −δ). Some other sampling options include performing Gibbs sampling from a masked language model . The algorithm then obtain a set of samples S by sampling with the trained language model. For each sample in S, the algorithm records the input of the i-th non-linear activation function to obtain a sample set S h. During the explanation, the decomposition of the i-th non-linear activation function is calculated as, Some neural models such as Transformers involve operations that normalize over different dimensions of a vectors, e.g. softmax functions and layer normalization operations. We observe improved performance by not decomposing the normalizer of these terms when the phrase p is shorter than a threshold, assuming that the impact of p to the normalizer can be ignored. Besides, for element-wise multiplication in LSTM models, we treat them in the same way as other nonlinear operations and decompose them as Eq. 5, where the decomposition of h 1 h 2 is written as We show it is possible to fit input occlusion algorithms into our formulation. Input occlusion algorithms calculate the importance of p specific to an input example x by observing the prediction difference after replacing the phrase p with padding tokens, noted as 0 p, It is obvious that importance score by the input occlusion algorithm is dependent on the all the context words of p in x. To eliminate the dependence, we perform sampling around the phrase p. This leads to the Sampling and Occlusion (SOC) algorithm, which computes the importance of phrases as the average prediction difference after masking the phrase for each replacement of neighboring words in the input example. Similar to SCD, SOC samples neighboring words x δ from a trained language model p(x δ |x −δ) and obtain a set of neighboring word replacement S. For each replacement x δ ∈ S, the algorithm computes the model prediction differences after replacing the phrase p with padding tokens. The importance φ(p, x) is then calculated as the average prediction differences. Formally, the algorithm calculates, We evaluate explanation algorithms on both shallow LSTM models and deep fine-tuned BERT Transformer models. We use two sentiment analysis datasets, namely the Stanford Sentiment Treebank-2 (SST-2) dataset and the Yelp Sentiment Polarity dataset , as well as a relation extraction dataset, namely the TACRED dataset . The two tasks are modeled as binary and multi-class classification tasks respectively. For the SST-2 dataset, while it provides sentiment polarity scores for all the phrases on the nodes of the constituency parsing trees, we do not train our model on these phrases, and use these scores as the evaluation for the phrase level explanations. Our Transformer model is fine-tuned from pretrained BERT model. 
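Returning to the SOC estimator of Eq. 8, a minimal model-agnostic sketch is given below; `model` (returning class logits for a token-id tensor), `lm_sample_contexts` (drawing a replacement of the N-word neighborhood from the trained language model), and `pad_id` are assumed interfaces:

```python
import torch

def soc_importance(model, lm_sample_contexts, token_ids, phrase_span, label,
                   n_context=10, k_samples=20, pad_id=0):
    """Sampling-and-Occlusion (Eq. 8): average drop in the label logit after padding
    out the phrase, over K sampled replacements of the surrounding N-word context."""
    i, j = phrase_span                          # phrase p = tokens [i, j)
    total = 0.0
    for _ in range(k_samples):
        # sample replacement words for the N-word neighborhood around the phrase
        ctx = lm_sample_contexts(token_ids, span=(i, j), width=n_context)
        with torch.no_grad():
            logit_full = model(ctx)[label]
            occluded = ctx.clone()
            occluded[i:j] = pad_id              # mask the phrase with padding tokens
            logit_masked = model(occluded)[label]
        total += (logit_full - logit_masked).item()
    return total / k_samples
```

Larger K and N trade computation for lower variance, matching the K = 20, N = 10 setting used in the experiments below.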
See Appendix A for model details. We compare our explanation algorithm with the following baselines: Input occlusion and Integrated Gradient+SHAP (GradSHAP); and two algorithms applied for hierarchical explanations, namely Contextual Decomposition (CD) and Agglomerative Contextual Decomposition (ACD). We also compare with a naive but neglected baseline in the prior literature, which directly feeds the given phrase to the model and takes the prediction score as the importance of the phrase, noted as Direct Feed. (Figure caption: We generate explanations for all the phrases on the truncated constituency parsing tree, where positive sentiments are colored red and negative sentiments are colored blue. We see our method identifies positive segments in the overall negative sentence, such as "a breath of fresh air".) For our algorithms, we list the performance of the corpus-statistic-based approach (Statistic) for approximating context independent importance in Eq. 3, Sampling and Contextual Decomposition (SCD), and the Sampling and Occlusion (SOC) algorithm. We verify the performance of our algorithms in identifying important words and phrases captured by models. We follow the quantitative evaluation protocol proposed for the CD algorithm for evaluating word-level explanations, which computes the Pearson correlation between the coefficients learned by a linear bag-of-words model and the importance scores attributed by explanation methods, also noted as the word ρ. When the linear model is accurate, its coefficients can stand for the general importance of words. For evaluating phrase-level explanations, we note that the SST-2 dataset provides human-annotated real-valued sentiment polarity for each phrase on constituency parsing trees. We generate explanations for each phrase on the parsing tree and evaluate the Pearson correlation between the ground-truth scores and the importance scores assigned to phrases, also noted as the phrase ρ. This evaluation assumes that annotators judge the polarity of incomplete phrases by considering their effects in possible contexts. We draw K = 20 samples from N = 10 words adjacent to the phrase to be explained at the sampling step in our SOC and SCD algorithms. This parameter setting is a trade-off between efficiency and performance. Table 1 shows the word ρ and phrase ρ achieved by our algorithms and competitors. Generally, explanation algorithms that follow our formulation achieve the highest word ρ and phrase ρ for all datasets and models. SOC and SCD perform robustly on the deep Transformer model, achieving higher word ρ and phrase ρ than input occlusion and contextual decomposition algorithms by a large margin. We see the simple Direct Feed method provides promising results on shallow LSTM networks, but fails on deeper Transformer models. The statistic-based approximation of context independent importance, which does not employ a trained sampler, yields a competitive word ρ, but it is not competitive for phrase ρ, pushing the phrase ρ towards that of the input occlusion algorithm. We find it common that a long phrase does not exist in previously seen examples. Qualitative study also shows that our explanations visualize complicated compositional semantics captured by models, such as positive segments in a negative example, and adversative conjunctions connected with "but". We present an example explanation provided by the SOC algorithm in Figure 3 and the Appendix. We show our explanation algorithm is a natural fit for extracting phrase-level classification rules from neural classifiers.
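As a side note on the evaluation protocol, the word ρ described above can be computed roughly as follows (assuming a fitted bag-of-words linear model and an `explain_word` callable that returns an attribution score; both are placeholder interfaces):

```python
from scipy.stats import pearsonr

def word_rho(linear_coefs, vocab_index, sentences, explain_word):
    """Pearson correlation between bag-of-words coefficients and attributed importances."""
    coef_scores, attr_scores = [], []
    for tokens in sentences:
        for pos, tok in enumerate(tokens):
            if tok not in vocab_index:
                continue
            coef_scores.append(linear_coefs[vocab_index[tok]])
            attr_scores.append(explain_word(tokens, pos))  # e.g., SOC/SCD/occlusion score
    rho, _ = pearsonr(coef_scores, attr_scores)
    return rho
```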
With the agglomerative clustering algorithm in , our explanation effectively identify phrase-level classification patterns without evaluating all possible phrases in the sentence even when a predefined hierarchy does not exist. Figure 4 show an example of automatically constructed hierarchy and extracted classification rules in an example in the TACRED dataset. We follow the human evaluation protocol in and study whether our explanations help subjects to better trust model predictions. We ask subjects to rank the provided visualizations based on how they would like to trust the model. For the SST-2 dataset, we show subjects the predictions of the fine-tuned BERT model, and the explanations generated by SOC, SCD, ACD and GradSHAP algorithms for phrases. The phrase polarities are visualized in a hierarchy with the provided parsing tree of each sentence in the dataset. For the TACRED dataset, we show the explanations provided by SOC, SCD, CD and Direct Feed algorithms on the LSTM model. We binarilize the importance of a phrase by calculating the difference between its importance to the predicted class and the its top importance to other classes, and the hierarchies are constructed automatically with agglomerative clustering . Figure 5 shows average ranking of explanations, where 4 notes the best, and 1 notes the worst. On the SST-2 dataset, SOC achieve significantly higher ranking than ACD and GradSHAP, showing a p-value less than 0.05 and 0.001 respectively. On the TACRED dataset, SCD achieve the best ranking, showing significantly better ranking than CD and Direct Feed with p-value less than 10 −6. Both SOC and SCD algorithms require specifying the size of the context region N and the number of samples K. In Figure 6 (also Figure 7 in Appendix) we show the impact of these parameters. We also plot the performance curves when we pad the contexts instead of sampling. We see sampling the context achieves much better performance than padding the context. We also see word ρ and phrase ρ increase as the number of samples K increases. The overall performance also increases as the size of the context region N increases at the early stage, and saturates when N grows large, which implies words or phrases usually do not interact with the words that are far away them in the input. The saturation also implies the performance of trained language models can be a bottleneck of the performance. Interpretability of neural networks has been studied with vairous techniques, including probing learned features with auxiliary tasks , or designing models with inherent interpretability . A major line of work, local explanation algorithms, explains predictions by assigning importance scores for input features. This line of work include input occlusion (Kádár et al., 2017), gradient based algorithms (; ;), additive feature attribution methods (; ;), among which Shapley value based approaches have been studied intensively because of its good mathematical properties. Researchers also study how to efficiently marginalize over alternative input features to be explained for input occlusion algorithms, while our research show extra focus could be placed on marginalizing over contexts. Regarding explanations of models with structured inputs, propose L-Shapley and C-Shapley for efficient approximation of Shapley values, with a similar hypothesis with us that the importance of a word is usually strongly dependent on its neighboring contexts. 
propose a feature selection based approach for explanation in an information theoretic perspective. On the other hand, global explanation algorithms have also been studied for identifying generally important features, such as Feature Importance Ranking Measure , Accumulated Local Effects (. We note that the context independence property in our proposed methods implies we study hierarchical explanation as a global explanation problem . Compared with local explanation algorithms, global explanation algorithms are less studied for explaining individual predictions , because they reveal the average behavior of models. However, with a hierarchical organization, we show global explanations are also powerful at explaining individual predictions, achieving better human evaluation scores and could explain compositional semantics where local explanation algorithms such as additive feature attribution algorithms totally fail. Moreover, we note that the use of explanation algorithms is not exclusive; we may apply explanation algorithms of different categories to make a more holistic explanation of model predictions. In this work, we identify two desirable properties for informative hierarchical explanations of predictions, namely the non-additivity and context-independence. We propose a formulation to quantify context independent importance of words and phrases that satisfies the properties above. We revisit the prior line of works on contextual decomposition algorithms, and propose Sampling and Contextual Decomposition (SCD) algorithm. We also propose a simple and model agnostic explanation algorithm, namely the Sampling and Occlusion algorithm (SOC). Experiments on multiple datasets and models show that our explanation algorithms generate informative hierarchical explanations, help to extract classification rules from models, and enhance human trust of models. Table 2: Phrase-level classification patterns extracted from models. We show the of SCD and SOC respectively for the SST-2 and the TACRED dataset. Our LSTM classifiers use 1 layer unidirectional LSTM and the number of hidden units is set to 128, 500, and 300 for SST-2, Yelp, and TACRED dataset respectively. For all models, we load the pretrained 300-dimensional Glove word vectors . The language model sampler is also built on LSTM and have the same parameter settings as the classifiers. Our Transformer models are fine-tuned from pretrained BERT models , which have 12 layers and 768 hidden units of per representation. On three datasets, LSTM models achieve 82% accuracy, 95% accuracy, and 0.64 F1 score on average. The fine-tuned BERT models achieve 92% accuracy, 96% accuracy, and 0.68 F1 score on average. We use the same parameter settings between LSTM classifiers and language models on three datasets. For computing context independent importance of a phrase, an intuitive and simple alternative approach, which is nevertheless neglected in prior literature, is to only feed the input to the model and treat the prediction score as the explanation. In Table 1, while the score of the Direct Feed is lower than that of the best performing algorithms, the score is rather competitive. The potential risk of this explanation is that it assumes model performs reasonably on incomplete sentence fragments that are significantly out of the data distribution. As a , the explanation of short phrases can be misleading. To simulate the situation, we train a LSTM model on inversed labels on isolate words, in addition to the origin training sentences. 
The model could achieve the same accuracy as the original LSTM model. However, the word ρ and the phrase ρ of Direct Feed drop by a large margin, showing a word ρ of -0.38 and 0.09. SOC and SCD are still robust on the adverse LSTM model, both showing a word ρ and phrase ρ of more than 0.60 and 0.55. The masking operation could also cause performance drop because the masked sentence can be out of data distribution when explaining long phrases. For SOC, the risk can be resolved by implementing the masking operation of the phrase p by another round of sampling from a language model conditioned on its context x −p, but we do not find empirical evidence showing that it improves performance.
We propose measurement of phrase importance and algorithms for hierarchical explanation of neural sequence model predictions
1,315
scitldr
Stochastic gradient descent (SGD) with stochastic momentum is popular in nonconvex stochastic optimization and particularly for the training of deep neural networks. In standard SGD, parameters are updated by improving along the path of the gradient at the current iterate on a batch of examples, where the addition of a "momentum" term biases the update in the direction of the previous change in parameters. In non-stochastic convex optimization one can show that a momentum adjustment provably reduces convergence time in many settings, yet such results have been elusive in the stochastic and non-convex settings. At the same time, a widely-observed empirical phenomenon is that in training deep networks stochastic momentum appears to significantly improve convergence time, and variants of it have flourished in the development of other popular update methods, e.g. ADAM, AMSGrad, etc. Yet theoretical justification for the use of stochastic momentum has remained a significant open question. In this paper we propose an answer: stochastic momentum improves deep network training because it modifies SGD to escape saddle points faster and, consequently, to more quickly find a second-order stationary point. Our theoretical results also shed light on the related question of how to choose the ideal momentum parameter: our analysis suggests that β should be large (close to 1), which comports with empirical findings. We also provide experimental findings that further validate these results. SGD with stochastic momentum has been a de facto algorithm in nonconvex optimization and deep learning. It has been widely adopted for training machine learning models in various applications. Modern techniques in computer vision (e.g. ; ; ;), speech recognition (e.g.), natural language processing (e.g.), and reinforcement learning (e.g.) use SGD with stochastic momentum to train models. The advantage of SGD with stochastic momentum has been widely observed (; ;). It has been demonstrated that training deep neural nets by SGD with stochastic momentum achieves faster convergence compared with standard SGD (i.e. without momentum). The success of momentum makes it a necessary tool for designing new optimization algorithms in optimization and deep learning. For example, all the popular variants of adaptive stochastic gradient methods like Adam or AMSGrad (Reddi et al. (2018b)) include the use of momentum. Despite the wide use of stochastic momentum (Algorithm 1) in practice, justification for the clear empirical improvements has remained elusive, as have any mathematical guidelines for actually setting the momentum parameter; it has been observed that large values (e.g. β = 0.9) work well in practice. It should be noted that Algorithm 1 is the default momentum method in popular software packages such as PyTorch and Tensorflow.
Algorithm 1 (SGD with stochastic heavy ball momentum):
1: Required: step size parameter η and momentum parameter β.
2: Init: w_0 ∈ R^d and m_{−1} = 0 ∈ R^d.
3: for t = 0 to T do
4: Given the current iterate w_t, obtain the stochastic gradient g_t := ∇f(w_t; ξ_t).
5: Update the stochastic momentum m_t := β m_{t−1} + g_t.
6: Update the iterate w_{t+1} := w_t − η m_t.
7: end for
In this paper we provide a theoretical analysis for SGD with momentum. We identify some mild conditions that guarantee SGD with stochastic momentum will provably escape saddle points faster than standard SGD, which provides clear evidence for the benefit of using stochastic momentum. For stochastic heavy ball momentum, a weighted average of stochastic gradients at the visited points is maintained.
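Algorithm 1 can be rendered directly as a short sketch; `stochastic_grad`, standing for the oracle g_t = ∇f(w_t; ξ_t) on a sampled batch, is an assumed interface:

```python
import numpy as np

def sgd_heavy_ball(w0, stochastic_grad, eta=0.01, beta=0.9, T=1000):
    """Algorithm 1: SGD with stochastic heavy ball momentum."""
    w = np.array(w0, dtype=float)
    m = np.zeros_like(w)           # m_{-1} = 0
    for t in range(T + 1):
        g = stochastic_grad(w)     # g_t = grad f(w_t; xi_t) on a sampled batch
        m = beta * m + g           # m_t = beta * m_{t-1} + g_t
        w = w - eta * m            # w_{t+1} = w_t - eta * m_t
    return w
```

Setting β = 0 recovers standard SGD, while β close to 1 gives the long-memory weighted averaging of past stochastic gradients described above.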
The new iterate is computed as the current iterate minus a step in the direction of the momentum. Our analysis shows that these updates can amplify a component in an escape direction of the saddle points. In this paper, we focus on finding a second-order stationary point for smooth non-convex optimization by SGD with stochastic heavy ball momentum. Specifically, we consider the stochastic nonconvex optimization problem min_{w∈R^d} f(w) := E_{ξ∼D}[f(w; ξ)], where we overload the notation so that f(w; ξ) represents a stochastic function induced by the randomness ξ while f(w) is the expectation of the stochastic functions. An (ε, ε)-second-order stationary point w satisfies ‖∇f(w)‖ ≤ ε and ∇²f(w) ⪰ −εI. Obtaining a second-order guarantee has emerged as a desired goal in the nonconvex optimization community. Since finding a global minimum or even a local minimum in general nonconvex optimization can be NP-hard (; ; ;), most of the papers in nonconvex optimization target reaching an approximate second-order stationary point under additional assumptions like Lipschitzness of the gradients and the Hessian (e.g. ; et al.). We follow these related works for the goal and aim at showing the benefit of the use of momentum in reaching an (ε, ε)-second-order stationary point. We introduce a required condition, akin to a model assumption made in prior work, that ensures the dynamic procedure in Algorithm 2 produces updates with suitable correlation with the negative curvature directions of the function f. Definition 1. Assume, at some time t, that the Hessian H_t = ∇²f(w_t) has some eigenvalue smaller than −ε and that ‖∇f(w_t)‖ ≤ ε. Let v_t be the eigenvector corresponding to the smallest eigenvalue of ∇²f(w_t). The stochastic momentum m_t satisfies Correlated Negative Curvature (CNC) at t with parameter γ > 0 if E_t[⟨m_t, v_t⟩²] ≥ γ. As we will show, the recursive dynamics of SGD with heavy ball momentum helps in amplifying the escape signal γ, which allows it to escape saddle points faster. We show that, under the CNC assumption and some minor constraints that upper-bound the parameter β, if SGD with momentum has properties called Almost Positively Aligned with Gradient (APAG), Almost Positively Correlated with Gradient (APCG), and Gradient Alignment or Curvature Exploitation (GrACE), defined in a later section, then it takes T = O((1 − β) log(1/((1 − β)ε)) ε^{−10}) iterations to return an (ε, ε)-second-order stationary point. Alternatively, one can obtain an (ε, √ε)-second-order stationary point in T = O((1 − β) log(1/((1 − β)ε)) ε^{−5}) iterations. Our theoretical result demonstrates that a larger momentum parameter β can help in escaping saddle points faster. As saddle points are pervasive in the loss landscape of optimization and deep learning, the result sheds light on explaining why SGD with momentum enables faster training in optimization and deep learning. Notation: In this paper we use E_t[·] to represent the conditional expectation E[·|w_1, w_2, ..., w_t], which fixes the randomness up to but not including t; note that w_t was determined at t − 1. 2.1 A THOUGHT EXPERIMENT. Let us provide some high-level intuition about the benefit of stochastic momentum with respect to avoiding saddle points. In an iterative update scheme, at some time t_0 the parameters w_{t0} can enter a saddle point region, that is, a place where the Hessian ∇²f(w_{t0}) has a non-trivial negative eigenvalue, say λ_min(∇²f(w_{t0})) ≤ −ε, and the gradient ∇f(w_{t0}) is small in norm, say ‖∇f(w_{t0})‖ ≤ ε.
The challenge here is that gradient updates may drift only very slowly away from the saddle point, and may not escape this region; see prior work for additional details. On the other hand, if the iterates were to move in one particular direction, namely along v_{t0}, the eigenvector corresponding to the smallest eigenvalue of ∇²f(w_{t0}), then a fast escape is guaranteed under certain constraints on the step size η (see e.g. prior analyses). While the negative eigenvector could be computed directly, this 2nd-order method is prohibitively expensive and hence we typically aim to rely on gradient methods. With this in mind, prior work that studies non-momentum SGD makes an assumption akin to our CNC property described above: that each stochastic gradient g_{t0} is strongly non-orthogonal to v_{t0}, the direction of large negative curvature. This suffices to drive the updates out of the saddle point region. In the present paper we study stochastic momentum, and our CNC property requires that the update direction m_{t0} is strongly non-orthogonal to v_{t0}; more precisely, E_{t0}[⟨m_{t0}, v_{t0}⟩²] ≥ γ > 0. We are able to take advantage of this analysis to establish that updates begin to escape a saddle point region for similar reasons. Further, this effect is amplified in successive iterations through the momentum update when β is close to 1. Assume that at some w_{t0} we have m_{t0} which possesses significant correlation with the negative curvature direction v_{t0}; then on successive rounds m_{t0+1} is quite close to β m_{t0}, m_{t0+2} is quite close to β² m_{t0}, and so forth; see Figure 1 for an example. This provides an intuitive perspective on how momentum might help accelerate the escape process. Yet one might ask: does this procedure provably contribute to the escape process and, if so, what is the aggregate performance improvement of the momentum? We answer the first question in the affirmative, and we answer the second question essentially by showing that momentum can help speed up saddle-point escape by a multiplicative factor of 1 − β. On the negative side, we also show that β is constrained and may not be chosen arbitrarily close to 1. Let us now establish, empirically, the clear benefit of stochastic momentum on the problem of saddle-point escape. We construct two stochastic optimization tasks, each of which exhibits at least one significant saddle point. The two objectives are as follows. The first of these was considered by (; Reddi et al. (2018a)) and represents a very straightforward non-convex optimization challenge, with an embedded saddle given by the matrix H := diag([1, −0.1]), and stochastic Gaussian perturbations given by b_i ∼ N(0, diag([0.1, 0.001])); the small variance in the second component provides lower noise in the escape direction. Here we have set n = 10. Observe that the origin is in the neighborhood of saddle points and has objective value zero. Figure caption, Fig. 4a: We plot convergence in function value f(·). Initialization is always set as w_0 = 0. All the algorithms use the same step size η = 5 × 10^{−5}. Fig. 4b: We plot convergence in relative distance to the true model w*, defined as min(‖w_t − w*‖, ‖w_t + w*‖)/‖w*‖, which more appropriately captures progress as the global sign of the objective is unrecoverable. All the algorithms are initialized at the same point w_0 ∼ N(0, I_d/(10000d)) and use the same step size η = 5 × 10^{−4}.
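To make the first toy problem concrete, here is a small reconstruction of the setup under the stated parameters (H = diag([1, −0.1]), b_i ∼ N(0, diag([0.1, 0.001])), n = 10, w_0 = 0, η = 5 × 10^{−5}); the exact objective beyond the quadratic-plus-linear-noise form is not given in this excerpt, so that form is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.diag([1.0, -0.1])                                   # embedded saddle at the origin
b = rng.normal(0.0, np.sqrt([0.1, 0.001]), size=(10, 2))   # per-example perturbations

def stochastic_grad(w):
    i = rng.integers(len(b))                # sample one example
    return H @ w + b[i]                     # grad of 0.5*w'Hw + b_i'w (assumed form)

def run(beta, eta=5e-5, T=200_000):
    w, m = np.zeros(2), np.zeros(2)
    for _ in range(T):
        m = beta * m + stochastic_grad(w)
        w = w - eta * m
    return 0.5 * w @ H @ w                  # deterministic part of the objective

for beta in (0.0, 0.5, 0.9):
    print(beta, run(beta))                  # more negative = further escaped from the saddle
```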
The second objective appears in the phase retrieval problem, which has real applications in the physical sciences (Candès et al.). In phase retrieval (footnote 3: it is known that phase retrieval is nonconvex and has the so-called strict saddle property: every local minimizer $\{w^*, -w^*\}$ is global up to phase, and each saddle exhibits negative curvature; see e.g. Sun et al. (2015)), one wants to find an unknown $w^* \in \mathbb{R}^d$ with access to but a few samples $y_i = (a_i^\top w^*)^2$; the design vector $a_i$ is known a priori. Here the design vectors $a_i$ and the unknown $w^*$ are sampled at random. The empirical findings, displayed in Figure 2, are quite stark: for both objectives, convergence is significantly accelerated by larger choices of $\beta$. In the first objective (Figure 4a), we see each optimization trajectory entering a saddle point region, apparent from the "flat" progress, yet we observe that large-momentum trajectories escape the saddle much more quickly than those with smaller momentum. A similar effect appears in Figure 4b. To the best of our knowledge, this is the first reported empirical finding that establishes the dramatic speedup of stochastic momentum for finding an optimal solution in phase retrieval. Heavy ball method: The heavy ball method was originally proposed by Polyak (1964). It has been observed that this algorithm, even in the deterministic setting, provides no convergence speedup over standard gradient descent, except in some highly structured cases such as convex quadratic objectives where an "accelerated" rate is possible. We provide a comprehensive survey of the related works about the heavy ball method in Appendix A. Reaching a second-order stationary point: As we mentioned earlier, many works aim at reaching a second-order stationary point. We classify them into two categories: specialized algorithms and simple GD/SGD variants. Specialized algorithms are those designed to exploit the negative curvature explicitly and escape saddle points faster than the ones without the explicit exploitation; simple GD/SGD variants make only minimal changes to standard GD/SGD. Our work belongs to the latter category. In this category, perhaps the pioneering works show that explicitly adding isotropic noise in each iteration guarantees that GD escapes saddle points and finds a second-order stationary point with high probability. Following these, the CNC-SGD work assumes that the stochastic gradient inherently has a component that allows escape. Specifically, it makes the Correlated Negative Curvature (CNC) assumption for the stochastic gradient $g_t$, so that $\mathbb{E}_t[\langle g_t, v_t\rangle^2] \geq \gamma > 0$. The assumption allows the algorithm to avoid the procedure of perturbing the updates by adding isotropic noise. Our work is motivated by this line but assumes CNC for the stochastic momentum $m_t$ instead. In Appendix A, we compare the results of our work with the related works. We assume that the gradient $\nabla f$ is $L$-Lipschitz; that is, $f$ is $L$-smooth. Further, we assume that the Hessian $\nabla^2 f$ is $\rho$-Lipschitz. These two properties ensure that $\|\nabla f(w) - \nabla f(w')\| \leq L\|w - w'\|$ and $f(w') \leq f(w) + \langle \nabla f(w), w' - w\rangle + \tfrac{1}{2}(w'-w)^\top \nabla^2 f(w)(w'-w) + \tfrac{\rho}{6}\|w'-w\|^3$, for all $w, w'$. Furthermore, we assume that the stochastic gradient has bounded noise, $\|\nabla f(w) - \nabla f(w;\xi)\|^2 \leq \sigma^2$, and that the norm of the stochastic momentum is bounded so that $\|m_t\| \leq c_m$. We denote by $\Pi_i M_i$ the matrix product of matrices $\{M_i\}$ and we use $\|M\|$ to denote the spectral norm of a matrix $M$. Our analysis of stochastic momentum relies on three properties of the stochastic momentum dynamic. These properties are somewhat unusual, but we argue they should hold in natural settings, and later we aim to demonstrate that they hold empirically in a couple of standard problems of interest. Definition 2.
We say that SGD with stochastic momentum satisfies Almost Positively Aligned with Gradient (APAG) 4 if we have $\mathbb{E}_t[\langle m_t, \nabla f(w_t)\rangle] \geq \tfrac{1}{2}\|\nabla f(w_t)\|^2$. We say that SGD with stochastic momentum satisfies Almost Positively Correlated with Gradient (APCG) with parameter $\tau$ if there exists $c > 0$ such that the momentum is almost positively correlated with the gradient in the Mahalanobis norm induced by a PSD matrix $M_t$, where $M_t$ is defined through products of the matrices $G_{s,t}$ (each of the form $I$ minus $\eta$ times a momentum-weighted multiple of the Hessian) for any integer $1 \leq k \leq \tau - 1$, and $\eta$ is any step size chosen so that each $G_{s,t}$ is PSD. Definition 3. We say that SGD with momentum exhibits Gradient Alignment or Curvature Exploitation (GrACE) if there exists $c_h \geq 0$ that bounds, in expectation, the sum of a gradient-alignment term and a curvature-exploitation term along the update. APAG requires that the momentum term $m_t$ must, in expectation, not be significantly misaligned with the gradient $\nabla f(w_t)$. This is a very natural condition when one sees that the momentum term is acting as a biased estimate of the gradient of the deterministic $f$. APAG demands that the bias cannot be too large relative to the size of $\nabla f(w_t)$. Indeed this property is only needed in our analysis when the gradient is large (i.e. $\|\nabla f(w_t)\| \geq \epsilon$), as it guarantees that the algorithm makes progress; our analysis does not require APAG to hold when the gradient is small. APCG is a related property, but requires that the current momentum term $m_t$ is almost positively correlated with the gradient $\nabla f(w_t)$, measured in the Mahalanobis norm induced by $M_t$. It may appear to be an unusual object, but one can view the PSD matrix $M_t$ as measuring something about the local curvature of the function with respect to the trajectory of the SGD-with-momentum dynamic. We will show that this property holds empirically on two natural problems for a reasonable constant $c$. APCG is only needed in our analysis when the update is in a saddle region with significant negative curvature, $\|\nabla f(w)\| \leq \epsilon$ and $\lambda_{\min}(\nabla^2 f(w)) \leq -\epsilon$. Our analysis does not require APCG to hold when the gradient is large or when the update is at an $(\epsilon,\epsilon)$-second-order stationary point. For GrACE, the first term on the l.h.s. measures the alignment between the stochastic momentum $m_t$ and the gradient $\nabla f(w_t)$, while the second term measures the curvature exploitation. The first term is small (or even negative) when the stochastic momentum $m_t$ is aligned with the gradient $\nabla f(w_t)$, while the second term is small (or even negative) when the stochastic momentum $m_t$ can exploit a negative curvature (i.e. the subspace of eigenvectors that corresponds to the negative eigenvalues of the Hessian $\nabla^2 f(w_t)$, if it exists). Overall, a small sum of the two terms (and, consequently, a small $c_h$) allows one to bound the function value of the next iterate (see Lemma 8). In Figure 3, we report some quantities related to APAG and APCG, as well as the gradient norm, when solving the previously discussed problems using SGD with momentum. [Figure 3 caption: sub-figures (b) and (e) plot an APAG-related quantity versus iterations; for (b) we only report values when the gradient is large ($\|\nabla f(w_t)\| \geq 0.02$), and the value is larger than $-0.5$ except at the transition, while for (e) the value is almost always nonnegative. Sub-figures (c) and (f) plot an APCG-related quantity involving the products $\Pi_s G_{s,t}$, reported only when the update is in the region of saddle points; for (f) we let $M_t = (\Pi_{s=1}^{500} G_{s,t})(\Pi_{s=1}^{500} G_{s,t})$ and observe that the value is almost always nonnegative. The figures imply that SGD with momentum has the APAG and APCG properties in the experiments; interestingly, for the phase retrieval problem the expected values might actually be nonnegative.] We also report a quantity regarding GrACE in Figure 4 in the appendix. The high-level idea of our analysis follows a similar template to prior escape analyses.
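The APAG-style quantity reported in Figure 3 is straightforward to monitor during training. Below is a hedged sketch of such a diagnostic on the toy problem from Section 2.2, assuming the APAG form stated above; `full_grad` is the exact gradient of the mean objective, which is available in this synthetic setting (in practice one would use a large-batch estimate).

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.diag([1.0, -0.1])
n = 10
b = rng.normal(0.0, np.sqrt([0.1, 0.001]), size=(n, 2))

full_grad = lambda w: H @ w + b.mean(axis=0)        # gradient of the mean objective
stoch_grad = lambda w: H @ w + b[rng.integers(n)]   # one-sample stochastic gradient

eta, beta = 5e-5, 0.9
w, m = rng.normal(size=2), np.zeros(2)
ratios = []
for t in range(50_000):
    g_full = full_grad(w)
    m = beta * m + stoch_grad(w)
    # APAG-style diagnostic: <m_t, grad f(w_t)> / ||grad f(w_t)||^2, reported
    # only when the gradient is "large", mirroring Figure 3(b).
    if np.linalg.norm(g_full) >= 0.02:
        ratios.append((m @ g_full) / (g_full @ g_full))
    w = w - eta * m

print("fraction of monitored steps with ratio >= 0.5:", np.mean(np.array(ratios) >= 0.5))
```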
Our proof is structured into three cases: either (a) $\|\nabla f(w)\| \geq \epsilon$, or (b) $\|\nabla f(w)\| \leq \epsilon$ and $\lambda_{\min}(\nabla^2 f(w)) \leq -\epsilon$, or otherwise (c) $\|\nabla f(w)\| \leq \epsilon$ and $\lambda_{\min}(\nabla^2 f(w)) \geq -\epsilon$, meaning we have arrived in a second-order stationary region. The precise algorithm we analyze is Algorithm 2, which is identical to Algorithm 1 except that we boost the step size to a larger value $r$ on occasion. (Footnote 4: our analysis still goes through if one replaces the $\tfrac{1}{2}$ on the r.h.s. of the APAG inequality with any larger number $c < 1$; the resulting iteration complexity would only be a constant multiple worse.) Algorithm 2 — Required: step size parameters $r$ and $\eta$, momentum parameter $\beta$, and period parameter $T_{thred}$. For each iteration $t$: get a stochastic gradient $g_t$ at $w_t$ and set the stochastic momentum $m_t := \beta m_{t-1} + g_t$; set the learning rate $\tilde\eta := \eta$ unless $(t \bmod T_{thred}) = 0$, in which case $\tilde\eta := r$; update $w_{t+1} = w_t - \tilde\eta m_t$ (a code sketch of this procedure is given below). We will show that the algorithm makes progress in cases (a) and (b). In case (c), when the goal has already been met, further execution of the algorithm only weakly hurts progress. Ultimately, we prove that a second-order stationary point is arrived at with high probability. While our proof borrows tools from prior work, much of the momentum analysis is entirely novel to our knowledge. Theorem 1. Assume that the stochastic momentum satisfies CNC, and set the parameters $r$, $\eta$, and $T_{thred}$ as specified in Table 3 in the appendix. If SGD with momentum (Algorithm 2) has the APAG property when the gradient is large ($\|\nabla f(w)\| \geq \epsilon$), the APCG$_{T_{thred}}$ property when it enters a region of saddle points that exhibits a negative curvature ($\|\nabla f(w)\| \leq \epsilon$ and $\lambda_{\min}(\nabla^2 f(w)) \leq -\epsilon$), and the GrACE property throughout the iterations, then it reaches an $(\epsilon,\epsilon)$-second-order stationary point in $T = O\big((1-\beta)\log\big(\tfrac{L c_m \sigma^2 \rho\, c\, c_h}{(1-\beta)\delta\gamma}\big)\,\epsilon^{-10}\big)$ iterations with high probability $1-\delta$. The theorem implies the advantage of using stochastic momentum for SGD: a higher $\beta$ leads to reaching a second-order stationary point faster. As we will show in the following, this is because a higher $\beta$ enables escaping the saddle points faster. In Subsection 3.2.1, we provide some key details of the proof of Theorem 1. The interested reader can read a high-level sketch of the proof, as well as the detailed version, in Appendix G. Remark 1 (constraints on $\beta$): We also need some minor constraints on $\beta$ so that $\beta$ cannot be too close to 1; they are listed in Appendix E.1. Remark 2 (escape-period comparison): For CNC-SGD, based on Table 3 in their paper, the corresponding quantity is $T_{thred} = \tilde{O}\big(\tfrac{1}{\eta}\big)$. One can clearly see that the $T_{thred}$ of our result has a $1-\beta$ dependency, which makes it smaller than that of CNC-SGD for the same $\eta$, and consequently demonstrates escaping saddle points faster with momentum. Remark 3 (finding a second-order stationary point): Denote by $\ell$ a number such that $\|g_t\| \leq \ell$ for all $t$. In Appendix G.3, we show that in the high momentum regime, Algorithm 2 is strictly better than CNC-SGD, which means that a higher momentum can help find a second-order stationary point faster. Empirically, we find that $c \approx 0$ (Figure 3) and $c_h \approx 0$ (Figure 4) in the phase retrieval problem, so the condition is easily satisfied for a wide range of $\beta$. In this subsection, we analyze the process of escaping saddle points by SGD with momentum. Denote by $t_0$ any time such that $(t_0 \bmod T_{thred}) = 0$. Suppose that the iterate enters a region exhibiting a small gradient but a large negative eigenvalue of the Hessian (i.e. $\|\nabla f(w_{t_0})\| \leq \epsilon$ and $\lambda_{\min}(\nabla^2 f(w_{t_0})) \leq -\epsilon$). We want to show that it takes at most $T_{thred}$ iterations to escape the region and, whenever it escapes, the function value decreases by at least $F_{thred}$ on expectation, where the precise expression of $F_{thred}$ is determined later in Appendix E. The technique that we use is proof by contradiction.
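Before continuing with the escape analysis, here is a minimal sketch of Algorithm 2 as just described: plain SGD with heavy ball momentum, except that every $T_{thred}$ iterations the step size is boosted to the larger value $r$. The stochastic gradient oracle is again a placeholder.

```python
import numpy as np

def sgd_momentum_with_boost(w0, stochastic_grad, eta, r, beta, t_thred, num_steps):
    """Algorithm 2: SGD with stochastic heavy ball momentum and a periodic large step.

    Every t_thred iterations the learning rate is set to r (> eta); otherwise it is eta.
    """
    w = w0.copy()
    m = np.zeros_like(w)
    for t in range(num_steps):
        g = stochastic_grad(w)                   # g_t
        m = beta * m + g                         # m_t = beta * m_{t-1} + g_t
        lr = r if t % t_thred == 0 else eta      # boosted step "on occasion"
        w = w - lr * m                           # w_{t+1} = w_t - lr * m_t
    return w
```

Intuitively, the occasional large step injects a sizeable component along $m_t$ (which, by CNC, correlates with the escape direction), and the subsequent momentum updates amplify it.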
Assume that the function value on expectation does not decrease at least F thred in T thred iterations. Then, we get an upper bound of the expected distance E t0 [w t0+T thred − w t0 2] ≤ C upper. Yet, by leveraging the negative curvature, we also show a lower bound of the form E t0 [w t0+T thred − w t0 2] ≥ C lower. The analysis will show that the lower bound is larger than the upper bound (namely, C lower > C upper), which leads to the contradiction and concludes that the function value must decrease at least F thred in T thred iterations on expectation. Since 6 ), the dependency on β suggests that larger β can leads to smaller T thred, which implies that larger momentum helps in escaping saddle points faster. Lemma 1 below provides an upper bound of the expected distance. The proof is in Appendix C. Lemma 1. Denote t 0 any time such that We see that C upper,t in Lemma 1 is monotone increasing with t, so we can define C upper:= C upper,T thred. Now let us switch to obtaining the lower bound of E t0 [w t0+T thred − w t0 2]. The key to get the lower bound comes from the recursive dynamics of SGD with momentum. Lemma 2. Denote t 0 any time such that (t 0 mod T thred) = 0. Let us define a quadratic ap- Then we can write w t0+t − w t0 exactly using the following decomposition. The proof of Lemma 2 is in Appendix D. Furthermore, we will use the quantities q v,t−1, q m,t−1, q q,t−1, q w,t−1, q ξ,t−1 as defined above throughout the analysis. Lemma 3. Following the notations of Lemma 2, we have that We are going to show that the dominant term in the lower bound of, which is the critical component for ensuring that the lower bound is larger than the upper bound of the expected distance. Following the conditions and notations in Lemma 1 and Lemma 2, we have that Proof. We know that λ min (H) ≤ − < 0. Let v be the eigenvector of the Hessian H with unit norm that corresponds to λ min (H) so that Hv = λ min (H)v. We have where (a) is because v is with unit norm, (b) is by Cauchy-Schwarz inequality, (c), (d) are by the definitions, and (e) is by the CNC assumption so that Observe that the lower bound in is monotone increasing with t and the momentum parameter β. Moreover, it actually grows exponentially in t. To get the contradiction, we have to show that the lower bound is larger than the upper bound. By Lemma 1 and Lemma 3, it suffices to prove the following lemma. We provide its proof in Appendix E. Lemma 5. Let F thred = O and η 2 T thred ≤ r 2. By following the conditions and notations in Theorem 1, Lemma 1 and Lemma 2, we conclude that if SGD with momentum (Algorithm 2) has the APCG property, then we have that In this paper, we identify three properties that guarantee SGD with momentum in reaching a secondorder stationary point faster by a higher momentum, which justifies the practice of using a large value of momentum parameter β. We show that a greater momentum leads to escaping strict saddle points faster due to that SGD with momentum recursively enlarges the projection to an escape direction. However, how to make sure that SGD with momentum has the three properties is not very clear. It would be interesting to identify conditions that guarantee SGD with momentum to have the properties. Perhaps a good starting point is understanding why the properties hold in phase retrieval. We believe that our shed light on understanding the recent success of SGD with momentum in non-convex optimization and deep learning. Heavy ball method: The heavy ball method was originally proposed by. 
It has been observed that this algorithm, even in the deterministic setting, provides no convergence speedup over standard gradient descent, except in some highly structured cases such as convex quadratic objectives where an "accelerated" rate is possible. In recent years, some works have made efforts to analyze the heavy ball method for classes of optimization problems beyond quadratic functions. One line of work provides a unified analysis of stochastic heavy ball momentum and Nesterov's momentum for smooth non-convex objective functions, showing that the expected gradient norm converges at rate $O(1/\sqrt{t})$; yet the rate is not better than that of standard SGD. We are also aware of the works of Ghadimi & Lan (2016; 2013), which propose some variants of stochastic accelerated algorithms with first-order stationary point guarantees. Yet, the framework in Ghadimi & Lan (2016; 2013) does not capture the stochastic heavy ball momentum used in practice. There is also a negative result about heavy ball momentum: for a specific strongly convex and strongly smooth problem, SGD with heavy ball momentum fails to achieve the best convergence rate while some other algorithms can. Reaching a second-order stationary point: As we mentioned earlier, many works aim at reaching a second-order stationary point. We classify them into two categories: specialized algorithms and simple GD/SGD variants. Specialized algorithms are those designed to exploit the negative curvature explicitly and escape saddle points faster than the ones without the explicit exploitation; simple GD/SGD variants make only minimal changes to standard GD/SGD. One related variant is a preconditioned SGD, in which the gradient $g_t$ is multiplied by a preconditioning matrix $G_t$ and the update is $w_{t+1} = w_t - G_t^{-1/2} g_t$; that work shows the algorithm can help in escaping saddle points faster compared to standard SGD under certain conditions. Another proposes average-SGD, in which a suffix averaging scheme is conducted over the updates; it also assumes an inherent property of stochastic gradients that allows SGD to escape saddle points. We summarize the iteration complexity of the related works for simple SGD variants in Table 1. The reader can compare the iteration complexities of the related works and ours there; we now explain the comparison and clarify the differences. First, we focus on explaining why the popular algorithm, SGD with heavy ball momentum, works well in practice, which is without the suffix averaging scheme and without the explicit perturbation used in other works. Specifically, we focus on studying the effect of stochastic heavy ball momentum and showing the advantage of using it. Furthermore, our analysis framework is built on the CNC-SGD work. We believe that, based on the insight in our work, one can also show the advantage of stochastic momentum by modifying the assumptions and algorithms in those works and consequently get a better dependency on the accuracy parameter $\epsilon$. B LEMMA 6, 7, AND 8 In the following, Lemma 7 says that under the APAG property, when the gradient norm is large, on expectation SGD with momentum decreases the function value by a constant and consequently makes progress. On the other hand, Lemma 8 upper-bounds the increase of the function value of the next iterate (if it happens) by leveraging the GrACE property. Lemma 6. If SGD with momentum has the APAG property, then, considering the update step $w_{t+1} = w_t - \eta m_t$, we have that $\mathbb{E}_t[f(w_{t+1})] \leq f(w_t) - \tfrac{\eta}{2}\|\nabla f(w_t)\|^2 + \tfrac{L\eta^2}{2}\mathbb{E}_t[\|m_t\|^2]$. Proof. By the $L$-smoothness assumption, $f(w_{t+1}) \leq f(w_t) + \langle \nabla f(w_t), w_{t+1}-w_t\rangle + \tfrac{L}{2}\|w_{t+1}-w_t\|^2 = f(w_t) - \eta\langle\nabla f(w_t), m_t\rangle + \tfrac{L\eta^2}{2}\|m_t\|^2$. Taking the expectation on both sides and using the APAG property in the last inequality gives the result. Lemma 7. Assume that the step size $\eta$ satisfies $\eta \leq \tfrac{\epsilon^2}{8Lc_m^2}$.
If SGD with momentum has the APAG property, then, considering the update step w t+1 = w t − ηm t, we have that 2, where the last inequality is due to the constraint of η. Lemma 8. If SGD with momentum has the GrACE property, then, considering the update step w t+1 = w t − ηm t, we have that Proof. Consider the update rule w t+1 = w t − ηm t, where m t represents the stochastic momentum and η is the step size. By ρ-Lipschitzness of Hessian, we have Taking the conditional expectation, one has C PROOF OF LEMMA 1 Lemma 1 Denote t 0 any time such that (t 0 mod T thred) = 0. Suppose that Proof. Recall that the update is w t0+1 = w t0 − rm t0, and w t0+t = w t0+t−1 − ηm t0+t−1, for t > 1. We have that where the first inequality is by the triangle inequality and the second one is due to the assumption that m t ≤ c m for any t. Now let us denote s=1 α s and let us rewrite g t = ∇f (w t) + ξ t, where ξ t is the zero-mean noise. We have that To proceed, we need to upper bound E t0 [4η We have that where (a) is because E t0 [ξ t0+i ξ t0+j] = 0 for i = j, (b) is by that ξ t 2 ≤ σ 2 and max t α t ≤ 1 1−β. Combining,,,, Now we need to bound E t0 [. By using ρ-Lipschitzness of Hessian, we have that By adding η ∇f (w t0+s−1), g t0+s−1 on both sides, we have η ∇f (w t0+s−1), g t0+s−1 ≤ f (w t0+s−1) − f (w t0+s) + η ∇f (w t0+s−1), g t0+s−1 − m t0+s−1 Taking conditional expectation on both sides leads to D PROOF OF LEMMA 2 AND LEMMA 3 Lemma 2 Denote t 0 any time such that (t 0 mod T thred) = 0. Let us define a quadratic approximation at w t0, Q(w):= f (w t0) + w − w t0, ∇f (w t0) + 1 2 (w − w t0) H(w − w t0), where • qv,t−1:= Π t−1 j=1 Gj − rmt 0. • qm,t−1:= − t−1 s=1 Π t−1 j=s+1 Gj β s mt 0. • qq,t−1: • qw,t−1: • q ξ,t−1: Then, w t0+t − w t0 = q v,t−1 + ηq m,t−1 + ηq q,t−1 + ηq w,t−1 + ηq ξ,t−1. Denote t 0 any time such that (t 0 mod T thred) = 0. Let us define a quadratic approximation at w t0, where H:= ∇ 2 f (w t0). Also, we denote Proof. First, we rewrite m t0+j for any j ≥ 1 as follows. We have that where (a) is by using with j = t − 1, (b) is by subtracting and adding back the same term, and (c) is by ∇Q(w t0+t−1) = ∇f (w t0) + H(w t0+t−1 − w t0). To continue, by using the nations in, we can rewrite as Recursively expanding leads to where (a) we use the notation that Π t−1 and the notation that Π t−1 j=t G j = 1 and (b) is by the update rule. By using the definitions of {q,t−1} in the lemma statement, we complete the proof. Lemma 3 Following the notations of Lemma 2, we have that Proof. Following the proof of Lemma 2, we have Therefore, by using a + b 2 ≥ a 2 + 2 a, b, E PROOF OF LEMMA 5 Lemma 5 Let F thred = O and η 2 T thred ≤ r 2. By following the conditions and notations in Theorem 1, Lemma 1 and Lemma 2, we conclude that if SGD with momentum (Algorithm 2) has the APCG property, then we have that, cr ≤,,, from " ) from W.l.o.g, we assume that c m, L, σ 2, c, c h, and ρ are not less than one and that ≤ 1. E.1 SOME CONSTRAINTS ON β. We require that parameter β is not too close to 1 so that the following holds, • 2) σ 2 (1 − β) 3 > 1. • 3) c (1 − β) 2 > 1. • The constraints upper-bound the value of β. That is, β cannot be too close to 1. We note that the β dependence on L, σ, and c are only artificial. We use these constraints in our proofs but they are mostly artefacts of the analysis. For example, if a function is L-smooth, and L < 1, then it is also 1-smooth, so we can assume without loss of generality that L > 1. 
Similarly, the dependence on σ is not highly relevant, since we can always increase the variance of the stochastic gradient, for example by adding an O gaussian perturbation. To prove Lemma 5, we need a series of lemmas with the choices of parameters on Table 3. Upper bounding E t0 [q q,t−1]: Lemma 9. Following the conditions in Lemma 1 and Lemma 2, we have Proof. where (a), (c), (d) is by triangle inequality, (b) is by the fact that Ax 2 ≤ A 2 x 2 for any matrix A and vector x. Now that we have an upper bound of ∇f (w t0+k) − ∇f (w t0+s), where (a) is by the assumption of L-Lipschitz gradient and (b) is by applying the triangle inequality (s − k) times and that w t − w t−1 ≤ η m t−1 ≤ ηc m, for any t. We can also derive an upper bound of E t0 [∇f (w t0+s) − ∇Q(w t0+s) ], Above, (a) is by the fact that if a function f (·) has ρ Lipschitz Hessian, then (c.f. Lemma 1.2.4 in ) and using the definition that Combing,,, we have that where on the last line we use the notation that To continue, let us analyze Π t−1 j=s+1 G j 2 first. Above, we use the notation that θ j:= j k=1 β j−k. For (a), it is due to that λ:= −λ min (H), λ max (H) ≤ L, and the choice of η so that 1 ≥ ηL 1−β, or equivalently, For (b), it is due to that θ j ≥ 1 for any j and λ ≥. Therefore, we can upper-bound the first term on r.h.s of as where (a) is by that fact that is by using, and (c) is by using that η. Now let us switch to bound where (a) is by the fact that, (c) is by using that (1−z) 2 for any |z| ≤ 1 and substituting z = 1 1+η, which leads to in which the last inequality is by chosen the step size η so that η ≤ 1. By combining,, and, we have that Proof. where the last inequality is because η is chosen so that 1 ≥ ηL 1−β and the fact that λ max (H) ≤ L. Lower bounding E t0 [2η q v,t−1, q q,t−1]: Lemma 11. Following the conditions in Lemma 1 and Lemma 2, we have Proof. By the of Lemma 9 and Lemma 10 Under review as a conference paper at ICLR 2020 Lower bounding E t0 [2η q v,t−1, q ξ,t−1]: Lemma 12. Following the conditions in Lemma 1 and Lemma 2, we have Proof. where (a) holds for some coefficients α k, (b) is by the tower rule, (c) is because q v,t−1 is measureable with t 0, and (d) is by the zero mean assumption of ξ's. Lower bounding E t0 [2η q v,t−1, q m,t−1]: Lemma 13. Following the conditions in Lemma 1 and Lemma 2, we have Proof. where (a) is by defining the matrix B: For (b), notice that the matrix B is symmetric positive semidefinite. To see that the matrix B is symmetric positive semidefinite, observe that each G j:= (I − η j k=1 β j−k H) can be written in the form of G j = U D j U for some orthonormal matrix U and a diagonal matrix D j. Therefore, the matrix product U is symmetric positive semidefinite as long as each G j is. So, (b) is by the property of a matrix being symmetric positive semidefinite. Lower bounding 2ηE t0 [q v,t−1, q w,t−1]: Lemma 14. Following the conditions in Lemma 1 and Lemma 2, if SGD with momentum has the APCG property, then Proof. Define where (a) is by the APCG property. We also have that where (a) and (b) is by. Substituting the back to, we get Using the fact that ∇f (w t0) ≤ completes the proof. Recall that the strategy is proving by contradiction. Assume that the function value does not decrease at least F thred in T thred iterations on expectation. Then, we can get an upper bound of the expected distance E t0 [w t0+T thred − w t0 2] ≤ C upper but, by leveraging the negative curvature, we can also show a lower bound of the form E t0 [w t0+T thred − w t0 2] ≥ C lower. 
The strategy is showing that the lower bound is larger than the upper bound, which leads to the contradiction and concludes that the function value must decrease at least F thred in T thred iterations on expectation. To get the contradiction, according to Lemma 1 and Lemma 3, we need to show that Yet, by Lemma 13 and Lemma 12, we have that ηE t0 [q v,T thred −1, q m,T thred −1] ≥ 0 and ηE t0 [q v,T thred −1, q ξ,T thred −1] = 0. So, it suffices to prove that and it suffices to show that Under review as a conference paper at ICLR 2020 as guaranteed by the constraint of η. So, where (a) is by using the inequality log(1 + x) ≥ x 2 with x = ηθ j ≤ 1 and (b) is by making (1−β) 2, which is equivalent to the condition that Now let us substitute the of back to. We have that which is what we need to show. By choosing T thred large enough, for some constant c > 0, we can guarantee that the above inequality holds. Lemma 15 Let us define the event Set T = 2T thred f (w 0) − min w f (w) /(δ∆). We return w uniformly randomly from w 0, w T thred, w 2T thred,..., w kT thred,..., w KT thred, where K:= T /T thred. Then, with probability at least 1 − δ, we will have chosen a w k where Υ k did not occur. Proof. Let P k be the probability that Υ k occurs. Summing over all K, we have −β) ) −6 ). If SGD with momentum (Algorithm 2) has APAG property when gradient is large (∇f (w) ≥ ), APCG T thred property when it enters a region of saddle points that exhibits a negative curvature (∇f (w) ≤ and λ min (∇ 2 f (w)) ≤ − ), and GrACE property throughout the iterations, then it reaches an (,) second order stationary point in T = O((1 − β) log(Lcmσ 2 ρc c h (1−β)δγ ) −10 ) iterations with high probability 1 − δ. In this subsection, we provide a sketch of the proof of Theorem 1. The complete proof is available in Appendix G. Our proof uses a lemma in , which is Lemma 15 below. The lemma guarantees that uniformly sampling a w from {w kT thred}, k = 0, 1, 2,..., T /T thred gives an (,)-second order stationary point with high probability. We replicate the proof of Lemma 15 in Appendix F. We return w uniformly randomly from w 0, w T thred, w 2T thred,..., w kT thred,..., w KT thred, where K:= T /T thred. Then, with probability at least 1 − δ, we will have chosen a w k where Υ k did not occur. To use the of Lemma 15, we need to let the conditions in be satisfied. We can bound E[f (w (k+1)T thred ) − f (w kT thred)|Υ k ] ≤ −F thred, based on the analysis of the large gradient norm regime (Lemma 7) and the analysis for the scenario when the update is with small gradient norm but a large negative curvature is available (Subsection 3.2.1). For the other condition, E[f (w (k+1)T thred ) − f (w kT thred)|Υ, it requires that the expected amortized increase of function value due to taking the large step size r is limited (i.e. bounded by δ) when w kT thred is a second order stationary point. By having the conditions satisfied, we can apply Lemma 15 and finish the proof of the theorem. Proof. Our proof is based on Lemma 15. So, let us consider the events in Lemma 15, Υ k:= {∇f (w kT thred) ≥ or λ min (∇ 2 f (w kT thred)) ≤ − }. We first show that E[f (w (k+1)T thred ) − f (w kT thred)|Υ k ] ≤ F thred. When ∇f (w kT thred) ≥: Now we are ready to use Lemma 15, since both the conditions are satisfied. 
According to the lemma and the choices of parameter values in Table 3, we can set $T = 2T_{thred}\,\big(f(w_0) - \min_w f(w)\big)/(\delta F_{thred}) = O\big((1-\beta)\log\big(\tfrac{Lc_m\sigma^2\rho\, c\, c_h}{(1-\beta)\delta\gamma}\big)\,\epsilon^{-10}\big)$, which will return a $w$ that is an $(\epsilon,\epsilon)$-second-order stationary point. Thus, we have completed the proof. Recall that the iteration complexity is $T = O\big((1-\beta)\log\big(\tfrac{Lc_m\sigma^2\rho\, c\, c_h}{(1-\beta)\delta\gamma}\big)\,\epsilon^{-10}\big)$ for Algorithm 2. Before making a comparison, we note that their result does not have a dependency on the variance of the stochastic gradient (i.e. $\sigma^2$), which is because they assume that the variance is also bounded by a constant. As the comparison shows, in the high momentum regime Algorithm 2 is strictly better than that of CNC-SGD, which means that a higher momentum can help to find a second-order stationary point faster.
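To connect the theory back to the phase retrieval experiment mentioned above (where empirically $c \approx 0$ and $c_h \approx 0$), the following hedged sketch outlines that experimental protocol: generate measurements $y_i = (a_i^\top w^*)^2$, run heavy-ball SGD, and track the sign-invariant relative distance from the Figure 4b caption. The squared-residual loss, the Gaussian design, the problem size, and the step size here are all illustrative assumptions rather than the paper's exact settings, so the numbers should be read only qualitatively.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 10, 200
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)              # unit-norm ground truth (assumption)
A = rng.normal(size=(n, d))                   # design vectors a_i (assumed Gaussian)
y = (A @ w_star) ** 2                         # observations y_i = (a_i^T w*)^2

def stoch_grad(w):
    """Gradient of one randomly drawn term of the assumed loss ((a_i^T w)^2 - y_i)^2."""
    i = rng.integers(n)
    a, yi = A[i], y[i]
    s = a @ w
    return 4.0 * s * (s * s - yi) * a

def rel_dist(w):
    # sign-invariant error, as in the Fig. 4b caption
    return min(np.linalg.norm(w - w_star), np.linalg.norm(w + w_star)) / np.linalg.norm(w_star)

eta, steps = 1e-4, 5000
for beta in [0.0, 0.9]:
    w = rng.normal(size=d) * np.sqrt(1.0 / (10000 * d))   # w0 ~ N(0, I_d/(10000 d))
    m = np.zeros(d)
    for _ in range(steps):
        m = beta * m + stoch_grad(w)
        w = w - eta * m
    print(f"beta={beta}: relative distance to w* after {steps} steps = {rel_dist(w):.3f}")
```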
Higher momentum parameter $\beta$ helps for escaping saddle points faster
1,316
scitldr
GANs provide a framework for training generative models which mimic a data distribution. However, in many cases we wish to train a generative model to optimize some auxiliary objective function within the data it generates, such as making more aesthetically pleasing images. In some cases, these objective functions are difficult to evaluate, e.g. they may require human interaction. Here, we develop a system for efficiently training a GAN to increase a generic rate of positive user interactions, for example aesthetic ratings. To do this, we build a model of human behavior in the targeted domain from a relatively small set of interactions, and then use this behavioral model as an auxiliary loss function to improve the generative model. As a proof of concept, we demonstrate that this system is successful at improving positive interaction rates simulated from a variety of objectives, and characterize some of the factors that affect its performance. Generative image models have improved rapidly in the past few years, in part because of the success of Generative Adversarial Networks, or GANs BID2. GANs attempt to train a "generator" to create images which mimic real images, by training it to fool an adversarial "discriminator," which attempts to discern whether images are real or fake. This is one solution to the difficult problem of learning when we don't know how to write down an objective function for image quality: take an empirical distribution of "good" images, and try to match it. Often, we want to impose additional constraints on our goal distribution besides simply matching empirical data. If we can write down an objective which reflects our goals (even approximately), we can often simply incorporate this into the loss function to achieve our goals. For example, when trying to generate art, we would like our network to be creative and innovative rather than just imitating previous styles, and including a penalty in the loss for producing recognized styles appears to make GANs more creative BID1. Conditioning on image content class, training the discriminator to classify image content as well as making real/fake judgements, and including a loss term for fooling the discriminator on class both allows for targeted image generation and improves overall performance BID9. However, sometimes it is not easy to write an explicit objective that reflects our goals. Often the only effective way to evaluate machine learning systems on complex tasks is by asking humans to determine the quality of their results, or by actually trying them out in the real world. Can we incorporate this kind of feedback to efficiently guide a generative model toward producing better results? Can we do so without a prohibitively expensive and slow amount of data collection? In this paper, we tackle a specific problem of this kind: generating images that cause more positive user interactions. We imagine interactions are measured by a generic Positive Interaction Rate (PIR), which could come from a wide variety of sources. For example, users might be asked to rate how aesthetically pleasing an image is from 1 to 5 stars. The PIR could be computed as a weighted sum of how frequently different ratings were chosen. Alternatively, these images could be used as part of web pages. We can assess user interactions with a webpage in a variety of ways (time on page, clicks, shares, etc.), and summarize these interactions as the PIR. In both of these tasks, we don't know exactly what features will affect the PIR, and we certainly don't know how to explicitly compute the PIR for an image.
However, we can empirically determine the quality of an image by actually showing it to users, and in this paper we show how to use a small amount of this data ( on 1000 images) to efficiently tune a generative model to produce images which increase PIR. In this work we focus on simulated PIR values as a proof of concept, but in future work we will investigate PIR values from real interactions. The most straight-forward way to improve an image GAN might be to evaluate the images the model produces with real users at each training step. However, this process is far too slow. Instead, we want to be able to collect a batch of PIR data on a batch of images, and then use this batch of data to improve the generative model for many gradient steps; we want to do this despite the fact that the images the generator is producing may evolve to be very different from the original images we collected PIR data on. In order to do this, we use the batch of image and PIR data to train a "PIR Estimator Model" which predicts PIRs on images. We then use these estimated PIRs at each step as a loss. Our approach is inspired by the work of Christiano and colleagues BID0, who integrated human preference ratings between action sequences into training of a reinforcement learning model by using the preference data to estimate a reward function. However, our problem and approach differ in several key ways. First, we are optimizing a generative image model rather than a RL model. This is more difficult in some ways, since the output space is much higher-dimensional than typical RL problems, which means that scalar feedback (like a PIR) may be harder for the system to learn from. This difficulty is partially offset by the fact that we assume we get "reward" (PIR) information for an image when we evaluate, instead of just getting preferences which we have to map to rewards. Perhaps most importantly, we use our PIR estimation model as a fully-differentiable loss function for training, instead of just using its estimated rewards. This allows us to more effectively exploit its knowledge of the objective function (but risks overfitting).Our system consists of three components: A generative image model, users who interact with the generated images in some way, and a PIR estimator that models user interactions given an image. See FIG0 for a diagram of the system's general operation. The generative model produces images, which are served to users. Using interaction data from these users, we train the PIR estimator model, which predicts PIRs given a image, and then incorporate this estimated PIR into the loss of the generative model to tune it to produce higher quality images. Below, we discuss each of these components in more detail. We begin with a GAN 1 BID2 which we pre-trained to produce images from a target distribution (specifically, landscapes of mountains and coasts). Let D source be the source estimated by the discriminator, G the generator, and z a noise input to the generator sampled from a multivariate standard normal distribution N (0, I), and I be the set of real images shown to the discriminator. Define: DISPLAYFORM0 Then the discriminator and generator are trained to maximize the following losses (respectively), where the w * are weights set as hyperparameters: DISPLAYFORM1 Note that there is a difference between these losses and the standard GAN formulation given in BID2 ) -we maximize L fake image fools = log P (classified real) rather than minimizing log (1 − P (classified real)). 
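To make the distinction just noted concrete, here is a minimal sketch of the two generator objectives: the "saturating" loss from the original GAN formulation and the variant maximized here. The array `d_fake` is a hypothetical batch of discriminator outputs $P(\text{classified real})$ on generated images; the functions are illustrative, not the paper's implementation.

```python
import numpy as np

def generator_loss_saturating(d_fake):
    """Original formulation: the generator minimizes log(1 - P(classified real))."""
    return np.mean(np.log(1.0 - d_fake))

def generator_loss_nonsaturating(d_fake):
    """Variant used here: the generator maximizes log P(classified real) on fake images."""
    return np.mean(np.log(d_fake))

# Hypothetical discriminator outputs early in training, when fakes are easy to spot.
d_fake = np.array([0.05, 0.10, 0.02])
print(generator_loss_saturating(d_fake))      # near zero slope w.r.t. d_fake here
print(generator_loss_nonsaturating(d_fake))   # much stronger gradient signal
```

The usual practical argument applies: when the discriminator easily rejects generated images, $\log P(\text{real})$ still provides a strong training signal, whereas $\log(1 - P(\text{real}))$ saturates.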
This seems to in the generation of slightly better images in practice. This GAN was trained on a dataset consisting of landscape images of mountains and coastlines (see Appendix C.2 for details of the architecture and training). It is worth noting that this generative model is not photorealistic (see FIG2 for some samples). Its expressive capacity is limited, and it has clear output modes with limited intra-mode variability. However, for our purposes this may not matter. Indeed, it is in some ways more interesting if we can tweak this model to optimize for many objective functions, since its limited expressive capacity will make it more difficult for us to estimate and pursue the real objective -a limited set of images will effectively give us fewer points to estimate the PIR function from, and will reduce the space in which the model can easily produce images, thus reducing the possibility of getting very optimal images from the model. For example, a model which produces images of birds may not produce data points which provide good estimates of a PIR based on how much the image looks like a car, and even if it could, it may not be able to produce images which are "more car-like." If we are able to succeed in improving PIRs with this generative model, it is likely that a better generative model would yield even better . We will show these images to users in a variety of ways, depending on our target domain. For the purposes of this paper, however, we will use simulated interaction data (see Section 3 for details). Of course, since showing images to users is an expensive prospect, we wanted to limit the size of the datasets we used to train the model. Typical datasets used to train vision models are on the order of millions of images, (e.g. ImageNet BID11), but it is completely infeasible to collect user data on this number of images. We estimated that we could show 1000 images each 1000 times to generate our datasets. We used these dataset sizes and number of impressions for all experiments discussed here, and added noise to the PIRs that was binomially distributed according to the number of times each image was shown and the "true" PIR simulate from the objective. The final component of our system is the PIR estimator model, which learns to predict PIR from a image. We denote this model by R: image →. We parameterize this model as a deep neural network. Specifically, we take the Inception v2 architecture BID13, remove the output layer, and replace it with a fully-connected layer to PIR estimates. We initialize the Inception v2 parameters from a version of the model trained on [dataset redacted for blind review]. See Appendix C.3 for more details. Why did we not make estimated PIR simply another auxiliary output from the discriminator, like class in the ACGAN (Appendix C.1)? Because the PIR estimator needs to be held constant in order to provide an accurate training objective. If the PIR estimates were produced by the discriminator, then as the discriminator changed to accurately discriminate the evolving generator images, the PIR estimates would tend to drift without a ground-truth to train them on. Separating the discriminator and the PIR estimator allows us to freeze the PIR estimator while still letting the discriminator adapt. Once we have trained a PIR estimator model, we have to use it to improve the GAN. We do this as follows. Let R denote the PIR estimator model, as above. 
Define $L_{PIR}$ to be the expectation of the estimated PIR produced over images sampled from the generator: $L_{PIR} := \mathbb{E}_{z \sim N(0, I)}[R(G(z))]$. Then we simply supplement the generator loss by adding this term times a weight $w_{PIR}$, set as a hyperparameter: $L'_G := L_G + w_{PIR}\, L_{PIR}$. We set $w_{PIR} = 1000$ as this made the magnitude of the PIR loss and the other loss terms roughly comparable. Otherwise we used the same parameters as in the GAN training above, except that we reduced the learning rate to $10^{-6}$ to allow the system to adapt more smoothly to the multiple objectives, and we trained for 50,000 steps. In this paper, we use simulated interaction data as a proof of concept. This raises an issue: what functions should we use to simulate user interactions? Human behavior is complex, and if we already knew precisely what guided user interactions, there would be no need to actually collect human behavioral data at all. Since we don't know what features will guide human behavior, the next best thing we can do is to ensure that our system is able to alter the image generation model in a broad variety of ways, ranging from low-level features (like making the images more colorful) to altering complex semantic features (such as including more plants in outdoor scenery). We also want to avoid hand-engineering tasks to the greatest extent possible. We present an overview of our approaches to simulating PIR data below; see Appendix C.4 for more details. The first approach we took to evaluating our system's ability to train for different features was to use activity from hidden layers of a computer vision model, specifically VGG 16 BID12 trained on ImageNet BID11. In particular, we took the activity of a single filter in a layer of VGG relative to the overall activity of that layer. This approach to simulating PIRs has several benefits. First, it gives a wide variety of complex objectives that can nevertheless be easily computed to simulate data. Second, models like VGG exhibit hierarchical organization, where lower levels generally respond to lower-level features such as edges and colors, while higher levels respond to higher-level semantic features such as faces BID15, and the represented features relate to those in human and macaque visual cortex BID14. Thus VGG features give a wide range of objectives which we may relate to the human perception we wish to target. There are some caveats to this approach, however. First, although the higher layers of CNNs are somewhat selective for "abstract" object categories, they are also fooled by adversarial images that humans would not be, and directly optimizing inputs for these high-level features does not actually produce semantically meaningful images BID6. Thus, even if our system succeeds in increasing activity in a targeted layer which is semantically selective, it will likely do so by adversarially exploiting particulars of VGG 16's parameterization of the classification problem (although the fact that we are not backpropagating through the true objective will make this harder). It is not necessarily a failure of the system if it exploits simple features of the objective it is given to increase PIRs -- indeed, it should be seen as a success, as long as it generalizes to novel images. However, success on this task does not necessarily guarantee success on modifying semantic content when interacting with actual humans. It may be easier for the PIR estimator model (which is based on a CNN) to learn objectives which come from another CNN than more general possible objectives.
The fact that adversarial examples can sometimes transfer between networks with different architectures BID5 suggests that the computations being performed by these networks are somewhat architecture invariant. Thus CNN objectives may be easier for our estimator than human ones. We have tried to minimize these problems to the greatest extent possible by using different network architectures (Inception V2 and VGG 16, respectively) trained on different datasets ([hidden] and ImageNet BID11, respectively) for the estimator and the objective. However, we cannot be certain that the network is not "cheating" in some way on the VGG 16 tasks, so our must be considered with this qualification in mind. Despite this, we think that evaluating our system's ability to optimize for objectives generated from various layers of VGG will show its ability to optimize for a variety of complex objectives, and thus will serve as a useful indicator of its potential to improve PIRs from real users. After using our system on the tasks above, we noted that its performance was quite poor at layers 5, 6, and 7 of VGG compared to other tasks (see FIG1). This could suggest that our system was unable to capture the complex features represented at the higher levels of VGG. However, we also noticed that the feature representations at these layers tended to be quite sparse, so many of the simulated PIRs we generated were actually zero to within the bin width of our PIR estimator (see Appendix B FIG7 for a plot of how this affected learning). In order to evaluate whether the poor performance of our system at the higher layers of VGG was due to the number of zeros or to the complexity of the features, we created less sparse features from these layers by simply targeting a set of k filters sampled without replacement from the layer, rather than a single filter. The single filter cases above can be thought of as a special case of this, where k = 1. To complement these, we also tried k = 20.This can also be thought of as perhaps a more realistic simulation of human behavior, in the sense that it is highly unlikely that there is a single feature which influences human PIRs. Rather, there are probably many related features which influence PIR in various ways. Thus it is important to evaluate our system's ability to target these types of features as well. Finally, we also considered some simpler objectives based on targeting specific colors in the output images, or targeting vertical bands of two different colors, one in each half of the image, or three colors, one in each third of the image. These objectives provide a useful complement to the VGG objectives above. Although the single color objectives may be relevant to the classification task VGG 16 performs, the split color tasks are less likely to be relevant to classification. Note that it is important that we split the images along the width instead of the height dimension, as there may well be semantically relevant features corresponding to color divisions along the height dimension, e.g. a blue upper half and green lower half likely correlates with outdoor images, which would provide useful class information. By contrast, it is harder to imagine circumstances where different colors on the left and right halves of the image are semantically predictive, especially since flipping left to right is usually included in the data augmentation for computer vision systems. Thus success on optimizing for these objectives would increase our confidence in the generality of our system. 
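Before turning to results, here is a hedged sketch of one generator tuning step as described above: the frozen PIR estimator $R$ supplies the auxiliary term $L_{PIR} = \mathbb{E}[R(G(z))]$, which is added to the generator objective with weight $w_{PIR} = 1000$. The `generator`, `pir_estimator`, and discriminator-based term below are toy stand-ins (the real components are the landscape GAN and the Inception-v2-based estimator described in the text and Appendix C); plain NumPy is used only to show the arithmetic of the combined objective, not the gradient computation an autodiff framework would perform.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # Hypothetical "generator": maps latent codes to tiny 8x8 RGB images in [-1, 1].
    return np.tanh(z.reshape(-1, 1, 1, 1) * np.ones((z.shape[0], 8, 8, 3)))

def pir_estimator(images):
    # Hypothetical frozen PIR estimator R: one estimated PIR per image.
    return images.mean(axis=(1, 2, 3)) * 0.5 + 0.5

def fools_discriminator_logprob(images):
    # Placeholder for log P(discriminator classifies a fake image as real).
    return np.full(images.shape[0], -2.3)

w_pir = 1000.0                                    # weight from the text
z = rng.normal(size=64)                           # batch of latent samples z ~ N(0, I)
images = generator(z)
L_G = fools_discriminator_logprob(images).mean()  # generator term being maximized
L_PIR = pir_estimator(images).mean()              # estimated PIR, E[R(G(z))]
combined = L_G + w_pir * L_PIR                    # objective the generator maximizes
print(f"L_G={L_G:.3f}  L_PIR={L_PIR:.3f}  combined={combined:.1f}")
```

Because $R$ is frozen during tuning, it acts as a fixed, fully differentiable surrogate for user behavior while the GAN terms keep the images near the training distribution.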
We present our results in terms of the change in mean PIR from 1000 images produced by the GAN before tuning to 1000 images produced after tuning, or in terms of the effect size of this change (Cohen's d, i.e. the change in mean PIR standardized by the standard deviation of the PIRs in the pre- and post-tuning image sets). We assess whether these changes are significant by performing a Welch's t-test (with a significance threshold of $\alpha = 0.001$) between the pre- and post-tuning PIRs. Overall, our system was quite successful at improving PIRs across a range of simulated objective functions (see FIG1). Below, we discuss these results in more detail. Our system largely succeeded at increasing PIRs on a variety of VGG objectives (see FIG1). However, there are several interesting patterns to note. First, the system is not particularly successful at targeting single filters from the pool5, fc6, and fc7 layers. However, we believe this is due to the fact that filters in these layers produce relatively sparse activations (see Section 3.2). Indeed, the performance of the system seems much more consistent when it is optimizing for sets of 20 filters than for single filters. Even when using 20 filters, however, there is a noticeable decline in the effect size of the improvement the system is able to make at higher layers of VGG ($\beta = -0.13$ per layer, $t = -7.8$, $p < 10^{-10}$ in a linear model controlling for initial standard deviation and percent zeros). This suggests that as the objectives grow more complex, the system may be finding less accurate approximations to them. However, the system's continuing (if diminished) success at the higher layers of VGG suggests that our model is capable of at least partially capturing complex objective functions. Overall, the system performed quite well at optimizing for the color objectives, particularly the single- and two-color objectives (see FIG1). It had more difficulty optimizing for the three-color objectives, and indeed had produced only very small improvements after the usual 50,000 tuning steps for the generative model, but after 500,000 steps it was able to produce significant improvements for two out of the three objectives (these longer training results are the ones included here). Because the color objectives are easiest to assess visually, we have included results for a variety of these objectives in FIG2. For the single-color objectives, the improvement is quite clear; for example, the images in FIG2 appear much more blue than the pre-training ones. For the two-color objectives, it appears that the system found the "trick" of reducing the third color; for example, the red-green split images in FIG2 appear much less blue than the pre-training images. Even on the three-color images where the system struggled, there are some visible signs of improvement; for example, on the green-blue-red task the system has started producing a number of images with a blue streak in the middle. We also conducted several supplemental analyses, which can be found in detail in Appendix A. In summary, the initial variability in the PIR of the images used to train the system is strongly correlated with the amount of improvement the system makes in the PIR, the system fairly consistently underestimates the performance it achieves (because of a detail of the training procedure, see the Appendix), and iterating the process of improving PIRs yields better results for objectives from a lower layer of VGG but not a higher one. Overall, our system appears to be relatively successful.
It can optimize a generative model to produce images which target a wide variety of objectives, ranging from low-level visual features such as colors and early features of VGG to features computed at the top layers of VGG. This success across a wide variety of objective functions allows us to be somewhat confident that our system will be able to achieve success in optimizing for real human interactions. Furthermore, the system did not require an inordinate amount of training data. In fact, we were able to successfully estimate many different objective functions from only 1000 images, several orders of magnitude fewer than is typically used to train CNNs for vision tasks. Furthermore, these images came from a very biased and narrow distribution (samples from our generative model) which is reflective of neither the images that were used to pre-train the Inception model in the PIR estimator, nor the images the VGG model (which produced the simulated objectives) was trained on. Our success from this small amount of data suggests that not only will our system be able to optimize for real human interactions, it will be able to do so from a feasible number of training points. These are exciting -the model is able to approximate apparently complex objective functions from a small amount of data, even though this data comes from a very biased distribution that is unrelated to most the objectives in question. But what is really being learned? In the case of the color images, it's clear that the model is doing something close to correct. However, for the objectives derived from VGG we have no way to really assess whether the model is making the images better or just more adversarial. For instance, when we are optimizing for the logit for "magpie," it's almost certainly the case that the of this optimization will not look more like a magpie to a human, even if VGG does rate the images as more "magpie-like." On the other hand, this is not necessarily a failure of the system -it is accurately capturing the objective function it is given. What remains to be seen is whether it can capture how images influence human behavior as well as it can capture the vagaries of deep vision architectures. We believe there are many domains where a system similar to ours could be useful. We mentioned producing better webpage s and making more aesthetic images above, but there are many potential applications for improving GANs with a limited amount of human feedback. For example, a model could be trained to produce better music (e.g. song skip rates on streaming generated music could be treated as inverse PIRs). When tuning the GAN, the decrease in the PIR loss is usually accompanied by an increase in the generator loss, and often by a partial collapse of the generator output (for example, the optimized images generally seem to have fewer output modes than the pre-training images in FIG2). This is not especially surprising -because we weighted the PIR loss very highly, the model is rewarded for trading some image diversity for image optimality. Depending on the desired application, the weight on the PIR loss could be adjusted as necessary to trade off between producing images close to the data distribution and optimizing PIR. At its most extreme, one could down-weight the generator loss entirely, and train until the model just produces a single optimal image. 
However, the generator likely provides some regularization by constraining the images to be somewhat close to the real images, which will reduce overfitting to an imperfect estimate of the PIR function. Furthermore, in many settings we will want to generate a variety of images (e.g. s for different websites). For these reasons, we chose to keep the generator loss when tuning the GAN. There are a number of future directions suggested by this work. A number of possible improvements are discussed in Appendix C.5. However, we also think this work has potential applications from the perspective of distillation or imitation approaches, which attempt to train one network to emulate another BID3 , e.g ), as well as from the perspective of understanding the computations that these vision architectures perform. As far as we are aware, these are the first to show that a deep vision model can be tuned rapidly from relatively little data to produce outputs which accurately emulate the behavior of hidden layers of another deep vision architecture trained on a different dataset. This suggests both that the inductive biases shared among these architectures are causing them to find similar solutions (which is also supported by work on transferable adversarial examples (, e.g.)), and that these networks final layers represent the computations of earlier hidden layers in a way that is somewhat accessible. It's possible that using our system with objectives from CNN layers as we did here might help to understand the features those layers are attending to, by analyzing the distribution of images that are produced. In this sense, our system can be thought of as offering a new approach to multifaceted feature visualization BID7, because our system attempts to optimize a distribution of images for an objective and encourages diversity in the distribution produced, rather than just optimizing a single image. We have described a system for efficiently tuning a generative image model according to a slow-toevaluate objective function. We have demonstrated the success of this system at targeting a variety of objective functions simulated from different layers of a deep vision model, as well as from low-level visual features of the images, and have shown that it can do so from a small amount of data. We have quantified some of the features that affect its performance, including the variability of the training PIR data and the number of zeros it contains. Our system's success on a wide variety of objectives suggests that it will be able to improve real user interactions, or other objectives which are slow and expensive to evaluate. This may have many exciting applications, such as improving machine-generated images, music, or art. A OTHER ANALYSES Because L P IR is just the expected value of the PIR, by looking at L P IR before and after tuning the generative model, we can tell how well the system thinks it is doing, i.e. how much it estimates that it improved PIR. This comparison reveals the interesting pattern that the system is overly pessimistic about its performance. In fact, it tends to underestimate its performance by a factor of more than 1.5 (β = 1.67 when regressing change in mean PIR on predicted change in mean PIR, see FIG3). However, it does so fairly consistently. 
This effect appears to be driven by the system consistently underestimating the (absolute) PIRs, which is probably caused by our change in the softmax temperature between training the PIR estimator and tuning the generative model (which we empirically found improves performance, as noted above). This is in contrast to the possible a priori expectation that the model would systematically overestimate its performance, because it is overfitting to an imperfectly estimated objective function. Although decreasing the softmax temperature between training and using the PIR obscures this effect, we do see some evidence of this; the more complex objectives (which the system produced lower effect sizes on) seem to both have lower estimated changes in mean PIR and true changes in PIR which are even lower than the estimated ones (see FIG3). Thus although the system is somewhat aware of its reduced effectiveness with these objectives (as evidenced by the lower estimates of change in mean PIR), it is not reducing its estimates sufficiently to account for the true difficulty of the objectives (as evidenced by the fact that the true change in PIR is even lower than the estimates). However, the system was generally still able to obtain positive on these objectives (see FIG1). There is a general trend (see FIG4) that the variability in PIR in the initial dataset is strongly positively related with the change in PIR the system is able to produce (β = 0.99, t = 10.6, p < 10 −10, in a linear model controlling for initial mean PIR and initial percent of values that are zero). In fact, initial standard deviation explains about 50% of the variance in the change in mean PIR. This is perhaps not too surprising -more variability means that the generative model has capacity to produce higher PIR images without too much tweaking, and that the PIR estimator model gets a wider range of values to Figure 6: Effect size evolution over two iterations learn from. Still, when attempting to use this system in practice, it is important to keep in mind that starting with a sufficiently expressive generative model will more likely produce better than starting with a more limited model. Given that our model improves PIRs, an obvious question is whether we can iterate the process. Once we have increased PIRs, can we train a new PIR estimator on samples from our new generative model, and use that to increase PIRs again? If we could iterate this many times, we might be able to create much larger improvements in PIR than we can on a single step. On the other hand, it is possible that after a single step of optimization we will effectively have saturated the easily achievable improvement in the model, and further steps will not in much improvement. To evaluate this, we took the subset of models trained on single filters from VGG layers pool2 and fc8, and used the set of images generated from the post-tuning model along with the original set of images to tune their PIR estimators for another 25,000 steps, and then tuned the generative model using this updated PIR estimator for another 50,000 steps. We then evaluated them as before, see C DETAILED METHODS C.1 ACGAN We describe our using the standard GAN framework for clarity, but we actually used an ACGAN BID9, which allows for conditioning for various user-specific features. This requires the following adjustments. 
Defining C to be the set of possible classes and I c to be the set of real images corresponding to a class c ∈ C: DISPLAYFORM0 Then the discriminator and generator losses are modified as follows (letting w f akeclass = 1.5 and w realclass = 1.0): DISPLAYFORM1 Note that unlike the standard ACGAN formulation given in BID9, we do not include L fake class in the discriminator's loss, to keep the discriminator from cooperating with the generator on the classification task. We modified the generator network by adding one-hot class inputs, and the discriminator by adding class outputs alongside the source output, as in BID9. We parameterized the generator as a deep neural network, which begins with a fully-connected mapping from the latent (noise) space to a 4 × 4 × 512 dimensional image, and then successively upsampled (a factor of 2 by nearest neighbor), padded and applied a convolution (3 × 3 kernel, stride of 1) and a leaky ReLU (α = 0.2) nonlinearity repeatedly. We repeated this process 5 times (except with no upsampling on the first step, and a tanh nonlinearity on the last), while stepping the image depth down as follows: 512, 512, 256, 128, 64, and finally 3 (RGB) for the output image. This means that the final output images were 64 × 64. We parameterized the discriminator as a convolutional network with 7 layers, 6 convolutions (kernels all 3 × 3; strides 2, 1, 2, 1, 2, 1; dropout after the 1st, 3rd, and 5th layers; filter depth 16, 32, 64, 128, 256, 512 ; batch normalization after each layer) and a fully connected layer to a single output for real/fake. We used a leaky ReLU (α = 0.2) nonlinearity after each layer, except the final layer, where we used a tanh. This GAN was trained on a dataset consisting of landscape images of mountains and coastlines obtained from the web. The generator was trained with the Adam optimizer BID4, and the discriminator with RMSProp. The learning rates for both were set to 10 −5, and for Adam we set β 1 = 0.5. We used a latent size of 64 units. The model was trained for 1.1 × 10 6 gradient steps, when the generated images appeared to stop improving. Instead of predicting PIR as a scalar directly, we predict it by classifying into 100 bins via a softmax, which performs better empirically. This choice was motivated by noting that the scalar version was having trouble fitting some highly multi-modal distributions that appear in the data. We trained the PIR estimator with the Adam optimizer (learning rate 5 · 10 −4). When evaluating and when using this model for improving the GAN we froze the weights of the PIR estimator. We also reduced the output softmax's temperature to 0.01, so it was behaving almost like a max, which empirically improved . Intuitively, a low softmax temperature in training allows the system to rapidly get gradients from many different output bins and adjust the distribution appropriately, whereas when actually using the system we want to be conservative with our estimates and not be too biased by low probability bins far from the modal estimate. The first approach we took to evaluating our system's ability to train for different features was to use activity from hidden layers of a computer vision model, specifically VGG 16 BID12 ) trained on ImageNet BID11. In particular, we took the 2 norm of the activity of one filter within a layer, and normalized it by the 2 norm of the total layer's activity, i.e. 
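To make the generator architecture described above concrete, the following is a minimal PyTorch sketch of the upsampling stack. This is a sketch under stated assumptions rather than the authors' code: the padding scheme, the concrete nn.Upsample/nn.Conv2d composition, and the way the one-hot class input is concatenated to the latent vector are assumptions; the layer depths, kernel sizes, nonlinearities, and output resolution follow the text.

import torch
import torch.nn as nn

class ACGANGenerator(nn.Module):
    # Sketch of the generator described in the text: a fully connected mapping from the
    # latent (plus one-hot class) input to a 4x4x512 feature map, then five conv blocks
    # stepping the depth 512 -> 512 -> 256 -> 128 -> 64 -> 3, with 2x nearest-neighbor
    # upsampling before all but the first block and tanh on the last (64x64 RGB output).
    def __init__(self, latent_size=64, num_classes=10):  # num_classes is an assumption
        super().__init__()
        self.fc = nn.Linear(latent_size + num_classes, 4 * 4 * 512)
        depths = [512, 512, 256, 128, 64, 3]
        blocks = []
        for i in range(5):
            blocks.append(nn.Sequential(
                nn.Upsample(scale_factor=2, mode="nearest") if i > 0 else nn.Identity(),
                nn.Conv2d(depths[i], depths[i + 1], kernel_size=3, stride=1, padding=1),
                nn.LeakyReLU(0.2) if i < 4 else nn.Tanh(),
            ))
        self.blocks = nn.Sequential(*blocks)

    def forward(self, z, class_one_hot):
        h = self.fc(torch.cat([z, class_one_hot], dim=1)).view(-1, 512, 4, 4)
        return self.blocks(h)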
letting VGG l,f (i) be the vector of unit activations in filter f of layer l of VGG 16 on image i, we computed PIR for that image and a given layer and filter l *, f * as: DISPLAYFORM0 (Note that if we did not normalize by the activity in the whole layer, the system might be able to "cheat" to improve the PIR by just increasing the contrast of the images, which will likely increase overall network activity.) As noted above, we also added binomially distributed noise to these PIRs. After using our system on the tasks above, we noted that its performance was quite poor at layers 5, 6, and 7 of VGG compared to other tasks (see FIG1). This could suggest that our system was unable to capture the complex features represented at the higher levels of VGG. However, we also noticed that the feature representations at these layers tended to be quite sparse, so many of the simulated PIRs we generated were actually zero to within the bin width of our PIR estimator. Respectively, these layers had only 20%, 25%, and 24% non-zero PIRs (collapsing across filters), and around half the filters in each (resp. 6, 4, and 5) were producing > 90% zero PIRs. (By contrast, the layer with the next greatest number of zero PIRs, fc8, still had 69% nonzero PIRs overall, and had no filters in which 90% or more of the PIRs were zero.) In a few cases on layers 5, 6, and 7, all of the generated PIRs were zero. This clearly makes learning infeasible, and indeed we noted that there was a strong relationship between number of non-zero simulated PIRs in the training dataset and the ability of our system to improve PIR (see FIG7). This is somewhat troubling, since probably most images in the real world will not produce a PIR that is truly zero.in order to evaluate whether the poor performance of our system at the higher layers of VGG was due to the number of zeros or to the complexity of the features, we created less sparse features from these layers by simply targeting a set of k filters sampled without replacement from the layer, rather than a single filter. We did this by taking the norm across the k target filters, or equivalently by summing the squared norms of the k filters before taking the square root, and then normalizing by the activity in the layer as before. Formally, letting a 1,..., a k be a set of k filter indices sampled without replacement from {0, ..., number of filters in layer}, we computed the PIR for an image as: DISPLAYFORM0 The single filter cases above can be thought of as a special case of this, where k = 1. To complement these, we also tried k = 20. As above, we also added binomially distributed noise to these PIRs. This can also be thought of as perhaps a more realistic simulation of human behavior, in the sense that it is highly unlikely that there is a single feature which influences human PIRs. Rather, there are probably many related features which influence PIR in various ways. Thus it is important to evaluate our system's ability to target these types of features as well. Finally, we also considered some simpler objectives based on targeting specific colors in the output images. Analogously to the VGG features, we computed the PIRs from the vector norm of a given image in the targeted color, normalized by the total image value. We considered several objectives of this type: Three color: Similar to two color, but split the image into thirds, and computed PIR from a different color in each third.(As above, we also added binomially distributed noise to these PIRs.) 
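As an illustration of the simulated PIR objectives just described, here is a small PyTorch sketch of the normalized filter-norm computation for a single image. The feature-extraction call and the number of binomial trials are assumptions the text does not specify; the single-filter case corresponds to k = 1, and the multi-filter case to k = 20.

import torch

def simulated_pir(layer_activity, target_filters, n_trials=10):
    # layer_activity: activations of one VGG layer for one image, shape (num_filters, H, W).
    # target_filters: list of k filter indices (k = 1 or k = 20 in the text).
    target_norm = layer_activity[target_filters].norm(p=2)
    layer_norm = layer_activity.norm(p=2) + 1e-8      # normalizing by the whole layer prevents
    pir = (target_norm / layer_norm).clamp(0.0, 1.0)  # "cheating" via overall contrast increases
    # binomially distributed noise around the underlying rate (n_trials is an assumption)
    return torch.distributions.Binomial(n_trials, probs=pir).sample() / n_trials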
These objectives provide a useful complement to the VGG objectives discussed in Section 3.1. Although the single-color objectives may be relevant to the classification task VGG 16 performs, the split-color tasks are less likely to be relevant to classification. Note that it is important that we split the images along the width instead of the height dimension, as there may well be semantically relevant features corresponding to color divisions along the height dimension, e.g., a blue upper half and green lower half likely correlates with outdoor images, which would provide useful class information. By contrast, it is harder to imagine circumstances where different colors on the left and right halves of the image are semantically predictive, especially since flipping left to right is usually included in the data augmentation for computer vision systems. Thus, success on optimizing for these objectives would increase our confidence in the generality of our system. There are a number of techniques that could be explored to improve our system. As we mentioned above, iterating for multiple steps of PIR collection and generative model tuning is worth exploring further. Also, some form of data scaling might allow the system to perform better on tasks with low variance. We briefly tried normalizing all data for an objective to have mean 0.5 and standard deviation 0.25, but did not achieve particularly good results from this, possibly because there were many outliers getting clipped to 0 or 1. Still, there are many other possibilities for scaling data that could potentially result in some improvement in performance. Also, one alternative approach to training a GAN to produce high-PIR images would be to use the PIR estimator objective in the Plug & Play Generative Networks framework BID8 instead of using it to tune the GAN. This could be an interesting direction to explore, but its success would probably depend on the expressiveness of the initial generative model. With the mediocre model we started with, it is probably better to actually tune the model itself, which may allow it to explore parts of image space which it had not previously explored.
We describe how to improve an image generative model according to a slow- or difficult-to-evaluate objective, such as human feedback, which could have many applications, like making more aesthetic images.
1,317
scitldr
Semi-supervised learning (SSL) is a study that efficiently exploits a large amount of unlabeled data to improve performance in conditions of limited labeled data. Most of the conventional SSL methods assume that the classes of unlabeled data are included in the set of classes of labeled data. In addition, these methods do not sort out useless unlabeled samples and use all the unlabeled data for learning, which is not suitable for realistic situations. In this paper, we propose an SSL method called selective self-training (SST), which selectively decides whether to include each unlabeled sample in the training process. It is also designed to be applied to a more real situation where classes of unlabeled data are different from the ones of the labeled data. For the conventional SSL problems which deal with data where both the labeled and unlabeled samples share the same class categories, the proposed method not only performs comparable to other conventional SSL algorithms but also can be combined with other SSL algorithms. While the conventional methods cannot be applied to the new SSL problems where the separated data do not share the classes, our method does not show any performance degradation even if the classes of unlabeled data are different from those of the labeled data. Recently, machine learning has achieved a lot of success in various fields and well-refined datasets are considered to be one of the most important factors (; ; BID6 . Since we cannot discover the underlying real distribution of data, we need a lot of samples to estimate it correctly . However, creating a large amount of dataset requires a huge amount of time, cost and manpower BID3.Semi-supervised learning (SSL) is a method relieving the inefficiencies in data collection and annotation process, which lies between the supervised learning and unsupervised learning in that both labeled and unlabeled data are used in the learning process (; BID3 . It can efficiently learn a model from fewer labeled data using a large amount of unlabeled data BID15 . Accordingly, the significance of SSL has been studied extensively in the previous literatures BID18 BID5 ; BID4 BID2 . These suggest that SSL can be a useful approach in cases where the amount of annotated data is insufficient. However, there is a recent research discussing the limitations of conventional SSL methods BID3 . They have pointed out that conventional SSL algorithms are difficult to be applied to real applications. Especially, the conventional methods assume that all the unlabeled data belong to one of the classes of the training labeled data. Training with unlabeled samples whose class distribution is significantly different from that of the labeled data may degrade the performance of traditional SSL methods. Furthermore, whenever a new set of data is available, they should be trained from the scratch using all the data including out-of-class 1 data. In this paper, we focus on the classification task and propose a deep neural network based approach named as selective self-training (SST) to solve the limitation mentioned above. Unlike the conventional self-training methods in , our algorithm selectively utilizes the unlabeled data for the training. To enable learning to select unlabeled data, we propose a selection network, which is based on the deep neural network, that decides whether each sample is to be added or not. Different from BID12, SST does not use the classification for the data selection. 
Also, we adopt an ensemble approach which is similar to the co-training method BID0 ) that utilizes outputs of multiple classifiers to iteratively build a new training dataset. In our case, instead of using multiple classifiers, we apply a temporal ensemble method to the selection network. For each unlabeled instance, two consecutive outputs of the selection network are compared to keep our training data clean. In addition, we have found that the balance between the number of samples per class is quite important for the performance of our network. We suggest a simple heuristics to balance the number of selected samples among the classes. By the proposed selection method, reliable samples can be added to the training set and uncertain samples including out-of-class data can be excluded. SST is a self-training framework, which iteratively adopts the newly annotated training data (details in Section 2.1). SST is also suitable for the incremental learning which is frequently used in many real applications when we need to handle gradually incoming data. In addition, the proposed SST is suitable for lifelong learning which makes use of more knowledge from previously acquired knowledge BID10; ). Since SSL can be learned with labeled and unlabeled data, any algorithm for SSL may seem appropriate for lifelong learning. However, conventional SSL algorithms are inefficient when out-of-class samples are included in the additional data. SST only add samples having high relevance in-class data and is suitable for lifelong learning. The main contributions of the proposed method can be summarized as follows:• For the conventional SSL problems, the proposed SST method not only performs comparable to other conventional SSL algorithms but also can be combined with other algorithms.• For the new SSL problems, the proposed SST does not show any performance degradation even with the out-of-class data.• SST requires few hyper-parameters and can be easily implemented.• SST is more suitable for lifelong learning compared to other SSL algorithms. To prove the effectiveness of our proposed method, first, we conduct experiments comparing the classification errors of SST and several other state-of-the-art SSL methods (; BID9 ;) in conventional SSL settings. Second, we propose a new experimental setup to investigate whether our method is more applicable to realworld situations. The experimental setup in BID3 samples classes among in-classes and out-classes. In the experimental setting in this paper, we sample unlabeled instances evenly in all classes. (details in Section 6.6 of the supplementary material). We evaluate the performance of the proposed SST using three public benchmark datasets: CIFAR-10, CIFAR-100 BID8 ), and SVHN . In this section, we introduce the of our research. First, we introduce some methods of self-training (; BID16 BID17) on which our work is based. Then we describe consistency regularization-based algorithms such as temporal ensembling . Self-training method has long been used for semi-supervised learning (; BID5 BID16 BID17 . It is a resampling technique that repeatedly labels unlabeled training samples based on the confidence scores and retrains itself with the selected pseudo-annotated data. Our proposed method can also be categorized as a self-training method. FIG0 shows an overview of our SSL system. Since our proposed algorithm is based on the selftraining, we follow its learning process. This process can be formalized as follows. 
However, most self-training methods assume that the labeled and unlabeled data are generated from the identical distribution. Therefore, in real-world scenarios, some instances with low likelihood according to the distribution of the labeled data are likely to be misclassified inevitably. Consequently, these erroneous samples significantly lead to worse in the next training step. To alleviate this problem, we adopt the ensemble and balancing methods to select reliable samples. Consistency regularization is one of the popular SSL methods and has been referred to many recent researches (; ; BID9 . Among them, Π model and temporal ensembling are widely used . They have defined new loss functions for unlabeled data. The Π model outputs f (x) andf (x) for the same input x by perturbing the input with different random noise and using dropout BID8, and then minimizes DISPLAYFORM0 2 ) between these output values. Temporal ensembling does not make different predictions f (x) andf (x), but minimizes the difference (DISPLAYFORM1 2) between the outputs of two consecutive iterations for computational efficiency. In spite of the improvement in performance, they require lots of things to consider for training. These methods have various hyperparameters such as'ramp up','ramp down','unsupervised loss weight' and so on. In addition, customized settings for training such as ZCA preprocessing and mean-only batch normalization BID7 are also very important aspects for improving the performance BID3.Algorithm 1 Training procedure of the proposed SST Require: x i, y i: training data and label Require: L, U: labeled and unlabeled datasets Require: I U: set of unlabeled sample indices Require: f n (·; θ n), f cl (·; θ c) and f sel (·; θ s): trainable SST model Require: α,, K, K re: hyper-parameters, 0 ≤ α < 1, 0 ≤ < 1 1: randomly initialize θ n, θ c, θ s 2: train f n (·; θ n), f cl (·; θ c) and f sel (·; θ s) for K epochs using L 3: repeat 4:initialize r DISPLAYFORM2 for each i ∈ I U do 6: DISPLAYFORM3 end if 10: DISPLAYFORM4 if z i > 1 − then I S ← I S ∪ {i} DISPLAYFORM0 retrain f n (·; θ n), f cl (·; θ c) and f sel (·; θ s) for K re epochs using T 19: until stopping criterion is true In this section, we introduce our selective self-training (SST) method. The proposed model consists of three networks as shown in the bottom part of FIG0. The output of the backbone network is fed into two sibling fully-connected layers -a classification network f cl (·; θ c) and a selection network f sel (·; θ s), where θ c and θ s are learnable parameters for each of them. In this paper, we define the classification and the selection score as r i = f cl (f n (x i ; θ n); θ c ) and DISPLAYFORM0 respectively, where f n (·; θ n) denotes the backbone network with learnable parameters θ n. Note that we define r i as the ant label and it belongs to one of the class labels r i ∈ Y = {1, 2, · · ·, C}. The network architecture of the proposed model is detailed in Section 6.2 in the supplementary material. As shown in FIG0, the proposed SST method can be represented in the following four steps. First, SST trains the network using a set of the labeled data L = {(x i, y i) | i = 1, · · ·, L}, where x i and y i ∈ {1, 2, · · ·, C} denote the data and the ground truth label respectively, which is a standard supervised learning method. 
The next step is to predict all the unlabeled data U = {x_i | i = L + 1, · · ·, N} and select a subset of the unlabeled data {x_i | i ∈ I_S} whose samples have high selection scores under the currently trained model, where I_S denotes the set of selected sample indices from I_U = {L + 1, · · ·, N}. Then, we annotate the selected samples with the pseudo-categories evaluated by the classification network and construct a new training dataset T composed of L and U_S = {(x_i, ŷ_i) | i ∈ I_S}. After that, we retrain the model with T and repeat this process iteratively. The overall process of SST is described in Algorithm 1, and the details of each of the four steps are described below. The SST algorithm first trains a model with supervised learning. At this time, the entire model (all three networks) is trained simultaneously. The classification network is trained using the softmax function and the cross-entropy loss, as in an ordinary supervised classification task. In the case of the selection network, the training labels are motivated by the discriminator of generative adversarial networks (GANs) (BID13). When the i-th sample x_i with the class label y_i is fed into the network, the target for the selection network is set as: DISPLAYFORM0 where I_L = {1, · · ·, L} represents the set of labeled sample indices. The selection network is trained with the generated target g_i. In particular, we use the sigmoid function for the final activation and the binary cross-entropy loss to train the selection network. Our selection network does not utilize the softmax function because it produces a relative value and can induce a high value even for an out-of-class sample. Instead, our selection network is designed to estimate an absolute confidence score using the sigmoid activation function. Consequently, our final loss function is the sum of the classification loss L_cl and the selection loss L_sel: L = L_cl + L_sel. After learning the model in a supervised manner, SST takes all instances of the unlabeled set U as input and predicts the classification result r_i and the selection score s_i for all i ∈ I_U. We utilize the classification result and the selection score (r_i and s_i) to annotate and choose unlabeled samples, respectively. In the context of self-training, removing erroneously annotated samples is one of the most important things for the new training dataset. Thus, we adopt temporal co-training and ensemble methods for the selection score in order to keep our training set from contamination. First, let r_i^t and r_i^{t-1} be the classification results of the current and previous iterations, respectively; we utilize the temporal consistency of these values. If these values are different, we set the ensemble score z_i = 0 to reduce uncertainty in selecting unlabeled samples. Second, inspired by temporal ensembling, we also utilize multiple previous network evaluations of unlabeled instances by updating the ensemble score z_i = αz_i + (1 − α)s_i, where α is a momentum weight for the moving average of ensemble scores. However, the aim of our ensembling approach is different from that of temporal ensembling: they want to alleviate different predictions for the same input, which result from different augmentations and noise applied to the input, whereas our aim is to select reliable (pseudo-)labeled samples. After that, we select unlabeled samples with a high ensemble score z_i. It is very important to set an appropriate threshold because it decides the quality of the added unlabeled samples for the next training.
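A minimal PyTorch sketch of the losses and the ensemble-score update described above follows. The exact form of the selection target (the equation displayed as DISPLAYFORM0 above) is not reproduced in this text; the version below, which labels a sample 1 when the classification result matches the ground truth and 0 otherwise, is an assumption motivated by the GAN-discriminator analogy.

import torch
import torch.nn.functional as F

def sst_supervised_loss(class_logits, sel_score, labels):
    # Joint loss on labeled data: cross-entropy for the classifier plus binary
    # cross-entropy for the selection network, L = L_cl + L_sel.
    r = class_logits.argmax(dim=1)                   # classification result r_i
    g = (r == labels).float()                        # assumed selection target g_i
    loss_cl = F.cross_entropy(class_logits, labels)
    loss_sel = F.binary_cross_entropy(sel_score, g)  # sel_score is the sigmoid output s_i
    return loss_cl + loss_sel

def update_ensemble_score(z, s, r_curr, r_prev, alpha=0.5):
    # Reset z_i to 0 when consecutive classification results disagree; otherwise
    # take the moving average z_i = alpha * z_i + (1 - alpha) * s_i.
    agree = (r_curr == r_prev)
    return torch.where(agree, alpha * z + (1 - alpha) * s, torch.zeros_like(z))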
If the classification network is trained well on the labeled data, the training accuracy will be very high. Since the selection network is trained with the target g_i generated from the classification result r_i, the selection score s_i will be close to 1.0. We set the threshold to 1 − ϵ and control it by changing ϵ. In this case, if the ensemble score z_i exceeds 1 − ϵ, the pseudo-label ŷ_i of the unlabeled sample is set to the classification result r_i. When we construct a new training dataset, we keep the number of samples of each class the same. The reason is that if one class dominates the others, the classification performance is degraded by the imbalanced distribution (Fernández et al., 2013). We also empirically found that naively creating a new training dataset fails to yield good performance. In order to fairly transfer the selected samples to the new training set, the amount of migration in each class should not exceed the number of selected samples in the class having the fewest selected samples. We take arbitrary samples in every class up to the maximum number satisfying this condition. The new training set T is composed of both the set of labeled samples L and the set of selected unlabeled samples U_S. The number of selected unlabeled samples is the same for all classes. After combining the labeled and selected pseudo-labeled data, the model is retrained with the new dataset for K_re epochs. In this step, the label for the selection network is obtained by a process similar to the equation above. The above steps (except for Section 3.1) are repeated for M iterations until (near-)convergence.
Table 1: Ablation study with 5 runs on the CIFAR-10 dataset. 'balance' denotes the usage of the data balancing scheme during data addition as described in Sec. 3.3, 'ensemble' denotes the usage of previous selection scores as in the 10th line of Algorithm 1, and 'multiplication' is the scheme of multiplying the top-1 softmax output of the classifier network with the selection score and using it as a new selection score.
To evaluate our proposed SST algorithm, we conduct two types of experiments. First, we evaluate the proposed SST algorithm for the conventional SSL problem where all unlabeled data are in-class. Then, SST is evaluated on the new SSL problem where some of the unlabeled data are out-of-class. In the case of in-class data, gradually gathering highly confident samples in U can help improve the performance. On the other hand, in the case of out-of-class data, a strict threshold is preferred to prevent uncertain out-of-class data from being involved in the new training set. Therefore, we have experimented with a decay mode that decreases the threshold in log-scale and a fixed mode that fixes the threshold in the way described in Section 4.2. We have experimented with our method for 100 iterations and determined ϵ by cross-validation in the decay mode. In the case of the fixed mode, ϵ is fixed and the number of iterations is determined by cross-validation. The details of the experimental setup and the network architecture are presented in Sections 6.1 and 6.2 of the supplementary material. We experiment with a couple of simple synthetic datasets (two moons, four spins) and three popular datasets, SVHN, CIFAR-10, and CIFAR-100 (BID8). The settings of labeled versus unlabeled data separation for each dataset are the same as in prior work (e.g., BID9). More details are provided in Section 6.3 of the supplementary material. The experimental results on the synthetic datasets can be found in Section 6.4 of the supplementary material.
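Returning to the selection and class-balancing step described at the start of this subsection, a minimal NumPy sketch is given below; the random per-class subsampling is one possible reading of the "arbitrary samples" wording, and the function names are illustrative only.

import numpy as np

def select_balanced(ensemble_scores, pseudo_labels, num_classes, eps):
    # Keep unlabeled samples whose ensemble score exceeds 1 - eps, then equalize the
    # number of samples per class by capping at the smallest per-class count.
    candidates = np.where(ensemble_scores > 1.0 - eps)[0]
    per_class = [candidates[pseudo_labels[candidates] == c] for c in range(num_classes)]
    n_keep = min(len(idx) for idx in per_class)
    if n_keep == 0:
        return np.array([], dtype=int)
    return np.concatenate([np.random.permutation(idx)[:n_keep] for idx in per_class])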
We have performed experiments on the CIFAR-10 dataset with combinations of three types of components. As described in Table 1, these are whether to use the data balancing scheme described in Section 3.3 (balance), whether to use the selection score ensemble in the 10th line of Algorithm 1 (ensemble), and whether to multiply the selection score with the top-1 softmax output of the classifier network to set a new selection score for comparison with the threshold (multiplication). First, when SST uses none of these, the error of 21.44% is higher than that of supervised learning, which does not use any unlabeled data. This is due to the problem of unbalanced data mentioned in subsection 3.3. When data balancing is used, the error is 14.43%, which is better than the baseline 21.44%. Adding the ensemble scheme results in an 11.82% error, and the multiplication scheme shows a slight drop in performance. Since all of the experiments use the same threshold, the number of candidate samples to be added is reduced by the multiplication with the top-1 softmax output, and the variation becomes smaller because only confident data are added. However, we have not used the multiplication scheme in what follows because the softmax classification output is dominant in the multiplication. Therefore, we have used only the balance and ensemble schemes in the following experiments. TAB2 shows the results of supervised learning, conventional SSL algorithms, and the proposed SST on the CIFAR-10, SVHN, and CIFAR-100 datasets. Our baseline model with supervised learning performs slightly better than what has been reported in other papers (e.g., BID9) because of our different settings, such as Gaussian noise on inputs, optimizer selection, the mean-only batch normalization, and the learning rate parameters.
(Figure 2: SST on CIFAR-10, SVHN, and CIFAR-100 datasets with 5 runs. The x-axis is the iteration, the blue circle is the average number of data used for training, and the red diamond is the average accuracy.)
TAB2 (classification error; columns correspond to CIFAR-10, SVHN, and CIFAR-100):
BID9 | 12.31 ± 0.28% | 3.95 ± 0.21% | -
Π model | 12.36 ± 0.31% | 4.82 ± 0.17% | 39.19 ± 0.36%
TempEns | 12.16 ± 0.24% | 4.42 ± 0.16% | 38.65 ± 0.51%
TempEns + SNTG | 10.93 ± 0.14% | 3.98 ± 0.21% | 40.19 ± 0.51%*
VAT | 11.36 ± 0.34% | 5.42 ± 0.22% | -
VAT + EntMin | 10.55 ± 0.05% | 3.86 ± 0.11% | -
pseudo-label (BID3) | 17.78 ± 0.57% | 7.62 ± 0.29% | -
Proposed method (SST)* | 11.82 ± 0.40% | 6.88 ± 0.59% | 34.89 ± 0.75%
SST + TempEns + SNTG* | 9.99 ± 0.31% | 4.74 ± 0.19% | 34.94 ± 0.54%
For all the datasets, we have also performed experiments with a model of SST combined with temporal ensembling (TempEns) and SNTG, labeled as SST+TempEns+SNTG in the table. For this model, the pseudo-labels of SST at the last iteration are considered as the true class labels. Figure 2 shows the number of samples used in the training and the corresponding accuracy on the test set for each dataset. The baseline network yields test errors of 18.97% and 5.57% when trained with 4,000 (sampled) and 50,000 (all) labeled images, respectively. The test error of our SST method reaches 11.82%, which is comparable to other algorithms, while the SST+TempEns+SNTG model is 1.83% better than the SST-only model. The baseline model for the SVHN dataset is trained with 1,000 labeled images and yields a test error of 13.45%. Our proposed method has an error of 6.88%, which is relatively higher than those of other SSL algorithms. Performing better than SST, SST+TempEns+SNTG reaches an error of 4.74%, which is worse than that of the TempEns+SNTG model. We suspect two reasons for this.
The first is that SVHN dataset is not well balanced, and the second is that SVHN is a relatively easy dataset, so it seems to be easily added to the hard labels. With data balancing, the SST is still worse than other algorithms. More details are provided in Section 6.5 in the supplementary material. We think this phenomenon owes to the use of hard labels in SST where incorrectly estimated samples deteriorate the performance. conjectured that the hyper-parameter in the current temporal ensembling and SNTG may not have been optimized. We have experimented with the following settings for real-world applications. The dataset is categorized into six animal and four non-animal classes as similarly done in BID3. In CIFAR-10, 400 images per animal class are used as the labeled data (total 2,400 images for 6 animal classes) and a pool of 20,000 images with different mixtures of both animal and non-animal classes are experimented as an unlabeled dataset. In CIFAR-100, 5,000 labeled data (100 images per animal class) and a total of 20,000 unlabeled images of both classes with different mixed ratios are utilized. Unlike the experimental setting in BID3, we have experimented according to the ratio (%) of the number of out-of-class data in the unlabeled dataset. More details are provided in Section 6.6 in the supplementary material. As mentioned in Section 4, in the presence of out-of-class samples, a strict threshold is required. If all of the unlabeled data is assumed to be in-class, the decay mode may be a good choice. However, in many real-applications, out-of-class unlabeled data is also added to the training set in the decay mode and causes poor performance. In avoidance of such matter, we have experimented on a fixed mode of criterion threshold on adding the unlabeled data. Unlike the decay mode that decrements the threshold value, SST in the fixed mode sets a fixed threshold at a reasonably high value throughout the training. Our method in the fixed mode should be considered more suitable for real-applications but empirically shows lower performances in FIG1 and TAB4 than when running in the decay mode. The difference between the decay mode and the fixed mode are an unchangeable and the initial ensemble. Setting a threshold value for the fixed mode is critical for a feasible comparison against the decay mode. FIG1 shows the average of the obtained when performing SST five times for each ratio in CIFAR-10. As shown in FIG1, as the number of iteration increases, the threshold in the decay mode decreases and the number of additional unlabeled data increases. Obviously, while the different percentage of the non-animal data inclusion show different trends of training, in the cases of 0 ∼ 75% of non-animal data included in the unlabeled dataset, the additionally selected training data shows an initial increase at 30 th ∼ 40 th iteration. On the other hand, when the unlabeled dataset is composed of only the out-of-class data, selective data addition of our method initiates at 55 th ∼ 65 th training iteration. This tendency has been observed in previous researches on classification problems and we have set the threshold value fixed at a value between two initiating points of data addition as similarly done in the works of BID11 BID14. We have set the fixed threshold based on 47th iteration (between 40 and 55). For a more reliable selection score, we have not added any unlabeled data to the new training set and have trained our method with the labeled data only for 5 iterations. 
As it can be seen in TAB4, in the case of SST in the decay mode, the performance has been improved when the unlabeled dataset consists only in-class animal data, but when the unlabeled pool is filled with only out-of-class data, the performance is degraded. For the case of SST with a fixed threshold value, samples are not added and the performance was not degraded at 100% nonanimal ratio as shown in FIG1 (c). Furthermore, at 0% of out-of-class samples in the pool, there is a more improvement in the performance than at 100 % of out-of-class samples while still being inferior to the improvement than the decay mode. Because less but stable data samples are added by SST with a fixed threshold, the performance is improved for all the cases compared to that of supervised learning. Therefore, it is more suitable for real applications where the origin of data is usually unknown. We proposed selective self-training (SST) for semi-supervised learning (SSL) problem. Unlike conventional methods, SST selectively samples unlabeled data and trains the model with a subset of the dataset. Using selection network, reliable samples can be added to the new training dataset. In this paper, we conduct two types of experiments. First, we experiment with the assumption that unlabeled data are in-class like conventional SSL problems. Then, we experiment how SST performs for out-of-class unlabeled data. For the conventional SSL problems, we achieved competitive on several datasets and our method could be combined with conventional algorithms to improve performance. The accuracy of SST is either saturated or not depending on the dataset. Nonetheless, SST has shown performance improvements as a number of data increases. In addition, the of the combined experiments of SST and other algorithms show the possibility of performance improvement. For the new SSL problems, SST did not show any performance degradation even if the model is learned from in-class data and out-of-class unlabeled data. Decreasing the threshold of the selection network in new SSL problem, performance degrades. However, the output of the selection network shows different trends according to in-class and out-of-class. By setting a threshold that does not add out-of-class data, SST has prevented the addition of out-of-class samples to the new training dataset. It means that it is possible to prevent the erroneous data from being added to the unlabeled dataset in a real environment. 6 SUPPLEMENTARY MATERIAL The basic settings of our experiments are as follows. Different from , we use stochastic gradient descent (SGD) with a weight decay of 0.0005 as an optimizer. The momentum weight for the ensemble of selection scores is set to α = 0.5. Also, we do not apply mean-only batch normalization layer BID7 and Gaussian noise. We follow the same data augmentation scheme in consisting of horizontal flips and random translations. However, ZCA whitening is not used. In the supervised learning phase, we train our model using batch size 100 for 300 epochs. After that, in the retraining phase, we train using the same batch size for 150 epochs with the new training dataset. The learning rate starts from 0.1. In the supervised learning phase, it is divided by 10 at the 150-th and 225-th epoch. In the retraining phase, it is divided by 10 at the 75-th and 113-th epoch. The number of training iteration and thresholding are very important parameters in our algorithm and have a considerable correlation with each other. 
In the first experiment, the iteration number remains fixed and the growth rate of ϵ is adjusted so that the validation accuracy saturates near the settled iteration number. While the validation accuracy is evaluated using cross-validation, we set the number of training iterations to 100 so that the model is trained enough until it saturates. ϵ is increased in log-scale and begins at a very small value (10^-5) where no data is added. The growth rate of ϵ is determined according to when the validation accuracy saturates. The stopping criterion is that the accuracy of the current iteration reaches the average accuracy of the previous 20 steps. If the stopping iteration is much less than 100, the growth rate should be reduced so that the data is added more slowly. If the stopping iteration significantly exceeds 100 iterations, the growth rate should be increased so that the data is added more easily. We allow 5 iterations as a deviation from 100 iterations, and the growth rate of ϵ is left unchanged in this interval. As a result, ϵ is gradually increased in log-scale by a factor of 10 every 33 iterations in CIFAR-10 and SVHN. In the case of CIFAR-100, ϵ is increased by a factor of 10 in log-scale every 27 iterations. In the second experiment, we leave ϵ fixed and simply train the model until the stopping criteria are satisfied. Other details are the same as those of the first experiment. We used two types of networks. The network for training the synthetic datasets is shown in TAB7 and consists of two hidden layers with 30 nodes. The network structure for CIFAR-10, SVHN, and CIFAR-100 consists of convolutions, and its structure is shown in TAB6. We used standard batch normalization and Leaky ReLU with slope 0.1. We have experimented with the CIFAR-10, SVHN, and CIFAR-100 datasets, which consist of 32 × 32 pixel RGB images. CIFAR-10 and SVHN have 10 classes and CIFAR-100 has 100 classes. Overall, standard data normalization and augmentation schemes are used. For data augmentation, we used random horizontal flipping and random translation by up to 2 pixels. In the case of SVHN, random horizontal flipping is not used. To show that the SST algorithm is comparable to the conventional SSL algorithms, we experimented with the popular setting (e.g., BID9). The validation set used in the cross-validation to obtain the growth rate of ϵ consists of 5,000 images extracted from the training set. After ϵ is obtained, all the training data are used. The following is the standard labeled/unlabeled split. CIFAR-10: 4k labeled data (400 images per class), 46k unlabeled data (4,600 images per class). As synthetic datasets, two moons and four spins were tested in the same manner as SNTG. Each dataset has 6,000 training and 6,000 test samples. In the case of two moons, there are two classes y ∈ {0, 1}, and in the case of four spins, y ∈ {0, 1, 2, 3}. In the 6,000 training data, there are 12 labeled data and 5,988 unlabeled data. Thus, for two moons, each class has 6 labeled points and for four spins, each class has 3 labeled points. Because the number of labeled data points is too small, random sampling can lead to sampling similar points. Therefore, we randomly sampled the labeled data with the constraint that the Euclidean distance between data points is greater than 0.7. For these datasets, a total of 50 iterations was performed, and ϵ was increased from 10^-7 to 10^-4.5 on a log scale. FIG0 shows the basic setting of the synthetic datasets, and Figures 5 and 6 show the progress of the SST algorithm.
The SST algorithm improves performance by gradually expanding the set of selected data in the synthetic datasets. CIFAR-10: When the network was trained with 1k and 2k labeled images, the test errors were 38.71% and 26.99%, respectively. The test errors of the SST algorithm were 23.15% and 15.72%; SST performs better than the Π model but worse than Mean Teacher in the 1k test. In the 2k test, SST performs better than the Π model and similarly to Mean Teacher.
Table 6: Classification error on CIFAR-10 (1k and 2k labels) with 5 runs using in-class unlabeled data.
Method | CIFAR-10 (1k) | CIFAR-10 (2k)
supervised (sampled) | 38.71 ± 0.47% | 26.99 ± 0.79%
Π model | 27.36 ± 1.20% | 18.02 ± 0.60%
Mean Teacher BID9 | 21.55 ± 1.48% | 15.73 ± 0.31%
Proposed method (SST) | 23.15 ± 0.61% | 15.72 ± 0.50%
For the balancing experiments, in SVHN, 1,000 images are used as the labeled data and 45,000 balanced unlabeled images are used. As a result, SST is still worse than other algorithms. As mentioned in Section 4.1, we think that incorrectly estimated samples by SST deteriorate the performance. BID3 adds only four unlabeled classes and tests according to the ratio of unlabeled classes. For example, at 50%, two classes are in-class and two classes are out-of-class. However, we experimented with the ratio of the number of non-animal data. Thus, at 50% in CIFAR-10, the unlabeled data consists of 50% in-class and 50% out-of-class data. The data for each ratio are shown in TAB8, and the data categories for animal and non-animal are shown in TAB9. TAB10 shows the results of a general test on other algorithms. First, self-training (BID16; BID17) without a threshold does not improve performance even at 0%, and performance at 100% is degraded. When SST is applied with the softmax output as a threshold, without the selection network, the performance is improved at 0%, but the performance is degraded at 100%. Although the threshold was 0.9999, unlabeled data was still added when 100% of the unlabeled data was non-animal. In the new SSL problem, the experiment in decay mode is to find a gap between the two initiating points of data addition for 0 ∼ 75% non-animal data and for 100% non-animal data. In our experiment, the growth rate of ϵ in CIFAR-10 is applied to CIFAR-100. (The smaller the growth rate of ϵ, the smaller the difference in ϵ between iterations. Therefore, even if the difference in ϵ between intervals is the same, depending on the growth rate of ϵ, the difference in iterations can be greater.) The fixed threshold was set to the threshold value at 35 iterations. In the decay mode, the performance is much improved at 0%, and at 100% the performance is degraded. On the other hand, in the fixed mode, there was no performance degradation from 0% to 100%. In CIFAR-100, the difference between 0% and 100% is smaller than in CIFAR-10, because the gap between animal and non-animal classes is small and the amount of additional data is small. FIG5 shows the experimental results.
Our proposed algorithm does not use all of the unlabeled data for the training, and it rather uses them selectively.
1,318
scitldr
Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function. Generative modeling of complicated data such as images and audio is a long-standing challenge in machine learning. While unconditional sampling is an interesting technical problem, it is arguably of limited practical interest in its own right: if one needs a non-specific image (or sound, song, document, etc.), one can simply pull something at random from the unfathomably vast media databases on the web. But that naive approach may not work for conditional sampling (i.e., generating data to match a set of user-specified attributes), since as more attributes are specified, it becomes exponentially less likely that a satisfactory example can be pulled from a database. One might also want to modify some attributes of an object while preserving its core identity. These are crucial tasks in creative applications, where the typical user desires fine-grained controls BID0.One can enforce user-specified constraints at training time, either by training on a curated subset of data or with conditioning variables. These approaches can be effective if there is enough labeled data available, but they require expensive model retraining for each new set of constraints and may not leverage commonalities between tasks. Deep latent-variable models, such as Generative Adversarial Networks (GANs; BID8 and Variational Autoencoders (VAEs; BID15 BID26, learn to unconditionally generate realistic and varied outputs by sampling from a semantically structured latent space. One might hope to leverage that structure in creating new conditional controls for sampling and transformations BID2 .Here, we show that new constraints can be enforced post-hoc on pre-trained unsupervised generative models. This approach removes the need to retrain the model for each new set of constraints, allowing users to more easily define custom behavior. We separate the problem into creating an unsupervised model that learns how to reconstruct data from latent embeddings, and leveraging the latent structure exposed in that embedding space as a source of prior knowledge, upon which we can impose behavioral constraints. Our key contributions are as follows:Figure 1: (a) Diagram of latent constraints for a VAE. 
We use one critic D attr to predict which regions of the latent space will generate outputs with desired attributes, and another critic D realism to predict which regions have high mass under the marginal posterior, q(z), of the training data. (b) We begin by pretraining a standard VAE, with an emphasis on achieving good reconstructions. (c) To train the actor-critic pair we use constraint-satisfaction labels, c, to train D to discriminate between encodings of actual data, z ∼ q(z|x), versus latent vectors z ∼ p(z) sampled from the prior or transformed prior samples G(z ∼ p(z), y). Similar to a Conditional GAN, both G and D operate on a concatenation of z and a binary attribute vector, y, allowing G to learn conditional mappings in latent space. If G is an optimizer, a separate attribute discriminator, D attr is trained and the latent vector is optimized to reduce the cost of both D attr and D realism. (d) To sample from the intersection of these regions, we use either gradient-based optimization or an amortized generator, G, to shift latent samples from either the prior (z ∼ p(z), sampling) or from the data (z ∼ q(z|x), transformation).• We show that it is possible to generate conditionally from an unconditional model, learning a critic function D(z) in latent space and generating high-value samples with either gradient-based optimization or an amortized actor function G(z), even with a nondifferentiable decoder (e.g., discrete sequences).• Focusing on VAEs, we address the tradeoff between reconstruction quality and sample quality (without sacrificing diversity) by enforcing a universal "realism" constraint that requires samples in latent space to be indistinguishable from encoded data (rather than prior samples).• Because we start from a VAE that can reconstruct inputs well, we are able to apply identitypreserving transformations by making the minimal adjustment in latent space needed to satisfy the desired constraints. For example, when we adjust a person's expression or hair, the is still clearly identifiable as the same person (see Figure 5). This contrasts with pure GAN-based transformation approaches, which often fail to preserve identity.• Zero-shot conditional generation. Using samples from the VAE to generate exemplars, we can learn an actor-critic pair that satisfies user-specified rule-based constraints in the absence of any labeled data. Decoder-based deep generative models such as VAEs and GANs generate samples that approximate a population distribution p (x) by passing samples from some simple tractable distribution p(z) (often p(z) N (0, I)) through a deep neural network. GANs are trained to fool an auxiliary classifier that tries to learn to distinguish between real and synthetic samples. VAEs are fit to data using a variational approximation to maximum-likelihood estimation: DISPLAYFORM0 Figure 2: Typical VAEs use a pixel-wise data likelihood, N (µ x (z), σ x I), with σ x = 1 to produce coherent samples at the expense of visual and conceptual blurriness (Row 3). Some reconstructions (Row 2) actually change attributes of the original data. Decreasing σ x to 0.1 maximizes the ELBO (supplemental TAB3) and increases the fidelity of reconstructions (Row 4) at the cost of sample realism (Row 5). Using an actor to shift prior samples to satisfy the realism constraint, we achieve more realistic samples without sacrificing sharpness (Row 6). 
The samples are mapped to the closest point in latent space that both satisfies the realism constraint and has the same attributes as the original data.where the "encoder" distribution q(z | x) is an approximation to the posterior p(z | x), π(x; g(z)) p(x | z) is a tractable likelihood function that depends on some parameters output by a "decoder" function g(z), and q and g are fit to maximize the evidence lower bound (ELBO) L ELBO. The likelihood π(x; g) is often chosen to be a product of simple distributions such as π(x; g) = N (x; g, σ DISPLAYFORM1 GANs and VAEs have complementary strengths and weaknesses. GANs suffer from the "modecollapse" problem, where the generator assigns mass to a small subset of the support of the population distribution-that is, it may generate realistic samples, but there are many more realistic samples that it cannot generate. This is particularly problematic if we want to use GANs to manipulate data rather than generate new data; even GAN variants that include some kind of inference machinery (e.g., BID5 BID6 BID24 to determine what z best matches some x tend to produce reconstructions that are reminiscent of the input but do not preserve its identity. On the other hand, VAEs (especially those with simple likelihoods π) often exhibit a tradeoff between sharp reconstructions and sensible-looking samples (see Figure 2). That is, depending on what hyperparameters they are trained with (e.g., latent dimensionality and the scale of the likelihood term), VAEs tend to either produce blurry reconstructions and plausible (but blurry) novel samples, or bizarre samples but sharp reconstructions. It has been argued BID20 that this is due to the "holes" problem; the decoder is trained on samples from the marginal posterior q(z) 1 N n q(z | x n), which may have very high KL divergence to the presupposed marginal p(z) BID10. In particular, if the decoder, g(z), can reconstruct arbitrary values of x with high accuracy (as in the case of small σ x) then the typical posterior p(z | x) will be highly concentrated. We show this experimentally in supplemental FIG7. If q(z | x) underestimates the posterior variance (as it usually does), then the marginal posterior q(z) will also be highly concentrated, and samples from p(x) = z p(z)p(x | z)dz may produce that are far from typical reconstructions DISPLAYFORM2 If we tune σ x to maximize the ELBO BID1, we find the optimal σ x ≈ 0.1 (supplemental TAB3). Figure 2 shows that this choice does indeed lead to good reconstructions but strange-looking samples. We look at the two latent dimensions that have the lowest average posterior standard deviation on the training set, taking low variance in z space as a proxy for influence over the generated images. All other latent dimensions are held fixed at their original values (from a sample from p(z) on the left, and from a sample from q(z | x) for a held-out x on the right). Gray x marks correspond to the points in latent space of the generated images to the right. The cross-section on the left, taken from a prior sample, shows contours that point towards more realistic looking digits. In the cross-section on the right, a sample from the validation set (indicated by orange squares) resides within a local maximum of the critic, as one would hope. 
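To make the role of σ_x concrete, the following is a generic PyTorch sketch of the per-example ELBO with an isotropic Gaussian likelihood N(x; g(z), σ_x² I) and a Gaussian encoder; it is not the authors' implementation. Shrinking σ_x upweights the reconstruction term relative to the KL term, which is the reconstruction-versus-sample-quality tradeoff described above.

import torch

def gaussian_elbo(x, x_recon, mu_z, logvar_z, sigma_x=0.1):
    # ELBO = E_q[log N(x; g(z), sigma_x^2 I)] - KL(q(z|x) || N(0, I)), estimated with a
    # single latent sample (x_recon = g(z), z ~ q(z|x)). The text reports sigma_x = 0.1
    # as the ELBO-maximizing value and sigma_x = 1 as the blurrier, more coherent setting.
    d = x[0].numel()
    sq_err = ((x - x_recon) ** 2).flatten(1).sum(dim=1)
    log_lik = -0.5 * (sq_err / sigma_x ** 2
                      + d * torch.log(torch.tensor(2 * torch.pi * sigma_x ** 2)))
    kl = -0.5 * (1 + logvar_z - mu_z.pow(2) - logvar_z.exp()).sum(dim=1)
    return (log_lik - kl).mean()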
Conditional GANs (CGAN; BID21 and conditional VAEs (CVAE; BID28 can generate samples conditioned on attribute information when available, but they must be trained with knowledge of the attribute labels for the whole training set, and it is not clear how to adapt them to new attributes without retraining from scratch. Furthermore, CGANs and CVAEs suffer from the same problems of mode-collapse and blurriness as their unconditional cousins. We take a different approach to conditional generation and identity-preserving transformation. We begin by training an unconditional VAE with hyperparameters chosen to ensure good reconstruction (at the expense of sample quality). We then train a "realism" critic to predict whether a given z maps to a high-quality sample. We also train critics to predict whether a given z maps to a sample that manifests various attributes of interest. To generate samples that are both realistic and exhibit desired attributes, one option is to optimize random z vectors until they satisfy both the realism and attribute critics. Alternately, we can amortize this cost by training an "actor" network to map a random set of z vectors to a subregion of latent space that satisfies the constraints encoded by the critics. By encouraging these transformed z vectors to remain as close as possible to where they started, we alleviate the mode-collapse problem common to GANs. Our approach is summarized visually in Figure 1. The details follow in sections 3, 4, 5, and 6. We define the realism constraint implicitly as being satisfied by samples from the marginal posterior q(z) 1 N n q(z | x n) and not those from p(z). By enforcing this constraint, we can close the gap between reconstruction quality and sample quality (without sacrificing sample diversity).As shown in Figure 1, we can train a critic D to differentiate between samples from p(z) and q(z). The critic loss, L D (z), is simply the cross-entropy, with labels c = 1 for z ∼ q(z | x) and c = 0 for z ∼ p(z). We found that the realism critic had little trouble generalizing to unseen data; that is, it was able to recognize samples from q(z | x held−out) as being "realistic" FIG0 ).Sampling from the prior is sufficient to train D for models with lower KL Divergence, but if the KL Divergence between q and p is large, the chances of sampling a point p(z) that has high probability under q(z) becomes vanishingly small. This leads to poor sample quality and makes it difficult for D to learn a tight approximation of q(z) solely by sampling from p(z). Instead, we use an inner-loop of gradient-based optimization, DISPLAYFORM0, to move prior samples to points deemed more like q(z) by D. For clarity, we introduce the shorthand DISPLAYFORM1 ). This gives us our critic loss for the realism constraint: DISPLAYFORM2 Figure 4: Conditional generation with a CGAN actor-critic pair acting in the latent space of a VAE with σ x = 0.1. Each row starts from a different prior sample and maps it to a new point in latent space that satisfies both the attribute constraints and the realism constraint. The attribute constraints are changed one at a time to produce as smooth a transition as possible from left to right. The bottom CGAN is regularized during training to prefer small shifts in latent space (λ dist = 0.1), while the top is not (λ dist = 0.0). 
Figure 4: Conditional generation with a CGAN actor-critic pair acting in the latent space of a VAE with σ_x = 0.1. Each row starts from a different prior sample and maps it to a new point in latent space that satisfies both the attribute constraints and the realism constraint. The attribute constraints are changed one at a time to produce as smooth a transition as possible from left to right. The bottom CGAN is regularized during training to prefer small shifts in latent space (λ_dist = 0.1), while the top is not (λ_dist = 0.0). Compared to the images generated by the unregularized model, the images generated by the regularized model are much less diverse across columns, suggesting that the regularization does indeed enforce some degree of identity preservation. The regularized model produces images that are somewhat more diverse across rows, suggesting that the regularization fights mode collapse (arguably at the expense of image quality). For each column, the complete list of attributes is given in supplemental TAB2.

Since this inner loop of optimization can slow down training, we amortize the generation by using a neural network as a function approximator. There are many examples of such amortization tricks, including the encoder of a VAE, the generator of a GAN, and fast neural style transfer (BID31; BID18). As with a traditional GAN, the parameters of the function G are updated to maximize the value D ascribes to the shifted latent points. One of the challenges of using a GAN in this situation is that it is prone to mode-collapse. However, an advantage of applying the GAN in latent space is that we can regularize G to find the closest point in latent space that satisfies D, thus encouraging diverse solutions. We introduce a regularization term, L_dist(z', z) = (1/σ̄_z^2) log(1 + (z' − z)^2), summed over latent dimensions, to encourage nearby solutions while allowing more exploration than a mean-squared-error term would. Because a VAE utilizes only a fraction of its latent dimensions, we scale the distance penalty of each dimension by its utilization, as indicated by the squared reciprocal of the scale σ_z(x) of the encoder distribution q(z | x) averaged over the training dataset, σ̄_z = (1/N) Σ_n σ_z(x_n), applied per dimension.

We want to generate samples that are realistic, but we also want to control what attributes they exhibit. Given binary attribute labels y for a dataset, we can accomplish this by using a CGAN in the latent space, which amounts to replacing D(z) and G(z) with conditional versions D(z, y) and G(z, y) and concatenating y to z as input.

Figure 5: Identity-preserving transformations with optimization. Two separate critics are trained, one for attributes and one for the realism constraint. Starting at the latent points corresponding to the data reconstructions, we then perform gradient ascent in latent space on a weighted combination of critic values (1.0 attribute, 0.1 marginal posterior), stopping when a threshold value is passed for both critics. Images remain semantically close to the original because the pixel-wise likelihood of VAE training encourages identity-preserving reconstructions, and the dynamics of gradient ascent are naturally limited to finding solutions close in latent space. Panels are black for attributes of the original image, as the procedure just returns the original point in latent space.

If both the actor and critic see attribute information, G must find points in latent space that could be samples from q(z) with attributes y. This procedure is computationally inexpensive relative to training a generative model from scratch. In most of our experiments, we use a relatively large CGAN actor-critic pair (4 fully connected ReLU layers of 2048 units each), which during training uses about 96× fewer FLOPs per iteration than the unconditional VAE. We also trained a much smaller CGAN actor-critic pair (3 fully connected ReLU layers of 256 units), which uses about 2884× fewer FLOPs per iteration than the VAE and achieves results only slightly worse than the larger CGAN (supplemental FIG5 and TAB0).
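For illustration, a minimal sketch of such a latent-space actor with the distance regularizer described above; the gated-shift parameterization mirrors the actor description given in the appendix later in this document, but the layer sizes, names, and loss weighting here are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class LatentActor(nn.Module):
    """Maps latent vectors to nearby points that the critic(s) accept."""
    def __init__(self, z_dim, hidden=2048, n_layers=4):
        super().__init__()
        layers, d = [], z_dim
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers += [nn.Linear(d, 2 * z_dim)]   # predicts (delta_z, gate logits)
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        delta, gate_logits = self.net(z).chunk(2, dim=-1)
        gate = torch.sigmoid(gate_logits)
        return (1 - gate) * z + gate * delta   # gated shift of the input latent

def actor_loss(critic, z, z_shifted, sigma_bar, lambda_dist=0.1):
    """Realism term plus the per-dimension distance penalty L_dist, scaled by 1/sigma_bar^2."""
    realism = -torch.log(critic(z_shifted) + 1e-8).mean()
    dist = (torch.log1p((z_shifted - z) ** 2) / sigma_bar ** 2).sum(dim=-1).mean()
    return realism + lambda_dist * dist
```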
Figure 4 demonstrates the quality of conditional samples from a CGAN actor-critic pair and the effect of the distance penalty, which constrains generation to be closer to the prior sample, maintaining similarity between samples with different attributes. The regularized CGAN actor has less freedom to ignore modes by pushing many random z vectors to the same area of the latent space, since it is penalized for moving samples from p(z) too far. The increased diversity across rows of the regularized CGAN is evidence that this regularization does fight mode-collapse (additional qualitative evidence is in supplemental Figures 7 and 8). However, without a distance penalty, samples appear a bit more realistic and exhibit more prominent attributes. This is supported by the classification accuracies in TAB0, which reports the accuracy of a separate model trained to classify attributes from images, evaluated on test data and on generated images. We condition and evaluate the generated images on the same labels as the test data. For comparison, the results of a similar task using invertible CGANs for generation (BID24) are provided; however, since the full list of salient attributes was not given in that paper, we emphasize that the numbers are not directly comparable, as the two experiments use slightly different sets of attribute labels. We also measure the distance in latent space that prior samples are shifted, weighted by 1/σ̄_z^2. Actors trained with a latent distance penalty λ_dist have slightly worse accuracy, but find latent points much closer to the prior samples and produce a greater diversity of images (see supplemental Figures 7 and 8). Interestingly, an actor trained without a distance penalty achieves higher classification accuracy than the test set itself, possibly by generating images with more exaggerated and distinctive features than real data. A "small model" CGAN with 85× fewer parameters (3 fully connected layers of 256 units) generates images (supplemental FIG5) of comparable quality. Due to the smaller capacity, the model finds more local solutions (smaller z_MSE) that have slightly lower attribute accuracy, but are more visually similar to the prior sample even without an explicit regularization term.

Although we used a VAE as the base generative model, our approach could also be used to generate high-quality conditional samples from pretrained classical autoencoders. We show in supplemental FIG6 that we obtain reasonably good conditional samples (albeit with high-frequency spatial artifacts) as σ_x → 0 (equivalent to a classical autoencoder). Learning the decoder using VAE training encourages q(z) to fill up as much of the latent space as possible (without sacrificing reconstruction quality), which in turn encourages the decoder to map more of the latent space to reasonable-looking images. The prior p(z) = N(0, I) also imposes a natural scale on the latent variables.

If we have a VAE that can produce good reconstructions of held-out data, we can transform the attributes of the output by gradient-based optimization. We simply need to train a critic, D_attr(z), to predict the attribute labels p(y | z) of the data embeddings z ∼ q(z | x), trained with a cross-entropy loss. Then, starting from a data point, z ∼ q(z | x), we can perform gradient descent on the realism constraint and attribute constraint jointly, L_D_real(z) + λ_attr L_D_attr(z). Note that it is helpful to maintain the realism constraint to keep the image from distorting unrealistically.
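A minimal, purely illustrative sketch of this joint optimization (not the authors' code): starting from an encoding z ∼ q(z | x), take gradient steps that raise both critics until both pass a threshold. The weighting (1.0 attribute, 0.1 realism) and stopping rule follow the figure description above; the critic interfaces, step size, and threshold are assumptions.

```python
import torch

def transform(z0, d_real, d_attr, target_y, w_attr=1.0, w_real=0.1,
              lr=1e-1, max_steps=100, threshold=0.5):
    """Shift z0 toward latent points that satisfy both the realism and attribute critics.

    d_real(z) and d_attr(z, y) are assumed to return probabilities in (0, 1).
    """
    z = z0.detach().clone()
    for _ in range(max_steps):
        z = z.clone().requires_grad_(True)
        score = (w_real * torch.log(d_real(z) + 1e-8)
                 + w_attr * torch.log(d_attr(z, target_y) + 1e-8)).sum()
        (grad,) = torch.autograd.grad(score, z)
        z = (z + lr * grad).detach()
        if (d_real(z) > threshold).all() and (d_attr(z, target_y) > threshold).all():
            break
    return z
```

Because each run only re-weights the same pretrained critics, different combinations of realism, attribute, and distance weightings can be explored without retraining any network, which is the advantage noted above.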
Using the same procedure, we can also conditionally generate new samples (supplemental Figure 9) by starting from z ∼ p(z). Figure 5 demonstrates transformations applied to samples from the held-out evaluation dataset. Note that since the reconstructions are close to the original images, the transformed images also maintain much of their structure. This contrasts with supplemental Figure 10, where a distance-penalty-free CGAN actor produces transformations that share attributes with the original but shift identity. We could preserve identity by introducing a distance penalty, but we find that it is much easier to find the correct weighting of realism cost, attribute cost, and distance penalty through optimization, as each combination does not require retraining the network.

So far, we have assumed access to labeled data to train attribute classifiers. We can remove the need to provide labeled examples by leveraging the structure learned by our pre-trained model, using it to generate exemplars that are scored by a user-supplied reward function. If we constrain the reward function to be bounded, c(x): R^N → [0, 1], the problem becomes very similar to previous GAN settings, but now the actor, G, and critic, D, are working together. D aims to best approximate the true value of each latent state, E_{x∼p(x|z)}[c(x)], and G aims to shift samples from the prior to high-value states. The critic loss is the cross-entropy from c(x), and the actor loss is the same as L_G in equation 3, where we again have a distance penalty to promote diversity of outputs. Note that the reward function and VAE decoder need not necessarily be differentiable, as the critic learns a value function to approximate the reward, which the actor uses for training. To highlight this, we demonstrate that the output of a recurrent VAE model can be constrained to satisfy hard-coded, rule-based constraints. We first train an LSTM VAE (details in the Appendix) on melodic fragments. Each melody, m, is represented as a sequence of categorical variables. In order to examine our ability to constrain the pitch classes and note density of the outputs, we define two reward functions: one that encourages notes from a set of pitches P, and another that encourages melodies to have at least d notes. Figure 6 gives an example of controlling the pitch class and note density of generated outputs, which is quantitatively supported by the results in Table 2. During training, the actor goes through several phases of exploration and exploitation, oscillating between expanding to find new modes with high reward and then contracting to find the nearest locations of those modes, eventually settling into high-value states that require only small movements in the latent space (supplemental Figure 11).

Figure 6: Transformations from a prior sample for the Melody VAE model. In each 16-bar pianoroll, time is in the horizontal direction and pitch in the vertical direction. In the prior sample, notes falling outside of the C Major scale are shown in red. After transformation by G_{P=CMaj, d=0}, all sampled notes fall within the scale, without a significant change to note density. After transformation of the original z by G_{P=CMaj, d=192}, all sampled notes lie within the scale and the density increases beyond 192. Synthesized audio of these samples can be heard at https://goo.gl/ouULt9.
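The exact form of these rewards is not reproduced in this extract, so the following is a hedged sketch of what such rule-based, bounded reward functions could look like for the 130-class melody encoding described in the appendix below (128 note-on pitches plus hold and rest states); the event ordering, functional forms, and weighting are assumptions.

```python
import numpy as np

HOLD, REST = 128, 129   # assumed indices for the two non-pitch states

def pitch_class_reward(melody, allowed_pitch_classes):
    """Fraction of note-on events whose pitch class lies in the allowed set P."""
    notes = melody[melody < 128]
    if len(notes) == 0:
        return 0.0
    return float(np.mean([(p % 12) in allowed_pitch_classes for p in notes]))

def density_reward(melody, d=192):
    """Approaches 1 as the number of note-on events reaches at least d."""
    n_notes = int(np.sum(melody < 128))
    return min(1.0, n_notes / d)

# Example: score a random 16-bar melody (256 events) for staying in C major
# with at least 192 notes, combining the two bounded rewards equally.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}
melody = np.random.randint(0, 130, size=256)
reward = 0.5 * pitch_class_reward(melody, C_MAJOR) + 0.5 * density_reward(melody, 192)
```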
Conditional GANs (BID21) and conditional VAEs (BID28) introduce conditioning variables at training time. BID28 allow these variables to affect the distribution in latent z space, but still require that p(z | y) be a tractable distribution. BID24 use CGANs to adjust images, but because CGANs cannot usually reconstruct arbitrary inputs accurately, they must resort to image-space processing techniques to transfer effects to the original input. BID32 propose adding "attribute vectors" to samples from p(z) as a simple and effective heuristic to perform transformations, which relies heavily on the linearity of the latent space.

Table 2: Average rewards and constraint satisfaction rates (in parentheses) for unconditional (Prior) and conditional generation. Samples from the prior receive low rewards, on average, and near-zero satisfaction rates from both the pitch class (C Major) and note density (≥ 192 notes) constraints. After applying an actor optimized only for the C Major scale (G_{P=CMaj, d=0}), the pitch class constraint is fully satisfied 70.8% of the time, with only a minor effect on density. The average value close to 1 also indicates that when the constraint is not satisfied, it is typically off by only a few notes. Applying an actor function optimized for the C Major scale and high density (G_{P=CMaj, d=192}) causes both constraints to be satisfied at high rates, with a slightly larger shift in latent space.

Some recent work has focused on applying more expressive prior constraints to VAEs (BID26; BID29; BID3; BID30). The prior that maximizes the ELBO is p*(z) = q(z) (BID10); one can interpret our realism constraint as trying to find an implicit distribution that is indistinguishable from q(z). Like the adversarial autoencoder of BID20, our realism constraint relies on a discriminative model, but instead of trying to force q(z) to equal some simple p(z), we only weakly constrain q(z) and then use a classifier to "clean up" our results. Like this work, the recently proposed adversarially regularized autoencoder (BID13) uses adversarial training to generate latent codes in a latent space discovered by an autoencoder; that work focuses on unconditional generation. Gómez-Bombarelli et al. train classifiers in the latent space of a VAE to predict which latent variables map to molecules with various properties, and then use iterative gradient-based optimization in the latent space to find molecules that have a desired set of properties. On molecule data, their procedure generates invalid molecules rarely enough that they can simply reject these samples, which are detected using off-the-shelf software. By contrast, the probability of generating realistic images under our pretrained VAE is astronomically small, and no simple criterion for detecting valid images exists. BID11 also use a classifier to constrain generation; they use a Deep Q-network as an auxiliary loss for training an LSTM. Closest to Section 6, BID22 generate very high quality conditional images by optimizing a sample from the latent space of a generative network to create an image that maximizes the class activations of a pretrained ImageNet classifier. Our work differs in that we learn an amortized generator/discriminator directly in the latent space and we achieve diversity through regularizing by the natural scale of the latent space rather than through a modified Langevin sampling algorithm.

We have demonstrated a new approach to conditional generation by constraining the latent space of an unconditional generative model. This approach could be extended in a number of ways.
One possibility would be to plug in different architectures, including powerful autoregressive decoders or adversarial decoder costs, as we make no assumptions specific to independent likelihoods. While we have considered constraints based on implicit density estimation, we could also estimate the constrained distribution directly with an explicit autoregressive model or another variational autoencoder. The efficacy of autoregressive priors in VAEs is promising for this approach BID16. Conditional samples could then be obtained by ancestral sampling, and transformations by using gradient ascent to increase the likelihood under the model. Active or semisupervised learning approaches could reduce the sample complexity of learning constraints. Real-time constraint learning would also enable new applications; it might be fruitful to extend the reward approximation of Section 6 to incorporate user preferences as in BID4. For images, we use the MNIST digits dataset BID17 and the Large-scale CelebFaces Attributes (CelebA) dataset BID19. MNIST images are 28×28 pixels and greyscale scaled to. For attributes, we use the number class label of each digit. CelebA images are centercropped to 128 × 128 pixels and then downsampled to 64 × 64 RGB pixels and scaled to. We find that many of the attribute labels are not strongly correlated with changes in the images, so we narrow the original 40 attributes to the 10 most visually salient: blond hair, black hair, brown hair, bald, eyeglasses, facial hair, hat, smiling, gender, and age. For melodies, we scraped the web to collect over 1.5 million publicly available MIDI files. We then extracted 16-bar melodies by sliding a window with a single bar stride over each non-percussion instrument with a 4 4 time signature, keeping only the note with the highest pitch when multiple overlap. This produced over 3 million unique melodies. We represent each melody as a sequence of 256 (16 per bar) categorical variables taking one of 130 discrete states at each sixteenth note: 128 note-on pitches, a hold state, and a rest state. All encoders, decoders, and classifiers are trained with the Adam optimizer BID14, with learning rate = 3e-4, β 1 = 0.9, and β 2 = 0.999.To train D real (z), D attr (z) and G(z) we follow the training procedure of BID9, applying a gradient penalty of 10, training D and G in a 10:1 step ratio, and use the Adam optimizer with learning rate = 3e-4, β 1 = 0.0, and β 2 = 0.9. While not necessary to converge, we find it improves the stability of optimization. We do not apply any of the other tricks of GAN training such as batch normalization, minibatch discrimination, or one-sided label smoothing BID25. As samples from p(z) are easier to discriminate than samples from G(p(z)), we train D by sampling from p(z) at a rate 10 times less than G(p(z)). For actors with inner-loop optimization, G opt, 100 iterations of Adam are used with with learning rate = 1e-1, β 1 = 0.9, and β 2 = 0.999. To model the MNIST data, we use a deep feed-forward neural network FIG0 ).The encoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2048 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as the σ to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z. 
The decoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 28x28 outputs. These outputs are then passed through a sigmoid to generate the output image. To model the CelebA data, we use a deep convolutional neural network FIG0 ).The encoder is a series of 4 2D convolutional layers, each followed by a ReLU. The convolution kernels are of size 3 × 3, 3 × 3, 5 × 5, and 5 × 5, with 2048, 1024, 512, and 256 output channels, respectively. All convolutional layers have a stride of 2. After the final ReLU, a linear layer is used to produce 2048 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as the σ to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z. The decoder passes the z through a 4x4x2048 linear layer, and then a series of 4 2D transposed convolutional layers, all but the last of which are followed by a ReLU. The deconvolution kernels are of size 5×5, 5×5, 3×3, and 3×3, with 1024, 512, 256, and 3 output channels, respectively. All deconvolution layers have a stride of 2. The output from the final deconvolution is passed through a sigmoid to generate the output image. The classifier that is trained to predict labels from images are identical to the VAE encoders except that they end with a sigmoid cross-entropy loss. Music is fundamentally sequential, so we use an LSTM-based sequence VAE for modelling monophonic melodies FIG0 ).The encoder is made up of a single-layer bidirectional LSTM, with 2048 units per cell. The final output in each direction is concatenated and passed through a linear layer to produce 1024 outputs. Half of the outputs are used as the µ and the softplus of the other half are used as a σ to parameterize a 512-dimension multivariate Gaussian distribution with a diagonal covariance matrix for z. Since musical sequences often have structure at the bar level, we use a hierarchical decoder to model long melodies. First, the z goes through a linear layer to initialize the state of a 2-layer LSTM with 1024 units per layer, which outputs 16 embeddings of size 512 each, one per bar. Each of these embeddings are passed through a linear layer to produce 16 initial states for another 2-layer LSTM with 1024 units per layer. This bar-level LSTM autoregressively produces individual sixteenth note events, passing its output through a linear layer and softmax to create a distribution over the 130 classes. This categorical distribution is used to compute a cross-entropy loss during training or samples at inference time. In addition to generating the initial state at the start of each bar, the embedding for the current bar is concatenated with the previous output as the input at each time step. For G(z), we use a deep feed-forward neural network (Figure 12a) in all of our experiments. The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2 * dim(z) outputs. Half of the outputs are used as the δz and the sigmoid of the other half are used as gates. The transformed z is the computed as (1 − gates) * z + gates * δz. This aids in training as the network only has to then predict shifts in z. When conditioning on attribute labels, y, to compute G(z, y), the labels are passed through a linear layer producing 2048 outputs which are concatenated with z as the model input. 
For D(z), we use a deep feed-forward neural network (Figure 12b) in all of our experiments. The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce a single output. This output is passed through a sigmoid to compute D(z).When conditioning on attribute labels, y, to compute D(z, y), the labels are passed through a linear layer producing 2048 outputs which are concatenated with z as the model input. Figure 7: Additional generated CelebA faces by G CGAN with λ dist = 0. Full attribute labels are given in supplementary TAB2 Figure 8: Additional generated CelebA faces by G CGAN with λ dist = 0.1. Full attribute labels are given in supplementary TAB2 Figure 9: Optimization of samples drawn from the prior to satisfy both the realism constraint and attribute constraints (drawn from the test set). The optimization takes 100 steps, and images are shown at 0, 10, 30, 50 and 100 steps. D is trained with inner-loop optimization, G opt, as described in Section 9.2 Figure 10: Identity-distorting transformations with CGAN actor-critic. Without a penalty to encourage small moves in latent space, the actor maps the latent vectors of the original data points to generated images that have the correct attributes, but a different identity. Panels are black for attributes of the original image, as the procedure just returns the same image as the reconstruction. | x), averaged over the training set per a dimension. The x-axis is sorted from lowest to highest variance. Tighter posteriors correspond to more utilization of the latent dimension, and we scale our distance regularization the square inverse on a per-dimension basis.
A new approach to conditional generation by constraining the latent space of an unconditional generative model.
Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice. Advances in techniques and hardware for training deep neural networks have recently enabled impressive accuracy improvements across many fundamental NLP tasks BID1 BID12 BID8 BID18, with the most computationally-hungry models obtaining the highest scores BID13 BID7 BID14 BID16. As a , training a state-of-the-art model now requires substantial computational resources which demand considerable energy, along with the associated financial and environmental costs. Research and development of new models multiplies these costs by thousands of times by requiring retraining to experiment with model architectures and hyperparameters. Whereas a decade ago most NLP models could be trained and developed on a commodity laptop or server, many now require multiple instances of specialized hardware such as GPUs or TPUs, therefore limiting access to these highly accurate models on the basis of finances. Even when these expensive computational resources are available, model training also incurs a substantial cost to the environment due to the energy required to power this hardware for weeks or months at a time. Though some of this energy may come from renewable or carbon credit-offset resources, the high energy demands of these models are still a concern since energy is not currently derived from carbon-neural sources in many locations, and when renewable energy is available, it is still limited to the equipment we have to produce and store it, and energy spent training a neural network might better be allocated to heating a family's home. It is estimated that we must cut carbon emissions by half over the next decade to deter escalating rates of natural disaster, and based on the estimated CO 2 emissions listed in TAB1, model training and development likely make up a substantial portion of the greenhouse gas emissions attributed to many NLP researchers. To heighten the awareness of the NLP community to this issue and promote mindful practice and policy, we characterize the dollar cost and carbon emissions that from training the neural networks at the core of many state-of-the-art NLP models. We do this by estimating the kilowatts of energy required to train a variety of popular off-the-shelf NLP models, which can be converted to approximate carbon emissions and electricity costs. To estimate the even greater resources required to transfer an existing model to a new task or develop new models, we perform a case study of the full computational resources required for the development and tuning of a recent state-of-the-art NLP pipeline BID17. 
We conclude with recommendations to the community based on our findings, namely: time to retrain and sensitivity to hyperparameters should be reported for NLP machine learning models; academic researchers need equitable access to computational resources; and researchers should prioritize developing efficient models and hardware.

To quantify the computational and environmental cost of training deep neural network models for NLP, we perform an analysis of the energy required to train a variety of popular off-the-shelf NLP models, as well as a case study of the complete sum of resources required to develop LISA (BID17), a state-of-the-art NLP model from EMNLP 2018, including all tuning and experimentation. We measure energy use as follows. We train the models described in §2.1 using the default settings provided, and sample GPU and CPU power consumption during training. Each model was trained for a maximum of 1 day. We train all models on a single NVIDIA Titan X GPU, with the exception of ELMo, which was trained on 3 NVIDIA GTX 1080 Ti GPUs. While training, we repeatedly query the NVIDIA System Management Interface to sample the GPU power consumption and report the average over all samples. We estimate the total time expected for models to train to completion using training times and hardware reported in the original papers. We then calculate the power consumption in kilowatt-hours (kWh) as follows. Let p_c be the average power draw (in watts) from all CPU sockets during training, let p_r be the average power draw from all DRAM (main memory) sockets, let p_g be the average power draw of a GPU during training, and let g be the number of GPUs used to train. We estimate total power consumption as combined GPU, CPU and DRAM consumption, then multiply this by the Power Usage Effectiveness (PUE), which accounts for the additional energy required to support the compute infrastructure (mainly cooling). We use a PUE coefficient of 1.58, the 2018 global average for data centers (BID0). The total power consumption p_t, in kWh, for a training run of t hours is then given by:

p_t = 1.58 t (p_c + p_r + g p_g) / 1000

The U.S. Environmental Protection Agency (EPA) provides the average CO2 produced (in pounds per kilowatt-hour) for power consumed in the U.S., which we use to convert energy consumption to estimated CO2 emissions:

CO2e = 0.954 p_t

This conversion takes into account the relative proportions of different energy sources (primarily natural gas, coal, nuclear and renewable) consumed to produce energy in the United States. TAB3 lists the relative energy sources for China, Germany and the United States compared to the top three cloud service providers. The U.S. breakdown of energy is comparable to that of the most popular cloud compute service, Amazon Web Services, so we believe this conversion provides a reasonable estimate of CO2 emissions per kilowatt-hour of compute energy used.

We analyze four models, the computational requirements of which we describe below. All models have code freely available online, which we used out-of-the-box. For more details on the models themselves, please refer to the original papers. Transformer. The Transformer model (BID18) is an encoder-decoder architecture primarily recognized for efficient and accurate machine translation. The encoder and decoder each consist of 6 stacked layers of multi-head self-attention. BID18 report that the Transformer base model (65M parameters) was trained on 8 NVIDIA P100 GPUs for 12 hours, and the Transformer big model (213M parameters) was trained for 3.5 days (84 hours; 300k steps).
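A small, illustrative sketch of this estimate (not the authors' code); the helper names and the example power draws are assumptions, and the CO2 conversion factor should be taken from the EPA figure referenced above.

```python
def energy_kwh(p_c, p_r, p_g, g, t_hours, pue=1.58):
    """Estimated training energy: average CPU + DRAM + g GPU draw (watts),
    scaled by PUE and training time, converted to kilowatt-hours."""
    return pue * t_hours * (p_c + p_r + g * p_g) / 1000.0

def co2_lbs(kwh, epa_lbs_per_kwh=0.954):
    """Convert kWh to estimated pounds of CO2 using an average emissions factor."""
    return epa_lbs_per_kwh * kwh

# Illustrative example: 84 hours on 8 GPUs drawing ~200 W each,
# with ~100 W average CPU and ~25 W DRAM draw (made-up sample values).
kwh = energy_kwh(p_c=100, p_r=25, p_g=200, g=8, t_hours=84)
print(f"{kwh:.0f} kWh, {co2_lbs(kwh):.0f} lbs CO2e")
```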
This model is also the basis for recent work on neural architecture search (NAS) for machine translation and language modeling BID16, and the NLP pipeline that we study in more detail in §4.2 BID17. BID16 report that their full architecture search ran for a total of 979M training steps, and that their base model requires 10 hours to train for 300k steps on one TPUv2 core. This equates to 32,623 hours of TPU or 274,120 hours on 8 P100 GPUs. ELMo. The ELMo model BID13 is based on stacked LSTMs and provides rich word representations in context by pre-training on a large amount of data using a language modeling objective. Replacing context-independent pretrained word embeddings with ELMo has been shown to increase performance on downstream tasks such as named entity recognition, semantic role labeling, and coreference. BID13 report that ELMo was trained on 3 NVIDIA GTX 1080 GPUs for 2 weeks (336 hours).BERT. The BERT model BID7 provides a Transformer-based architecture for building contextual representations similar to ELMo, but trained with a different language modeling objective. BERT substantially improves accuracy on tasks requiring sentence-level representations such as question answering and natural language inference. BID7 report that the BERT base model (110M parameters) was trained on 16 TPU chips for 4 days (96 hours). NVIDIA reports that they can train a BERT model in 3.3 days (79.2 hours) using 4 DGX-2H servers, totaling 64 Tesla V100 GPUs BID10. GPT-2. This model is the latest edition of OpenAI's GPT general-purpose token encoder, also based on Transformer-style self-attention and trained with a language modeling objective . By training a very large model on massive data, BID14 show high zero-shot performance on question answering and language modeling benchmarks. The large model described in BID14 has 1542M parameters and is reported to require 1 week (168 hours) of training on 32 TPU v3 chips. 6 There is some precedent for work characterizing the computational requirements of training and inference in modern neural network architectures in the computer vision community. BID11 present a detailed study of the energy use required for training and inference in popular convolutional models for image classification in computer vision, including fine-grained analysis comparing different neural network layer types. BID5 assess image classification model accuracy as a function of model size and gigaflops required during inference. They also measure average power draw required during inference on GPUs as a function of batch size. Neither work analyzes the recurrent and self-attention models that have become commonplace in NLP, nor do they extrapolate power to estimates of carbon and dollar cost of training. Analysis of hyperparameter tuning has been performed in the context of improved algorithms for hyperparameter search BID3 BID2 BID15. To our knowledge there exists to date no analysis of the computation required for R&D and hyperparameter tuning of neural network models in NLP. 6 Via the authors on Reddit. 7 GPU lower bound computed using pre-emptible P100/V100 U.S. resources priced at $0.43-$0.74/hr, upper bound uses on-demand U.S. resources priced at $1.46-$2.48/hr. We similarly use pre-emptible ($1.46/hr-$2.40/hr) and on-demand ($4.50/hr-$8/hr) pricing as lower and upper bounds for TPU v2/3; cheaper bulk contracts are available. Table 3: Estimated cost of training a model in terms of CO 2 emissions (lbs) and cloud compute cost (USD). 
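To make the cloud-cost figures concrete, here is a hedged back-of-the-envelope sketch using the per-hour GPU price bounds quoted in the footnote above; the numbers are purely illustrative and are not intended to reproduce Table 3 exactly.

```python
def cloud_cost_bounds(gpu_hours, low_rate, high_rate):
    """Rough lower/upper bounds on cloud cost for a given number of GPU-hours."""
    return gpu_hours * low_rate, gpu_hours * high_rate

# Example: the ~274,120 P100 GPU-hours quoted above for the NAS experiment,
# priced with the pre-emptible ($0.43/hr) and on-demand ($1.46/hr) P100 rates.
low, high = cloud_cost_bounds(274_120, 0.43, 1.46)
print(f"${low:,.0f} - ${high:,.0f}")
```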
Power and carbon footprint are omitted for TPUs due to a lack of public information on power draw for this hardware. Table 3 lists CO2 emissions and the estimated cost of training the models described in §2.1. Of note is that TPUs are more cost-efficient than GPUs on workloads that make sense for that hardware (e.g., BERT). We also see that these models emit substantial carbon emissions; training BERT on GPUs is roughly equivalent to a trans-American flight. BID16 report that NAS achieves a new state-of-the-art BLEU score of 29.7 for English-to-German machine translation, an increase of just 0.1 BLEU, at the cost of at least $150k in on-demand compute time and non-trivial carbon emissions.

To quantify the computational requirements of R&D for a new model, we study the logs of all training required to develop Linguistically-Informed Self-Attention (BID17), a multi-task model that performs part-of-speech tagging, labeled dependency parsing, predicate detection and semantic role labeling. This model makes for an interesting case study as a representative NLP pipeline and as a Best Long Paper at EMNLP. Model training associated with the project spanned a period of 172 days (approx. 6 months). During that time 123 small hyperparameter grid searches were performed, resulting in 4789 jobs in total. Jobs varied in length from a minimum of 3 minutes, indicating a crash, to a maximum of 9 days, with an average job length of 52 hours. All training was done on a combination of NVIDIA Titan X (72%) and M40 (28%) GPUs. The sum of GPU time required for the project totaled 9998 days (27 years). This averages to about 60 GPUs running constantly throughout the 6-month duration of the project (we approximate cloud compute cost using P100 pricing). TAB6 lists upper and lower bounds of the estimated cost, in terms of Google Cloud compute and raw electricity, required to develop and deploy this model. We see that while training a single model is relatively inexpensive, the cost of tuning a model for a new dataset, which we estimate here to require 24 jobs, or performing the full R&D required to develop this model, quickly becomes extremely expensive.

Time to retrain and sensitivity to hyperparameters should be reported, so that consumers of the research can assess whether its computational requirements are compatible with their setting. More explicit characterization of tuning time could also reveal inconsistencies in time spent tuning baseline models compared to proposed contributions. Realizing this will require: a standard, hardware-independent measurement of training time, such as gigaflops required to convergence, and a standard measurement of model sensitivity to data and hyperparameters, such as variance with respect to hyperparameters searched.

Academic researchers need equitable access to computation resources. Recent advances in available compute come at a high price not attainable to all who desire access. Most of the models studied in this paper were developed outside academia; recent improvements in state-of-the-art accuracy are possible thanks to industry access to large-scale compute. Limiting this style of research to industry labs hurts the NLP research community in many ways. First, it stifles creativity. Researchers with good ideas but without access to large-scale compute will simply not be able to execute their ideas, and will instead be constrained to focus on different problems. Second, it prohibits certain types of research on the basis of access to financial resources.
This even more deeply promotes the already problematic "rich get richer" cycle of research funding, where groups that are already successful and thus well-funded tend to receive more funding due to their existing accomplishments. Third, the prohibitive start-up cost of building in-house resources forces resource-poor groups to rely on cloud compute services such as AWS, Google Cloud and Microsoft Azure. While these services provide valuable, flexible, and often relatively environmentally friendly compute resources, it is more cost effective for academic researchers, who often work for nonprofit educational institutions and whose research is funded by government entities, to pool resources to build shared compute centers at the level of funding agencies, such as the U.S. National Science Foundation. For example, an off-the-shelf GPU server containing 8 NVIDIA 1080 Ti GPUs and supporting hardware can be purchased for approximately $20,000 USD. At that cost, the hardware required to develop the model in our case study (approximately 58 GPUs for 172 days) would cost $145,000 USD plus electricity, about half the estimated cost to use on-demand cloud GPUs. Unlike money spent on cloud compute, however, that invested in centralized resources would continue to pay off as resources are shared across many projects. A government-funded academic compute cloud would provide equitable access to all researchers. Researchers should prioritize computationally efficient hardware and algorithms. We recommend a concerted effort by industry and academia to promote research of more computationally efficient algorithms, as well as hardware that requires less energy. An effort can also be made in terms of software. There is already a precedent for NLP software packages prioritizing efficient models. An additional avenue through which NLP and machine learning software developers could aid in reducing the energy associated with model tuning is by providing easyto-use APIs implementing more efficient alternatives to brute-force grid search for hyperparameter tuning, e.g. random or Bayesian hyperparameter search techniques BID3 BID2 BID15. While software packages implementing these techniques do exist, 10 they are rarely employed in practice for tuning NLP models. This is likely because their interoperability with popular deep learning frameworks such as PyTorch and TensorFlow is not optimized, i.e. there are not simple examples of how to tune TensorFlow Estimators using Bayesian search. Integrating these tools into the workflows with which NLP researchers and practitioners are already familiar could have notable impact on the cost of developing and tuning in NLP.
We quantify the energy cost in terms of money (cloud credits) and carbon footprint of training recently successful neural network models for NLP. Costs are high.
Many models based on the Variational Autoencoder are proposed to achieve disentangled latent variables in inference. However, most current work focuses on designing powerful disentangling regularizers, while the given number of dimensions for the latent representation at initialization can severely influence the disentanglement. Thus, a pruning mechanism is introduced, aiming to automatically seek the intrinsic dimension of the data while promoting disentangled representations. The proposed method is validated on MPI3D and MNIST, advancing state-of-the-art methods in disentanglement, reconstruction, and robustness. The code is provided at https://github.com/WeyShi/FYP-of-Disentanglement.

To advance disentanglement, models based on the Variational Autoencoder (VAE) are proposed in terms of additional disentangling regularizers. In this paper, however, we introduce an orthogonal mechanism that is applicable to most state-of-the-art models, resulting in higher disentanglement and robustness across model configurations, especially the choice of dimensionality for the latent representation. Intuitively, both excessive and deficient latent dimensions can be detrimental to achieving the best disentangled latent representations. For excessive dimensions, powerful disentangling regularizers, like the β-VAE, can force information to be split across dimensions, resulting in incomplete captured features. On the other hand, having too few dimensions inevitably leads to an entangled representation, since each dimension must capture enough information for the subsequent reconstruction. In this paper, we introduce an approximated L_0 regularization to prune the dimension of the latent representation vector. Consequently, our Pruning Variational Autoencoders (PVAE) framework is applicable to most state-of-the-art VAE-based models due to its orthogonality with current approaches. In this challenge, however, we choose to put the pruning mechanism onto the DIP-VAE (Disentangled Inferred Prior VAE) due to its decent performance on MPI3D. In the context of network pruning, the aim of L_0 regularization is to compress the network, while here the goal is to seek the intrinsic dimension of the latent representation, which is achieved by the balance between several terms.

2.1. The Masked Base Model: Masked DIP-VAE

We wish to obtain binary masks m, governed by learnable parameters α, that control each dimension. The DIP-VAE loss with masks applies the mask m elementwise to the latent code throughout the objective: the reconstruction uses the masked code, each per-dimension KL term is weighted by its mask, and the DIP regularizer is computed on the covariance of the masked mean representations. Here x, p(x), z, and p(z) are the input images, the data distribution, the latent variables (the output of the encoder), and their prior, respectively; µ_φ(·), p_θ(·), and q_φ(z | ·) denote the encoder's mean path, the decoder, and the encoder; and Cov_p(x)[m ⊙ µ_φ(x)] denotes the covariance matrix of the pruned mean representations. There are two points to note about the Kullback-Leibler (KL) divergence terms. Firstly, they decompose across dimensions (z_i) because we assume factorized prior and variational posterior distributions. Secondly, the KL term for each dimension is multiplied by the mask, for consistency when that dimension is forced to zero, which can be understood in terms of inference with spike-and-slab distributions (see Appendix A). With a second term imposing L_0 regularization over e, the samples drawn from q_φ(z | x), the total loss becomes the masked DIP-VAE loss plus a weighted penalty L_C = Σ_j P(m_j > 0) = Σ_j sigmoid(log α_j − β log(−γ/ζ)), which is the sum of the probabilities of each mask m_j being positive,
where γ < 0 and ζ > 1 are the lower and upper bounds for the stretched range, and β is the temperature coefficient of the mask generation process introduced in Section 2.3. The given formulation differs slightly from the original, for clarity. The mask vector m is clamped such that m_j ∈ [0, 1]. The binary masks m are modelled as following Bernoulli distributions with parameters α; prior work on L_0 regularization proposed obtaining these masks in a differentiable fashion by feeding uniform random variables through a sigmoid-like function whose location depends on α. Furthermore, to ensure that masks are likely to be exactly 0 or 1, the value range of the sigmoid-like function is stretched to [γ, ζ] and then clamped to [0, 1]. This process is illustrated in Appendix A. To align it with the VAE, we need the encoder to output means µ_φ(x) and variances σ²_φ(x) of q(z | x) instead of means µ_φ(x) and log σ²_φ(x), such that after pruning a dimension collapses to N(0, 0) (a deterministic zero) rather than to N(0, 1). In detail, a mask is multiplied with each pair of mean and variance and with the KL divergence for the corresponding dimension, such that dimensions can effectively be "switched off" and not affect training. To avoid numerical instability in the KL divergence, we add a small positive constant to σ²_φ(x). Given the outputs of the last layer of the original encoder, the L0Pair layer applies this masking to produce the pruned means and variances.

In terms of the structure of the encoder and the decoder, we adopt the default settings given in the starter kit, which are based on row 3 of Table 1 on page 13 of the reference it cites. We list our choices of hyperparameters in Appendix B. The L_0 regularization in the pruning mechanism improves the performance and robustness of vanilla DIP-VAE on MPI3D by approaching the intrinsic dimension during training. In Appendix B, we additionally present results on MNIST with a JointVAE extension of the proposed PVAE (PJVAE), which further validates the disentanglement benefits of pruning.
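Returning to the mask-generation process above, the following is a minimal, hedged sketch of how such stretched-and-clamped masks and the corresponding expected-L0 penalty are typically computed in the hard-concrete formulation this approach builds on; the default values of β, γ, and ζ here are common conventions, not necessarily the paper's settings.

```python
import torch

def sample_mask(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """Sample a relaxed mask in [0, 1] per latent dimension from parameters log_alpha."""
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)
    s_bar = s * (zeta - gamma) + gamma      # stretch to [gamma, zeta]
    return s_bar.clamp(0.0, 1.0)            # clamp, so exact 0s and 1s are possible

def expected_l0(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """Probability each mask is non-zero; summing over dimensions gives the penalty L_C."""
    return torch.sigmoid(log_alpha - beta * torch.log(torch.tensor(-gamma / zeta)))
```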
Surprisingly, with appropriate initialization, its reconstruction occasionally becomes better than that of the VAE, with consistently higher disentanglement performance. Furthermore, on this dataset PJVAE outperforms DIP-VAE on both metrics. Inspecting the variation between different initializations, we can validate the robustness of PJVAE versus the other two methods. In general, in terms of TC, PJVAE possesses obvious advantages; and where reconstruction performance is otherwise similar, PJVAE also shows a consistently lower error. Note that both VAE and DIP-VAE are initialized with one additional 10-value categorical (discrete) variable for a fair comparison. The only difference between this DIP-VAE (actually, DIP-JointVAE) and PJVAE is the approximated L_0 penalty. (Figure caption, MNIST results: the number denotes the total dimensionality of the latent variables at initialization; TC stands for Total Correlation.)
The Pruning VAE is proposed to search for disentangled variables with intrinsic dimension.
We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment. Representation learning BID1 ) plays a critical role in many modern machine learning systems. Representations map raw data to more useful forms and the choice of representation is an important component of any application. Broadly speaking, there are two areas of research emphasizing different details of how to learn useful representations. The supervised training of high-capacity models on large labeled datasets is critical to the recent success of deep learning techniques for a wide range of applications such as image classification BID25 ), speech recognition ), and machine translation ). Analysis of the task specific representations learned by these models reveals many fascinating properties BID58 ). Image classifiers learn a broadly useful hierarchy of feature detectors re-representing raw pixels as edges, textures, and objects BID55 ). In the field of computer vision, it is now commonplace to reuse these representations on a broad suite of related tasks -one of the most successful examples of transfer learning to date BID42 ).There is also a long history of unsupervised representation learning BID41 ). Much of the early research into modern deep learning was developed and validated via this approach BID14; BID17; BID49; BID4; BID27 ). Unsupervised learning is promising due to its ability to scale beyond the small subsets and subdomains of data that can be cleaned and labeled given resource, privacy, or other constraints. This advantage is also its difficulty. While supervised approaches have clear objectives that can be directly optimized, unsupervised approaches rely on proxy tasks such as reconstruction, density estimation, or generation, which do not directly encourage useful representations for specific tasks. As a , much work has gone into designing objectives, priors, and architectures meant to encourage the learning of useful representations. We refer readers to for a detailed review. Despite these difficulties, there are notable applications of unsupervised learning. Pre-trained word vectors are a vital part of many modern NLP systems BID5 ). These representations, learned by modeling word co-occurrences, increase the data efficiency and generalization capability of NLP systems BID45; BID3 ). Topic modelling can also discover factors within a corpus of text which align to human interpretable concepts such as "art" or "education" BID2 ).How to learn representations of phrases, sentences, and documents is an open area of research. Inspired by the success of word vectors, BID23 propose skip-thought vectors, a method of training a sentence encoder by predicting the preceding and following sentence. 
The representation learned by this objective performs competitively on a broad suite of evaluated tasks. More advanced training techniques such as layer normalization BID0 ) further improve . However, skip-thought vectors are still outperformed by supervised models which directly optimize the desired performance metric on a specific dataset. This is the case for both text classification tasks, which measure whether a specific concept is well encoded in a representation, and more general semantic similarity tasks. This occurs even when the datasets are relatively small by modern standards, often consisting of only a few thousand labeled examples. In contrast to learning a generic representation on one large dataset and then evaluating on other tasks/datasets, BID6 proposed using similar unsupervised objectives such as sequence autoencoding and language modeling to first pretrain a model on a dataset and then finetune it for a given task. This approach outperformed training the same model from random initialization and achieved state of the art on several text classification datasets. Combining word-level language modelling of a dataset with topic modelling and fitting a small neural network feature extractor on top has also achieved strong on document level sentiment analysis BID7 ).Considering this, we hypothesize two effects may be combining to in the weaker performance of purely unsupervised approaches. Skip-thought vectors were trained on a corpus of books. But some of the classification tasks they are evaluated on, such as sentiment analysis of reviews of consumer goods, do not have much overlap with the text of novels. We propose this distributional issue, combined with the limited capacity of current models, in representational underfitting. Current generic distributed sentence representations may be very lossy -good at capturing the gist, but poor with the precise semantic or syntactic details which are critical for applications. The experimental and evaluation protocols may be underestimating the quality of unsupervised representation learning for sentences and documents due to certain seemingly insignificant design decisions. BID12 also raises concern about current evaluation tasks in their recent work which provides a thorough survey of architectures and objectives for learning unsupervised sentence representations -including the above mentioned skip-thoughts. In this work, we test whether this is the case. We focus in on the task of sentiment analysis and attempt to learn an unsupervised representation that accurately contains this concept. BID37 showed that word-level recurrent language modelling supports the learning of useful word vectors. We are interested in pushing this line of work to learn representations of not just words but arbitrary scales of text with no distinction between sub-word, word, phrase, sentence, or document-level structure. Recent work has shown that traditional NLP task such as Named Entity Recognition and Part-of-Speech tagging can be performed this way by processing text as a byte sequence BID10 ). Byte level language modelling is a natural choice due to its simplicity and generality. We are also interested in evaluating this approach as it is not immediately clear whether such a low-level training objective supports the learning of high-level representations. We train on a very large corpus picked to have a similar distribution as our task of interest. 
We also benchmark on a wider range of tasks to quantify the sensitivity of the learned representation to various degrees of out-of-domain data and tasks. Much previous work on language modeling has evaluated on relatively small but competitive datasets such as Penn Treebank BID33 ) and Hutter Prize Wikipedia BID18 ). As discussed in BID19 performance on these datasets is primarily dominated by regularization. Since we are interested in high-quality sentiment representations, we chose the Amazon product review dataset introduced in BID35 as a training corpus. In deduplicated form, this dataset contains over 82 million product reviews from May 1996 to July 2014 amounting to over 38 billion training bytes. Due to the size of the dataset, we first split it into 1000 shards containing equal numbers of reviews and set aside 1 shard for validation and 1 shard for test. Many potential recurrent architectures and hyperparameter settings were considered in preliminary experiments on the dataset. Given the size of the dataset, searching the wide space of possible configurations is quite costly. To help alleviate this, we evaluated the generative performance in terms of log-likelihood of smaller candidate models after a single pass through the dataset and selected the best performing architecture according to this metric for large scale experiments. The model chosen is a single layer multiplicative LSTM BID24 ) with 4096 units. We observed multiplicative LSTMs to converge faster than normal LSTMs for the hyperparameter settings that were explored both in terms of data and wall-clock time. The model was trained for a single epoch on mini-batches of 128 subsequences of length 256 for a total of 1 million weight updates. States were initialized to zero at the beginning of each shard and persisted across updates to simulate full-backpropagation and allow for the forward propagation of information outside of a given subsequence. Adam BID22 ) was used to accelerate learning with an initial 5e-4 learning rate that was decayed linearly to zero over the course of training. Weight normalization BID46 ) was applied to the LSTM parameters. Data-parallelism was used across 4 Pascal Titan X gpus to speed up training and increase effective memory size. Training took approximately one month. The model is compact, containing approximately as many parameters as there are reviews in the training dataset. It also has a high ratio of compute to total parameters due to operating at a byte level. The model reaches 1.12 bits per byte. Our model processes text as a sequence of UTF-8 encoded bytes BID54 ). For each byte, the model updates its hidden state and predicts a probability distribution over the next possible byte. The hidden state of the model serves as an online summary of the sequence which encodes all information the model has learned to preserve that is relevant to predicting the future bytes of the sequence. We are interested in understanding the properties of the learned encoding. The process of extracting a feature representation is outlined as follows:• Since newlines are used as review delimiters in the training dataset, all newline characters are replaced with spaces to avoid the model resetting state.• Any leading whitespace is removed and replaced with a newline+space to simulate a start token. Any trailing whitespace is removed and replaced with a space to simulate an end token. HTML is unescaped and HTML tags are removed to mirror preprocessing applied to the training dataset. 
• The text is encoded as a UTF-8 byte sequence.
• Model states are initialized to zeros. The model processes the sequence and the final cell states of the mLSTM are used as a feature representation.
We follow the methodology established in BID23 by training a logistic regression classifier on top of our model's representation on datasets for tasks including semantic relatedness, text classification, and paraphrase detection. For the details on these comparison experiments, we refer the reader to their work. One exception is that we use an L1 penalty for text classification instead of L2, as we found this performed better in the very low data regime. (Supervised baselines compared against include CNN BID21 ), DMN BID26 ), P-SL999 LSTM BID52 ), NSE BID39 ), and CT-LSTM BID30 ).)
4.1 TEXT CLASSIFICATION AND SENTIMENT ANALYSIS
Table 1 shows the results of our model on 4 standard text classification datasets. The performance of our model is noticeably lopsided. On the MR BID44 ) and CR BID15 ) sentiment analysis datasets we improve the state of the art by a significant margin. The MR and CR datasets are sentences extracted from Rotten Tomatoes, a movie review website, and Amazon product reviews (which almost certainly overlaps with our training corpus). This suggests that our model has learned a rich representation of text from a similar domain. On the other two datasets, SUBJ's subjectivity/objectivity detection BID43 ) and MPQA's opinion polarity BID51 ), our model has no noticeable advantage over other unsupervised representation learning approaches and is still outperformed by a supervised approach. To better quantify the learned representation, we also test on a wider set of sentiment analysis datasets with different properties. The Stanford Sentiment Treebank (SST) BID47 ) was created specifically to evaluate more complex compositional models of language. It is derived from the same base dataset as MR but was relabeled via Amazon Mechanical Turk and includes dense labeling of the phrases of parse trees computed for all sentences.
Table 1: Small dataset classification accuracies (columns: MR, CR, SUBJ, MPQA)
ST BID23 )     77.3  81.8  92.6  87.9
STLN BID0 )    79.5  83.1  93.7  89.3
SDAE BID12 )   74.6  78.0  90.8  86.9
CNN BID21 )    81.5  85.0  93.4  89.6
Adasent        83.1  86.3  95.5  93.3
byte mLSTM     86.9  91.4  94.6  88.5
Table 2: IMDB sentiment analysis (error rate, %)
FullBoW BID31 )        11.1
NBSVM BID36 )           8.1
Sentiment unit (ours)   7.7
SA-LSTM BID6 )          7.2
byte mLSTM (ours)       7.1
TopicRNN BID7 )         6.2
Virt Adv BID38 )        5.9
For the binary subtask, this amounts to 76,961 total labels compared to the 6,920 sentence-level labels. As a demonstration of the capability of unsupervised representation learning to simplify data collection and remove steps from a traditional NLP pipeline, our reported results ignore these dense labels and computed parse trees, using only the raw text and sentence-level labels. The representation learned by our model achieves 91.8%, significantly outperforming the state of the art of 90.2% set by a 30 model ensemble BID30 ). As visualized in FIG1, our model is very data efficient. It matches the performance of baselines using as few as a dozen labeled examples and outperforms all previous results with only a few hundred labeled examples. This is under 10% of the total sentences in the dataset. Confusingly, despite a 16% relative error reduction on the binary subtask, it does not reach the state of the art of 53.6% on the fine-grained subtask, achieving 52.9%. Figure 3: Histogram of cell values for the sentiment unit on IMDB reviews.
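As a rough illustration of this evaluation protocol, the sketch below extracts features by running a byte-level recurrent model over preprocessed text and fits an L1-penalised logistic regression on top. The mlstm object and its zero_state/step interface, as well as train_texts and train_labels, are assumptions for illustration rather than the authors' released code.

import html
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def preprocess(text):
    # Mirror the training preprocessing: unescape and strip HTML, replace newlines
    # with spaces, then simulate start/end tokens with a leading "\n " and trailing " ".
    text = re.sub(r'<[^>]+>', ' ', html.unescape(text))
    text = text.replace('\n', ' ').strip()
    return '\n ' + text + ' '

def extract_features(mlstm, text):
    # Run the byte-level model over the UTF-8 bytes and keep the final cell state.
    hidden, cell = mlstm.zero_state()
    for byte in preprocess(text).encode('utf-8'):
        hidden, cell = mlstm.step(byte, hidden, cell)   # assumed model interface
    return cell                                         # e.g. a 4096-dim feature vector

# Fit a sparse logistic regression on the frozen features (L1 penalty, as in the text).
X_train = np.stack([extract_features(mlstm, t) for t in train_texts])
clf = LogisticRegression(penalty='l1', solver='liblinear')
clf.fit(X_train, train_labels)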
We conducted further analysis to understand what representations our model learned and how they achieve the observed data efficiency. The benefit of an L1 penalty in the low data regime (see FIG1 ) is a clue. L1 regularization is known to reduce sample complexity when there are many irrelevant features BID40 ). This is likely to be the case for our model since it is trained as a language model and not as a supervised feature extractor. By inspecting the relative contributions of features on various datasets, we discovered a single unit within the mLSTM that directly corresponds to sentiment. In Figure 3 we show the histogram of the final activations of this unit after processing IMDB reviews BID31 ), which shows a bimodal distribution with a clear separation between positive and negative reviews. In FIG3 we visualize the activations of this unit on 6 randomly selected reviews from a set of 100 high contrast reviews, which shows that it acts as an online estimate of the local sentiment of the review. Fitting a threshold to this single unit achieves a test accuracy of 92.30%, which outperforms a strong supervised result on the dataset, the 91.87% of NB-SVM trigram BID36 ), but is still below the semi-supervised state of the art of 94.09% BID38 ). Using the full 4096 unit representation achieves 92.88%. This is an improvement of only 0.58% over the sentiment unit, suggesting that almost all information the model retains that is relevant to sentiment analysis is represented in the very compact form of a single scalar. Table 2 has a full list of results on the IMDB dataset.
Table 4: SICK semantic relatedness (r / ρ / MSE): ST BID23 ) 0.858 / 0.792 / 0.269; STLN BID0 ) 0.858 / 0.788 / 0.270; Tree-LSTM BID48 ) is also compared.
TAB2: MSRP paraphrase detection (accuracy / F1): ST BID23 ) 73.0 / 82.0; SDAE BID12 ) 76.4 / 83.4; MTMETRICS BID32 ) is also compared.
Encouraged by these results, we were curious how well the model's representation scales to larger datasets. We try our approach on the binary version of the Yelp Dataset Challenge in 2015 as introduced in BID56. This dataset contains 598,000 examples, which is an order of magnitude larger than any other datasets we tested on. When visualizing performance as a function of number of training examples in FIG2, we observe a "capacity ceiling" where the test accuracy of our approach only improves by a little over 1% across a four order of magnitude increase in training data. Using the full dataset, we achieve 95.22% test accuracy. This is better than a BoW TFIDF baseline at 93.66% but slightly worse than the 95.64% of a linear classifier on top of the 500,000 most frequent n-grams up to length 5. The observed capacity ceiling is an interesting phenomenon and a stumbling point for scaling our unsupervised representations. We think a variety of factors contribute to this. Since our model is trained only on Amazon reviews, it does not appear to be sensitive to concepts specific to other domains. For instance, Yelp reviews are of businesses, where details like hospitality, location, and atmosphere are important. But these ideas are not present in reviews of products. Additionally, there is a notable drop in the relative performance of our approach transitioning from sentence to document datasets. This is likely due to our model working on the byte level, which leads to it focusing on the content of the last few sentences instead of the whole document. Finally, as the amount of labeled data increases, the performance of a simple linear model trained on top of a static representation will eventually saturate.
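The "fit a threshold to a single unit" step can be made concrete with the sketch below, which reuses clf and X_train from the previous sketch (train_labels is assumed to be a 0/1 numpy array); picking the largest-magnitude weight as the sentiment unit is an illustrative heuristic, not necessarily the authors' exact selection procedure.

import numpy as np

# Locate the most sentiment-predictive unit via the L1 classifier weights.
weights = clf.coef_.ravel()
sentiment_unit = int(np.argmax(np.abs(weights)))

# Fit a simple decision threshold on that single unit by sweeping candidate values.
unit_values = X_train[:, sentiment_unit]
candidates = np.unique(unit_values)
accuracies = [((unit_values > t).astype(int) == train_labels).mean() for t in candidates]
best_threshold = candidates[int(np.argmax(accuracies))]

def predict_sentiment(cell_state):
    # Positive iff the sentiment unit exceeds the threshold; flip the comparison
    # if the learned weight for this unit happens to be negative.
    return int(cell_state[sentiment_unit] > best_threshold)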
Complex models explicitly trained for a task can continue to improve and eventually outperform our approach when provided with enough labeled data. With this context, the observed results reveal a natural ordering. On a small sentence-level dataset from a very similar domain (the movie reviews of the Stanford Sentiment Treebank) our model sets a new state of the art. But on a large, document-level dataset from a slightly different domain (the Yelp reviews) it is only competitive with standard baselines. The features of our model are learned via an unsupervised objective. A potential concern with this training methodology is that it is unreliable. There is no guarantee that the representations learned will be useful for a desired task because the model is not optimizing directly for this. This could lead to high variance of performance on desired tasks. This concern is further amplified for the properties of individual features and not just the representation as a whole. While prior work has shown that networks differing in only random initialization can learn the same features, this was tested for CNNs trained for supervised image classification. To check whether this is the case for our approach, we trained a second model using the same hyperparameters, differing only in weight parameterization, and compared it to the model discussed in this paper. The best single unit in this model (selected via validation set) achieves 92.42% test accuracy on IMDB compared to the 92.30% of the unit visualized in this paper. This suggests that the sentiment unit is an example of a convergent representation. Besides classification, we also evaluate on two other standard tasks: semantic relatedness and paraphrase detection. While our model performs competitively on the Microsoft Research Paraphrase Corpus BID8 ) in TAB2, it performs poorly on the SICK semantic relatedness task BID34 ) in Table 4. It is likely that the form and content of the semantic relatedness task, which is built on top of descriptions of images and videos and contains sentences such as "A sea turtle is hunting for fish", is effectively out-of-domain for our model, which has only been trained on the text of product reviews. Although the focus of our analysis has been on the properties of our model's representation, it is trained as a generative model and we are also interested in its generative capabilities. BID16 and BID9 both designed conditional generative models to disentangle the content of text from various attributes like sentiment or tense. We were curious whether a similar result could be achieved using the sentiment unit. In TAB5 we show that by simply setting the sentiment unit to be positive or negative, the model generates corresponding positive or negative reviews. Sample generated reviews from TAB5 include: "This product does what it is supposed to. I always keep three of these in my kitchen just in case ever I need a replacement cord." "Great little item. Hard to put on the crib without some kind of embellishment. My guess is just like the screw kind of attachment I had." "Best hammock ever! Stays in place and holds it's shape. Comfy (I love the deep neon pictures on it), and looks so cute." While all sampled negative reviews contain sentences with negative sentiment, they sometimes contain sentences with positive sentiment as well. This might be reflective of the bias of the training corpus, which contains over 5x as many five star reviews as one star reviews. Nevertheless, it is interesting to see that such a simple manipulation of the model's representation has a noticeable effect on its behavior.
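A minimal sketch of how the sentiment unit could be used to steer generation, assuming a step_with_logits interface that returns next-byte logits alongside the updated states; this interface, and the clamping values of plus or minus 1.0 in the usage line, are assumptions for illustration only.

import numpy as np

def generate(mlstm, prompt, sentiment_unit, sentiment_value, length=200):
    # Sample bytes from the model while clamping the sentiment unit of the
    # cell state to a fixed value after every step.
    hidden, cell = mlstm.zero_state()
    logits = None
    for b in preprocess(prompt).encode('utf-8'):
        hidden, cell, logits = mlstm.step_with_logits(b, hidden, cell)
        cell[sentiment_unit] = sentiment_value
    out = []
    for _ in range(length):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        b = int(np.random.choice(256, p=probs))       # model predicts over 256 byte values
        out.append(b)
        hidden, cell, logits = mlstm.step_with_logits(b, hidden, cell)
        cell[sentiment_unit] = sentiment_value        # overwrite the unit at every step
    return bytes(out).decode('utf-8', errors='ignore')

# e.g. generate(mlstm, "I bought this for my", sentiment_unit, +1.0) for a positive review,
# or sentiment_value=-1.0 for a negative one.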
The samples are also high quality for a byte level language model and often include valid sentences. It is an open question why our model recovers the concept of sentiment in such a precise, disentangled, interpretable, and manipulable way. It is possible that sentiment as a conditioning feature has strong predictive capability for language modelling. This is likely since sentiment is such an important component of a review. Previous work analyzing LSTM language models showed the existence of interpretable units that indicate position within a line or presence inside a quotation BID20 ). In many ways, the sentiment unit in this model is just a scaled up example of the same phenomenon. The update equation of an LSTM could play a role. The element-wise operation of its gates may encourage axis-aligned representations. Models such as word2vec have also been observed to have small subsets of dimensions strongly associated with specific tasks BID28 ). Our work highlights the sensitivity of learned representations to the data distribution they are trained on. The results make clear that it is unrealistic to expect a model trained on a corpus of books, where the two most common genres are Romance and Fantasy, to learn an encoding which preserves the exact sentiment of a review. Likewise, it is unrealistic to expect a model trained on Amazon product reviews to represent the precise semantic content of a caption of an image or a video. There are several promising directions for future work highlighted by our results. The observed performance plateau, even on relatively similar domains, suggests improving the representation model both in terms of architecture and size. Since our model operates at the byte level, hierarchical/multi-timescale extensions could improve the quality of representations for longer documents. The sensitivity of learned representations to their training domain could be addressed by training on a wider mix of datasets with better coverage of target tasks. Finally, our work encourages further research into language modelling as it demonstrates that the standard language modelling objective with no modifications is sufficient to learn high-quality representations.
Byte-level recurrent language models learn high-quality domain specific representations of text.
1,322
scitldr
Discrete latent-variable models, while applicable in a variety of settings, can often be difficult to learn. Sampling discrete latent variables can result in high-variance gradient estimators for two primary reasons: 1) branching on the samples within the model, and 2) the lack of a pathwise derivative for the samples. While current state-of-the-art methods employ control-variate schemes for the former and continuous-relaxation methods for the latter, their utility is limited by the complexities of implementing and training effective control-variate schemes and the necessity of evaluating (potentially exponentially) many branch paths in the model. Here, we revisit the Reweighted Wake Sleep (RWS;) algorithm, and through extensive evaluations, show that it circumvents both these issues, outperforming current state-of-the-art methods in learning discrete latent-variable models. Moreover, we observe that, unlike the Importance-weighted Autoencoder, RWS learns better models and inference networks with increasing numbers of particles, and that its benefits extend to continuous latent-variable models as well. Our results suggest that RWS is a competitive, often preferable, alternative for learning deep generative models. Learning deep generative models with discrete latent variables opens up an avenue for solving a wide range of tasks including tracking and prediction BID28, clustering BID31, model structure learning BID0, speech modeling BID16, topic modeling BID1, language modeling BID4, and concept learning BID17 BID20. Furthermore, recent deep-learning approaches addressing counting BID6, attention BID37, adaptive computation time BID9, and differentiable data structures BID9 BID11, underscore the importance of models with conditional branching induced by discrete latent variables. Current state-of-the-art methods optimize the evidence lower bound (ELBO) based on the importance weighted autoencoder (IWAE) BID3 by using either reparameterization BID19 BID32, continuous relaxations of the discrete latents BID15, or the REINFORCE method BID36 with control variates BID24 BID25 BID12 BID35 BID8. Despite the effective large-scale learning made possible by these methods, several challenges remain. First, with increasing number of particles, the IWAE ELBO estimator adversely impacts inference-network quality, consequently impeding learning of the generative model. Second, using continuous relaxations results in a biased gradient estimator and, in models with stochastic branching, forces evaluation of a potentially exponential number of branching paths. For example, a continuous relaxation of the cluster identity in a Gaussian mixture model (GMM) (Section 4.3) forces the evaluation of a weighted average of likelihood parameters over all clusters instead of selecting the parameters based on just one. Finally, while control-variate methods may be employed to reduce variance, their practical efficacy can be somewhat limited as in some cases they involve designing and jointly optimizing a separate neural network which can be difficult to tune (Section 4.3). To address these challenges, we revisit the reweighted wake-sleep (RWS) algorithm BID2, comparing it extensively with state-of-the-art methods for learning discrete latent-variable models, and demonstrate its efficacy in learning better generative models and inference networks, and improving the variance of the gradient estimators, over a range of particle budgets.
Going forward, we review the current state-of-the-art methods for learning deep generative models with discrete latent variables (Section 2), revisit RWS (Section 3), and present an extensive evaluation of these methods (Section 4) on (i) the Attend, Infer, Repeat (AIR) model BID6 to perceive and localise multiple MNIST digits, (ii) a continuous latent-variable model on MNIST, and (iii) a pedagogical GMM example, exposing a shortcoming of RWS that we fix using defensive importance sampling BID13. Our experiments confirm that RWS is a competitive, often preferable, alternative that, unlike IWAE, learns better models and inference networks with increasing particle budgets. Consider data (x^(n))_{n=1}^N sampled from a true (unknown) generative model p(x), a family of generative models p_θ(z, x) of latent variable z and observation x parameterized by θ, and a family of inference networks q_φ(z|x) parameterized by φ. We would like to learn the generative model by maximizing the marginal likelihood over data: θ* = argmax_θ (1/N) Σ_{n=1}^N log p_θ(x^(n)), where p_θ(x) = Σ_z p_θ(z, x) (with the sum replaced by an integral for continuous z). We would simultaneously also like to learn an inference network q_φ(z|x) that amortizes inference given observation x; i.e., q_φ(z|x) maps an observation x to an approximation of p_θ*(z|x). Amortization ensures that evaluation of this function is cheaper than performing approximate inference of p_θ*(z|x) from scratch. Our focus here is on such joint learning of the generative model and the inference network, here referred to as "learning a deep generative model", although we note that other approaches exist that learn just the generative model BID7 BID26 or the inference network BID29 BID21 in isolation. We begin with a review of IWAEs BID3 as a general approach for learning deep generative models using stochastic gradient descent (SGD) methods, focusing on generative-model families with discrete latent variables, for which the high variance of the naïve gradient estimator impedes learning. We will also review control-variate and continuous-relaxation methods for gradient-variance reduction. The IWAE used alongside such gradient-variance reduction methods is currently the dominant approach for learning deep generative models with discrete latent variables. BID3 introduced the IWAE, which maximizes an average of ELBOs over data, (1/N) Σ_{n=1}^N ELBO_IS^K(θ, φ, x^(n)), where, given a number of particles K,
ELBO_IS^K(θ, φ, x) = E_{Q_φ(z_{1:K}|x)}[ log (1/K) Σ_{k=1}^K w_k ],  with w_k = p_θ(z_k, x)/q_φ(z_k|x) and Q_φ(z_{1:K}|x) = Π_{k=1}^K q_φ(z_k|x).   (1)
When K = 1, this reduces to the variational autoencoder (VAE) BID19 BID32. BID3 show that ELBO_IS^K(θ, φ, x) is a lower bound on log p_θ(x) and that increasing K leads to a tighter lower bound. Further, tighter lower bounds arising from increasing K improve learning of the generative model, but worsen learning of the inference network, as the signal-to-noise ratio of θ's gradient estimator grows as O(√K) while that of φ's gradient estimator shrinks as O(1/√K). Moreover, poor learning of the inference network, beyond a certain point (large K), can actually worsen learning of the generative model as well; a finding we explore in Section 4.3. Optimizing the IWAE objective using SGD methods requires unbiased gradient estimators of ELBO_IS^K(θ, φ, x) with respect to θ and φ BID33. The θ gradient ∇_θ ELBO_IS^K(θ, φ, x) is estimated by sampling z_{1:K} ∼ Q_φ(·|x) and evaluating Σ_{k=1}^K w̄_k ∇_θ log p_θ(z_k, x), where w̄_k = w_k / Σ_{j=1}^K w_j are the normalized importance weights. While the φ gradient is estimated similarly for models with reparameterizable latents, discrete (and other non-reparameterizable) latents require the REINFORCE gradient estimator BID36
① − Σ_{k=1}^K w̄_k ∇_φ log q_φ(z_k|x),  where ① = [ log (1/K) Σ_{j=1}^K w_j ] Σ_{k=1}^K ∇_φ log q_φ(z_k|x).   (2)
Since the gradient estimator in Eq. (2) can often suffer from high variance, mainly due to the effect of ①, a number of approaches have been developed to ameliorate the issue.
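To make Eq. (1) concrete, the sketch below evaluates the K-particle bound for a single observation in PyTorch; log_joint, log_q, and sample_q stand in for log p_θ(z, x), log q_φ(z|x), and sampling from q_φ, and are assumed interfaces rather than code from the paper.

import torch

def iwae_elbo(log_joint, log_q, sample_q, x, K):
    # Draw K particles from the inference network and form the importance-weighted
    # bound log (1/K) sum_k p_theta(z_k, x) / q_phi(z_k | x).
    z = sample_q(x, K)                       # K particles
    log_w = log_joint(z, x) - log_q(z, x)    # (K,) log importance weights
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(K)))

# Backpropagating through this bound gives the theta gradient directly; for discrete
# (non-reparameterizable) z, the phi gradient instead requires the REINFORCE estimator
# of Eq. (2) or one of the variance-reduction alternatives reviewed next.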
When employing continuous relaxation methods, all discrete latent variables in the model can be replaced by the Concrete or Gumbel-Softmax BID15 ) distribution, whose gradients can be approximated using the reparameterization trick BID19 BID32 ). The main practical difficulty here is tuning the temperature parameter: low temperatures reduce the gradient estimator bias, but rapidly increase its variance. Moreover, since such relaxations are defined on a simplex, applying them to models that exhibit branching requires the computationally expensive evaluation of all (potentially exponentially many) paths in the generative model. Variational inference for Monte Carlo objectives (VIMCO) BID25 ) is a method that doesn't require designing an explicit control variate, as it exploits the particle set obtained in IWAE. It replaces ① with
Σ_{k=1}^K (A − B_k) ∇_φ log q_φ(z_k|x),  where A = log (1/K) Σ_{j=1}^K w_j and B_k is a leave-one-out baseline built only from the weights {w_j}_{j≠k},
so that term B_k is independent of z_k and highly correlated with term A. Finally, assuming z_k is a discrete random variable with C categories, REBAR BID35 ) and RELAX BID8 improve on the methods of BID24 and BID12, and replace ① with an estimator built around a learned control variate c_ρ, where g_k is a C-dimensional vector of reparameterized Gumbel random variates, z_k is a one-hot argmax function of g_k, and g̃_k is a vector of reparameterized conditional Gumbel random variates conditioned on z_k. The conditional Gumbel random variates are a form of Rao-Blackwellization used to reduce variance. The control variate c_ρ, parameterized by ρ, is optimized to minimize the gradient variance estimates concurrently with the main ELBO optimization, leading to state-of-the-art performance on, for example, sigmoid belief networks BID27. The main practical difficulty in using this method is choosing a suitable family of c_ρ, as some choices lead to higher variance despite the concurrent gradient variance minimization. Moreover, the objective for the concurrent optimization requires evaluating a Jacobian-vector product that induces an overhead of O(D_φ D_ρ), where D_φ, D_ρ are the number of inference network and control-variate parameters respectively. Reweighted wake-sleep (RWS) BID2 comes from another family of algorithms for learning deep generative models, eschewing a single objective over parameters θ and φ in favour of individual objectives for each. We review the RWS algorithm and discuss its advantages and disadvantages. Reweighted wake-sleep (RWS) BID2 is an extension of the wake-sleep algorithm, both of which, like IWAE, jointly learn a generative model and an inference network given data. While IWAE targets a single objective, RWS alternates between objectives, updating the generative model parameters θ using a wake-phase θ update and the inference network parameters φ using either a sleep- or a wake-phase φ update (or both). Wake-phase θ update. Given a current value of φ, θ is updated using an unbiased estimate of ∇_θ ELBO_IS^K(θ, φ, x), which can be obtained directly without needing reparameterization or control variates as the sampling distribution Q_φ(·|x) is independent of θ. Sleep-phase φ update. Here, φ is updated to maximize the negative Kullback-Leibler (KL) divergence between the posteriors under the generative model and the inference network, averaged over the data distribution of the current generative model:
−E_{p_θ(x)}[ KL(p_θ(z|x) || q_φ(z|x)) ].   (3)
The gradient of this objective is E_{p_θ(z,x)}[∇_φ log q_φ(z|x)] and can be estimated by sampling z, x from the generative model p_θ(z, x) and evaluating ∇_φ log q_φ(z|x).
The variance of such an estimator can be reduced at a standard Monte Carlo rate by increasing the number of samples of z, x. Wake-phase φ update. Here, φ is updated to maximize the negative KL divergence between the posteriors under the generative model and the inference network, averaged over the true data distribution:
−E_{p(x)}[ KL(p_θ(z|x) || q_φ(z|x)) ].   (4)
The outer expectation of the gradient, E_{p(x)}[ E_{p_θ(z|x)}[ ∇_φ log q_φ(z|x) ] ], can be estimated using a single sample x from the true data distribution p(x), given which the inner expectation can be estimated using a self-normalized importance sampler with K particles using q_φ(z|x) as the proposal distribution. This results in the following estimator
Σ_{k=1}^K w̄_k ∇_φ log q_φ(z_k|x),  where w̄_k = w_k / Σ_{j=1}^K w_j and w_k = p_θ(z_k, x)/q_φ(z_k|x),   (5)
in a similar fashion to Eq. (2). Note that equation 5 is the negative of the second term of the REINFORCE estimator of the IWAE ELBO in equation 2. The crucial difference of the wake-phase φ update to the sleep-phase φ update is that the expectation in Eq. (4) is over the true data distribution p(x), unlike the expectation in Eq. (3) which is under the current model distribution p_θ(x). The former is desirable from the perspective of amortizing inference over data from p(x). Although the estimator in Eq. (5) is biased, this bias decreases as K increases. While BID2 refer to the use of the sleep-phase φ update followed by an equally-weighted mixture of the wake-phase and sleep-phase updates of φ, we here use RWS to refer to the variant that employs just the wake-phase φ update, and not the mixture. The rationale for our preference will be made clear from the empirical evaluations (Section 4). While the gradient update of θ targets the same objective as IWAE, the gradient update of φ targets the objective in Eq. (3) in the sleep case and Eq. (4) in the wake case. This leads to two advantages for RWS over IWAE. First, since we don't need to use REINFORCE, using RWS leads to much lower variance of gradient estimators of φ. Second, the φ updates in RWS directly target minimization of the expected KL divergences from true to approximate posteriors. Increasing the computational budget (using more Monte Carlo samples in the sleep-phase φ update case and a higher number of particles K in the wake-phase φ update case) results in a better estimator of these expected KL divergences. This is different to IWAE, where optimizing ELBO_IS^K targets a KL divergence on an extended sampling space which for K > 1 doesn't correspond to a KL divergence between true and approximate posteriors (in any order). Consequently, increasing the number of particles in IWAE leads to worse learning of inference networks. The objective of the sleep-phase φ update in equation 3 is an expectation of the KL under the current model distribution p_θ(x) rather than the true one p(x). This makes the sleep-phase φ update suboptimal since the inference network must always follow a "doubly moving" target (both p_θ(x) and p_θ(z|x) change during the optimization). The IWAE and RWS algorithms have primarily been applied to problems with continuous latent variables and/or discrete latent variables that do not actually induce branching, such as sigmoid belief networks BID27. The purpose of the following experiments is to compare RWS to IWAE used alongside the control variate and continuous relaxation methods described in Section 3 on models with conditional branching, where, as we will show, the various control-variate schemes underperform in relation to RWS.
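A compact sketch of one RWS step (wake-phase θ update plus wake-phase φ update) under the same assumed interfaces as the IWAE sketch above; the self-normalized weights implement Eq. (5), and particles are assumed to be non-reparameterized samples from q_φ.

import torch

def rws_wake_step(log_joint, log_q, sample_q, x, K, opt_theta, opt_phi):
    # One RWS step: wake-phase theta update and wake-phase phi update.
    z = sample_q(x, K)                      # K particles from q_phi (no pathwise gradient through z)
    log_p = log_joint(z, x)                 # log p_theta(z_k, x)
    log_qz = log_q(z, x)                    # log q_phi(z_k | x)
    w_bar = torch.softmax((log_p - log_qz).detach(), dim=0)  # normalized importance weights

    theta_loss = -(w_bar * log_p).sum()     # gradient: -sum_k w_bar_k grad_theta log p_theta(z_k, x)
    phi_loss = -(w_bar * log_qz).sum()      # Eq. (5): weighted maximum likelihood for q_phi
    opt_theta.zero_grad(); opt_phi.zero_grad()
    (theta_loss + phi_loss).backward()
    opt_theta.step(); opt_phi.step()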
In several ways, including ELBOs achieved and average distance between true and amortized posteriors, we empirically demonstrate that increasing the number of particles K hurts learning in IWAE but improves learning in RWS. The first experiment, using the deep generative model from Attend, Infer, Repeat (AIR) BID6, demonstrates better learning of the generative model in a model containing both discrete latent variables used for branching as well as continuous latent variables in a complex visual data domain (Section 4.1). The next experiment on MNIST (Section 4.2) does so in a model with continuous latent variables. Finally, a GMM experiment (Section 4.3) serves as a pedagogical example to understand sources of advantage for RWS in more detail. Notationally, the different variants of RWS will be referred to as wake-sleep (WS) and wake-wake (WW). The wake-phase θ update is always used. We refer to using it in conjunction with the sleep-phase φ update as WS and using it in conjunction with the wake-phase φ update as WW. We tried using both wake- and sleep-phase φ updates; however, in addition to doubling the amount of sampling, we found that doing so only improves performance on the continuous latent variable model. The number of particles K used for the wake-phase θ and φ updates will always be specified. The sleep-phase φ update will also use K samples from p_θ(z, x). First, we evaluate WW and VIMCO on AIR, a structured deep generative model with both discrete and continuous latent variables. AIR uses the discrete variable to decide how many continuous variables are necessary to explain an image (see supplementary material for details). The sequential inference procedure of AIR poses a difficult problem, since it implies a sequential decision process with possible branching. See BID6 for the details of the model (see supplementary material for the model in our notation). We set the maximum number of inference steps in AIR to three and we train it on images of size 50 × 50 with zero, one or two MNIST digits. The training and testing data sets consist of 60000 and 10000 images, respectively, which are generated from the respective MNIST train/test datasets. Unlike AIR, which used a Gaussian likelihood with fixed standard deviation and continuous inputs (i.e., input x ∈ [0, 1]^{50×50}), we use a Bernoulli likelihood and binarized data; the stochastic binarization is the same as in BID3. This choice is motivated by initial experiments, which showed that the original setup is detrimental to the sleep phase of WS: samples from the generative model did not look similar to true data even after training had converged. Training is performed over two million iterations by RMSProp BID34 with a learning rate of 10^−5, which is divided by three after 400k and 1000k training iterations. We set the glimpse size to 20 × 20. We first evaluate the generative model via the average test log marginal, where each log marginal is estimated by a one-sample, 5000-particle IWAE estimate. The inference network is then evaluated via the average test KL from the inference network to the posterior under the current model, where each D_KL(q_φ(z|x), p_θ(z|x)) is estimated as the difference between the log marginal estimate above and a 5000-sample, one-particle IWAE estimate. Note that this KL estimate is merely a proxy to the desired KL from the inference network to the posterior under the true model. This experiment confirms (FIG1 ) that increasing the number of particles hurts learning in VIMCO but improves learning in WW.
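The evaluation just described can be written down directly; the sketch below, reusing the assumed interfaces from the earlier sketches, estimates the test log marginal with a one-sample, many-particle IWAE bound and the KL proxy as the gap between that estimate and a many-sample, one-particle bound.

import torch

def log_marginal_estimate(log_joint, log_q, sample_q, x, K=5000):
    # One-sample, K-particle IWAE estimate of log p_theta(x).
    z = sample_q(x, K)
    log_w = log_joint(z, x) - log_q(z, x)
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(K)))

def kl_proxy(log_joint, log_q, sample_q, x, S=5000):
    # Proxy for KL(q_phi(z|x) || p_theta(z|x)): log-marginal estimate minus an
    # S-sample average of one-particle ELBO terms.
    z = sample_q(x, S)
    elbo = (log_joint(z, x) - log_q(z, x)).mean()
    return log_marginal_estimate(log_joint, log_q, sample_q, x) - elbo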
Increasing K improves WW monotonically but VIMCO only up to a point. WW also results in significantly lower variance and better inference networks than VIMCO. RWS has typically only been considered for learning models with discrete latent variables. In order to gauge its applicability to a wider class of problems, we evaluate it on a variational autoencoder with normally distributed latent variables for MNIST. To do this, we use the training procedure and the model with a single stochastic layer of BID3 and their stochastic binarization of the data, with a batch size of 32. We evaluate performance in the same way as we evaluate the AIR model. Additionally, we evaluate the number of active latent units BID3. This experiment confirms (FIG2 and TAB0 ) that increasing the number of particles hurts learning in IWAE but improves learning in WW. Increasing K does improve the marginal likelihood attained by WW and IWAE with reparameterization. However, the latter learns a better model only up to a point (K = 128); further increases in the number of particles have diminishing returns. WW also results in better inference networks than IWAE, as shown by the KL plot on the right of FIG2. To see what is going on, we study a GMM which branches on a discrete latent variable to select cluster assignments. The generative model and inference network are defined by a categorical prior over clusters, p_θ(z) = Categorical(z | softmax(θ)), Gaussian likelihoods p(x|z = c) = N(x | µ_c, σ_c²), and a categorical inference network q_φ(z|x) over the C clusters whose probabilities are produced from x, where z ∈ {0, . . ., C − 1}, C is the number of clusters, and µ_c, σ_c are the fixed mean and standard deviation of cluster c. The true model is set to p_θtrue(x), where softmax(θ_true)_c = (c + 5) / Σ_{i=1}^{C} (i + 5) for c = 0, . . ., C − 1, i.e. the mixture probabilities are linearly increasing with z (FIG0 ). We fix the mixture parameters in order to study the important features of the problem at hand in isolation.
(Figure caption: Quality of the generative model: WS and WW improve with a larger particle budget thanks to lower variance and lower bias estimators of the gradient respectively. IWAE methods suffer with a larger particle budget. WS performs the worst as a consequence of computing the expected KL under the model distribution p_θ(x) (equation 3) instead of the true data distribution p(x) as with WW (equation 4). WW suffers from zero-forcing (described in the text) in low-particle regimes, but learns the best model fastest in the many-particle regime; δ-WW additionally learns well in the low-particle regime. (Middle) The quality of the inference network develops identically to that of the generative model. (Bottom) WW and WS have lower-variance gradient estimators of φ than IWAE, as they don't include the high-variance term ① in equation 2. This is a necessary, but not sufficient, condition for efficient learning, with other important factors being gradient direction and the ability to escape local optima. Note that the KL is between the inference network and the current generative model.)
IWAE was trained with REINFORCE, RELAX, VIMCO and the Concrete distribution. We also train using WS and WW. We fix C = 20 and increase the number of particles from K = 2 to 20. We use the Adam optimizer with learning rate 10^−3 and default β parameters. At each iteration, a batch of 100 data points is generated from the true model to train on. Having searched over several temperature schedules for the Concrete distribution, we use the one with the lowest trainable terminal temperature (linearly annealing from 3 to 0.5).
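The true mixture used to generate training batches can be sampled as below; the cluster means and standard deviations are placeholders, since the text fixes them but does not state their values.

import numpy as np

C = 20
true_probs = np.arange(C) + 5.0          # weights proportional to (c + 5), c = 0..C-1
true_probs /= true_probs.sum()           # mixture probabilities increase linearly with z
mu = 10.0 * np.arange(C)                 # placeholder fixed cluster means
sigma = np.full(C, 5.0)                  # placeholder fixed cluster standard deviations

def sample_batch(batch_size=100):
    # Draw (z, x) pairs from the true model p(z) p(x | z).
    z = np.random.choice(C, size=batch_size, p=true_probs)
    x = np.random.normal(mu[z], sigma[z])
    return z, x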
We found that using a control variate c_ρ(g_{1:K}) given by a neural network with layer sizes (1 + C)-16-16-1 (with tanh nonlinearity) led to the most stable training (see supplementary material for more details). We evaluate the generative model, the inference network, and the variance of the gradient estimator of φ. The generative model is evaluated via the L2 distance between the probability mass functions (PMFs) of its prior and the true prior, ‖softmax(θ) − softmax(θ_true)‖_2. The inference network is evaluated via the L2 distance between the PMFs of the current and true posteriors, averaged over a fixed set (M = 100) of observations (x_test^(m))_{m=1}^M from the true model: (1/M) Σ_{m=1}^M ‖q_φ(·|x_test^(m)) − p_θtrue(·|x_test^(m))‖_2. The variance of the gradient estimator of φ is evaluated as the standard deviation std(g_d), averaged over dimensions d, where g_d is the dth element of one of φ's gradient estimators (e.g. equation 2 for REINFORCE) and std(·) is estimated using 10 samples. Here, we demonstrate that using WS and WW with larger particle budgets leads to better inference networks, whereas this is not the case for IWAE methods (FIG3 ). Recall that the former is because using more samples to estimate the gradient of the sleep φ objective (equation 3) for WS reduces variance at a standard Monte Carlo rate, and that using more particles in equation 5 to estimate the gradient of the wake φ objective results in a lower bias. The latter is because using more particles results in the signal-to-noise ratio of IWAE's φ gradient estimator dropping at the rate O(1/√K). Learning of the generative model, as a consequence of inference-network learning, is also better for WS and WW, but worse for IWAE methods when the particle budget is increased. This is because the θ gradient estimator (common to all methods), ∇_θ ELBO_IS^K(θ, φ, x), can be seen as an importance sampling estimator whose quality is tied to the proposal distribution (inference network). WW and WS have lower variance gradient estimators than IWAE, even if IWAE is used with control-variate and continuous-relaxation methods. This is because φ's gradient estimators for WW and WS don't include the high-variance term ① in equation 2. This is a necessary but not sufficient condition for efficient learning, with other important factors being gradient direction and the ability to escape local optima (explored below). Employing the Concrete distribution gives low-variance gradients for φ to begin with, but the model learns poorly due to the high gradient bias (caused by the high temperature hyperparameter). We now describe a failure mode that affects WS, WW, VIMCO and REINFORCE, which we will refer to as zero-forcing. It is best illustrated by inspecting the generative model and the inference network as the training progresses, focusing on the low-particle (K = 2) regime (FIG4 ). For WS, the generative model p_θ(z) peaks at z = 9 and puts zero mass on z > 9; the inference network q_φ(z|x) becomes the posterior for this model which, in this model, also has support at most {0, . . ., 9} for all x. This is a local optimum for WS because (i) the inference network already approximates the posterior of the model p_θ(z, x) well, and (ii) the generative model p_θ(z), being trained using samples from q_φ(z|x), has no samples outside of its current support ({0, . . ., 9}). A similar failure mode occurs for WW and VIMCO/REINFORCE, although the support of the locally optimal p_θ(z) is larger ({0, . . ., 14} and {0, . . ., 17} respectively).
WS attempts to amortize inference for the current model distribution p_θ(x), which reinforces the coupling between the generative model and the inference network, making it easier to get stuck in a local optimum. WW with few particles (say K = 1), on the other hand, results in a highly-biased gradient estimator (equation 5) that samples z from q_φ(·|x) and evaluates ∇_φ log q_φ(z|x); this encourages the inference network to concentrate mass. This behavior is not seen in WW with many particles, where it is the best algorithm at learning both a good generative model and inference network (FIG3, right). We propose a simple extension of WW, denoted δ-WW, that mitigates this shortcoming by changing the proposal of the self-normalized importance sampling estimator in Eq. (5) to q_φ,δ(z|x) = (1 − δ) q_φ(z|x) + δ Uniform(z). We use δ = 0.2, noting that the method is robust to a range of values. Using a different proposal than the inference network q_φ(z|x) means that using the low-particle estimator in Eq. (5) no longer leads to zero-forcing. This is known as defensive importance sampling BID13, and is used to better estimate integrands that have long tails using short-tailed proposals. Using δ-WW outperforms all other algorithms in learning both the generative model and the inference network. Our experiments suggest that RWS learns both better generative models and inference networks in models that involve discrete latent variables, while performing just as well as the state of the art on continuous-variable models as well. The AIR experiment (Section 4.1) shows that the trained inference networks are unusable when trained with a high number of particles. Moreover, the MNIST experiment (Section 4.2) suggests that RWS is competitive even on models with continuous latent variables, especially for a high number of particles where the IWAE ELBO starts suffering from worse inference networks. The GMM experiment (Section 4.3) illustrates that this is at least in part due to a lower variance gradient estimator for the inference network and the fact that for RWS, unlike the case of optimizing the IWAE ELBO, increasing the number of particles actually improves the inference network. In the low-particle regime, the GMM suffers from zero-forcing of the generative model and the inference network, which is ameliorated using defensive RWS. Finally, all experiments show that, beyond a certain point, increasing the particle budget starts to affect the quality of the generative model for the IWAE ELBO, whereas this is not the case for RWS. As a consequence of our findings, we recommend reconsidering using RWS for learning deep generative models, especially those containing discrete latent variables that induce branching.
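The defensive mixture proposal is essentially a one-line change to the wake-phase φ weights; the sketch below shows it for a discrete latent with C categories, reusing the assumed interfaces from the earlier RWS sketch.

import math
import torch

def delta_ww_weights(log_joint, log_q, x, z, C, delta=0.2):
    # Defensive proposal: q_delta(z|x) = (1 - delta) q_phi(z|x) + delta / C.
    # Note: the particles z should also be drawn from this mixture, not from q_phi alone.
    log_qz = log_q(z, x)
    log_q_def = torch.logaddexp(log_qz + math.log(1.0 - delta),
                                torch.full_like(log_qz, math.log(delta / C)))
    log_w = log_joint(z, x) - log_q_def
    return torch.softmax(log_w.detach(), dim=0)   # replaces w_bar in the wake-phase phi update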
Empirical analysis and explanation of particle-based gradient estimators for approximate inference with deep generative models.
1,323
scitldr
The objective in deep extreme multi-label learning is to jointly learn feature representations and classifiers to automatically tag data points with the most relevant subset of labels from an extremely large label set. Unfortunately, state-of-the-art deep extreme classifiers are either not scalable or inaccurate for short text documents. This paper develops the DeepXML algorithm which addresses both limitations by introducing a novel architecture that splits training of head and tail labels. DeepXML increases accuracy by (a) learning word embeddings on head labels and transferring them through a novel residual connection to data impoverished tail labels; (b) increasing the amount of negative training data available by extending state-of-the-art negative sub-sampling techniques; and (c) re-ranking the set of predicted labels to eliminate the hardest negatives for the original classifier. All of these contributions are implemented efficiently by extending the highly scalable Slice algorithm for pretrained embeddings to learn the proposed DeepXML architecture. As a result, DeepXML could efficiently scale to problems involving millions of labels that were beyond the pale of state-of-the-art deep extreme classifiers, as it could be more than 10x faster at training than XML-CNN and AttentionXML. At the same time, DeepXML was also empirically determined to be up to 19% more accurate than leading techniques for matching search engine queries to advertiser bid phrases. Objective: This paper develops the DeepXML algorithm for deep extreme multi-label learning applied to short text documents such as web search engine queries. DeepXML is demonstrated to be significantly more accurate and an order of magnitude faster to train than the state-of-the-art deep extreme classifiers XML-CNN and AttentionXML. As a result, DeepXML could efficiently train on problems involving millions of labels on a single GPU that were beyond the scaling capabilities of leading deep extreme classifiers. This allowed DeepXML to be applied to the problem of matching millions of advertiser bid phrases to a user's query on a popular web search engine, where it was found to increase prediction accuracy by more than 19 percentage points as compared to the leading techniques currently in production. Deep extreme multi-label learning: The objective in deep extreme multi-label learning is to learn feature representations and classifiers to automatically tag data points with the most relevant subset of labels from an extremely large label set. Note that multi-label learning is a generalization of multi-class classification, which aims to predict a single mutually exclusive label. Notation: Throughout the paper, N refers to the number of training points, d refers to the representation dimension, and L refers to the number of labels. Additionally, Y refers to the label matrix where y_ij = 1 if the jth label is relevant to the ith instance, and 0 otherwise. Please note that differences in accuracies are reported in absolute percentage points unless stated otherwise. Matching queries to bid phrases: Web search engines allow ads to be served not just for queries bid on directly by advertisers, referred to as bid phrases, but also for related queries with matching intent. Thus matching a query that was just entered by the user to the relevant subset of millions of advertiser bid phrases in milliseconds is an important research application which forms the focus of this paper.
DeepXML reformulates this problem as an extreme multi-label learning task by treating each of the top 3 million monetizable advertiser bid phrases as a separate label and learning a deep classifier to predict the relevant subset of bid phrases given an input query. For example, given the user query "what is diabetes type 2" as input, DeepXML predicts that ads corresponding to the bid phrases "what is type 2 diabetes mellitus", "diabetes type 2 definition", "do i have type 2 diabetes", etc. could be relevant to the user. Note that other high-impact applications have also been reformulated as the extreme classification of short text documents such as queries, webpage titles, etc. For instance, extreme multi-label learning has been applied to recommend the subset of relevant Bing queries that could be asked by a user instead of the original query. Similarly, extreme multi-label learning could be used to predict which subset of search engine queries might lead to a click on a webpage from its title alone for scenarios where the webpage content might not be available due to privacy concerns, latency issues in fetching the webpage, etc. State-of-the-art extreme classifiers: Unfortunately, state-of-the-art extreme classifiers are either not scalable or inaccurate for queries and other short text documents. In particular, leading extreme classifiers based on bag-of-words (BoW) features (b) and pretrained embeddings are highly scalable but inaccurate for documents having only 3 or 4 words. While feature engineering, including taking sub-word tokens, bigram tokens, etc., can ameliorate the problem somewhat, the accuracy of such features still lags that of deep learning methods which learn features specific to the task at hand. However, such methods, as exemplified by the state-of-the-art XML-CNN and AttentionXML, can have prohibitive training costs and have not been shown to scale beyond a million labels on a single GPU. At the same time, there is a lot of scope for improving accuracy as XML-CNN and AttentionXML's architectures have not been specialized for short text documents. Tail labels: It is worth noting that all the computational and statistical complexity in extreme classification arises due to the presence of millions of tail labels, each having just a few, often a single, training point. Such labels can be very hard to learn due to data paucity. However, in most applications, predicting such rare tail labels accurately is much more rewarding than predicting common and obvious head labels. This motivates DeepXML to have specialized architectures for head and tail labels which lead to accuracy gains not only in standard metrics which assign equal weights to all labels but also in propensity scored metrics designed specifically for long-tail extreme classification. DeepXML: DeepXML improved both accuracy and scalability over existing deep extreme classifiers by partitioning all L labels into a small set of head labels, with cardinality less than 0.1L, containing the most frequently occurring labels and a large set of tail labels containing everything else. DeepXML first represented a document by the tf-idf weighted linear combination of its word-vector embeddings as this architecture was empirically found to be more suitable for short text documents than the CNN and attention based architectures of XML-CNN and AttentionXML respectively. The word-vector embeddings of the training documents were learnt on the head labels where there was enough data available to learn a good quality representation of the vocabulary.
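A minimal sketch of the tf-idf weighted bag-of-embeddings document representation described above; the tokenizer, the embedding matrix, and the tf-idf weight lookup are assumed to be precomputed, and the variable names are illustrative.

import numpy as np

def embed_document(tokens, word_vectors, vocab_index, tfidf_weight):
    # word_vectors: (V, d) embedding matrix; vocab_index: token -> row index;
    # tfidf_weight: token -> float. Returns the tf-idf weighted sum of embeddings.
    vec = np.zeros(word_vectors.shape[1])
    for tok in tokens:
        if tok in vocab_index:
            vec += tfidf_weight(tok) * word_vectors[vocab_index[tok]]
    return vec

# e.g. embed_document("what is diabetes type 2".split(), E, vocab, idf_lookup)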
Accuracy was then further boosted by the introduction of a novel residual connection to fine-tune the document representation for head labels. This head architecture could be efficiently learnt on a single GPU with a fully connected final output layer due to the small number of labels involved. The word-vector embeddings were then transferred to the tail network where there wasn't enough data available to train them from scratch. Accuracy gains could potentially be obtained by fine tuning the embeddings but this led to a dramatic increase in the training and prediction costs. As an efficient alternative, DeepXML achieved state-of-the-art accuracies by fine tuning only the residual connection based document representation for tail labels. A number of modifications were made to the highly scalable Slice classifier for pre-trained embeddings to allow it to also train the tail residual connection without sacrificing scalability. Finally, instead of learning an expensive ensemble of base classifiers to increase accuracy (b), DeepXML improved performance by re-ranking the set of predicted labels to eliminate the hardest negatives for the base classifier with only a 10% increase in training time. Results: Experiments on medium scale datasets of short text documents with less than a million labels revealed that DeepXML's accuracy gains over XML-CNN and AttentionXML could be up to 3.92 and 4.32 percentage points respectively in terms of precision@k and up to 5.32 and 4.2 percentage points respectively in terms of propensity-scored precision@k. At the same time, DeepXML could be up to 15× and 41× faster to train than XML-CNN and AttentionXML respectively on these datasets using a single GPU. Furthermore, XML-CNN and AttentionXML were unable to scale to a proprietary dataset for matching queries to bid phrases containing 3 million labels and 21 million training points, on which DeepXML trained in 14 hours on a single GPU. On this dataset, DeepXML was found to be at least 19 percentage points more accurate than Slice, Parabel (b), and other leading query bid phrase-matching techniques currently running in production. Contributions: This paper makes the following contributions: (a) It proposes the DeepXML architecture for short text documents that is more accurate than state-of-the-art extreme classifiers; (b) it proposes an efficient training algorithm that allows DeepXML to be an order of magnitude more scalable than leading deep extreme classifiers; and (c) it demonstrates that DeepXML could be significantly better at matching user queries to advertiser bid phrases as compared to leading techniques in production on a popular web search engine. Source code for DeepXML and the short text document datasets used in this paper can be downloaded from . Much work has been done in extreme multi-label classification, which can be broadly categorized into two groups: a) learning the classifier with pre-computed features (e.g., Cissé et al., 2013; Babbar & Schölkopf, 2017; 2019), and b) jointly learning the feature representation along with the classifier. A more extensive survey of extreme classification and deep learning approaches can be found in section A.5 in the supplementary material. Traditionally, extreme classification approaches used sparse BoW features due to fast and efficient feature computation as well as state-of-the-art performance for a large number of labels.
However, for short text documents such as queries, which form the focus of the paper, deep learning representations are more effective than BoW features. Unfortunately, existing deep learning approaches for extreme classification yielding state-of-the-art accuracy are not scalable, while scalable approaches have not been shown to yield state-of-the-art accuracy. In particular, XML-CNN achieves state-of-the-art accuracy on short text documents but has not been shown to scale beyond a million labels, whereas AttentionXML was found to be slightly more scalable but less accurate, as shown in Table 1. Scalability: Deep learning techniques have been very successful and have comprehensively beaten BoW features in small output spaces. Unfortunately, the scalability of such techniques degrades in the extreme multi-label (XML) setting as the final fully connected layer leads to a cost linear in the number of labels. The traditional approach to solving this problem is negative sampling. Unfortunately, at the extreme scale, negative sampling has to be applied more aggressively, as demonstrated in Fig 5 in the appendix. To eliminate this problem, many approaches have been proposed, such as tree based (b), hashing based, and approximate nearest neighbor sub-sampling techniques. Adding to that, the prediction time complexity remains O(dL), which is not suitable for real-time predictions. Approaches such as Maximum Inner Product Search and Locality Sensitive Hashing can speed up one-vs.-all inference but lead to a further loss in accuracy when applied post-training. Accuracy: Tail labels are harder to predict as compared to head labels due to training data scarcity but might also be more informative and rewarding in the XML setting. For instance, predicting the tag "Artificial intelligence researchers" could be more informative than the tag "1954 deaths" on the Wikipedia page of "Alan Turing". Hence, propensity based precision and nDCG are the focus of the paper. It has been demonstrated that trimming tail labels leads to only a marginal decay in performance on vanilla metrics. However, a) trimming tail labels leads to a loss in propensity based metrics, and b) this approach has been demonstrated to scale to only ≈ 30K labels. Researchers have also tried to improve performance on tail labels by directly optimizing propensity scored metrics, posing learning in the presence of adversarial perturbations (Babbar & Schölkopf, 2019), and treating tail labels as outliers. Although these approaches boost performance on tail labels, they are not well suited for short text documents due to a) support only for fixed features and b) large training and prediction times. Matching user queries to bid phrases: Approaches in this domain can be categorized as embedding based, sequence-to-sequence models, and query graph based models. Unfortunately, the trigger coverage, suggestion density and quality of recommendations could be poor for many of these techniques. For instance, query graph based methods can only recommend suggestions for previously seen triggers, thereby limiting their trigger coverage. Additionally, sequence-to-sequence models suffer from expensive training and prediction costs. Although efficient structures such as tries have been deployed to reduce the output complexity, this comes at the cost of limited bid phrase coverage.
State-of-the-art deep extreme classifiers are neither scalable nor accurate for short text documents because: (a) representations learnt through CNNs (XML-CNN), LSTM+Attention (AttentionXML) or context (BERT, ELMo, etc.) might not be accurate given limited tail data with only a few words per document and might also be expensive to compute; and (b) they face scalability issues as the final fully connected output layer has millions of outputs, making both the forward pass as well as gradient backpropagation prohibitive for even a single training point. DeepXML addresses these limitations by using a feature representation inspired by FastText and an output layer inspired by Slice, as these have been demonstrated to be both accurate and scalable for short text documents in their individual capacities. Unfortunately, combining FastText and Slice in a straightforward fashion also turns out to be neither accurate nor scalable. The rest of this section details the design choices and modifications that needed to be made in order to get the combination to work. DeepXML, FastText & Slice: FastText represents a document as an efficiently computable linear combination of its (sub)word-vector embeddings, making it highly scalable and well suited for extreme classification scenarios. Unfortunately, the FastText architecture is linear and low-capacity, thereby leading to a loss in accuracy when there isn't a vast amount of unsupervised data available for training. Furthermore, previous attempts at learning FastText representations in a supervised manner by replacing the fully connected output layer by a fixed Probabilistic Label Tree (PLT) extreme classifier have led to even worse accuracies than XML-CNN and AttentionXML. Replacing the fixed-tree PLT by learnt Slice improves accuracy somewhat but does not lead to state-of-the-art accuracy and also greatly increases training time (please see the ablation experiments in Section 5). DeepXML addresses these limitations by adding a non-linearity and a residual block to make up for the lack of FastText's expressive power and by training the word-vector embeddings using a fully connected output layer on a small number of head labels rather than through Slice. Please refer to Figure 1 for the full architecture of DeepXML. Once the word-vector embeddings have been trained on the head, they are frozen and transferred to the tail, where only the residual block is fine tuned. This increases both accuracy, as there isn't enough training data available to learn good quality word embeddings from scratch on the tail, and scalability, as fine-tuning the head embeddings on the tail would prove too expensive on large problems. DeepXML-h: Model parameters, i.e., word embeddings, residual block and fully connected classifier, are learnt with binary cross entropy loss and the Adam optimizer. DeepXML-h can be efficiently trained with a fully connected final layer on a single GPU, as L_h (the label set containing head labels) contains only a small subset of labels. In practice, the size of L_h does not grow beyond 0.2M even for datasets with millions of labels. Additionally, an approximate nearest neighbour search (ANNS) structure was trained over the label centroids. Here, |s| is the size of the label shortlist queried from the ANNS structure during prediction, which is kept at 300 in practice.
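A rough PyTorch sketch of the head network described above (embedding layer, non-linearity plus residual block, and a fully connected classifier over the head labels); the layer sizes and the exact residual form are assumptions based on the description, not the released code.

import torch
import torch.nn as nn

class DeepXMLHead(nn.Module):
    def __init__(self, vocab_size, dim, num_head_labels):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode='sum')   # tf-idf weights passed at call time
        self.residual = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.classifier = nn.Linear(dim, num_head_labels)

    def forward(self, token_ids, offsets, tfidf):
        doc = self.emb(token_ids, offsets, per_sample_weights=tfidf)  # weighted bag of embeddings
        doc = doc + self.residual(doc)                                # residual refinement of the document
        return self.classifier(doc)                                   # one logit per head label

# Trained with nn.BCEWithLogitsLoss() against the multi-hot head-label matrix.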
Unfortunately, ANNS trained over label centroids may lead to poor recall values when a single centroid is unable to capture diversity in training instances for labels with highly different contexts (say L h). For instance, 280K articles, ranging from scientists to athletes, are tagged with the'living people' tag in WikiTitle-500K dataset. Slice increases the shortlist to improve recall; however, at the cost of 6× increase in prediction time. DeepXML-h tackles this issue by allowing multiple representations for labels in L h. Documents are clustered using the KMeans algorithm into c clusters for each label in this set. Therefore, c representatives for each label in L h are computed. This leads to 5% increase in recall@300 and 6% precision@300 with a shortlist of size 300. Clustering will lead to |L h − L h | + c|L h | label representations and hence could potentially lead to increased time complexity. However, clustering just the top 3 labels into 300 clusters seems to work well in our experiments without significant change in training time, and it doesn't impact prediction time at all. DeepXML-t: DeepXML still relies on Slice to fine-tune the residual block in the tail network and learn the weights in the fully connected output layer for millions of tail labels. Slice cuts down the time for both the forward pass and the gradient backpropagation from linear to logarithmic in the number of labels. Slice achieves this by first representing each label by the unit normalized mean of the feature vectors of the training points tagged with the label. It then learns an ANNS data structure over the label representations to determine the most likely labels for a given data point (please see the supplementary material for a more detailed description). This technique was shown to efficiently scale to millions of labels and training points when the feature representation was fixed . Unfortunately, when the feature representation is being learnt, the ANNS data structure needs to be constantly updated as the label representations change with each training batch. This can lead to a marked slowdown as maintaining the ANNS data structure can incur significant computational cost. DeepXML speeds up training by redefining the label representation to be the unit normalized mean of the document representation before the residual block computed using the learnt word embeddings alone. This allows the ANNS data structure to be learnt just once after the word embeddings have been learnt on the head labels and before training starts on the tail. Unfortunately, this also leads to a loss in training accuracy as the label representation is now an approximation to the true representation that should have been defined as the unit normalized mean of the document representations computed after the residual block. This loss in accuracy can be compensated by requiring Slice to generate 3x more nearest labels to a given data point. This allowed the hard negative labels to now be present in the longer list but this significantly increased training time. It was empirically determined that a more efficient strategy was to extend the shortlist by adding randomly sampled negative labels as this led to no loss in accuracy with only a minimal increase in training time. Initially, classifier scores and ANNS scores from both DeepXML-h and DeepXML-t are merged in a single vectorŷ. As demonstrated in Fig. 1,ŷ clf −h is the classifier score (logit) from DeepXML-h andŷ anns−h is the cosine similarity from ANNS-h. 
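The multiple-representations idea above can be sketched as follows; the use of scikit-learn's KMeans, the function names, and the unit-normalisation step mirror the description in this section but are otherwise illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

def head_label_centroids(doc_vecs, doc_labels, labels_to_cluster, c=300):
    # doc_vecs: (n_docs, d) document representations; doc_labels: list of label lists per document.
    per_label = {}
    for vec, labels in zip(doc_vecs, doc_labels):
        for l in labels:
            per_label.setdefault(l, []).append(vec)
    centroids, owners = [], []
    for l, vecs in per_label.items():
        vecs = np.vstack(vecs)
        if l in labels_to_cluster and len(vecs) > c:
            reps = KMeans(n_clusters=c, n_init=3).fit(vecs).cluster_centers_   # c representatives per diverse label
        else:
            reps = vecs.mean(axis=0, keepdims=True)                            # single mean centroid otherwise
        reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)              # unit-normalised representations
        centroids.append(reps)
        owners.extend([l] * len(reps))
    return np.vstack(centroids), np.array(owners)

An ANNS index (e.g. HNSW) built over these centroids then returns the nearest centroids for a document, and their deduplicated owner labels form the size-|s| shortlist.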
Similarly, ŷ_clf−t is the classifier score from DeepXML-t and ŷ_anns−t is the cosine similarity from ANNS-t. The final DeepXML score vector ŷ is computed by combining these scores, where ŷ_clf−t, ŷ_clf−h, ŷ_anns−h, and ŷ_anns−t are sparse vectors, σ is a sparse sigmoid function computed only at the non-zero entries, and β ∈ [0, 1]. The average cost of prediction can be broken down into the following components: (a) computing the dense feature representation: O(dγ); (b) generating a shortlist: O(d log |L_t|); (c) computing classifier scores: O(d|s|). Here, γ is the average number of features per document and |s| is the shortlist size in ANNS. Extreme classifiers such as Parabel (b) and AttentionXML learn multiple models to get better prediction accuracy. However, this increases training and prediction time linearly with the number of models (200% for both Parabel and AttentionXML). In contrast, DeepXML-RE learns a re-ranker with training cost logarithmic in the number of labels by training over a shortlist of negative labels. Specifically, false positive labels predicted by DeepXML (a.k.a. the hardest negatives) are selected as negatives for DeepXML-RE. This leads to only a 10-20% increase in training time. The architecture of DeepXML-RE is the same as that of DeepXML-t, i.e., a word embedding layer, a residual block, and a classifier. Model parameters are learnt with the binary cross-entropy loss and the SparseAdam optimizer. During prediction, DeepXML-RE evaluates only the labels shortlisted by DeepXML, thereby incurring a prediction cost of O(d|s|). Experiments were carried out on the Query to Bid phrases (Q2B-3M) dataset, with 3 million labels, mined from the logs of a popular search engine. Each user query was treated as an instance and relevant advertiser bid phrases became its labels. Unfortunately, only Slice and Parabel (b) could scale to this dataset. DeepXML is also compared to state-of-the-art methods for query keyword prediction such as Simrank++ and a sequence-to-sequence model based on BERT. Experiments were also carried out on four moderate-size datasets. The applications considered were tagging Wikipedia pages (WikiTitles-500K), suggesting relevant articles (WikiSeeAlsoTitles-250K), and item-to-item recommendation of Amazon products (AmazonTitles-670K and AmazonTitles-3M). Please refer to supplementary Table 4 for dataset statistics. For these moderate-size datasets, DeepXML was compared to leading deep learning and BoW feature-based methods including XML-CNN, AttentionXML, Slice, AnnexML, PfastreXML, Parabel (b), XT, and DiSMEC (Babbar & Schölkopf, 2017). The implementations of all algorithms were provided by the respective authors. DeepXML has 7 hyperparameters: (a) learning rate and epochs for DeepXML-h; (b) learning rate and epochs for DeepXML-t; (c) embedding dimension (d); (d) shortlist size (|s|); and (e) the threshold used to split the label set into head and tail labels. Results are reported for d = 300 and |s| = 300. The label threshold is chosen via cross-validation (please refer to Section A.2 in the supplementary material). The binary cross-entropy loss and the SparseAdam optimizer were used to update the model parameters. Please refer to Table 5 in the supplementary section for the full parameter settings. Table 1 shows that DeepXML and DeepXML-RE can outperform state-of-the-art BoW-based approaches as well.
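Since the exact combination equation is not reproduced in this extract, the following sketch shows one plausible reading of the merging step: classifier logits passed through a sigmoid are mixed with ANNS cosine similarities using the weight β, and only shortlisted (non-zero) entries are ever touched. Treat the convex-combination form and all names here as assumptions rather than the paper's exact rule.

import numpy as np

def sparse_sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))            # applied only to shortlisted entries

def merge_scores(shortlist, clf_logits, anns_sims, beta=0.5):
    # shortlist: label ids from ANNS; clf_logits / anns_sims: scores aligned with the shortlist.
    return {l: beta * sparse_sigmoid(c) + (1 - beta) * s
            for l, c, s in zip(shortlist, clf_logits, anns_sims)}

# Head and tail label sets are disjoint, so the two merged score dictionaries can simply be unioned.
scores_h = merge_scores([2, 7], np.array([1.3, -0.4]), np.array([0.8, 0.2]))
scores_t = merge_scores([10_001, 54_321], np.array([0.2, 2.1]), np.array([0.5, 0.9]))
final = {**scores_h, **scores_t}
top5 = sorted(final, key=final.get, reverse=True)[:5]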
Note that on AmazonTitles-3M, PfastreXML is 5.6 percentage points better than DeepXML as well as DeepXML-RE on propensity-scored metrics, but incurs a loss of 10 percentage points on vanilla precision, which is clearly unacceptable for real-world tasks where both metrics are important. Please refer to Section A.3 in the appendix for more discussion. Table 6 includes example documents which demonstrate that DeepXML can make accurate and diverse predictions. This section discusses the impact of feature representations (DeepXML-SW, DeepXML-f) and classifiers (DeepXML-fr, DeepXML-ANNS, DeepXML-P, DeepXML-NS). The detailed architectures are included in Section A.6 in the supplementary section. Sub-word features: DeepXML can also exploit sub-word features for an additional 1% gain in precision, as demonstrated in Table 3. Label split and classifier: DeepXML-fr refers to a joint DeepXML architecture, i.e., without splitting labels, trained with a fully connected layer. As demonstrated in Table 1, the proposed algorithm is more scalable and more accurate on tail labels than DeepXML-fr. DeepXML can be up to 1-3% more accurate than DeepXML-NS and DeepXML-ANNS, where the classifier is trained using negative sampling. Additionally, DeepXML was found to be 1.3% more accurate than DeepXML-P, which uses a tree-based classifier. Pre-trained features: DeepXML can be up to 10% more accurate than pre-trained representations used with a Slice classifier (refer to Tables 1 & 3) such as FastText, BERT, and SIF. This paper developed DeepXML, an algorithm to jointly learn representations for extreme multi-label learning on text data. The proposed algorithm addresses the key issues of scalability and low accuracy (especially on tail labels and very short documents) in existing approaches such as Slice, AttentionXML, and XML-CNN, and hence improves on them substantively. Experiments revealed that DeepXML-RE can lead to a 1.0-4.3 percentage point gain in performance while being 33-42× faster at training than AttentionXML. Furthermore, DeepXML was up to 15 percentage points more accurate than leading techniques for matching search engine queries to advertiser bid phrases. We note that DeepXML's gains are predominantly seen on predicting tail labels (for which very few direct word associations are available at train time) and on short documents (for which very few words are available at test time). This indicates that the method is doing especially well, compared to earlier approaches, at learning word representations which allow for richer and denser associations between words; such representations allow the words to be well-clustered in a meaningful semantic space, so that useful and generalisable information about document labels can be extracted even when the number of direct word co-occurrences observed is very limited. In the future we would like to better understand the nature of these representations, and explore their utility for other linguistic tasks. Table 5 lists the parameter settings for the different datasets. Experiments were performed with a random seed of 22 on a P40 GPU card with CUDA 10, CuDNN 7.4, and PyTorch 1.2. Figure 4: Precision@5 on the k% most frequent labels. Table 5: Parameter settings for DeepXML on different datasets. Dropout with probability 0.5 was used for all datasets. The learning rate is decayed by the decay factor after an interval of decay steps. For HNSW, the construction parameters are M = 100 and efC = 300, and the query parameter is efS = 300.
Denoted by '|', DeepXML-h and DeepXML-t might take different values for some parameters. Note that DeepXML-t uses a shortlist of size 500 during training. However, a shortlist of size 300 queried from ANNS is used at prediction time for both DeepXML-h and DeepXML-t. A label set L is divided into two disjoint sets, i.e., L_h and L_t, based on the frequency of the labels. Labels with a frequency above the splitting threshold γ are kept in the set L_h and the others in L_t. The splitting threshold γ is chosen while ensuring that most of the features (or words) are covered by documents that have at least one instance of a label in the set L_h, and that |L_h| < 0.2M. The two components of DeepXML, DeepXML-h and DeepXML-t, are trained on L_h and L_t. Please note that other strategies, such as clustering of labels or connected components of labels in a graph, were also tried, but the above-mentioned strategy provides good results without any additional overhead. More sophisticated splitting algorithms, such as label clustering, may yield better results, however at the cost of increased training time. DeepXML and DeepXML-RE yield 3-4% better accuracy on propensity-scored metrics and can be up to 2% more accurate on vanilla metrics. Note that PfastreXML outperforms DeepXML and DeepXML-RE on AmazonTitles-3M in propensity-scored metrics, but suffers a substantial loss of 10% on vanilla precision and nDCG, which is unacceptable for real-world applications. Performance has been evaluated using propensity-scored precision@k and nDCG@k, which are unbiased and more suitable metrics in the extreme multi-label setting (Babbar & Schölkopf, 2019; a;). The propensity model and values available on The Extreme Classification Repository were used. Performance has also been evaluated using vanilla precision@k and nDCG@k (with k = 1, 3, and 5) for extreme classification. For a predicted score vector ŷ ∈ R^L and ground truth vector y ∈ {0, 1}^L, the propensity-scored metrics take the standard form PSP@k(ŷ, y) = (1/k) Σ_{l ∈ rank_k(ŷ)} y_l / p_l and PSnDCG@k(ŷ, y) = PSDCG@k / Σ_{l=1}^{min(k, ||y||_0)} 1/log(l + 1), with PSDCG@k(ŷ, y) = Σ_{l ∈ rank_k(ŷ)} y_l / (p_l log(l + 1)), where rank_k(ŷ) denotes the k highest-scored labels. Here, p_l is the propensity score of label l proposed in prior work. Representations: Deep architectures such as CNNs, MLPs, and LSTMs with attention have been applied to learn semantically and syntactically rich features. Barring AttentionXML, these methods suffer from low accuracy, which is more prominent on tail labels, indicating inept document representations for tail labels. However, the performance of AttentionXML also degrades for short text documents, as discussed in Section 4.3. Parabel: Parabel learns a hierarchy over labels to select the hardest negatives for each label and hence brings down the training cost to O(N d log L). However, Parabel is designed specifically for sparse BoW features and its performance degrades on low-dimensional features. Slice: Negative sampling is a popular approach to reduce training complexity in the extreme classification setting. Several strategies have been proposed in the literature to select negative labels for each instance, or negative examples for each label (; ; b). Slice has been shown to scale to 100 million labels and to be more accurate than alternative approaches (; b). Experiments were carried out with several variations of DeepXML where labels were not split into head and tail labels. Here are the different configurations: DeepXML-f: This variation refers to a word embedding layer, ReLU non-linearity, Dropout, and a fully connected output layer. DeepXML-fr: This variation refers to a word embedding layer, ReLU non-linearity, Dropout, a residual block, and a fully connected output layer.
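The propensity-scored metrics defined earlier in this passage can be computed as in the sketch below. The base-2 logarithm follows the common Extreme Classification Repository convention, which is an assumption here, and the propensity values p_l are taken as given.

import numpy as np

def psp_at_k(scores, truth, prop, k=5):
    # Propensity-scored precision@k: (1/k) * sum over the top-k predicted labels of y_l / p_l.
    top = np.argsort(-scores)[:k]
    return np.sum(truth[top] / prop[top]) / k

def psndcg_at_k(scores, truth, prop, k=5):
    # Propensity-scored nDCG@k: PSDCG@k normalised by the ideal DCG over min(k, #relevant) positions.
    top = np.argsort(-scores)[:k]
    discounts = np.log2(np.arange(2, k + 2))                 # log(l + 1) for ranks l = 1..k
    dcg = np.sum(truth[top] / (prop[top] * discounts))
    ideal = np.sum(1.0 / np.log2(np.arange(2, min(k, int(truth.sum())) + 2)))
    return dcg / ideal if ideal > 0 else 0.0

# Hypothetical toy example with L = 6 labels; tail labels get small propensities.
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.05])
truth = np.array([1, 0, 0, 0, 1, 1])
prop = np.array([0.9, 0.5, 0.8, 0.4, 0.2, 0.1])
print(psp_at_k(scores, truth, prop, k=3), psndcg_at_k(scores, truth, prop, k=3))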
DeepXML-NS: This variation refers to a word embedding layer, ReLU non-linearity, Dropout, a residual block, and a classifier trained via negative sampling. DeepXML-SW: DeepXML can exploit sub-word features as proposed in prior work. This variation refers to DeepXML with sub-word information; here, character tri-grams were added to the vocabulary in addition to unigrams. DeepXML-ANNS: This variation refers to a word embedding layer, ReLU non-linearity, Dropout, a residual block, and a classifier trained via Slice. This version trains only with the hardest negative labels selected via ANNS. However, the hardest negatives keep changing as the document representations are being updated, so the ANNS graph needs to be trained multiple times; here, the ANNS graph was updated after an interval of 5 epochs. DeepXML-P: This variation refers to a word embedding layer, ReLU non-linearity, Dropout, a residual block, and the shallow tree-based classifier proposed in AttentionXML.
Scalable and accurate deep multi-label learning with millions of labels.
1,324
scitldr
Robust estimation under Huber's $\epsilon$-contamination model has become an important topic in statistics and theoretical computer science. Rate-optimal procedures such as Tukey's median and other estimators based on statistical depth functions are impractical because of their computational intractability. In this paper, we establish an intriguing connection between f-GANs and various depth functions through the lens of f-Learning. Similar to the derivation of f-GAN, we show that these depth functions that lead to rate-optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f-Learning. This connection opens the door to computing robust estimators using tools developed for training GANs. In particular, we show that a JS-GAN that uses a neural network discriminator with at least one hidden layer is able to achieve the minimax rate of robust mean estimation under Huber's $\epsilon$-contamination model. Interestingly, the hidden layers of the neural net structure in the discriminator class are shown to be necessary for robust estimation. In the setting of Huber's ε-contamination model (; 1965), one has i.i.d. observations X_1,..., X_n ∼ (1 − ε)P_θ + εQ, and the goal is to estimate the model parameter θ. Under the data generating process, each observation has a 1 − ε probability of being drawn from P_θ and otherwise is drawn from the contamination distribution Q. The presence of an unknown contamination distribution poses both statistical and computational challenges to the problem. For example, consider a normal mean estimation problem with P_θ = N(θ, I_p). Due to the contamination of data, the sample average, which is optimal when ε = 0, can be arbitrarily far away from the true mean if Q charges a positive probability at infinity. Moreover, even robust estimators such as the coordinatewise median and the geometric median are proved to be suboptimal under the setting of (; a;). The search for both statistically optimal and computationally feasible procedures has become a fundamental problem in areas including statistics and computer science. For the normal mean estimation problem, it has been shown that the minimax rate with respect to the squared ℓ_2 loss is p/n ∨ ε², and is achieved by Tukey's median. Despite the statistical optimality of Tukey's median, its computation is not tractable. In fact, even an approximate algorithm takes O(e^{Cp}) time (BID1;). Recent developments in theoretical computer science are focused on the search for computationally tractable algorithms for estimating θ under Huber's ε-contamination model. The success of these efforts started from two fundamental papers, Diakonikolas et al. (2016a) and , where two different but related computational strategies, "iterative filtering" and "dimension halving", were proposed to robustly estimate the normal mean. These algorithms provably achieve the minimax rate p/n ∨ ε² up to a poly-logarithmic factor in polynomial time. The main idea behind the two methods is the critical fact that a good robust moment estimator can be certified efficiently by higher moments. This idea was later further extended (; ; b; 2018a; c; b;) to develop robust and computable procedures for various other problems. However, many of the computationally feasible procedures for robust mean estimation in the literature rely on knowledge of the covariance matrix and sometimes knowledge of the contamination proportion.
Even though these assumptions can be relaxed, nontrivial modifications of the algorithms are required for such extensions and statistical error rates may also be affected. Compared with these computationally feasible procedures proposed in the recent literature for robust estimation, Tukey's median and other depth-based estimators (; ; ; Mizera & Müller, 2004;) have some indispensable advantages in terms of their statistical properties. First, the depth-based estimators have clear objective functions that can be interpreted from the perspective of projection pursuit . Second, the depth-based procedures are adaptive to unknown nuisance parameters in the models such as covariance structures, contamination proportion, and error distributions . Last but not least, Tukey's depth and other depth functions are mostly designed for robust quantile estimation, while the recent advancements in the theoretical computer science literature are all focused on robust moments estimation. Although this is not an issue when it comes to normal mean estimation, the difference is fundamental for robust estimation under general settings such as elliptical distributions where moments do not necessarily exist. Given the desirable statistical properties discussed above, this paper is focused on the development of computational strategies of depth-like procedures. Our key observation is that robust estimators that are maximizers of depth functions, including halfspace depth, regression depth and covariance matrix depth, can all be derived under the framework of f -GAN . As a , these depth-based estimators can be viewed as minimizers of variational lower bounds of the total variation distance between the empirical measure and the model distribution (Proposition 2.1). This observation allows us to leverage the recent developments in the deep learning literature to compute these variational lower bounds through neural network approximations. Our theoretical give insights on how to choose appropriate neural network classes that lead to minimax optimal robust estimation under Huber's -contamination model. In particular, Theorem 3.1 and 3.2 characterize the networks which can robustly estimate the Gaussian mean by TV-GAN and JS-GAN, respectively; Theorem 4.1 is an extension to robust location estimation under the class of elliptical distributions which includes Cauchy distribution whose mean does not exist. Numerical experiments in Section 5 are provided to show the success of these GANs. We start with the definition of f -divergence (Csiszár, 1964; BID0 . Given a strictly convex function f that satisfies f = 0, the f -GAN between two probability distributions P and Q is defined by DISPLAYFORM0 Here, we use p(·) and q(·) to stand for the density functions of P and Q with respect to some common dominating measure. For a fully rigorous definition, see. Let f * be the convex conjugate of f. That is, f * (t) = sup u∈dom f (ut − f (u)). A variational lower bound of is DISPLAYFORM1 Note that the inequality holds for any class T, and it becomes an equality whenever the class T contains the function f (p/q) . For notational simplicity, we also use f for an arbitrary element of the subdifferential when the derivative does not exist. With i.i.d. observations X 1,..., X n ∼ P, the variational lower bound naturally leads to the following learning method DISPLAYFORM2 The formula is a powerful and general way to learn the distribution P from its i.i.d. observations. 
It is known as f -GAN , an extension of GAN , which stands for generative adversarial networks. The idea is to find a P so that the best discriminator T in the class T cannot tell the difference between P and the empirical distribution DISPLAYFORM3 Our f -Learning framework is based on a special case of the variational lower bound. That is, DISPLAYFORM0 where q(·) stands for the density function of Q. Note that here we allow the class Q Q to depend on the distribution Q in the second argument of D f (P Q). Compare with, and it is easy to realize that is a special case of with DISPLAYFORM1 Moreover, the inequality becomes an equality as long as P ∈ Q Q. The sample version of leads to the following learning method DISPLAYFORM2 The learning method will be referred to as f -Learning in the sequel. It is a very general framework that covers many important learning procedures as special cases. For example, consider the special case where Q Q = Q independent of Q, Q = Q, and f (x) = x log x. Direct calculations give f (x) = log x + 1 and f * (t) = e t−1. Therefore, becomes DISPLAYFORM3 which is the maximum likelihood estimator (MLE). An important generator f that we will discuss here is f (x) = (x−1) +. This leads to the total variation distance DISPLAYFORM0 |p − q|. With f (x) = I{x ≥ 1} and f * (t) = tI{0 ≤ t ≤ 1}, the TV-Learning is given by DISPLAYFORM1 A closely related idea was previously explored by;. The following proposition shows that when Q Q approaches to Q in some neighborhood, TV-Learning leads to robust estimators that are defined as the maximizers of various depth functions including Tukey's depth, regression depth, and covariance depth. Proposition 2.1. The TV-Learning includes the following special cases:1. Tukey's halfspace depth: DISPLAYFORM2 2. Regression depth: Take Q = P y,X = P y|X P X: DISPLAYFORM3 and Q η = P y,X = P y|X P X: DISPLAYFORM4 3. Covariance matrix depth: Take Q = {N (0, Γ): Γ ∈ E p }, where E p stands for the class of p × p covariance matrices, and Q Γ = N (0, Γ): DISPLAYFORM5 becomes DISPLAYFORM6 The formula is recognized as Tukey's median, the maximizer of Tukey's halfspace depth. A traditional understanding of Tukey's median is that maximizes the halfspace depth so that θ is close to the centers of all one-dimensional projections of the data. In the f -Learning framework, N (θ, I p) is understood to be the minimizer of a variational lower bound of the total variation distance. The formula gives the estimator that maximizes the regression depth proposed by. It is worth noting that the derivation of does not depend on the marginal distribution P X in the linear regression model. Finally, is related to the covariance matrix depth (; ;). All of the estimators, and are proved to achieve the minimax rate for the corresponding problems under Huber's -contamination model . DISPLAYFORM7 The connection to various depth functions shows the importance of TV-Learning in robust estimation. However, it is well-known that depth-based estimators are very hard to compute BID1 van; ), which limits their applications only for very low-dimensional problems. On the other hand, the general f -GAN framework has been successfully applied to learn complex distributions and images in practice (; ;). The major difference that gives the computational advantage to f -GAN is its flexibility in terms of designing the discriminator class T using neural networks compared with the pre-specified choice in f -Learning. 
While f -Learning provides a unified perspective in understanding various depth-based procedures in robust estimation, we can step back into the more general f -GAN for its computational advantages, and to design efficient computational strategies. In this section, we focus on the problem of robust mean estimation under Huber's -contamination model. Our goal is to reveal how the choice of the class of discriminators affects robustness and statistical optimality under the simplest possible setting. That is, we have i.i.d. observations X 1,..., X n ∼ (1 −)N (θ, I p) + Q, and we need to estimate the unknown location θ ∈ R p with the contaminated data. Our goal is to achieve the minimax rate p n ∨ 2 with respect to the squared 2 loss uniformly over all θ ∈ R p and all Q. DISPLAYFORM0, where b is maximized out for visualization. Samples are drawn from P = (1 −)N + N with = 0.2. Left: a surface plot of F (η, w). The solid curves are marginal functions for fixed η's: F (1, w) (red) and F (5, w) (blue), and the dash curves are marginal functions for fixed w's: F (η, −10) (orange) and F (η, 10) (green). Right: a heatmap of F (η, w). It is clear thatF (w) = F (η, w) has two local maxima for a given η, achieved at w = +∞ and w = −∞. In fact, the global maximum for F (w) has a phase transition from w = +∞ to w = −∞ as η grows. For example, the maximum is achieved at w = +∞ when η = 1 (blue solid) and is achieved at w = −∞ when η = 5 (red solid). Unfortunately, even if we initialize with η 0 = 1 and w 0 > 0, gradient ascents on η will only increase the value of η (green dash), and thus as long as the discriminator cannot reach the global maximizer, w will be stuck in the positive half space {w : w > 0} and further increase the value of η. We start with the total variation GAN (TV-GAN) with f (x) = (x − 1) + in. For the Gaussian location family, can be written as DISPLAYFORM0 with T (x) = D(x) in. Now we need to specify the class of discriminators D to solve the classification problem between N (η, I p) and the empirical distribution 1 n n i=1 δ Xi. One of the simplest discriminator classes is the logistic regression, DISPLAYFORM1 With FORMULA0, the procedure can be viewed as a smoothed version of TV-Learning. To be specific, the sigmoid function sigmoid(w T x + b) tends to an indicator function as w → ∞, which leads to a procedure very similar to. In fact, the class is richer than the one used in, and thus can be understood as the minimizer of a sharper variational lower bound than that of. DISPLAYFORM2 DISPLAYFORM3 with probability at least 1 − e −C (p+n 2) uniformly over all θ ∈ R p and all Q. The constants C, C > 0 are universal. Though TV-GAN can achieve the minimax rate Given the intractable optimization property of TV-GAN, we next turn to Jensen-Shannon GAN (JS-GAN) with f (x) = x log x − (x + 1) log x+1 2. The estimator is defined by DISPLAYFORM0 with T (x) = log D(x) in. This is exactly the original GAN specialized to the normal mean estimation problem. The advantages of JS-GAN over other forms of GAN have been studied extensively in the literature .Unlike TV-GAN, our experiment show that FORMULA0 with the logistic regression discriminator class FORMULA0 is not robust to contamination. However, if we replace by a neural network class with one or more hidden layers, the estimator will be robust and will also work very well numerically. 
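For concreteness, the TV-GAN with the logistic-regression discriminator described above can be sketched as the following saddle-point iteration, in which the discriminator ascends and the mean parameter η descends on the variational lower bound E_P[D(X)] − E_{N(η, I_p)}[D(X)]. Learning rates, batch size, and step counts are placeholders, and the regularisation and outlier-removal tricks discussed later in the appendix are omitted.

import torch

def tv_gan_mean(data, steps=2000, lr=0.05, batch=500):
    n, p = data.shape
    eta = data.median(dim=0).values.clone().requires_grad_(True)   # start from the coordinatewise median
    w = torch.zeros(p, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt_d = torch.optim.SGD([w, b], lr=lr)
    opt_g = torch.optim.SGD([eta], lr=lr)
    disc = lambda z: torch.sigmoid(z @ w + b)                       # logistic-regression discriminator
    for _ in range(steps):
        x = data[torch.randint(n, (batch,))]
        fake = torch.randn(batch, p) + eta                          # samples from N(eta, I_p)
        obj = disc(x).mean() - disc(fake).mean()                    # variational lower bound of TV
        opt_d.zero_grad(); (-obj).backward(); opt_d.step()          # discriminator ascends
        fake = torch.randn(batch, p) + eta                          # fresh fakes for the generator step
        obj = disc(x).mean() - disc(fake).mean()
        opt_g.zero_grad(); obj.backward(); opt_g.step()             # eta descends
    return eta.detach()

As the landscape discussion above suggests, this objective can get stuck at poor stationary points when Q is far from N(θ, I_p), which motivates the JS-GAN variant with hidden layers introduced next.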
To understand why and how the class of the discriminators affects the robustness property of JS-GAN, we introduce a new concept called restricted Jensen-Shannon divergence. Let g: R p → R d be a function that maps a p-dimensional observation to a d-dimensional feature space. The restricted Jensen-Shannon divergence between two probability distributions P and Q with respect to the feature g is defined as DISPLAYFORM1 In other words, P and Q are distinguished by a logistic regression classifier that uses the feature g(X). It is easy to see that JS g (P, Q) is a variational lower bound of the original Jensen-Shannon divergence. The key property of JS g (P, Q) is given by the following proposition. Proposition 3.1. Assume W is a convex set that contains an open neighborhood of 0. Then, JS g (P, Q) = 0 if and only if E P g(X) = E Q g(X).The proposition asserts that JS g (·, ·) cannot distinguish P and Q if the feature g(X) has the same expected value under the two distributions. This generalized moment matching effect has also been studied by for general f -GANs. However, the linear discriminator class considered in Liu et al. FORMULA0 is parameterized in a different way compared with the discriminator class here. When we apply Proposition 3.1 to robust mean estimation, the JS-GAN is trying to match the values of DISPLAYFORM2 for the feature g(X) used in the logistic regression classifier. This explains what we observed in our numerical experiments. A neural net without any hidden layer is equivalent to a logistic regression with a linear feature g(X) = (X T, 1) DISPLAYFORM3, which implies that the sample mean is a global maximizer of. On the other hand, a neural net with at least one hidden layers involves a nonlinear feature function g(X), which is the key that leads to the robustness of.We will show rigorously that a neural net with one hidden layer is sufficient to make robust and optimal. Consider the following class of discriminators, DISPLAYFORM4 The class consists of two-layer neural network functions. While the dimension of the input layer is p, the dimension of the hidden layer can be arbitrary, as long as the weights have a bounded 1 norm. The nonlinear activation function σ(·) is allowed to take 1) indicator: DISPLAYFORM5 1+e −x, 3) ramp: σ(x) = max(min(x + 1/2, 1), 0). Other bounded activation functions are also possible, but we do not exclusively list them. The rectified linear unit (ReLU) will be studied in Appendix A. Theorem 3.2. Consider the estimator θ defined by with D specified by. Assume DISPLAYFORM6 with probability at least 1 − e −C (p+n 2) uniformly over all θ ∈ R p and all Q. The constants C, C > 0 are universal. An advantage of Tukey's median is that it leads to optimal robust location estimation under general elliptical distributions such as Cauchy distribution whose mean does not exist. In this section, we show that JS-GAN shares the same property. A random vector X ∈ R p follows an elliptical distribution if it admits a representation DISPLAYFORM0 where U is uniformly distributed on the unit sphere {u ∈ R p : u = 1} and ξ ≥ 0 is a random variable independent of U that determines the shape of the elliptical distribution . The center and the scatter matrix is θ and Σ = AA T.For a unit vector v, let the density function of ξv T U be h. Note that h is independent of v because of the symmetry of U. Then, there is a one-to-one relation between the distribution of ξ and h, and thus the triplet (θ, Σ, h) fully parametrizes an elliptical distribution. 
Note that h and Σ = AA T are not identifiable, because ξA = (cξ)(c −1 A) for any c > 0. Therefore, without loss of generality, we can restrict h to be a member of the following class DISPLAYFORM1 This makes the parametrization (θ, Σ, h) of an elliptical distribution fully identifiable, and we use EC(θ, Σ, h) to denote an elliptical distribution parametrized in this way. The JS-GAN estimator is defined as DISPLAYFORM2 where E p (M) is the set of all positive semi-definite matrix with spectral norm bounded by M.Theorem 4.1. Consider the estimator θ defined above with D specified by. DISPLAYFORM3 for some sufficiently small constant c > 0, and set DISPLAYFORM4 with probability at least 1 − e DISPLAYFORM5 Remark 4.1. The of Theorem 4.1 also holds (and is proved) under the strong contamination model (a). That is, we have i.i.d. observations X 1,..., X n ∼ P for some P satisfying TV(P, EC(θ, Σ, h)) ≤. See its proof in Appendix D.2.Note that Theorem 4.1 guarantees the same convergence rate as in the Gaussian case for all elliptical distributions. This even includes multivariate Cauchy where mean does not exist. Therefore, the location estimator is fundamentally different from Diakonikolas et al. (2016a); , which is only designed for robust mean estimation. We will show such a difference in our numerical . To achieve rate-optimality for robust location estimation under general elliptical distributions, the estimator FORMULA0 is different from only in the generator class. They share the same discriminator class. This underlines an important principle for designing GAN estimators: the overall statistical complexity of the estimator is only determined by the discriminator class. The estimator also outputs (Σ, h), but we do not claim any theoretical property for (Σ, h) in this paper. This will be systematically studied in a future project. In this section, we give extensive numerical studies of robust mean estimation via GAN. After introducing the implementation details in Section 5.1, we verify our theoretical on minimax estimation with both TV-GAN and JS-GAN in Section 5.2. Comparison with other methods on robust mean estimation in the literature is given in Section 5.3. The effects of various network structures are studied in Section 5.4. Adaptation to unknown covariance is studied in Section 5.5. In all these cases, we assume i.i.d. observations are drawn from (1 −)N (0 p, I p) + Q with and Q to be specified. Finally, adaptation to elliptical distributions is studied in Section 5.6. We adopt the standard algorithmic framework of f -GANs for the implementation of JS-GAN and TV-GAN for robust mean estimation. In particular, the generator for mean estimation is G η (Z) = Z + η with Z ∼ N (0 p, I p); the discriminator D is a multilayer perceptron (MLP), where each layer consisting of a linear map and a sigmoid activation function and the number of nodes will vary in different experiments to be specified below. Details related to algorithms, tuning, critical hyper-parameters, structures of discriminator networks and other training tricks for stabilization and acceleration are discussed in Appendix B.1. A PyTorch implementation is available at https://github.com/zhuwzh/Robust-GAN-Center. We verify the minimax rates achieved by TV-GAN (Theorem 3.1) and JS-GAN (Theorem 3.2) via numerical experiments. Two main scenarios we consider here are p/n < and p/n >, where in both cases, various types of contamination distributions Q are considered. 
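The following is a compact PyTorch sketch of the JS-GAN implementation just outlined: the generator is G_η(Z) = Z + η, the discriminator is a small sigmoid MLP, η is initialised at the coordinatewise median, several discriminator steps are taken per generator step, and the final estimate averages η over the last few epochs, as in Algorithm 1 of the appendix. All hyper-parameter values here are placeholders, and the feature-matching regulariser discussed in the appendix is omitted.

import torch
import torch.nn as nn

def js_gan_mean(data, hidden=20, epochs=150, avg_last=25, d_steps=5, m=500, lr_d=0.2, lr_g=0.02):
    n, p = data.shape
    eta = data.median(dim=0).values.clone().requires_grad_(True)      # init: coordinatewise median
    D = nn.Sequential(nn.Linear(p, hidden), nn.Sigmoid(), nn.Linear(hidden, 1), nn.Sigmoid())
    opt_d = torch.optim.SGD(D.parameters(), lr=lr_d)
    opt_g = torch.optim.SGD([eta], lr=lr_g)
    clamp = lambda t: t.clamp(1e-6, 1 - 1e-6)                         # numerical safety for the logs
    trace = []
    for epoch in range(epochs):
        for _ in range(d_steps):                                      # K discriminator ascent steps
            x = data[torch.randint(n, (m,))]
            fake = (torch.randn(m, p) + eta).detach()                 # G_eta(Z) = Z + eta
            js = torch.log(clamp(D(x))).mean() + torch.log(clamp(1 - D(fake))).mean()
            opt_d.zero_grad(); (-js).backward(); opt_d.step()
        x = data[torch.randint(n, (m,))]                              # one generator descent step on eta
        fake = torch.randn(m, p) + eta
        js = torch.log(clamp(D(x))).mean() + torch.log(clamp(1 - D(fake))).mean()
        opt_g.zero_grad(); js.backward(); opt_g.step()
        if epoch >= epochs - avg_last:
            trace.append(eta.detach().clone())
    return torch.stack(trace).mean(dim=0)                             # average over the last T_0 epochs

Replacing the JS objective with E[D(x)] − E[D(fake)] and the hidden layer with plain logistic regression recovers the TV-GAN sketch given earlier.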
Specifically, the choice of contamination distributions Q includes N (µ * 1 p, I p) with µ ranges in {0.2, 0.5, 1, 5}, N (0.5 * 1 p, Σ) and Cauchy(τ * 1 p). Details of the construction of the covariance matrix Σ is given in Appendix B.2. The distribution Cauchy(τ * 1 p) is obtained by combining p independent one-dimensional standard Cauchy with location parameter τ j = 0.5. n = 1, 000, = 0.1 and p ranges from 10 to 100) and 1/ √ n (right: p = 50, = 0.1 and n ranges from 50 to 1, 000), respectively. Net structure: One hidden layer with 20 hidden units (JS-GAN), zero hidden layer (TV-GAN). The vertical bars indicate ± standard deviations. The main experimental are summarized in FIG5, where the 2 error we present is the maximum error among all choices of Q, and detailed numerical can be founded in Tables 7, 8 and 9 in Appendix. We separately explore the relation between the error and one of, √ p and 1/ √ n with the other two parameters fixed. The study of the relation between the 2 error and is in the regime p/n < so that dominates the minimax rate. The scenario p/n > is considered in the study of the effects of √ p and 1/ √ n. As is shown in FIG5, the errors are approximately linear against the corresponding parameters in all cases, which empirically verifies the of Theorem 3.1 and Theorem 3.2. We perform additional experiments to compare with other methods including dimension halving and iterative filtering under various settings. We emphasize that our method does not require any knowledge about the nuisance parameters such as the contamination proportion. Tuning GAN is only a matter of optimization and one can tune parameters based on the objective function only. Table 1: Comparison of various robust mean estimation methods. Net structure: One-hidden layer network with 20 hidden units when n = 50, 000 and 2 hidden units when n = 5, 000. The number in each cell is the average of 2 error θ − θ with standard deviation in parenthesis estimated from 10 repeated experiments and the smallest error among four methods is highlighted in bold. Table 1 shows the performances of JS-GAN, TV-GAN, dimension halving, and iterative filtering. The network structure, for both JS-GAN and TV-GAN, has one hidden layer with 20 hidden units when the sample size is 50,000 and 2 hidden units when sample size is 5,000. The critical hyper-parameters we apply is given in Appendix and it turns out that the choice of the hyper-parameter is robust against different models when the net structures are the same. To summarize, our method outperforms other algorithms in most cases. TV-GAN is good at cases when Q and N (0 p, I p) are non-separable but fails when Q is far away from N (0 p, I p) due to optimization issues discussed in Section 3.1 FIG0 ). On the other hand, JS-GAN stably achieves the lowest error in separable cases and also shows competitive performances for non-separable ones. DISPLAYFORM0 We further study the performance of JS-GAN with various structures of neural networks. The main observation is tuning networks with one-hidden layer becomes tough as the dimension grows (e.g. p ≥ 200), while a deeper network can significantly refine the situation perhaps by improving the landscape. Some experiment are given in Table 2. On the other hand, one-hidden layer performs not worse than deeper networks when dimension is not very large (e.g. p ≤ 100). More experiments are given in Appendix B.4. Additional theoretical for deep neural nets are given in Appendix A. 
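The contaminated samples used throughout these experiments can be drawn as in the sketch below, covering the mean-shift and Cauchy contaminations listed above; variable names are illustrative.

import numpy as np

def contaminated_sample(n, p, eps, theta, q_sampler, rng=np.random.default_rng(0)):
    # Huber's model: each point is N(theta, I_p) with probability 1 - eps and drawn from Q with probability eps.
    from_q = rng.random(n) < eps
    x = theta + rng.standard_normal((n, p))
    x[from_q] = q_sampler(int(from_q.sum()), p, rng)
    return x

# Contamination distributions from this section (mu = 0.5 mean shift, tau = 0.5 Cauchy locations):
mean_shift_q = lambda m, p, rng: 0.5 * np.ones(p) + rng.standard_normal((m, p))
cauchy_q = lambda m, p, rng: 0.5 + rng.standard_cauchy((m, p))

X = contaminated_sample(n=1000, p=50, eps=0.1, theta=np.zeros(50), q_sampler=mean_shift_q)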
Table 2: Experiment for JS-GAN using networks with different structures in high dimension. Settings: = 0.2, p ∈ {200, 400} and n = 50, 000. The robust mean estimator constructed through JS-GAN can be easily made adaptive to unknown covariance structure, which is a special case of. We define DISPLAYFORM0 The estimator θ, as a , is rate-optimal even when the true covariance matrix is not necessarily identity and is unknown (see Theorem 4.1). Below, we demonstrate some numerical evidence of the optimality of θ as well as the error of Σ in Table 3.Data generating process Network structure Table 3: Numerical experiments for robust mean estimation with unknown covariance trained with 50, 000 samples. The covariance matrices Σ 1 and Σ 2 are generated by the same way described in Appendix B.2. DISPLAYFORM1 We consider the estimation of the location parameter θ in elliptical distribution EC(θ, Σ, h) by the JS-GAN defined in. In particular, we study the case with i. DISPLAYFORM0 The density function of Cauchy(θ, Σ) is given by p(x; θ, Σ) ∝ |Σ| DISPLAYFORM1 Compared with Algorithm, the difference lies in the choice of the generator. We consider the generator G 1 (ξ, U) = g ω (ξ)U + θ, where g ω (ξ) is a non-negative neural network parametrized by ω and some random variable ξ. The random vector U is sampled from the uniform distribution on {u ∈ R p : u = 1}. If the scatter matrix is unknown, we will use the generator G 2 (ξ, U) = g ω (ξ)AU +θ, with AA T modeling the scatter matrix. Table 4 shows the comparison with other methods. Our method still works well under Cauchy distribution, while the performance of other methods that rely on moment conditions deteriorates in this setting. In this section, we investigate the performance of discriminator classes of deep neural nets with the ReLU activation function. Since our goal is to learn a p-dimensional mean vector, a deep neural network discriminator without any regularization will certainly lead to overfitting. Therefore, it is crucial to design a network class with some appropriate regularizations. Inspired by the work of; Bartlett & Mendelson FORMULA1, we consider a network class with 1 regularizations on all layers except for the second last layer with an 2 regularization. With G H 1 (B) = g(x) = ReLU(v T x): v 1 ≤ B, a neural network class with l + 1 layers is defined as DISPLAYFORM0 Combining with the last sigmoid layer, we obtain the following discriminator class, DISPLAYFORM1 Note that all the activation functions are ReLU(·) except that we use sigmoid(·) in the last layer in the feature map g(·). A theoretical guarantees of the class defined above is given by the following theorem. DISPLAYFORM2 with probability at least 1 − e −C (p log p+n 2) uniformly over all θ ∈ R p such that θ ∞ ≤ √ log p and all Q.The theorem shows that JS-GAN with a deep ReLU network can achieve the error rate p log p n ∨ 2 with respect to the squared 2 loss. The condition θ ∞ ≤ √ log p for the ReLU network can be easily satisfied with a simple preprocessing step. We split the data into two halves, whose sizes are log n and n − log n, respectively. Then, we calculate the coordinatewise median θ using the small half. It is easy to show that θ − θ ∞ ≤ log p log n ∨ with high probability. Then, for each X i from the second half, the conditional distribution of X i − θ given the DISPLAYFORM3 and thus we can apply the estimator using the shifted data X i − θ from the second half. The theoretical guarantee of Theorem A.1 will be DISPLAYFORM4 with high probability. 
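The elliptical generator G_1(ξ, U) = g_ω(ξ)U + θ described above relies on the representation X = θ + ξAU. The sketch below samples from that representation directly, using the standard normal-over-chi construction to obtain a multivariate Cauchy as a special case; the radial law and all names are illustrative, not the paper's code.

import numpy as np

def sample_elliptical(theta, A, radial_sampler, n, rng=np.random.default_rng(0)):
    p = theta.shape[0]
    u = rng.standard_normal((n, p))
    u /= np.linalg.norm(u, axis=1, keepdims=True)          # U uniform on the unit sphere
    xi = radial_sampler(n, rng).reshape(-1, 1)             # nonnegative radial component xi
    return theta + xi * (u @ A.T)                          # X = theta + xi * A U

def cauchy_radial(p):
    # Multivariate Cauchy = theta + A Z / |W| with Z ~ N(0, I_p), W ~ N(0, 1), so xi = ||Z|| / |W|.
    def sampler(n, rng):
        z = rng.standard_normal((n, p))
        w = np.abs(rng.standard_normal(n))
        return np.linalg.norm(z, axis=1) / w
    return sampler

theta, A = np.zeros(3), np.eye(3)
X = sample_elliptical(theta, A, cauchy_radial(p=3), n=1000)
# In the learned generator, xi would instead be produced by a small nonnegative network g_omega.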
Hence, we can use θ + θ as the final estimator to achieve the same rate in Theorem A.1.On the other hand, our experiments show that this preprocessing step is not needed. We believe that the assumption θ ∞ ≤ √ log p is a technical artifact in the analysis of the Rademacher complexity. It can probably be dropped by a more careful analysis. The implementation for JS-GAN is given in Algorithm 1, and a simple modification of the objective function leads to that of TV-GAN. DISPLAYFORM0, generator network G η (z) = z + η, learning rates γ d and γ g for the discriminator and the generator, batch size m, discriminator steps in each iteration K, total epochs T, average epochs T 0. Initialization: Initialize η with coordinatewise median of S. Initialize w with N (0, .05) independently on each element or Xavier .1: for t = 1,..., T do 2: DISPLAYFORM1 Sample mini-batch DISPLAYFORM2 end for 7: DISPLAYFORM3 η ← η − γ g g η 10: end for Return: The average estimate η over the last T 0 epochs. Several important implementation details are discussed below.• How to tune parameters? The choice of learning rates is crucial to the convergence rate, but the minimax game is hard to evaluate. We propose a simple strategy to tune hyper-parameters including the learning rates. Suppose we have estimators θ 1,..., θ M with corresponding discriminator networks D w1,..., D w M. Fixing η = θ, we further apply gradient descent to D w with a few more epochs (but not many in order to prevent overfitting, for example 10 epochs) and select the θ with the smallest value of the objective function (JS-GAN) or (TV-GAN). We note that training discriminator and generator alternatively usually will not suffer from overfitting since the objective function for either the discriminator or the generator is always changing. However, we must be careful about the overfitting issue when training the discriminator alone with a fixed η, and that is why we apply an early stopping strategy here. Fortunately, the experiments show if the structures of networks are same (then of course, the dimensions of the inputs are same), the choices of hyper-parameters are robust to different models and we present the critical parameters in Table 5 to reproduce the experiment in Table 1 and Table 2.• When to stop training? Judging convergence is a difficult task in GAN trainings, since sometimes oscillation may occur. In computer vision, people often use a task related measure and stop training once the requirement based on the measure is achieved. In our experiments below, we simply use a sufficiently large T which works well, but it is still interesting to explore an efficient early stopping rule in the future work.• How to design the network structure? Although Theorem 3.1 and Theorem 3.2 guarantee the minimax rates of TV-GAN without hidden layer and JS-GAN with one hidden layer, one may wonder whether deeper network structures will perform better. From our preliminary experiments, TV-GAN with one hidden layer is significantly better than TV-GAN without any hidden layer. Moreover, JS-GAN with deep network structures can significantly improve over shallow networks especially when the dimension is large (e.g. p ≥ 200). For a network with one hidden layer, the choice of width may depend on the sample size. If we only have 5,000 samples of 100 dimensions, two hidden units performs better than five hidden units, which performs better than twenty hidden units. 
If we have 50,000 samples, networks with twenty hidden units perform the best.• How to stabilize and accelerate TV-GAN? As we have discussed in Section 3.1, TV-GAN has a bad landscape when N (θ, I p) and the contamination distribution Q are linearly separable (see FIG0). An outlier removal step before training TV-GAN may be helpful. Besides, spectral normalization is also worth trying since it can prevent the weight from going to infinity and thus can increase the chance to escape from bad saddle points. To accelerate the optimization of TV-GAN, in all the numerical experiments below, we adopt a regularized version of TV-GAN inspired by Proposition 3.1. Since a good feature extractor should match nonlinear moments of P = (1 −)N (θ, I p) + Q and N (η, I p), we use an additional regularization term that can accelerate training and sometimes even leads to better performances. Specifically, let D(x) = sigmoid(w T Φ(x)) be the discriminator network with w being the weights of the output layer and Φ D (x) be the corresponding network after removing the output layer from D(x). The quantity Φ D (x) is usually viewed as a feature extractor, which naturally leads to the following regularization term , defined as DISPLAYFORM4 where DISPLAYFORM5 We introduce the contamination distributions Q used in the experiments. We first consider Q = N (µ, I p) with µ ranges in {0.2, 0.5, 1, 5}. Note that the total variation distance between N (0 p, I p) and N (µ, I p) is of order 0 p − µ = µ. We hope to use different levels of µ to test the algorithm and verify the error rate in the worst case. Second, we consider Q = N (1.5 * 1 p, Σ) to be a Gaussian distribution with a non-trivial covariance matrix Σ. The covariance matrix is generated according to the following steps. First generate a sparse precision matrix Γ = (γ ij) with each entry γ ij = z ij * τ ij, i ≤ j, where z ij and τ ij are independently generated from Uniform(0.4, 0.8) and Bernoulli(0.1). We then define γ ij = γ ji for all i > j andΓ = Γ + (| min eig(Γ)| + 0.05)I p to make the precision matrix symmetric and positive definite, where min eig(Γ) is the smallest eigenvalue of Γ. The covariance matrix is Σ =Γ −1. Finally, we consider Q to be a Cauchy distribution with independent component, and the jth component takes a standard Cauchy distribution with location parameter τ j = 0.5. In Section 5.3, we compare GANs with the dimension halving and iterative filtering .• Dimension Halving. Experiments conducted are based on the code from https://github.com/ kal2000/AgnosticMeanAndCovarianceCode. The only hyper-parameter is the threshold in the outlier removal step, and we take C = 2 as suggested in the file outRemSperical.m.• Iterative Filtering. Experiments conducted are based on the code from https://github.com/ hoonose/robust-filter. We assume is known and take other hyper-parameters as suggested in the file filterGaussianMean.m. The experiments are conducted with i.i.d. observations drawn from (1 −)N (0 p, I p) + N (0.5 * 1 p, I p) with = 0.2. Table 6 summarizes for p = 100, n ∈ {5000, 50000} and various network structures. We observe that TV-GAN that uses neural nets with one hidden layer improves over the performance of that without any hidden layer. This indicates that the landscape of TV-GAN might be improved by a more complicated network structure. However, adding one more layer does not improve the . For JS-GAN, we omit the without hidden layer because of its lack of robustness (Proposition 3.1). 
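The covariance matrix Σ used for the Gaussian contamination with a non-trivial covariance can be generated exactly as described above; only the function name and the fixed random seed below are assumptions.

import numpy as np

def random_covariance(p, rng=np.random.default_rng(0)):
    z = rng.uniform(0.4, 0.8, size=(p, p))
    tau = rng.binomial(1, 0.1, size=(p, p))
    gamma = np.triu(z * tau)                           # gamma_ij = z_ij * tau_ij for i <= j
    gamma = gamma + np.triu(gamma, 1).T                # gamma_ij = gamma_ji for i > j
    shift = abs(np.linalg.eigvalsh(gamma).min()) + 0.05
    gamma_tilde = gamma + shift * np.eye(p)            # symmetric, positive definite precision matrix
    return np.linalg.inv(gamma_tilde)                  # Sigma = inverse of the precision matrix

Sigma = random_covariance(p=50)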
Deeper networks sometimes improve over shallow networks, but this is not always true. We also observe that the optimal choice of the width of the hidden layer depends on the sample size. Table 8: Scenario II-a: p/n >. Setting: n = 1, 000, = 0.1, and p from 10 to 100. Other details are the same as above. Table 9: Scenario II-b: p/n >. Setting: p = 50, = 0.1, and n from 50 to 1, 000. Other details are the same as above. Q Net n = 50 n = 100 n = 200 n = 500 n = 1000 In the first example, consider DISPLAYFORM0 DISPLAYFORM1 In other words, Q is the class of Gaussian location family, and Q η is taken to be a subset in a local neighborhood of N (η, I p). Then, with Q = N (η, I p) and Q = N (η, I p), the event q(X)/q(X) ≥ 1 is equivalent to X − η 2 ≤ X − η 2. Since η − η ≤ r, we can write η = η + ru for some r ∈ R and u ∈ R p that satisfy 0 ≤ r ≤ r and u = 1. Then, becomes DISPLAYFORM2 Letting r → 0, we obtain, the exact formula of Tukey's median. The next example is a linear model y|X ∼ N (X T θ, 1). Consider the following classes DISPLAYFORM3 Here, P y,X stands for the joint distribution of y and X. The two classes Q and Q share the same marginal distribution P X and the conditional distributions are specified by N (X T η, 1) and N (X T η, 1), respectively. Follow the same derivation of Tukey's median, let r → 0, and we obtain the exact formula of regression depth. It is worth noting that the derivation of does not depend on the marginal distribution P X.The last example is on covariance/scatter matrix estimation. For this task, we set Q = {N (0, Γ): Γ ∈ E p }, where E p is the class of all p × p covariance matrices. Inspired by the derivations of Tukey depth and regression depth, it is tempting to choose Q in the neighborhood of N (0, Γ). However, a native choice would lead to a definition that is not even Fisher consistent. We propose a rank-one neighborhood, given by DISPLAYFORM4 Then, a direct calculation gives DISPLAYFORM5 Since lim r→0 DISPLAYFORM6, depending on whether r tends to zero from left or from right. Therefore, with the above Q and Q Γ, becomes under the limit r → 0. Even though the definition of is given by a rank-one neighborhood of the inverse covariance matrix, the formula can also be derived with DISPLAYFORM7 T by applying the Sherman-Morrison formula. A similar formula to in the literature is given by dN (0,Γ) (X) ≥ 1, a special case of, the formula can be derived directly from TV-GAN with discriminators in the form of I dN (0,β Γ) dN (0,βΓ) (X) ≥ 1 by following a similar rank-one neighborhood argument. This completes the derivation of Proposition 2.1. DISPLAYFORM8 To prove Proposition 3.1, we define F (w) = E P log sigmoid(w T g(X)) + E Q log(1 − sigmoid(w T g(X))) + log 4, so that JS g (P, Q) = max w∈W F (w). The gradient and Hessian of F (w) are given by DISPLAYFORM9 1 + e w T g(X) g(X), DISPLAYFORM10 Therefore, F (w) is concave in w, and max w∈W F (w) is a convex optimization with a convex W. Suppose JS g (P, Q) = 0. Then max w∈W F (w) = 0 = F, which implies ∇F = 0, and thus we have E P g(X) = E Q g(X). Now suppose E P g(X) = E Q g(X), which is equivalent to ∇F = 0. Therefore, w = 0 is a stationary point of a concave function, and we have JS g (P, Q) = max w∈W F (w) = F = 0. In this section, we present proofs of all main theorems in the paper. We first establish some useful lemmas in Section D.1, and the the proofs of main theorems will be given in Section D.2. Lemma D.1. Given i.i.d. 
observations X 1,..., X n ∼ P and the function class D defined in, we have for any δ > 0, DISPLAYFORM0 with probability at least 1 − δ for some universal constant C > 0. DISPLAYFORM1. It is clear that f (X 1, ..., X n) satisfies the bounded difference condition. By McDiarmid's inequality , we have DISPLAYFORM2, with probability at least 1 − δ. Using a standard symmetrization technique , we obtain the following bound that involves Rademacher complexity, DISPLAYFORM3 where 1,..., n are independent Rademacher random variables. The Rademacher complexity can be bounded by Dudley's integral entropy bound, which gives DISPLAYFORM4 where N (δ, D, · n) is the δ-covering number of D with respect to the empirical 2 distance f − ). This leads to the bound DISPLAYFORM5 DISPLAYFORM6 n, which gives the desired . Lemma D.2. Given i.i.d. observations X 1,..., X n ∼ P, and the function class D defined in, we have for any δ > 0, DISPLAYFORM7 with probability at least 1 − δ for some universal constant C > 0. DISPLAYFORM8 Therefore, by McDiarmid's inequality , we have DISPLAYFORM9 with probability at least 1 − δ. By the same argument of FORMULA1, it is sufficient to bound the Rademacher complexity E sup D∈D 1 n n i=1 i log(2D(X i)). Since the function ψ(x) = log(2sigmoid(x)) has Lipschitz constant 1 and satisfies ψ = 0, we have p log p n + log(1/δ) n, with probability at least 1 − δ for some universal constants C > 0.Proof. Write f (X 1, ..., X n) = sup D∈F H L (κ,τ,B)1 n n i=1 log D(X i) − E log D(X). Then, the inequality holds with probability at least 1 − δ. It is sufficient to analyze the Rademacher complexity. Using the fact that the function log(2sigmoid(x)) is Lipschitz and Hölder's inequality, we have 1 n DISPLAYFORM10 Now we use the notation Z i = X i − θ ∼ N (0, I p) for i = 1,..., n. We bound E sup g∈G H L−1 (B) 1 n n i=1 i g(Z i + θ) by induction. Since DISPLAYFORM11 This leads to the desired under the conditions on τ and θ ∞. Proof of Theorem 3.1. We first introduce some notations. Define F (P, η) = max w,b F w,b (P, η), where With this definition, we have θ = argmin η F (P n, η), where we use P n for the empirical distribution 1 n n i=1 δ Xi. We shorthand N (η, I p) by P η, and then F (P θ, θ) ≤ F ((1 −)P θ + Q, θ) +≤ F (P n, θ) + + C p n + log(1/δ) n ≤ F (P n, θ) + + C p n + log(1/δ) n ≤ F ((1 −)P θ + Q, θ) + + 2C p n + log(1/δ) n ≤ F (P θ, θ) + 2 + 2C p n + log(1/δ) n = 2 + 2C p n + log(1/δ) n.With probability at least 1 − δ, the above inequalities hold. We will explain each inequality. Since The inequality FORMULA1 is a direct consequence of the definition of θ. Finally, it is easy to see that F (P θ, θ) = 0, which gives. In summary, we have derived that with probability at least 1 − δ, F w,b (P θ, θ) ≤ 2 + 2C p n + log(1/δ) n, for all w ∈ R p and b ∈ R. For any u ∈ R p such that u = 1, we take w = u and b = −u T θ, and we have DISPLAYFORM0 where f (t) = 1 1+e z+t φ(z)dz, with φ(·) being the probability density function of N. It is not hard to see that as long as |f (t) − f| ≤ c for some sufficiently small constant c > 0, then |f (t) − f| ≥ c |t| for some constant c > 0. This implies DISPLAYFORM1 with probability at least 1 − δ. The proof is complete. Proof of Theorem 3.2. We continue to use P η to denote N (η, I p). 
Define DISPLAYFORM2 ≤ F (P n, θ) + 2κ + Cκ p n + log(1/δ) n ≤ F (P n, θ) + 2κ + Cκ p n + log(1/δ) n ≤ F ((1 −)P θ + Q, θ) + 2κ + 2Cκ p n + log(1/δ) n ≤ F (P θ, θ) + 4κ + 2Cκ p n + log(1/δ) n = 4κ + 2Cκ p n + log(1/δ) n.where F w,u,b (P, (η, Γ, g)) = E P log D(X) + E EC(η,Γ,g) log (1 − D(X)) + log 4, with D(x) = sigmoid j≥1 w j σ(u T j x + b j). Let P be the data generating process that satisfies TV(P, P θ,Σ,h) ≤, and then there exist probability distributions Q 1 and Q 2, such that P + Q 1 = P θ,Σ,h + Q 2.The explicit construction of Q 1, Q 2 is given in the proof of Theorem 5.1 of. This implies that |F (P, (η, Γ, g)) − F (P θ,Σ,h, (η, Γ, g))|≤ sup |E Q2 log(2D(X)) − E Q1 log(2D(X))| ≤ 2κ.
GANs are shown to provide a new, effective robust mean estimator under agnostic contamination, achieving both statistical optimality and practical tractability.
1,325
scitldr
Long Short-Term Memory (LSTM) is one of the most powerful sequence models. Despite the strong performance, however, it lacks the nice interpretability as in state space models. In this paper, we present a way to combine the best of both worlds by introducing State Space LSTM (SSL), which generalizes the earlier work \cite{zaheer2017latent} of combining topic models with LSTM. However, unlike \cite{zaheer2017latent}, we do not make any factorization assumptions in our inference algorithm. We present an efficient sampler based on sequential Monte Carlo (SMC) method that draws from the joint posterior directly. Experimental confirms the superiority and stability of this SMC inference algorithm on a variety of domains. State space models (SSMs), such as hidden Markov models (HMM) and linear dynamical systems (LDS), have been the workhorse of sequence modeling in the past decades From a graphical model perspective, efficient message passing algorithms BID34 BID17 are available in compact closed form thanks to their simple linear Markov structure. However, simplicity comes at a cost: real world sequences can have long-range dependencies that cannot be captured by Markov models; and the linearity of transition and emission restricts the flexibility of the model for complex sequences. A popular alternative is the recurrent neural networks (RNN), for instance the Long Short-Term Memory (LSTM) BID14 ) which has become a standard for sequence modeling nowadays. Instead of associating the observations with stochastic latent variables, RNN directly defines the distribution of each observation conditioned on the past, parameterized by a neural network. The recurrent parameterization not only allows RNN to provide a rich function class, but also permits scalable stochastic optimization such as the backpropagation through time (BPTT) algorithm. However, flexibility does not come for free as well: due to the complex form of the transition function, the hidden states of RNN are often hard to interpret. Moreover, it can require large amount of parameters for seemingly simple sequence models BID35.In this paper, we propose a new class of models State Space LSTM (SSL) that combines the best of both worlds. We show that SSLs can handle nonlinear, non-Markovian dynamics like RNNs, while retaining the probabilistic interpretations of SSMs. The intuition, in short, is to separate the state space from the sample space. In particular, instead of directly estimating the dynamics from the observed sequence, we focus on modeling the sequence of latent states, which may represent the true underlying dynamics that generated the noisy observations. Unlike SSMs, where the same goal is pursued under linearity and Markov assumption, we alleviate the restriction by directly modeling the transition function between states parameterized by a neural network. On the other hand, we bridge the state space and the sample space using classical probabilistic relation, which not only brings additional interpretability, but also enables the LSTM to work with more structured representation rather than the noisy observations. Indeed, parameter estimation of such models can be nontrivial. Since the LSTM is defined over a sequence of latent variables rather than observations, it is not straightforward to apply the usual BPTT algorithm without making variational approximations. 
In BID35, which is an instance of SSL, an EM-type approach was employed: the algorithm alternates between imputing the latent states and optimizing the LSTM over the imputed sequences. However, as we show below, the inference implicitly assumes the posterior is factorizable through time. This is a restrictive assumption since the benefit of rich state transition brought by the LSTM may be neutralized by breaking down the posterior over time. We present a general parameter estimation scheme for the proposed class of models based on sequential Monte Carlo (SMC) BID8, in particular the Particle Gibbs BID1. Instead of sampling each time point individually, we directly sample from the joint posterior without making limiting factorization assumptions. Through extensive experiments we verify that sampling from the full posterior leads to significant improvement in the performance. Related works Enhancing state space models using neural networks is not a new idea. Traditional approaches can be traced back to nonlinear extensions of linear dynamical systems, such as extended or unscented Kalman filters , where both state transition and emission are generalized to nonlinear functions. The idea of parameterizing them with neural networks can be found in BID12, as well as many recent works BID22 BID2 BID15 BID23 BID18 thanks to the development of recognition networks BID20 BID32. Enriching the output distribution of RNN has also regain popularity recently. Unlike conventionally used multinomial output or mixture density networks BID4, recent approaches seek for more flexible family of distributions such as restricted Boltzmann machines (RBM) BID6 or variational auto-encoders (VAE) BID13 BID7.On the flip side, there have been studies in introducing stochasticity to recurrent neural networks. For instance, BID30 and BID3 incorporated independent latent variables at each time step; while in BID9 the RNN is attached to both latent states and observations. We note that in our approach the transition and emission are decoupled, not only for interpretability but also for efficient inference without variational assumptions. On a related note, sequential Monte Carlo methods have recently received attention in approximating the variational objective BID27 BID24 BID29. Despite the similarity, we emphasize that the context is different: we take a stochastic EM approach, where the full expectation in E-step is replaced by the samples from SMC. In contrast, SMC in above works is aimed at providing a tighter lower bound for the variational objective. In this section, we provide a brief review of some key ingredients of this paper. We first describe the SSMs and the RNNs for sequence modeling, and then outline the SMC methods for sampling from a series of distributions. Consider a sequence of observations x 1:T = (x 1, . . ., x T) and a corresponding sequence of latent states z 1:T = (z 1, . . ., z T). The SSMs are a class of graphical models that defines probabilistic dependencies between latent states and the observations. A classical example of SSM is the (Gaussian) LDS, where real-valued states evolve linearly over time under the first-order Markov assumption. Let x t ∈ R d and z t ∈ R k, the LDS can be expressed by two equations: DISPLAYFORM0 where A ∈ R k×k, C ∈ R d×k, and Q and R are covariance matrices of corresponding sizes. 
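To make the two LDS equations referenced above concrete, the following is a minimal sketch of the corresponding generative process, assuming the standard forms z_t = A z_{t-1} + w_t with w_t ~ N(0, Q) and x_t = C z_t + v_t with v_t ~ N(0, R); the rotation matrix and noise scales at the bottom are illustrative placeholders rather than values taken from the paper.

```python
import numpy as np

def sample_lds(A, C, Q, R, T, rng=np.random.default_rng(0)):
    """Sample a latent trajectory z_{1:T} and observations x_{1:T} from a linear
    dynamical system: z_t = A z_{t-1} + w_t, x_t = C z_t + v_t (assumed standard forms)."""
    k, d = A.shape[0], C.shape[0]
    z = np.zeros((T, k))
    x = np.zeros((T, d))
    z[0] = rng.multivariate_normal(np.zeros(k), Q)                      # initial state
    x[0] = C @ z[0] + rng.multivariate_normal(np.zeros(d), R)
    for t in range(1, T):
        z[t] = A @ z[t - 1] + rng.multivariate_normal(np.zeros(k), Q)  # linear transition
        x[t] = C @ z[t] + rng.multivariate_normal(np.zeros(d), R)      # linear emission
    return z, x

# Illustrative example: 2-D rotation dynamics observed in 2-D with noise.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
C = np.eye(2)
z, x = sample_lds(A, C, Q=0.01 * np.eye(2), R=0.1 * np.eye(2), T=100)
```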
They are widely applied in modeling the dynamics of moving objects, with z t representing the true state of the system, such as location and velocity of the object, and x t being the noisy observation under zero-mean Gaussian noise. We mention two important inference tasks BID21 ) associated with SSMs. The first tasks is filtering: at any time t, compute p(z t |x 1:t), i.e. the most up-to-date belief of the state z t conditioned on all past and current observations x 1:t. The other task is smoothing, which computes p(z t |x 1:T), i.e. the update to the belief of a latent state by incorporating future observations. One of the beauties of SSMs is that these inference tasks are available in closed form, thanks for p = 1,..., P.to the simple Markovian dynamics of the latent states. For instance, the forward-backward algorithm BID34, the Kalman filter BID17, and RTS smoother BID31 are widely appreciated in the literature of HMM and LDS.Having obtained the closed form filtering and smoothing equations, one can make use of the EM algorithm to find the maximum likelihood estimate (MLE) of the parameters given observations. In the case of LDS, the E-step can be computed by RTS smoother and the M-step is simple subproblems such as least-squares regression. We refer to BID11 for a full exposition on learning the parameters of LDS using EM iterations. RNNs have received remarkable attention in recent years due to their strong benchmark performance as well as successful applications in real-world problems. Unlike SSMs, RNNs aim to directly learn the complex generative distribution of p(x t |x 1:t−1) using a neural network, with the help of a deterministic internal state s t: DISPLAYFORM0 where RNN(·, ·) is the transition function defined by a neural network, and g(·) is an arbitrary differentiable function that maps the RNN state s t to the parameter of the distribution of x t. The flexibility of the transformation function allows the RNN to learn from complex nonlinear nonGaussian sequences. Moreover, since the state s t is a deterministic function of the past observations x 1:t−1, RNNs can capture long-range dependencies, for instance matching brackets in programming languages BID19.The BPTT algorithm can be used to find the MLE of the parameters of RNN(·, ·) and g(·). However, although RNNs can, in principle, model long-range dependencies, directly applying BPTT can be difficult in practice since the repeated application of a squashing nonlinear activation function, such as tanh or logistic sigmoid, in an exponential decay in the error signal through time. LSTMs BID14 are designed to cope with the such vanishing gradient problems, by introducing an extra memory cell that is constructed as a linear combination of the previous state and signal from the input. In this work, we also use LSTMs as building blocks, as in BID35. Sequential Monte Carlo (SMC) BID8 ) is an algorithm that samples from a series of potentially unnormalized densities π 1 (z 1),..., π T (z 1:T). At each step t, SMC approximates the target density π t with P weighted particles using importance distribution f (z t |z 1:t−1): DISPLAYFORM0 where α p t is the importance weight of the p-th particle and δ x is the Dirac point mass at x. Repeating this approximation for every t leads to the SMC method, outlined in Algorithm 1.The key to this method lies in the resampling, which is implemented by repeatedly drawing the ancestors of particles at each step. 
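The following is a minimal sketch of this generic SMC loop (Algorithm 1), written against arbitrary user-supplied densities; the function names propose and log_weight are our own placeholders for the model-specific importance distribution f(z_t|z_{1:t-1}) and weight alpha_t, not names from the paper.

```python
import numpy as np

def smc(T, P, propose, log_weight, rng=np.random.default_rng(0)):
    """Generic sequential Monte Carlo with multinomial resampling.
    propose(z_hist, t)    -> new particle value z_t given that particle's history
    log_weight(z_hist, t) -> log importance weight for the extended particle path"""
    particles = [[] for _ in range(P)]            # each particle is a path z_{1:t}
    log_w = np.zeros(P)
    for t in range(T):
        if t > 0:
            # resample ancestors a_t^p according to the previous weights
            w = np.exp(log_w - log_w.max())
            ancestors = rng.choice(P, size=P, p=w / w.sum())
            particles = [list(particles[a]) for a in ancestors]
        for p in range(P):
            particles[p].append(propose(particles[p], t))   # propagate
            log_w[p] = log_weight(particles[p], t)          # reweight
    # final draw: choose a surviving particle path according to the last weights
    w = np.exp(log_w - log_w.max())
    return particles[rng.choice(P, p=w / w.sum())]
```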
Intuitively, it encourages the particles with a higher likelihood to survive longer, since the weight reflects the likelihood of the particle path. The final Monte Carlo estimate consists of only survived particle paths, and sampling from this point masses is equivalent to choosing a particle path according to the last weights α T. We refer to BID8; BID1 for detailed proof of the method. In this section, we present the class of State Space LSTM (SSL) models that combines interpretability of SSMs and flexibility of LSTMs. The key intuition, motivated by SSMs, is to learn dynamics in the state space, rather than in the sample space. However, we do not assume transition in the state space is linear, Gaussian, or Markovian. Existing approaches such as the extended Kalman filter (EKF) attempted to work with a general nonlinear transition function. Unfortunately, additional flexibility also introduced extra difficulty in the parameter estimation: EKF relies heavily on linearizing the nonlinear functions. We propose to use LSTM to model the dynamics in the latent state space, as they can learn from complex sequences without making limiting assumptions. The BPTT algorithm is also well established so that no additional approximation is needed in training the latent dynamics. Generative process Let h(·) be the emission function that maps a latent state to a parameter of the sample distribution. As illustrated in FIG2 (a), the generative process of SSL for a single sequence is: DISPLAYFORM0 The generative process specifies the following joint likelihood, with a similar factorization as SSMs except for the Markov transition: DISPLAYFORM1 where p ω (z t |z 1:t−1) = p(z t ; g(s t)), ω is the set of parameters of LSTM(·, ·) and g(·), and φ is the parameters of h(·). The structure of the likelihood function is better illustrated in FIG2, where each latent state z t is dependent to all previous states z 1:t−1 after substituting s t recursively. This allows the SSL to have non-Markovian state transition, with parsimonious parameterization thanks to the recurrent structure of LSTMs. Parameter estimation We continue with a single sequence for the ease of notation. A variational lower bound to the marginal data likelihood is given by DISPLAYFORM2 where q(z 1:T) is the variational distribution. Following the (stochastic) EM approach, iteratively maximizing the lower bound w.r.t. q and the model parameters (ω, φ) leads to the following updates:• E-step: The optimal variational distribution is given by the posterior: DISPLAYFORM3 In the case of LDS or HMM, efficient smoothing algorithms such as the RTS smoother or the forward-backward algorithm are available for computing the posterior expectations of sufficient statistics. However, without Markovian state transition, although the forward messages can still be computed, the backward recursion can no longer evaluated or efficiently approximated.• S-step: Due to the difficulties in taking expectations, we take an alternative approach to collect posterior samples instead: DISPLAYFORM4 given only the filtering equations. We discuss the posterior sampling algorithm in detail in the next section.• M-step: Given the posterior samples z 1:T, which can be seen as Monte Carlo estimate of the expectations, the subproblem for ω and φ are DISPLAYFORM5 which is exactly the MLE of an LSTM, with z 1:T serving as the input sequence, and the MLE of the given emission model. 
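Putting the three steps together, the overall estimation scheme can be summarised by the following stochastic-EM skeleton; sample_posterior, negative_log_likelihood and optimiser.step are hypothetical placeholders standing in for the S-step sampler and the two M-step objectives described above, so this is a sketch of the control flow rather than an implementation from the paper.

```python
def train_ssl(data, prior_lstm, emission, sample_posterior, optimiser, n_iters):
    """Stochastic EM for State Space LSTM (control-flow sketch with placeholder objects).
    data: list of observed sequences x_{1:T}
    sample_posterior(x, prior_lstm, emission) -> a latent path z_{1:T} (S-step, e.g. SMC)"""
    for it in range(n_iters):
        for x in data:
            # S-step: draw a posterior sample of the latent states
            z = sample_posterior(x, prior_lstm, emission)
            # M-step (omega): treat z_{1:T} as an ordinary sequence and fit the LSTM prior
            loss_omega = prior_lstm.negative_log_likelihood(z)
            # M-step (phi): fit the emission model p_phi(x_t | z_t)
            loss_phi = emission.negative_log_likelihood(x, z)
            optimiser.step(loss_omega + loss_phi)
    return prior_lstm, emission
```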
Having seen the generative model and the estimation algorithm, we can now discuss some instances of the proposed class of models. In particular, we provide two examples of SSL, for continuous and discrete latent states respectively. Example 1 (Gaussian SSL) Suppose z t and x t are real-valued vectors. A typical choice of the transition and emission is the Gaussian distribution: DISPLAYFORM6 where g µ (·) and g σ (·) map to the mean and the covariance of the Gaussian respectively, and similarly h µ (·) and h σ (·). For closed form estimates for the emission parameters, one can further assume DISPLAYFORM7 where C is a matrix that maps from state space to sample space, and R is the covariance matrix with appropriate size. The MLE of φ = (C, b, R) is then given by the least squares fit. Example 2 (Topical SSL, BID35) Consider x 1:T as the sequence of websites a user has visited. One might be tempted to model the user behavior using an LSTM, however due to the enormous size of the Internet, it is almost impossible to even compute a softmax output to get a discrete distribution over the websites. There are approximation methods for large vocabulary problems in RNN, such as the hierarchical softmax BID28. However, another interesting approach is to operate on a sequence with a "compressed" vocabulary, while learning how to perform such compression at the same time. Let z t be the indicator of a "topic", which is a distribution over the vocabulary as in BID5. Accordingly, define DISPLAYFORM8 where W is a matrix that maps LSTM states to latent states, b is a bias term, and φ zt is a point in the probability simplex. If z t lies in a lower dimension than x t, the LSTM is effectively trained over a sequence z 1:T with a reduced vocabulary. On the other hand, the probabilistic mapping between z t and x t is interpretable, as it learns to group similar x t's together. The estimation of φ is typically performed under a Dirichlet prior, which then corresponds to the MAP estimate of the Dirichlet distribution BID35. In this section, we discuss how to draw samples from the posterior, corresponding to the S-step of the stochastic EM algorithm: DISPLAYFORM0 Assuming the integration and normalization can be performed efficiently, the following quantities can be evaluated in the forward pass without Markov state transition: DISPLAYFORM1 The task is to draw from the joint posterior of z 1:T only given access to these forward messages. One way to circumvent the tight dependencies in z 1:T is to make a factorization assumption, as in BID35. More concretely, the joint distribution is decomposed as DISPLAYFORM2 where z prev 1:t−1 is the assignments from the previous inference step. However, as we confirm in the experiments, this assumption can be restrictive since the flexibility of LSTM state transitions is offset by considering each time step independently. In this work, we propose to use a method based on SMC, which is a principled way of sampling from a sequence of distributions. More importantly, it does not require the model to be Markovian BID10 BID26. As described earlier, the idea is to approximate the posterior with point masses, i.e., weighted particles. Let f (z t |z 1:t−1, x t) be the importance density, and P be the number of particles. 
We then can run Algorithm 1 with π t (z 1:t) = p(x 1:t, z 1:t) being the unnormalized target distribution at time t, where the weight becomes DISPLAYFORM3 As for the choice of the proposal distribution f (·), one could use the transition density p ω (z t |z 1:t−1), in which case the algorithm is also referred to as the bootstrap particle filter. An alternative is the predictive distribution, a locally optimal proposal in terms of variance BID1: DISPLAYFORM4 which is precisely one of the available forward messages: DISPLAYFORM5 Notice the similarity between terms in FORMULA8 and FORMULA0. Indeed, with the choice of predictive distribution as the proposal density, the importance weight simplifies to DISPLAYFORM6 which is not a coincidence that the name collides with the message α t. Interestingly, this quantity no longer depends on the current particle z p t. Instead, it marginalizes over all possible particle assignments of the current time step. This is beneficial computationally since the intermediate terms from can be reused in. Also note that the optimal proposal relies on the fact that the normalization in can be performed efficiently, otherwise the bootstrap proposal should be used. After a full pass over the sequence, the algorithm produces Monte Carlo approximation of the posterior and the marginal likelihood: The inference is completed by a final draw from the approximate posterior, DISPLAYFORM7 DISPLAYFORM8 which is essentially sampling a particle path indexed by the last particle. Specifically, the last particle z p T is chosen according to the final weights α T, and then earlier particles can be obtained by tracing backwards to the beginning of the sequence according to the ancestry indicators a p t at each position. Since SMC produces a Monte Carlo estimate, as the number of particles P → ∞ the approximate posterior is guaranteed to converge to the true posterior for a fixed sequence. However, as the length of the sequence T increases, the number of particles needed to provide a good approximation grows exponentially. This is the well-known depletion problem of SMC BID1.One elegant way to avoid simulating enormous number of particles is to marry the idea of MCMC with SMC BID1. The idea of such Particle MCMC (PMCMC) methods is to treat the particle estimatep(·) as a proposal, and design a Markov kernel that leaves the target distribution invariant. Since the invariance is ensured by the MCMC, it does not demand SMC to provide an accurate approximation to the true distribution, but only to give samples that are approximately distributed according to the target. As a , for any fixed P > 0 the PMCMC methods ensure the target distribution is invariant. We choose the Gibbs kernel that requires minimal modification from the basic SMC. The ing algorithm is Particle Gibbs (PG), which is a conditional SMC update in a sense that a reference path z ref 1:T with its ancestral lineage is fixed throughout the particle propagation of SMC. It can be shown that this simple modification to SMC produces a transition kernel that is not only invariant, but also ergodic under mild assumptions. In practice, we use the assignments from previous step as the reference path. The final algorithm is summarized in Algorithm 2. Combined with the stochastic EM outer iteration, the final algorithm is an instance of the particle SAEM BID25 BID33, under non-Markovian state transition. We conclude this section by deriving forward messages for the previous examples. 
Example 1 (Gaussian SSL, continued) The integration and normalization preserves normality thanks to the Gaussian identify. The messages are given by DISPLAYFORM9 where DISPLAYFORM10 Example 2 (Topical SSL, continued) Let θ t = softmax(W s t + b). Since the distributions are discrete, we have DISPLAYFORM11 where • denotes element-wise product. Note that the integration for α t corresponds to a summation in the state space. It is then normalized across P particles to form a weight distribution. For γ t the normalization is performed in the state space as well, hence the computation of the messages are manageable. We now present empirical studies for our proposed model and inference (denoted as SMC) in order to establish that SSL is flexible in capturing underlying nonlinear dynamics, our inference is accurate yet easily applicable to complicated models, and it opens new avenues for interpretable yet nonlinear and non-Markovian sequence models, previously unthinkable. To illustrate these claims, we evaluate on synthetic sequence tracking of varying difficulties, language modeling, and user modeling utilizing complicated models for capturing the intricate dynamics. For SMC inference, we gradually increase the number of particles P from 1 to K during training. Software & hardware All the algorithms are implemented on TensorFlow BID0. We run our experiments on a commodity machine with Intel R Xeon R CPU E5-2630 v4 CPU, 256GB RAM, and 4 NVidia R Titan X (Pascal) GPU. To test the flexibility of SSL, we begin with inference using synthetic data. We consider four different dynamics in 2D space: (i) a straight line, (ii) a sine wave, (iii) a circle, and (iv) a swiss role. Note that we do not add additional states such as velocity, keeping the dynamics nonlinear except for the first case. Data points are generated by adding zero mean Gaussian noise to the true underlying dynamics. The true dynamics and the noisy observations are plotted in the top row of FIG4. The first 60% of the sequence is used for training and the rest is left for testing. The middle and bottom row of FIG4 show the of SSL and vanilla LSTM trained for same number of iterations until both are sufficiently converged. The red points refer to the prediction of z t after observing x 1:t, and the green points are blind predictions without observing any data. We can observe that while both methods are capturing the dynamics well in general, the predictions of LSTM tend to be more sensitive to initial predictions. In contrast, even when the initial predictions are not incorrect, SSL can recover in the end by remaining on the latent dynamic. For Topical SSL, we compare our SMC inference method with the factored old algorithm BID35 documents and test on the rest, using the same settings in BID35. FIG5 shows the test perplexity (lower is better) and number of nonzeros in the learned word topic count matrix (lower is better). In all cases, the SMC inference method consistently outperforms the old factored method. For comparison, we also run LSTM with the same number of parameters, which gives the lowest test perplexity of 1942.26. However, we note that LSTM needs to perform expensive linear transformation for both embedding and softmax at every step, which depends linearly on the vocabulary size V. In contrast, SSL only depends linearly on number of topics K V. Ablation study We also want to explore the benefit of the newer inference as dataset size increases. 
We observe that in case of natural languages which are highly structured the gap between factored approximation and accurate SMC keeps reducing as dataset size increases. But as we will see in case of user modeling when the dataset is less structured, the factored assumption leads to poorer performance. Also when the data size is fixed and the number of topics are varying, the SMC algorithm gives better perplexity compared to the old algorithm. Therefore we the SMC inference is consistently better in various settings. Visualizing particle paths In FIG7, we show the particle paths on a snippet of an article about a music album 1. As we can see from the top row, which plots the particle paths at the initial iteration, the model proposed a number of candidate topic sequences since it is uncertain about the latent semantics yet. However, after 100 epochs, as we can see from the bottom row, the model is much more confident about the underlying topical transition. Moreover, by inspecting the learned parameters φ of the probabilistic emission, we can see that the topics are highly concentrated on topics related to music and time. This confirms our claim about flexible sequence modeling while retaining interpretability. We use an anonymized sample of user search click history to measure the accuracy of different models on predicting users future clicks. An accurate model would enable better user experience by presenting the user with relevant content. The dataset is anonymized by removing all items appearing less than a given threshold, this in a dataset with 100K vocabulary and we vary the number of users from 500K to 1M. We fix the number of topics at 500 for all user experiments. We used the same setup to the one used in the experiments over the Wikipedia dataset for parameters. The dataset is less structured than the language modeling task since users click patterns are less predictable than the sequence of words which follow definite syntactic rules. As shown in table 1, the benefit of new inference method is highlighted as it yields much lower perplexity than the factored model. In this paper we revisited the problem of posterior inference in Latent LSTM models as introduced in BID35. We generalized their model to accommodate a wide variety of state space models and most importantly we provided a more principled Sequential Monte-Carlo (SMC) algorithm for posterior inference. Although the newly proposed inference method can be slower, we showed over a variety of dataset that the new SMC based algorithm is far superior and more stable. While computation of the new SMC algorithm scales linearly with the number of particles, this can be naively parallelized. In the future we plan to extend our work to incorporate a wider class of dynamically changing structured objects such as time-evolving graphs.
We present State Space LSTM models, a combination of state space models and LSTMs, and propose an inference algorithm based on sequential Monte Carlo.
1,326
scitldr
Given a large database of concepts but only one or a few examples of each, can we learn models for each concept that are not only generalisable, but interpretable? In this work, we aim to tackle this problem through hierarchical Bayesian program induction. We present a novel learning algorithm which can infer concepts as short, generative, stochastic programs, while learning a global prior over programs to improve generalisation and a recognition network for efficient inference. Our algorithm, Wake-Sleep-Remember (WSR), combines gradient learning for continuous parameters with neurally-guided search over programs. We show that WSR learns compelling latent programs in two tough symbolic domains: cellular automata and Gaussian process kernels. We also collect and evaluate on a new dataset, Text-Concepts, for discovering structured patterns in natural text data. A grand challenge for building more flexible AI is developing learning algorithms which quickly pick up a concept from just one or a few examples, yet still generalise well to new instances of that concept. In order to instill algorithms with the correct inductive biases, research in few-shot learning usually falls on a continuum between model-driven and data-driven approaches. Model-driven approaches place explicit domain-knowledge directly into the learner, often as a stochastic program describing how concepts and their instances are produced. For example, we can model handwritten characters with a motor program that composes distinct pen strokes BID13, or spoken words as sequences of phonemes which obey particular phonotactic constraints. Such representationally explicit models are highly interpretable and natural to compose together into larger systems, although it may be difficult to completely pre-specify the required inductive biases. By contrast, data-driven approaches start with only minimal assumptions about a domain, and instead acquire the inductive biases themselves from a large dataset. This is typified by recent work in deep meta-learning, such as the Neural Statistian BID5; see also BID9 ), MAML BID6; see also BID14 ) and Prototypical Networks BID15. Crucially, these models rely on stochastic gradient descent (SGD) for the meta-learning phase, as it is a highly scalable algorithm that applies easily to datasets with thousands of classes. Ideally these approaches would not be exclusive -for many domains of AI we have access to large volumes of data and also rich domain knowledge, so we would like to utilise both. In practice, however, different algorithms are suited to each end of the continuum: SGD requires objectives to be differentiable, but explicit domain knowledge often introduces discrete latent variables, or programs. Thus, meta-learning from large datasets is often challenging in more explicit models. In this work, we aim to bridge these two extremes: we learn concepts represented explicitly as stochastic programs, while meta-learning generative parameters and an inductive bias over programs from a large unlabelled dataset. We introduce a simple learning algorithm, Wake-Sleep-Remember (WSR), which combines SGD over continuous parameters with neurally-guided search over latent programs to maximize a variational objective, the evidence lower bound (ELBo).In evaluating our algorithm, we also release a new dataset for few-shot concept learning in a highlystructured natural domain of short text patterns (see TAB0). 
This dataset contains 1500 concepts such as phone numbers, dates, email addresses and serial numbers, crawled from public GitHub repositories. Such concepts are easy for humans to learn using only a few examples, and are well described as short programs which compose discrete, interpretable parts. Thus, we see this as an excellent challenge domain for structured meta-learning and explainable AI. 2 : HELMHOLTZ MACHINES AND VARIATIONAL BAYES Suppose we wish to learn generative models of spoken words unsupervised, using a large set of audio recordings. We may aim to include domain knowledge that words are built up from different short phonemes, without defining in advance exactly what the kinds of phoneme are, or exactly which phonemes occur in each recording. This means that, in order to learn a good model of words in general, we must also infer the particular latent phoneme sequence that generated each recording. This latent sequence must be re-estimated whenever the global model is updated, which itself can be a hard computational problem. To avoid a costly learning'inner-loop', a longstanding idea in machine learning is to train two distinct models simultaneously: a generative model which describes the joint distribution of latent phonemes and sounds, and a recognition model which allows phonemes to be inferred quickly from data. These two models together are often called a Helmholtz Machine BID2.Formally, algorithms for training a Helmholtz Machine are typically motivated by Variational Bayes. Suppose we wish to learn a generative model p(z, x), which is a joint distribution over latent variables z and observations x, alongside a recognition model q(z; x), which is a distribution over latent variables conditional on observations. It can be shown that the marginal likelihood of each observation is bounded below by DISPLAYFORM0 where D KL [q(z; x)||p(z|x)] is the KL divergence from the true posterior p(z|x) to the recognition model's approximate posterior q(z; x). Learning a Helmholtz machine is then framed as maximisation of this evidence lower bound (or ELBo), which provides the shared basis for two historically distinct approaches to learning. The first method, proposed by Hinton et al., is an alternating optimisation algorithm: alternate between updates to the generative model p and recognition model q. The update for p(x|z), called the'wake' phase, can be derived simply from Eq. 2 as:Wake phase: Maximise E[log p(x|z)] of observed data x using inferred latent variables z ∼ q(z; x)Unfortunately, the exact update for q(z; x), which is minimisation of D KL [q(z; x) More recently BID12 proposed the Variational Autoencoder (VAE). This offers an alternative solution to the problem of training q without relying on the above KL-divergence approximation. Instead, the authors note that it is possible to construct an unbiased approximation to the ELBo (Eq. 2) using only a single sample from q. Under specific assumptions about the form of q -specifically, that it is a continuous distribution which can be reparametrised by transforming a fixed auxiliary distribution -they use this to construct a low variance estimate for the gradient of the ELBo. 
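For a diagonal-Gaussian q, that single-sample estimate can be written in a few lines; the sketch below is our own illustration (not code from the paper) of the reparameterisation z = mu + sigma * eps that makes the estimate differentiable in the parameters of q.

```python
import numpy as np

def elbo_single_sample(x, q_params, log_joint, rng=np.random.default_rng(0)):
    """One-sample ELBo estimate with a reparameterised diagonal-Gaussian q(z; x).
    q_params: (mu, log_sigma) produced by the recognition network for this x
    log_joint(z, x): evaluates log p(x, z) under the generative model."""
    mu, log_sigma = q_params
    eps = rng.standard_normal(mu.shape)           # auxiliary noise, independent of parameters
    z = mu + np.exp(log_sigma) * eps              # reparameterised sample z ~ q(z; x)
    # log q(z; x) for a diagonal Gaussian
    log_q = -0.5 * np.sum(((z - mu) / np.exp(log_sigma)) ** 2
                          + 2 * log_sigma + np.log(2 * np.pi))
    return log_joint(z, x) - log_q                # unbiased single-sample estimate of the ELBo
```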
As it is unbiased, this gradient estimate can used in SGD to train both q and p, typically neural networks, simultaneously towards the ELBo VAE Update: Sample z ∼ q(z; x) and take a gradient step on log DISPLAYFORM0 When z are discrete, VAEs cannot be trained through the use of reparameterisation but instead rely on the policy gradient (otherwise called) estimator from reinforcement learning. This estimator is notoriously high variance, in many cases rendering SGD ineffectual. This difficulty has motivated a wide literature on variance reduction techniques, BID8 BID11 BID16 BID7 ), yet training VAEs with discrete latent variables remains a challenging open research problem. The above description highlights a bias-variance tension between these two approaches (TAB2). The wake-sleep algorithm applies well to a wide variety of models, including structured models with discrete latent variables, but relies on an approximate update for q which may be heavily biased. By contrast, VAEs are proven to converge to a local optimum of the evidence lower bound (and so are often seen as more 'principled') but require much stronger assumptions on the form of the model in order for learning to be practical. Additionally, both VAEs and Wake-sleep rely on the ability of the recognition model, q(z; x), to learn to carry out posterior inference accurately; any departure from this changes the optimal p. This strong constraint is often unrealistic and unnecessary: on hard problems, a recognition model may still be useful if only one in a hundred samples are of high quality. Recent work aims to address this in both VAEs BID1 and Wake-sleep BID0 ) by using importance weighting over many samples from q. This solution is well suited when fully amortised inference is just out of reach of the recognition model, but is bottlenecked by how many samples it is practical to evaluate per gradient step. The next section describes our alternative approach, motivated by the idea that good explanations needn't be forgotten. Simply put, we mitigate the difficulties of discrete inference by introducing a separate'memory' into the Helmholtz Machine, explicitly keeping track of the best discovered latent explanations, or programs z i, for each observation x i. Comparison of VAE and Wake-Sleep algorithms for training Helmholtz machines. Wakesleep uses an approximation to the correct update for q, which may be heavily biased. VAE updates are unbiased, but for discrete variables they are often too high variance for learning to succeed. Figure 1: For VAEs and Wake-sleep, the recognition model q also serves as the variational distribution that trains p. WSR distinguishes these, learning a recognition model r and a categorical variational posterior q which is separate from r. This means that like VAEs, WSR jointly trains p and q using an unbiased estimate of the variational objective (blue). Like wake-sleep, the recognition model can train self-supervised (green), allowing WSR to handle discrete latent variables. To optimise the finite support of q, WSR incorporates a memory module M that remembers the best values of z i found by r(z i ; x i) across iterations. In this work we start from a different set of modelling assumptions to those typical of VAE-family models. Rather than describe each observation with a latent vector z which lacks explicit structure, we assume each observation is generated by an explicit latent program, and wish to learn:1. A posterior distribution over the latent program q i (z i) for each instance x i 2. 
A prior p(z) that captures a global inductive bias over programs. 4. An approximate recognition network r(z; x) which helps infer programs for novel data. Using programs as a latent representation makes this setup challenging for two reasons. First, as seen in TAB2, training discrete Helmholtz machines usually requires accepting either high bias or high variance in the learning objective. Second, by assumption, inferring programs from data is a hard problem, so performing highly accurate amortised inference may be overly ambitious. We therefore desire a learning algorithm for which weaker recognition models may reduce the speed of learning but will not change the set of stable solutions to the learning problem. To achieve this, we depart from the usual Helmholtz machines formulation by separating the recognition model from the variational distribution (Figure 1). As in Wake-sleep, we train the recognition model r(z; x) self supervised using samples from the prior -an effective strategy when z is discrete. Figure 2: What trains what? r is a recognition network trained self-supervised on samples from the p. The memory M i for each task i is a set of the'best' z values ever sampled by r, selected according their joint probability p(z, x i). p is then trained using samples from M.However, unlike Wake-sleep we do not use samples from r directly to train the generative model p. Instead, they are routed through a separate Memory module, M, which maintains a set of the best values of z found for each x, across training iterations, weighted by their joint probability p(z, x). Then, we simply resample from M in order to train p. By weighting each z proportional to its join probability, we guarantee that every update to M decreases the KL divergence between M i and the true posterior p(z i |x i). Thus, we may view each M i as a truncated variational posterior over z i, which is optimised towards the ELBo using samples from r as proposals. Our full training procedure is detailed in Algorithm 1.Algorithm 1: Basic WSR training procedure (batching omitted for notational simplicity). In practice, we avoid evaluation of p θ in the each wake phase by maintainging a cache of p θ (z M, x) in the sleep phase. We re-calculate each p θ (z, x) only as a final correctness check before modifying DISPLAYFORM0 1. Update memory with sample from recognition network DISPLAYFORM1 2. Train generative model with sample from memory DISPLAYFORM2 3. Train recognition network with sample from generative model DISPLAYFORM3 4. Train prior with sample from reference distribution g = ∇ θ s sleep + s wake (+s hyper) θ = θ + λg Gradient step (e.g. SGD, Adam) until convergence In the above problem setup, we assumed that the prior p(z) either was fixed, or was learned to maximise the ELBo training objective. However, for many modelling problems neither is adequate: we often have some idea about the global distribution over of latent programs, but deciding on the exact p(z) in advance would be too strong a commitment. In these cases we would rather provide a reference distribution p (z) as a first approximation to the global distribution, while but still allow the model to update its prior p(z) to move away from p as it learns from data. In this situation we may place a hyperprior over p(z), defined with respect to the reference distribution as: DISPLAYFORM0 where α is a concentration parameter controlling the level of confidence in the reference distribution p. 
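A minimal sketch of the memory mechanism just described, in which each task keeps the k best programs ever proposed for it, ranked by their joint probability under the current generative model, and programs are resampled from the memory in proportion to that probability; the function names and the assumption that programs are simple comparable objects (e.g. token strings) are ours.

```python
import numpy as np

def update_memory(memory, task_id, z_new, log_joint, k=5):
    """Keep the k highest-scoring latent programs discovered for one task.
    memory[task_id] is a list of (score, program) pairs, with
    score = log p_theta(z, x_i) under the current generative model."""
    entries = memory.setdefault(task_id, [])
    if all(z_new != z for _, z in entries):              # ignore duplicate programs
        entries.append((log_joint, z_new))
        entries.sort(key=lambda e: e[0], reverse=True)   # best score first
        del entries[k:]                                  # truncate to the k best
    return entries

def sample_from_memory(memory, task_id, rng=np.random.default_rng(0)):
    """Resample a program for training p, weighted by normalised joint probability."""
    scores, programs = zip(*memory[task_id])
    w = np.exp(np.array(scores) - max(scores))
    return programs[rng.choice(len(programs), p=w / w.sum())]
```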
This form of hyperprior can be integrated into the training objective simply by addition of an extra term: Ez∼p α log p(z), estimated by sampling from p. Algorithm 1 includes this as an optional variant, which corresponds to maximum a posteriori estimation over p. We first test our algorithm at learning the rules for noisy 1-dimensional cellular automata, from the images they generate. We create 64 × 64 binary images generated row by row, sampling each pixel using a rule that depends only on its'neighbours' in the row above. Specifically, given a'neighbourhood size' D and'corruption probability', we generate images by the following procedure:• Choose a binary vector z ∈ {0, 1} 2 D to represent the update rule for the cellular automaton.• Sample the first generation uniformly at random, as g 1 ∈ {0, 1} • For each subsequent row i = 2,..., 64 and each cell (i, j):1. read the neighbouring D cells from the previous row: DISPLAYFORM0 2. sample g ij according to: p(g ij = z sij) = 1 − Figure 3: One-shot generalisations produced by each algorithm on each the cellular automata datasets. For each input image we sample a program z from the variational distribution q, then synthesize a new image in the same style from p (z|x) using the learned.We create easy, medium and hard datasets corresponding to increasingly complex cellular automaton rules, with neighbourhood sizes of D = 3, 4, 5 respectively (easy corresponds to the full set of 256 elementary automata studied by BID18 . For medium and hard, we sample 10,000 of the 65,000 and 4 billion available rules). All datasets share a noise parameter = 0.01.Our goal is discover a latent rule, or program, z i corresponding to each image in the dataset, while also learning the global noise. Thus, we learn a p (z, x) with same structure as the true generative process, and use a CNN with independent Bernoulli outputs as the recognition network r(z; x). Fixing this architecture, we train WSR using k = 5 as the memory size, and compare performance of for the against three baseline algorithms:• VAE. We use policy-gradient (REINFORCE) for discete choices, and additionally reduce variance by subtracting a learned baseline for each task.• Wake-Sleep. We perform gradient descent on the recognition model q and generative model p together, using samples from the p to train q, and samples from q to train p.• No Recognition. We evaluate a lesioned Algorithm 1 in which no recognition model is learned. We instead propose updates to M i using samples from the prior z p ∼ p(z).Our highlight clear differences between these approaches. Despite our efforts at variance reduction, a VAE reliably struggles to get off the ground on any dataset, and instead learns quickly to model all instances as noise (Figure 3 and 4 bottom). Wake-sleep is able to learn accurate rules for images from the easiest datasets, but on the most challenging dataset but its performance appears to asymptote prematurely. By contrast, WSR reliably learns accurate programs that can be used to classify unseen images 100-way with > 99% accuracy, even on the hard dataset. Figure 4: Quantitative on all variants of the cellular automata dataset. In all cases WSR learns programs which generalise to unseen images of the same concepts, achieving > 99% accuracy on a 100-way classification task (second row). WSR also best recovers the true noise parameter = 0.01 (third row). Note: x-axis is wallclock time on a single Titan-X GPU to allow a fair comparison, as WSR requires several times more computation per iteration. 
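A minimal sketch of this image-generation procedure, reconstructed from the description above; the exact neighbourhood indexing (here a centred window with wraparound) is an assumption on our part, and the corruption step uses each pixel's rule value with probability 1 - epsilon.

```python
import numpy as np

def sample_ca_image(rule, D, eps=0.01, size=64, rng=np.random.default_rng(0)):
    """Generate one noisy 1-D cellular-automaton image row by row.
    rule: binary vector of length 2**D giving the next state for each neighbourhood."""
    img = np.zeros((size, size), dtype=np.int64)
    img[0] = rng.integers(0, 2, size)                    # first generation: uniform random
    powers = 2 ** np.arange(D)[::-1]                     # to index the rule table
    for i in range(1, size):
        for j in range(size):
            # read D neighbouring cells from the previous row (assumed centred, wrapping at edges)
            neigh = img[i - 1, (np.arange(j, j + D) - D // 2) % size]
            s = int(neigh @ powers)                      # neighbourhood index s_ij
            bit = rule[s]
            img[i, j] = bit if rng.random() > eps else 1 - bit   # corrupt with probability eps
    return img

rule = np.random.default_rng(1).integers(0, 2, 2 ** 3)   # a random 'easy' (D = 3) rule
image = sample_ca_image(rule, D=3)
```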
Next, we evaluate our algorithm on the the task of finding explainable models for time-series data. We draw inspiration from BID4, who frame this problem as Gaussian process (GP) kernel learning. They describe a grammar for building kernels compositionally, and demonstrate that inference in this grammar can produce highly interpretable and generalisable descriptions of the structure in a time series. Inference is achieved on a single time-series through a custom greedy search algorithm, requiring a costly inner loop that approximately marginalises over kernel parameters. Here, we follow a similar approach embedded within a larger goal: we use a dataset of many different time-series, and learn a hierarchical model over time-series. That is, we learn a separate GP kernel for each series in the dataset while also learning an inductive bias over kernels themselves. We start with time series data provided by the UCR Time Series Classification Archive. This dataset contains 1-dimensional times series data from a variety of sources (such as household electricity usage, apparent brightness of stars, and seismometer readings). In this work, we use 1000 time series randomly drawn from this archive, and normalise each to zero mean and unit variance. For our model, we define the following simple grammar over kernels: DISPLAYFORM0 • WN is the White Noise kernel, DISPLAYFORM1 We wish to learn a prior distribution over both the symbolic structure of a kernel and its continuous variables (σ, l, etc.). Rather than describe a prior over kernel structures directly, we define the latent program to z to be a symbolic kernel'expression': a string over the characters {(,), +, *, WN, SE, Per, C} We define an LSTM prior p θ (z) over these kernel expressions, alongside parametric prior distributions over continuous latent variables (p θσ (σ), p θ l (l),...). As in previous work, exact evaluation of the marginal likelihood p(x|z) of a kernel expression z is intractable and so requires an approximation. For this we use a simple variational inference scheme which cycles through coordinate updates to each continuous latent variable (up to 100 steps), and estimates a lowerbound on p(x|z) using 10 samples from the variational distribution. Finally, following section 3.1, we place a hyperprior on the distribution over kernel expressions, using the grammar above (Eq. 3) as a reference distribution. Examples of latent programs discovered by our model are displayed in Figure 5. These programs describe meaningful compositional structure in the time series data, and can also be used to make highly plausible extrapolations. Figure 5: Kernels inferred by the WSR for various real time series in the UCR dataset. Blue (left) is a 256-timepoint observation, and orange (right) is a sampled extrapolation using the inferred kernel (top, simplified where possible). The explicit compositional structure of this latent representation allows each discovered concept to be easily translated into natural language. Finally, we test our model on the task of learning short text concepts, such as'phone number' or'email address', from a few examples. For this task we created a new dataset, Text-Concepts comprising 1500 of such concepts with 10 examples of each (Figure 1).To collect the data, we first crawled several thousand randomly chosen spreadsheets from online public repositories on GitHub. 
We then automatically selected a subset of 1500 of the columns from this set, filtered to remove columns that contain only numbers, English words longer than 5 characters, or common first names and surnames. To promote diversity in the dataset, we also filtered so that no two columns originated from the same spreadsheet, and no more than 3 columns share the same column header. This allows us to capture a wide variety of concept types (e.g. 'date', 'time') while maintaining the variation that exists within each type. Common to most patterns in the Text-Concepts dataset is that they can be well described by concatenative structure, as they usually involve the composition of discrete parts. With this in mind, we aim to model this dataset using the language of Regular Expressions (regex). We first define a grammar over regular expressions as follows, borrowing the standard syntax that is common to many programming languages: DISPLAYFORM0 where Character can produce any printable ASCII character, and ε is the empty string. We assume that each class x i in the Text-Concepts dataset can be described by a latent regular expression z i from this grammar. However, for our purposes, we endow each regex z with probabilistic, generative semantics. We define a likelihood (decoder) p θ (x|z) by placing probability distributions over every random choice involved in generating a string from the regex, as given in TAB3. To evaluate the probability of a regex z generating a set of strings x i, we use dynamic programming to efficiently calculate the exact probability of the most probable parse for each string in the set, and multiply these to serve as our likelihood p θ (x|z). As in the Gaussian Process example, our p θ (z) is parametrised as a simple LSTM, and we define a hyperprior over this by using the above grammar (Eq. 4) as the reference distribution. For the recognition model we require a network which is able to generate a sequence of tokens (the regex) taking a set of strings as an input. We achieve this using a variant of the RobustFill architecture, introduced in BID3. We pass each string in the set individually through an LSTM, and then attend to these while decoding a regular expression character by character. Given this problem setup, our goal is to learn a regex z corresponding to each set of strings x in the dataset, while also learning a global distribution p(z) and a recognition model r(z; x) to guide inference on novel sets. The probabilistic semantics (TAB3) are, for any regex expression e: e* evaluates to e+ with probability θ+, and to ε otherwise; e+ evaluates to ee*; e|e2 evaluates to e with probability θ|, and to e2 otherwise; e? evaluates to e with probability θ?, and to ε otherwise; . evaluates to any character, with probabilities θ.; \w evaluates to any alphanumeric character, with probabilities θw; \d evaluates to any digit, with probabilities θd; \u evaluates to any uppercase character, with probabilities θu; \l evaluates to any lowercase character, with probabilities θl; and \s evaluates to any whitespace character, with probabilities θs, where the θ are parameters to be learned. Quantitative results from training the above model using the WSR algorithm (with k = 5) are shown in FIG0. From five examples of each concept, WSR learns a regular expression that generalises well to new examples, achieving over 75% accuracy in a challenging 100-way classification task. Comparing to Wake-Sleep and No-Recognition baselines, we find that WSR crucially relies on both its recognition model and its memory in order to achieve this - neither is sufficient alone.
The VAE algorithm was unable to learn effectively in our regex model, even when using control variates to reduce variance. For a fairer comparison, we also provide results from training a VAE using a different model architecture to which it is better suited: for VAE-LSTM we use a 32-dimensional vector for the latent representation, with a fixed Gaussian prior p(z), and LSTM networks for both p(x|z) and q(z|x). While this model is able to optimise its training objective effectively, it instead suffers from the lack of domain knowledge built into its structure. The latent representations it infers for concepts are not only less explicit but also generalise less effectively to new examples of a given concept. (TAB5.3: sample concepts such as "EA., SD., CSB., ...", ordered by WSR ELBo, descending.) Finally, we investigate whether WSR learns a realistic inductive bias over concepts, by sampling new concepts from the learned prior p θ (z) and then for each of these sampling a set of instances from p θ (x|z). In TAB5.3, we see that our model generalises meaningfully from the training data, learning higher-level part structure that is common in the dataset (e.g. strings of uppercase characters) and then composing these parts in new ways. In this paper, we consider learning interpretable concepts from one or a few examples: a difficult task which gives rise to both inductive and computational challenges. Inductively, we aim to achieve strong generalisation by starting with rich domain knowledge and then 'filling in the gaps', using a large amount of data. Computationally, we aim to tackle the challenge of finding high-probability programs by using a neural recognition model to guide search. Putting these pieces together we propose the Wake-Sleep-Remember algorithm, in which a Helmholtz machine is augmented with a persistent memory of discovered latent programs - optimised as a finite variational posterior. We demonstrate on several domains that our algorithm can learn generalisable concepts, and comparison with baseline models shows that WSR (a) utilises both its recognition model and its memory in order to search for programs effectively, and (b) utilises both domain knowledge and extensive data in order to make strong generalisations.
We extend the wake-sleep algorithm and use it to learn to learn structured models from few examples.
1,327
scitldr
Knowledge of protein function is necessary as it gives a clear picture of biological processes. Nevertheless, many protein sequences have been found and added to the databases but lack functional annotation. Laboratory experiments take a considerable amount of time to annotate the sequences. This gives rise to the need for computational techniques to classify proteins based on their functions. In our work, we have collected data from Swiss-Prot containing 40433 proteins grouped into 30 families. We pass it to recurrent neural network (RNN), long short-term memory (LSTM) and gated recurrent unit (GRU) models and compare these against trigram features with deep and shallow neural networks on the same dataset. Through this approach, we achieve a maximum of around 78% accuracy for the classification of protein families. Proteins are considered to be essentials of life because they perform a variety of functions to sustain life. They perform DNA replication, transport molecules from one cell to another, accelerate metabolic reactions and carry out several other important functions within an organism. Proteins carry out these functions as specified by the information encoded in the genes. Proteins are classified into three classes based on their tertiary structure: globular, membrane and fibrous proteins. Many of the globular proteins are soluble enzymes. Membrane proteins enable the transportation of electrically charged molecules past the cell membranes by providing channels. Fibrous proteins are always structural. Collagen, which is a fibrous protein, forms the major component of connective tissues. An Escherichia coli cell is partially filled by proteins, and 3% and 20% fractions of DNA and RNA respectively contain proteins. All of this contributes to making proteomics a very important field in modern computational biology. It is therefore becoming important to predict protein family classification and study protein functionality to better understand the theory behind the life cycle. Proteins are polymeric macromolecules consisting of amino acid residue chains joined by peptide bonds. The proteome of a particular cell type is the set of proteins that come under the same cell type. A protein is framed using a primary structure represented as a sequence over a 20-letter alphabet, each letter of which is associated with a particular amino acid subunit of the protein. Proteins differ from one another by the arrangement of amino acids, dependent on the nucleotide sequence of their genes. This results in the formation of specific 3D structures by protein folding, which determines the unique functionality of the proteins. The primary structure of a protein is an abstracted version of the complex 3D structure but retains sufficient information for protein family classification and for inferring the functionality of the families. A protein family consists of a set of proteins that exhibit similar structure at the sequence as well as molecular level and perform the same functions. The lack of functional information about sequences, in spite of the large number of sequences known, has led to many works identifying families of proteins based on primary sequences BID0 BID1 BID2. Dayhoff identified the families of numerous proteins BID3. Members of the same protein family can be identified using sequence homology, which is defined as evolutionary relatedness. They also exhibit similar secondary structure through modular protein domains, which further group protein families into super families BID4.
These classifications are listed in databases like SCOP BID5. The Protein family database (Pfam) BID6 is an extremely large source which classifies proteins into family, domain, repeat or motif. Protein classification using 3D structure is burdensome and requires complex techniques like X-ray crystallography and NMR spectroscopy. This led to the works BID7 BID8 BID9 which use only the primary structure for protein family classification. In this work we use data from Swiss-Prot for protein family classification and obtain a classification accuracy of about 96%. We gathered family information for about 40,433 protein sequences in Swiss-Prot from the Protein family database (Pfam), covering 30 distinct families. Keras embedding and n-gram techniques have been used with deep learning architectures and traditional machine learning classifiers, respectively, for text classification problems in cyber security BID33 BID34 BID35 BID36 BID37. Following this line of work, we apply Keras word embedding, pass it to various deep neural network models such as recurrent neural networks (RNN), long short-term memory (LSTM) and gated recurrent units (GRU), and then compare their performance against trigram features with deep and shallow neural networks for protein family classification. To verify the model used in our work, we test it over a dataset consisting of about 12,000 sequences from the same database. The rest of this paper is organized as follows. Section 2 discusses related work, Section 3 provides details of the deep learning architectures, Section 4 discusses the proposed methodology, Section 5 provides the results and observations, and finally the conclusions and future work directions are placed in Section 6. There have been many works to date that identify protein functions based on the primary structures, i.e., protein sequences. In this section we briefly describe the work done in that area. Needleman and Wunsch BID10 developed a dynamic programming algorithm which uses global alignment to find similarity between protein and DNA sequences; this method is used when the sequences do not share similar patterns. In the work of Smith BID11, local alignment of protein and DNA sequences was used, and protein sequences were clustered based on the lengths of the different fragments in the sequence. In the current decade, people mostly rely on computational techniques such as machine learning, deep learning and pattern recognition for the classification of protein families, instead of depending on the older techniques that make use of sequence alignment. Some of the works which use machine learning techniques are explained briefly below. In the works BID12 BID13 BID14, the primary structure, i.e., the protein sequence, is used to classify protein families using classifiers like support vector machines (SVM). However, apart from protein sequences, these methods require additional information for feature extraction. Some of these features are polarity, hydrophobicity, surface tension, normalized Van der Waals volume, charge, polarizability, solvent accessibility and secondary structure, which require a lot of computational power to analyze. In the work BID12, protein classification for 54 families achieved 69.1-99.6% accuracy. In another study, Jeong et al. BID15 used a position-specific scoring matrix (PSSM) for extracting features from a protein sequence.
They used classifiers such as Naive Bayes (NB), Decision Tree (DT), Support Vector Machine (SVM) and Random Forest (RF) to verify their approach and achieved a maximum accuracy of about 72.5%. Later on, hashing was introduced for mapping high-dimensional features to low-dimensional features using hash keys. Caragea et al. BID16 used this technique to map high-dimensional features obtained through a k-gram representation, by storing the frequency count of each k-gram in the feature vector and hashing k-grams with the same hash key together. This method gave an accuracy of about 82.83%. Yu et al. BID17 proposed a method to represent protein sequences in the form of a k-string dictionary; for this, singular value decomposition (SVD) was applied to factorize the probability matrix. Mikolov et al. BID18 proposed a model architecture to represent words as continuous vectors. This approach, known as word2vec, maps words from the lexicon to vectors of real numbers in a low-dimensional space. When trained over a large dataset, the linguistic context is learned from the data and related words are mapped close to each other in Euclidean space. In BID19, the word2vec architecture was applied to biological sequences, introducing a new representation called bio-vectors (BioVec) for biological sequences, with ProtVec for protein sequences and GeneVec for gene sequences. The k-mers derived from the data are then given as input to the embedding layer. They achieved a family classification accuracy of about 93% by using ProtVec as a dense representation for biological sequences. In our work, the proposed architecture is trained solely on primary sequence information, achieving a high accuracy when used for classification of protein families. Text representation, i.e., text encoding, can be done in several ways, which are mainly of two types: sequential representation and non-sequential representation. Transforming the raw text data to these representations involves preprocessing and tokenizing the texts. During preprocessing, all uppercase characters are changed to lowercase and a dictionary is maintained which assigns a unique identification key to every character present in the text corpus. This dictionary is later used to map input texts to vector sequence representations. After mapping each character to a unique id, a vocabulary is created using the training data. The preprocessing of the text data is completed by finally converting all varying-length sequences to fixed-length sequences. In this work we represent the text data as a sequence and therefore maintain the symbol order, which incorporates more information into the representation. A network can be modeled by training the data using one's own representation or by using existing ones. For our work, we have used Keras embedding for text representation. This maps the discrete character ids to vectors of continuous numbers. The character embedding captures the semantic meaning of the given protein sequence by mapping it into a high-dimensional geometric space, called the character embedding space. The newly formed continuous vectors are fed to the other layers in the network. Features for text data can be obtained using several techniques, one of them being n-grams. N-grams represent text data through combinations that carry a unique meaning when taken together; these combinations are obtained by taking continuous sequences of n characters from the given input sequence.
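As a concrete illustration of the preprocessing just described, the following is a minimal sketch using the Keras text utilities. The variable names (train_seqs), the fixed length MAX_LEN, and the padding strategy are hypothetical choices for illustration, not the authors' exact settings.

```python
# Minimal sketch of character-level preprocessing for protein sequences.
# `train_seqs` is a hypothetical list of raw amino-acid strings, e.g. ["MKTAYIAK...", ...].
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_LEN = 1000  # assumed fixed sequence length

# Build a character-level dictionary: every character gets a unique integer id.
tokenizer = Tokenizer(char_level=True, lower=True)
tokenizer.fit_on_texts(train_seqs)

# Map each sequence to a list of character ids, then pad/truncate to a fixed length.
encoded = tokenizer.texts_to_sequences(train_seqs)
X_train = pad_sequences(encoded, maxlen=MAX_LEN, padding='post', truncating='post')

vocab_size = len(tokenizer.word_index) + 1  # +1 for the padding id 0
```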
The general equation for the N-gram approximation to the conditional probability of the next word in a sequence is P(w_n | w_1, ..., w_{n-1}) ≈ P(w_n | w_{n-N+1}, ..., w_{n-1}). In this work, trigrams are used as features for some of the models. A trigram is a combination of three adjacent elements of a set of tokens. While computing trigram probabilities, we use two pseudo-words at the beginning of each sentence to create the first trigram (i.e., P(I | <s><s>)). After the data is preprocessed and represented in the form of continuous vectors, it is fed into recurrent layers such as RNN, LSTM and GRU. The RNN was developed to improve on the feed-forward network (FFN) introduced in 1990 BID20. The two networks differ in the way they pass information to the nodes of the network where a series of mathematical operations are performed: an FFN passes information forward without ever touching a node twice, whereas an RNN passes it through a loop, ingesting its own outputs at a later moment as inputs, hence the name recurrent. Sequences carry information across time steps, and an RNN uses it to perform tasks that FFNs fail to do. An RNN handles sequence data efficiently for natural language processing (NLP) tasks because it acts on arbitrary-length sequences and the unfolded RNN model shares its weights across time steps. The process of carrying memory forward in an RNN can be written as s_t = f(U x_t + W s_{t-1}), where x_t is the input at time step t and s_t is the hidden state (or memory) at time step t. The function f is a nonlinear activation function such as tanh or ReLU. During the training of long sequences, vanishing and exploding gradient problems arise due to this form of transition function BID21 BID22. To cope with this issue, long short-term memory (LSTM) was introduced BID23 using a special unit called the memory block. Afterwards, many variants of the LSTM architecture were introduced, prominent ones being the inclusion of the forget gate BID24 and peephole connections BID25. An RNN shares the same parameters (M, N, W in Fig. 1) at each time step, unlike other traditional deep neural networks; this reduces the total number of parameters to be learnt. To minimize the loss function, optimal weight parameters (M, N, W) are found using stochastic gradient descent (SGD). Gradient estimation in an RNN is done using backpropagation through time (BPTT) BID26. The LSTM is an upgraded version of the vanilla RNN BID27. Both the LSTM and the RNN use backpropagation through time for training the network. While training a traditional RNN, there arise cases where the gradient becomes very small and further learning becomes extremely slow. This happens because the gradient vector can end up multiplied by the weight matrix a large number of times: if the values of the weight matrix are small, this can lead to vanishing gradients; similarly, if the values of the weight matrix are large, it can lead to exploding gradients. These problems make learning very difficult, and these limitations of the RNN are the key motivation for the LSTM model. The LSTM model introduced the concept of the memory cell, which consists of an input gate, a neuron with a self-recurrent connection, a forget gate and an output gate. In the self-recurrent connection of the LSTM network, the identity function is used as the activation function and has derivative 1.0. This ensures that the gradient neither explodes nor vanishes, since the back-propagated gradient remains constant.
Therefore the LSTM is able to learn long-term dependencies BID28. The Gated Recurrent Unit (GRU) is a variant of the LSTM recurrent neural network BID29. Unlike other deep neural networks, GRU and LSTM have parameters specifically to control memory updates. GRU and LSTM are widely used in sequence modelling, and both can capture short-term and long-term dependencies in sequences. Even though both can be used in sequence modelling, a GRU network has fewer parameters than an LSTM network, and hence training a GRU network is faster when compared to an LSTM network. A standard formulation of a GRU network is as follows: f_t = sigma(W_f x_t + U_f h_{t-1} + b_f), r_t = sigma(W_r x_t + U_r h_{t-1} + b_r), h_t = f_t * h_{t-1} + (1 - f_t) * tanh(W_h x_t + U_h (r_t * h_{t-1}) + b_h). In the above equations, x_t, h_t and f_t represent the input, output and forget (update) vectors respectively, r_t is the reset gate, and W, U and b are parameter matrices and bias vectors. Theoretically, the reset and forget gates in a GRU network ensure that the memory does not get used up by tracking only short-term dependencies. In a GRU network the memory is protected by learning how to use its gates, so as to make long-term predictions. In this section, the performance of deep and shallow neural networks using trigram features, and of RNN, LSTM and GRU with word embedding, is evaluated on a dataset of protein sequences. First the description of the proteins and the dataset is given, followed by the proposed architecture. A protein's primary structure is represented as a sequence over a 20-letter alphabet, where each letter is associated with a particular amino acid subunit of the protein. Proteins differ from one another in the arrangement of amino acids, which depends on the nucleotide sequence of their genes and gives them different functionalities. Depending on their functions, proteins are grouped under different families; thus, being able to identify the family of a protein sequence gives us information about its functions. In our work we have classified protein sequences into 30 families using only the primary structure information. We gathered family information for about 40,433 protein sequences in Swiss-Prot from the Protein family database (Pfam), covering 30 distinct families. Swiss-Prot is a curated database of primary protein sequences which is manually annotated and reviewed; there is no redundancy of protein sequences in the database, and annotations are evaluated based on results obtained through experiments. We have divided the obtained protein sequences into 12,000 sequences for test data and the remaining 28,433 protein sequences for training the model. The details of the 30 family names can be found in Table 1. The proposed architecture, called DeepProteomics, is composed of Character Embedding, Feature Representation, Regularization and Classification sections. Each section is discussed in detail below. Using the aforementioned approach, a matrix is constructed for training (28433*3988) and testing (12000*3988) for the given dataset. These matrices are then passed to an embedding layer with batch size 128. The embedding layer maps each character onto a 128-length real-valued vector. This can be considered a hyperparameter; we choose 128 to provide a further level of freedom to the deep learning architectures. The embedding works collaboratively with the other layers in the deep network during backpropagation. It facilitates sequence character clustering, and similar characters cluster together. This kind of character clustering helps the other layers to easily detect the semantics and contextual similarity structures of protein sequences.
For a comparative study, a trigram representation is constructed for each protein sequence and, using a feature hashing approach, the protein sequence lengths are set to 1000. We adopt deep recurrent layers for feature representation. The recurrent layer extracts sequential information from the protein sequences. We have used RNN, LSTM and GRU as the recurrent structures. In all the experiments, one layer of RNN, LSTM or GRU is used, with 128 units. The recurrent structure is followed by a dropout of 0.2 while training, which in turn is followed by a fully connected layer with 30 neurons in the output layer. A dropout layer with rate 0.2 is used in each model between the recurrent structure and the fully connected layer; it acts as a regularization mechanism to prevent overfitting. Dropout is a method for randomly removing neurons along with their connections while training a deep learning model. The embedded character-level vectors work together with the recurrent structures to obtain an optimal feature representation; this kind of feature representation learns the similarity among the sequences. Finally, the feature representation from the recurrent structure is passed to the fully connected network to compute the probability that the sequence belongs to a particular family. The non-linear activation function in the fully connected layer helps classify the feature vectors into the respective families. The 1000-length protein sequence vectors are passed as input to a shallow DNN, a deep DNN and other traditional machine learning classifiers for the comparative study. In a fully connected layer, each neuron in the previous layer has a connection to every neuron in the next layer; the network has two such layers, a fully connected layer with 128 units followed by a fully connected layer with 30 units. In categorizing the proteins into 30 families, the prediction loss of the deep learning models is computed using categorical cross-entropy, H(p, q) = - sum_x p(x) log q(x), where p is the true probability distribution and q is the predicted probability distribution. To minimize the categorical cross-entropy loss we used the Adam optimization algorithm BID31. The detailed architectures of the GRU, LSTM and RNN models are given in TAB2, Table 3 and Table 4 respectively, and the detailed architectures of the shallow and deep DNN modules are given in TAB3 and Table 6 (configuration details of the proposed DNN architecture) respectively. All the experiments are run on GPU-enabled TensorFlow BID39 with the Keras BID40 higher-level API. The detailed statistical measures for the protein sequence dataset and the various algorithms used are reported in TAB3. The overall performance of the neural network models is better than that of the traditional machine learning techniques. Thus, we claim that character-level embedding with deep learning layers can be a powerful method for automatic feature extraction in the case of protein family classification. In our work we have analyzed the performance of different recurrent models like RNN, LSTM and GRU after applying word embedding to the sequence data to classify the protein sequences into their respective families. We have also compared the results obtained by applying trigrams with a deep neural network and a shallow neural network.
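For illustration, a minimal Keras sketch of the recurrent pipeline described above (character embedding of dimension 128, one 128-unit recurrent layer, dropout 0.2, and a 30-way softmax trained with categorical cross-entropy and Adam) could look like the following; vocab_size and MAX_LEN are assumed from the preprocessing sketch earlier, and the exact layer settings may differ from the authors' configuration.

```python
# Sketch of the embedding + recurrent classifier described in the text.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, Dropout, Dense

model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=128),  # 128-dim character embedding
    GRU(128),                        # swap in SimpleRNN or LSTM for the other variants
    Dropout(0.2),                    # regularization between recurrent and dense layers
    Dense(30, activation='softmax')  # one output per protein family
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.fit(X_train, y_train, batch_size=128, epochs=10, validation_split=0.1)
```

A deep or shallow DNN baseline on the trigram-hashed 1000-length vectors can be built analogously by replacing the Embedding/GRU layers with Dense layers.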
Neural networks are preferred over traditional machine learning models because they capture an optimal feature representation by themselves, taking the primary protein sequences as input, and give a considerably high family classification accuracy of about 96%. Deep neural network architectures are very complex; the mechanics of a neural network model therefore remain a black box, and the internal operation of the network is only partially understood. In future work, the internal working of the network can be explored by examining the eigenvalues and eigenvectors across several time steps, obtained by transforming the state of the network into linearized dynamics BID32.
Proteins, amino-acid sequences, machine learning, deep learning, recurrent neural network(RNN), long short term memory(LSTM), gated recurrent unit(GRU), deep neural networks
1,328
scitldr
Spoken term detection (STD) is the task of determining whether and where a given word or phrase appears in a given segment of speech. Algorithms for STD are often aimed at maximizing the gap between the scores of positive and negative examples. As such they are focused on ensuring that utterances where the term appears are ranked higher than utterances where the term does not appear. However, they do not determine a detection threshold between the two. In this paper, we propose a new approach for setting an absolute detection threshold for all terms by introducing a new calibrated loss function. The advantage of minimizing this loss function during training is that it aims at maximizing not only the relative ranking scores, but also adjusts the system to use a fixed threshold and thus enhances system robustness and maximizes the detection accuracy rates. We use the new loss function in the structured prediction setting and extend the discriminative keyword spotting algorithm for learning the spoken term detector with a single threshold for all terms. We further demonstrate the effectiveness of the new loss function by applying it on a deep neural Siamese network in a weakly supervised setting for template-based spoken term detection, again with a single fixed threshold. Experiments with the TIMIT, WSJ and Switchboard corpora showed that our approach not only improved the accuracy rates when a fixed threshold was used but also obtained higher Area Under Curve (AUC). Spoken term detection (STD) refers to the proper detection of any occurrence of a given word or phrase in a speech signal. Typically, any such system assigns a confidence score to every term it presumably detects. A speech signal is called positive or negative, depending on whether or not it contains the desired term. Ideally, an STD system assigns a positive speech input with a score higher than the score it assigns to a negative speech input. During inference, a detection threshold is chosen to determine the point from which a score would be considered positive or negative. The choice of the threshold represents a trade-off between different operational settings, as a high value of the threshold could cause an excessive amount of false negatives (instances incorrectly classified as negative), whereas a low value of the threshold could cause additional false positives (instances incorrectly classified as positive).The performance of STD systems can be measured by the Receiver Operation Characteristics (ROC) curve, that is, a plot of the true positive (spotting a term correctly) rate as a function of the false positive (mis-spotting a term) rate. Every point on the graph corresponds to a specific threshold value. The area under the ROC curve (AUC) is the expected performance of the system for all threshold values. A common practice for finding the threshold is to empirically select the desired value using a cross validation procedure. In BID2, the threshold was selected using the ROC curve. Similarly, in BID7 BID16 and the references therein, the threshold was chosen such that the system maximized the Actual Term Weighted Value (ATWV) score BID14. Additionally, BID18 claims that a global threshold that was chosen for all terms was inferior to using a term specific threshold BID17.In this paper we propose a new method to embed an automatic adjustment of the detection threshold within a learning algorithm, so that it is fixed and known for all terms. 
We present two algorithmic implementations of our method: the first is a structured prediction model that is a variant of the discriminative keyword spotting algorithm proposed by BID15 BID20 BID21, and the second implementation extends the approach used for the structured prediction model on a variant of whole-word Siamese deep network models BID9 BID1 BID13. Both of these approaches in their original form aim to assign positive speech inputs with higher scores than those assigned to negative speech inputs, and were shown to have good on several datasets. However, maximizing the gap between the scores of the positive and negative examples only ensures the correct relative order between those examples, and does not fix a threshold between them; therefore it cannot guarantee a correct detection for a global threshold. Our goal is to train a system adjusted to use a global threshold valid for all terms. In this work, we set the threshold to be a fixed value, and adjust the decoding function accordingly. To do so, we propose a new loss function that trains the ranking function to separate the positive and negative instances; that is, instead of merely assigning a higher score to the positive examples, it rather fixes the threshold to be a certain constant, and assigns the positive examples with scores greater than the threshold, and the negative examples with scores less than the threshold. Additionally, this loss function is a surrogate loss function which extends the hinge loss to penalize misdetected instances, thus enhancing the system's robustness. The new loss function is an upper bound to the ranking loss function, hence minimizing the new loss function can lead to minimization of ranking errors, or equivalently to the maximization of the AUC. In the STD task, we are provided with a speech utterance and a term and the goal is to decide whether or not the term is uttered. The term can be provided as a sequence of phonemes or by an acoustic representation given by a speech segment in which the term is known to be uttered. Throughout the paper, scalars are denoted using lower case Latin letters, e.g., x, and vectors using bold face letters, e.g., x. A sequence of elements is denoted with a bar (x) and its length is written as |x|.Formally, the speech signal is represented by a sequence of acoustic feature vectorsx = (x 1, . . ., x T), where each feature vector is d dimensional x t ∈ R d for all 1 ≤ t ≤ T. Note that in our setting the number of frames T is not fixed. We denote by X = (R d) * the set of all finite length sequences over DISPLAYFORM0, where p l ∈ P for all 1 ≤ l ≤ L r and P is the set of phoneme symbols. We denote by P * the set of all finite length sequences over P.A term is a word or a short phrase and is presented to the system either as a sequence of phonemes in the strongly supervised setting or as an acoustic segment containing the term in the weakly supervised setting. We denote the abstract domain of the term representations (as either a phoneme sequence or an acoustic segment) by R. Our goal is to find a spoken term detector, which takes as input a speech segment and a term and returns a binary output indicating whether the term was pronounced in the acoustic segment or not. Most often the spoken term detector is a function that returns a real value expressing the confidence that the target term has been uttered. 
The confidence score outputted by this function is compared to a threshold, and if the score is above the threshold the term is declared to have been pronounced in the speech segment. Formally, the detector is a function f from X × R to R. The detection threshold is denoted by the scalar θ ∈ R. Usually there is no single threshold for all terms, and it needs to be adjusted after decoding. Our goal in this work is to propose a new method to learn the spoken term detector from a training set of examples, so that the model is adjusted to use a fixed given threshold for all terms. The function f is found from a training set of examples, where each example is composed of two speech segments and a representation of a term. Although the training set contains many different terms, the function f should be able to detect any term, not only those already seen in the training phase. In this section we describe our main idea, whereas in the next sections we propose two implementations: one with a structured prediction model where the training data is fully supervised and the term is given as a phoneme sequence, and the other with a deep learning model where the training data is weakly supervised and the term is given using a segment of speech. Recall that during inference the input to the detector is a speech segment and a term and the output is a confidence that the term was pronounced in the speech segment, which is compared to a threshold. Since the detection threshold is typically not fixed and does depend on the input term, it is often desired to learn the function f such that the confidence of a speech segment that contains the term is higher than the confidence of a speech segment that does not contain the term. Formally, let us consider two sets of speech segments. Denote by X r+ a set of speech segments in which the term r is articulated. Similarly, denote by X r− a set of speech segments in which the term r is not articulated. We assume that term r, and two instancesx + ∈ X r+ andx − ∈ X r− are drawn from a fixed but unknown probability distribution, and we denote by P{π} and E[π] the probability and the expectation of an event π under this distribution. The probability that the confidence ofx + is higher than the confidence ofx − is the area under the ROC curve (AUC) BID10 BID0: DISPLAYFORM0 Instead of keeping a threshold for each term, we adjust f so that the detection threshold will be fixed, and set to a predefined value. Assume that the predefined threshold is θ, then the accuracy in the prediction can be measured by DISPLAYFORM1 where ∧ is the logical conjunction symbol. Hence our goal is to find the parameters of the function f so as to maximize the accuracy Acc θ for a given threshold θ. Equivalently we find the parameters of function f that minimize the error defined as DISPLAYFORM2 where ∨ is the logical disjunction symbol, and I{π} is the indicator function, that equals 1 if the predicate π holds true and 0 otherwise. Unfortunately, we cannot minimize the error function directly, since it is a combinatorial quantity. A common practice is to replace the error function with a surrogate loss function which is easy to minimize. We suggest to minimize a convex upper-bound to the error function. Specifically, we replace the last term with the hinge upper bound, DISPLAYFORM3 where [π] + = max{π, 0}. The last upper bound holds true since I{π < 0} ≤ [1 − π] +. 
Adding the margin of 1 means that the function f faces a harder problem: not only does it need to have a confidence greater than θ for a positive speech segment and a confidence lower than θ for a negative speech segment -the confidence value must be at least θ + 1 and at most θ − 1 for positive and negative speech segments, respectively. We now turn to present two algorithmic implementations that are aimed at minimizing the loss function derived from, namely, DISPLAYFORM4 Hopefully the minimization of this loss function will lead to the minimization of Err θ in. Our first construction is based on previous work on discriminative keyword spotting and spoken term detection BID15 BID20 BID21, where the goal was to maximize the AUC. In this setting we assume that the term is expressed as a sequence of phonemes denotedp r ∈ P *.In this fully-supervised setting we define the alignment between a phoneme sequence and a speech signal. We denote by y l ∈ N the start time of phoneme p l (in frame units), and by e l = y l+1 − 1 the end time of phoneme p l, except for the phoneme p L, where the end frame is e L. The alignment sequenceȳ r corresponding to the phonemes sequencep r is a sequence of start-times and an end-time, DISPLAYFORM0, where y l is the start-time of phoneme p l and e L is the end-time of the last phoneme p L.Similar to previous work BID15 BID20 BID21, our detection function f is composed of a predefined set of n feature functions, {φ j} n j=1, each of the form φ j: X * × P * × N * → R. Each feature function takes as input an acoustic representation of a speech utterancex ∈ X *, together with the term phoneme sequencep r ∈ P *, and a candidate alignment sequenceȳ r ∈ N *, and returns a scalar in R which represents the confidence in the suggested alignment sequence given the term r. For example, one element of the feature function can sum the number of times phoneme p comes after phoneme p, while other elements of the feature function may extract properties of each acoustic feature vector x t provided that phoneme p is pronounced at time t. Our basic set of feature functions is the same as the set used in BID15.We believe that the threshold value for each term depends on the term's phonetic content and its relative duration. In order to allow f to learn these subtle differences from the data we introduced an additional set of 4 feature functions: a feature function representing a bias; a feature function that counts the number of occurrences of a phoneme in a term, i.e., |{q|q ∈p r}|; a feature function holding the number of phonemes in the term, i.e., |p r |; and a feature function holding the average length of the phonemes in a term, i.e., DISPLAYFORM1 As mentioned above, our goal is to learn a spoken term detector f, which takes as input a sequence of acoustic featuresx, a termp r, and returns a confidence value in R. The form of the function f we use is f (x,p r) = max DISPLAYFORM2 where w ∈ R n is a vector of importance weights that should be learned and φ ∈ R n is a vector function composed out of the feature functions φ j. In other words, f returns a confidence prediction about the existence of the term in the utterance by maximizing a weighted sum of the scores returned by the feature function elements over all possible alignment sequences. 
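Before turning to the decoding rule, the following small NumPy sketch (not the authors' implementation) illustrates the theta-calibrated hinge surrogate described above: a positive utterance is penalized unless its confidence exceeds theta + 1, and a negative utterance is penalized unless its confidence falls below theta - 1. The function and variable names are hypothetical.

```python
import numpy as np

def calibrated_hinge_loss(pos_scores, neg_scores, theta=0.0):
    """pos_scores, neg_scores: arrays of detector confidences f(x, r)."""
    pos_term = np.maximum(0.0, 1.0 - (pos_scores - theta))  # want f(x+) >= theta + 1
    neg_term = np.maximum(0.0, 1.0 + (neg_scores - theta))  # want f(x-) <= theta - 1
    return pos_term.mean() + neg_term.mean()

# Example with threshold theta = 0: only scores inside the margin contribute.
print(calibrated_hinge_loss(np.array([1.5, 0.2]), np.array([-1.2, 0.4])))
```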
If the confidence of the function f is above the threshold θ then we predict that the term is pronounced in the signal and located in the time span defined by the alignment sequenceȳ that maximizes: DISPLAYFORM3 where the search for the best sequence is practically performed using the Viterbi algorithm as described in BID15. Specifically, the algorithm finds the optimal time segment for the keyword r in the speech signalx, and then aligns the phoneme sequencep r within the chosen time segment. The parameters of the model w are found by minimizing the loss function defined in. In the fully supervised case we use a slightly modified version of it, which is defined as DISPLAYFORM4 This is a convex function in the vector of the parameters w. We use the Passive-Aggressive (PA) algorithm BID5 BID15 to find the parameters w. The algorithm receives as input a set of training examples DISPLAYFORM5 and examines each of them sequentially. Initially, we set w = 0. At each iteration i, the algorithm updates w according to the current example (p ri,x DISPLAYFORM6 Denote by w i−1 the value of the weight vector before the ith iteration. We set the next weight vector w i to be the minimizer of the following optimization problem, DISPLAYFORM7 where C serves as a complexity-accuracy trade-off parameter and ξ is a non-negative slack variable, which indicates the loss of the ith example. Intuitively, we would like to minimize the loss of the current example, i.e., the slack variable ξ, while keeping the weight vector w as close as possible to our previous weight vector w i−1 . The constraint makes the projection of the utterance in which the term is uttered onto w greater than θ + 1, and the projection of the utterance in which the term is not uttered onto w less than θ − 1. The closed form solution to the above optimization problem can be derived using the Karush-Kuhn-Tucker conditions in the same lines of [6, App. A].The loss in is composed of two hinge functions and therefore introduces a more elaborate solution than the one derived for the ranking loss of BID15. We call this algorithm PA-ACC (Passive-Aggressive to maximize Accuracy). Details about the implementation of the algorithm can be seen in BID8. The PA-ACC algorithm is an online algorithm, and deals with drifting hypotheses; therefore, it is highly influenced by the recent examples. Common methods to convert an online algorithm to a batch algorithm are either by taking the average over all the parameters {w i}, or by taking the best w i over a validation set BID6 BID3. We turn to exemplify our idea in the weakly supervised setting using deep networks. This implementation is based on recent work on whole-word segmental systems BID9 BID1 BID13. These works present a Siamese network model trained with a ranking loss function. Siamese networks BID4 are neural networks with a tied set of parameters which take as input a pair of speech segments and are trained to minimize or maximize the similarity between the segments depending on whether the same term has been pronounced in the pair of segments. In this setting the term r is represented by two speech segments rather than a phoneme sequence: a speech segment in which the term r is pronounced,x +, and a speech segment, in which the term r is not pronounced,x −. Similar to those works, we assume that each example in the training set is composed of the triplet (x t,x +,x −), wherex t,x + ∈ X r+ andx − ∈ X r−. 
The goal in training the network is that the similarity score betweenx t andx + should be above the similarity score between DISPLAYFORM0 Denote by g u: X * → R d a deep network (the specific architecture is discussed in Section 6) with a set of parameters u, where d is the dimension of the output. Denote by ρ: R d × R d → R a measure of similarity between two output vectors of size d. The spoken term detector f u: X * × X * → R is the composition of Siamese networks g u and the similarity function. Hence an unknown speech segmentx t can be compared to a positive or negative speech segment, as follows: DISPLAYFORM1 The tied parameters u of all the models were found in BID9 BID1 BID13 using the minimization of the ranking loss function DISPLAYFORM2 for different options of the similarity function ρ. In this work we propose to minimize the loss function in FORMULA5, which is defined for the weakly supervised case as follows: DISPLAYFORM3 when the margin of γ > 0 is used. In this case, the parameter γ is not set to 1, since the function f u is not a linear function and hence is not scale invariant to the margin, as in the structured prediction case. In the next section we present our empirical comparison for all the loss functions on different speech corpora. In this section we present experimental that demonstrate the effectiveness of our proposed calibrated loss function BID6. We compared the proposed loss to the standard approach of maximizing AUC using the ranking loss as in BID11 where no fixed threshold can be set. The experiments on the structured prediction model were conducted using fully supervised training sets of read speech (TIMIT, WSJ). The experiments on the deep network model performed on a weakly supervised data of conversational speech (Switchboard). To validate the effectiveness of the proposed approach, we performed experiments with the TIMIT corpus. The training and validation sets were taken from the TIMIT training set. The training set was composed from 1,512 randomly chosen terms, corresponding to 11,139 pairs of positive and negative utterances (each term repeated more than once). Similarly, the validation set was composed from 378 different randomly chosen terms, corresponding to 2,892 pairs. The validation set was used to tune the algorithm's parameters. The test set was composed of 80 terms that were suggested as a benchmark in BID15, and are distinct from the terms used in the training and validation sets. For each term, we randomly picked at most 20 utterances in which the term was uttered and at most 20 utterances in which it was not uttered. The utterances were taken from the TIMIT test set. The number of test utterances in which the term was uttered was not always 20, since some terms were uttered less than 20 times in the whole TIMIT test set. We measure performance using the AUC defined in and using the accuracy of a fixed threshold θ denoted Acc θ. Specifically, we calculate AUC on the test set of m test examples according to DISPLAYFORM0 and the accuracy by DISPLAYFORM1 We tested the PA-ACC algorithm using two options. The first was whether the final weight vector was a of averaging or was the best to perform on the validation set. The second option was whether the new feature functions we introduced were normalized by the length of the phoneme sequence |p r | or not. The AUC and Acc θ rates found on our validation and test set are presented in Table 1. In training PA-ACC we chose arbitrarily θ = 0. Table 1: AUC and ACC θ rates of the PA-ACC algorithm. 
The first column indicates whether the new feature functions were normalized or not. The second column indicates whether the final weight vector was a of averaging or was the best to perform on the validation set. We can see from the table that since the TIMIT dataset is very clean the detection rates are very good and the AUC is almost always 1. The presented here are improved over the presented in BID15 due to the introduction of the new feature functions. It is interesting to note that the best Acc 0 on the validation set were obtained when the additional features were not normalized and the final weight vector was selected over the validation set, while the best Acc 0 on the test set were obtained with the opposite conditions: when the final weight vector was the average one and the additional feature functions were normalized. Further research on feature functions should be conducted and extended to a larger dataset. We now turn to compare the performance of our algorithm against two other algorithms. The first is the discriminative keyword spotting algorithm presented in BID15, which is the Passive-Aggressive algorithm trained with the ranking loss to maximize the AUC. It is denoted here as PA-AUC. We introduce two versions of this algorithm: the original version and an extended version with the additional set of feature functions described in Sec. 4. When using the extended version of PA-AUC, normalizing the features had no affect on our . Similarly, a comparison of using the final weight vector versus the best weight vector yielded similar outcomes. The second algorithm is an HMM-based spoken term detection algorithm presented in BID15. BID0 For all the algorithms we report the AUC and Acc θ in TAB1. For the two versions of PA-AUC we selected a single threshold θ that gave the best Acc θ on the validation set. Similarly we selected the best threshold for the HMM algorithm. For PA-ACC we arbitrarily selected θ = 0. It is interesting to see that the AUC of PA-ACC is the same or even higher than that of the PA-AUC. Since Acc θ is a lower bound to AUC, the AUC can be thought of as Acc θ with the best threshold selected for every term in the set. Indeed from TAB1 we see that the Acc θ was very close to the AUC but did not reach it. We evaluate the model trained on TIMIT on the Wall Street Journal (WSJ) corpus BID19. This corpus corresponds to read articles of the Wall Street Journal, and hence presents a different linguistic context compared to TIMIT. Both the discriminative system and the HMM-based system were trained on the TIMIT corpus as described above and evaluated on a different set of 80 keywords from the WSJ corpus. For each keyword, we randomly picked at most 20 utterances in which the keyword was uttered and at most 20 utterances in which it was not uttered from the si_tr_s portion of the WSJ corpus. We used the same setting as in BID15. As before we arbitrarily chose the threshold θ = 0. The are presented in TAB2. Again we see from TAB2 that the model trained with the proposed loss function led to higher accuracy rates with similar AUC rates, meaning a better separation between the positive speech utterances and the negative speech utterances. Our second set of experiments is focused on deep networks trained on weakly supervised data. Our model is based on previous work on network training using the ranking loss BID9 BID1 BID13. We used the same experimental setup as BID13 BID13. 
The term weak supervision BID13, refers to the fact that supervision is given in the form of known word pairs, rather than the exact location of the term and its phonetic content as in Subsection 6.1.The data was taken from the Switchboard corpus of English conversational telephone speech. Melfrequency cepstral coefficients (MFCCs) with first and second order derivatives features were extracted and cepstral mean and variance normalization (CMVN) was applied per conversation side. The training set consisted of the set of about 10k word tokens from BID11 BID12; it consisted of word segments of at least 5 characters and 0.5 seconds in duration extracted from a forced alignment of the transcriptions, and comprises about 105 minutes of speech. For the Siamese convolutional neural networks (CNNs), this set in about 100k word segment pairs. For testing, we used the 11k-token set from BID11 BID12.The architecture of each network was the same as BID13 BID13: 1-D convolution with 96 filters over 9 frames; ReLU (Rectified Linear Unit); max pooling over 3 units; 1-D convolution with 96 filters over 8 units; ReLU; max pooling over 3 units; 2048-unit fully-connected ReLU; 1024-unit fully-connected linear layer. All weights were initialized randomly. Models were trained using ADADELTA BID22.We reproduced the in BID13 by training the Siamese network using the ranking loss in with the cosine similarity as a similarity function ρ. The cosine similarity of two vectors v 1 ∈ R d and v 2 ∈ R d is defined as DISPLAYFORM0 where this function returns a number close to 1 if the two vectors are similar and a number close to -1 if the two vectors are not. We also train the network using the same similarity function using the Acc θ loss function with θ = 0 as in BID12. For the ranking loss we used γ = 0.15 while for the Acc θ loss we used γ = 0.10, because the margin γ is counted twice for Acc θ loss. These values were chosen by maximizing Acc θ on a validation set over 5 epochs. The AUC and Acc θ values for training of 5 to 30 epochs are given in TAB3. Other training parameters and settings were exactly the same as in BID13. We can see in the table that both AUC and Acc θ are higher when the system is trained with the calibrated ranking loss function. The reason that the AUC was also improved is most likely because the calibrated ranking loss function is harder to optimize than the ranking loss. In this work, we introduced a new loss function that can be used to train a spoken term detection system with a fixed desired threshold for all terms. We introduced a new discriminative structured prediction model that is based on the Passive-Aggressive algorithm. We show that the new loss can be used in training weakly supervised deep network models as well. Results suggest that our new loss function yields AUC and accuracy values that are better than previous works' .
Spoken Term Detection, using structured prediction and deep networks, implementing a new loss function that both maximizes AUC and ranks according to a predefined threshold.
1,329
scitldr
Federated learning involves jointly learning over massively distributed partitions of data generated on remote devices. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by resource allocation strategies in wireless networks that encourages a more fair accuracy distribution across devices in federated networks. To solve q-FFL, we devise a scalable method, q-FedAvg, that can run in federated networks. We validate both the improved fairness and flexibility of q-FFL and the efficiency of q-FedAvg through simulations on federated datasets. With the growing prevalence of IoT-type devices, data is frequently collected and processed outside of the data center and directly on distributed devices, such as wearable devices or mobile phones. Federated learning is a promising learning paradigm in this setting that pushes statistical model training to the edge .The number of devices in federated networks is generally large-ranging from hundreds to millions. While one can naturally view federated learning as a multi-task learning problem where each device corresponds to a task , the focus is often instead to fit a single global model over these distributed devices/tasks via some empirical risk minimization objective . Naively minimizing the average loss via such an objective may disproportionately advantage or disadvantage some of the devices, which is exacerbated by the fact that the data are often heterogeneous across devices both in terms of size and distribution. In this work, we therefore ask: Can we devise an efficient optimization method to encourage a more fair distribution of the model performance across devices in federated networks?There has been tremendous recent interest in developing fair methods for machine learning. However, current methods that could help to improve the fairness of the accuracy distribution in federated networks are typically proposed for a much smaller number of devices, and may be impractical in federated settings due to the number of involved constraints BID5. Recent work that has been proposed specifically for the federated setting has also only been applied at small scales (2-3 groups/devices), and lacks flexibility by optimizing only the performance of the single worst device .In this work, we propose q-FFL, a novel optimization objective that addresses fairness issues in federated learning. Inspired by work in fair resource allocation for wireless networks, q-FFL minimizes an aggregate reweighted loss parameterized by q such that the devices with higher loss are given higher relative weight to encourage less variance in the accuracy distribution. In addition, we propose a lightweight and scalable distributed method, qFedAvg, to efficiently solve q-FFL, which carefully accounts for important characteristics of the federated setting such as communication-efficiency and low participation of devices BID3 ). We empirically demonstrate the fairness, efficiency, and flexibility of q-FFL and q-FedAvg compared with existing baselines. On average, q-FFL is able to reduce the variance of accuracies across devices by 45% while maintaining the same overall average accuracy. Fairness in Machine Learning. There are several widespread approaches in the machine learning community to address fairness, which is typically defined as the protection of some specific attribute(s) (e.g., ). 
In addition to preprocess the data and post-process the model BID8 ), another set of works optimize an objective under some explicit fairness constraints during training time BID2 BID5;;; BID8. Our work also enforces fairness during training, though we define fairness as the accuracy distribution across devices in federated learning, as opposed to the protection of a specific attribute (Section 3). BID5 use a notion of'minimum accuracy' as one special case of the rate constraints, which is conceptually similar to our goal. However, it requires each device to have one constraint, which is not practical in the federated setting. In federated settings, proposes a minimax optimization scheme, Agnostic Federated Learning (AFL), which optimizes for the performance of the single worst device. This method has only been applied at small scales (for a handful of groups). In addition, our objective is more flexible because q may be tuned based on the amount of fairness desired. Fairness in Resource Allocation. Fair resource allocation has been extensively studied in fields such as network management BID6;; ) and wireless communications BID7;; ). In these contexts, the problem is defined as allocating a scarce shared resource, e.g. communication time or power, among many users. In these cases directly maximizing utilities such as total throughput usually leads to unfair allocations where some users receive poor service. Several measurements have been proposed to balance between fairness and total throughput. Among them, a unified framework is captured through α-fairness , in which the emphasis on fairness can be tuned by changing a single parameter, α. If we think of the global model as a resource to serve the users (or devices), it is natural to ask similar questions about the fairness of the service that devices receive and use similar tools to promote fairness. Despite this, we are unaware of any work that uses fairness criteria from resource allocation to modify training objectives in machine learning. Inspired by the α-fairness metric, we propose a similarly modified objective function, q-Fair Federated Learning (q-FFL), to encourage a more fair accuracy distribution across devices in the context of federated training. We empirically demonstrate its benefits in Section 4.Federated and Distributed Optimization. Federated learning faces fundamental challenges such as expensive communication, variability in hardware, network connection, and power of devices, and heterogeneous local data distribution amongst devices, making it distinct from classical distributed optimization (; ; BID4 . In order to reduce communication, as well as to tolerate heterogeneity, methods that allow for local updating and low participation among devices have become de facto solvers for this setting BID4 ;). We incorporate recent advancements in this field when designing methods to solve the q-FFL objective, which we describe in Section 3.3. We first formally define the classical federated learning objective and methods, and introduce our proposed notion of fairness in Section 3.1. We then introduce q-FFL, a novel objective that encourages a more fair accuracy distribution across all devices (Section 3.2). Finally, in Section 3.3, we describe q-FedAvg, an efficient distributed method we develop to solve the objective in federated settings. Federated learning involves fitting a global model on distributed data generated on hundreds to millions of remote devices. 
In particular, the goal is to minimize min_w f(w) = sum_{k=1}^m p_k F_k(w), where m is the total number of devices, p_k >= 0, and sum_k p_k = 1. The local objectives F_k can be defined by empirical risks over local data, i.e., F_k(w) = (1/n_k) sum_{j=1}^{n_k} l_j(w), where n_k is the number of samples available locally. We can set p_k to be n_k/n, where n = sum_k n_k is the total number of samples in the entire dataset. Most prior work solves this objective by first subsampling devices with probabilities proportional to n_k at each round, and then applying an optimizer such as Stochastic Gradient Descent (SGD) locally to perform updates. These local updating methods enable flexible and efficient communication by running the optimizer for a variable number of iterations locally on each device, compared, e.g., to traditional distributed (stochastic) gradient descent, which would simply calculate a subset of the gradients. FedAvg (Algorithm 2, Appendix A) is one of the leading methods to solve this objective. However, solving the problem in this manner can implicitly introduce unfairness among different devices; for instance, the learned model may be biased towards the devices with larger numbers of data points. Formally, we define our desired fairness criterion for federated learning below. Definition 1 (Fairness of distribution). For trained models w and w', we say that model w provides a more fair solution to the federated learning objective than model w' if the variance of the performance of model w on the m devices, {a_1, ..., a_m}, is smaller than the variance of the performance of model w' on the m devices, i.e., Var(a_1, ..., a_m) <= Var(a'_1, ..., a'_m). In this work, we take 'performance' for device k, a_k, to be the testing accuracy of applying the trained model w on the test data for device k. Our goal is to reduce the variance while maintaining the same (or similar) average accuracy. A natural idea to achieve fairness as defined above would be to reweight the objective, assigning higher weight to devices with poor performance, so that the distribution of accuracies in the network reduces in variance. Note that this re-weighting must be done dynamically, as the performance of the devices depends on the model being trained, which cannot be evaluated a priori. Drawing inspiration from alpha-fairness, a utility function used in fair resource allocation in wireless networks, we propose the following objective, q-FFL. For given local non-negative cost functions F_k and parameter q > 0, we define the overall q-Fair Federated Learning (q-FFL) objective as min_w f_q(w) = sum_{k=1}^m (p_k / (q+1)) F_k^{q+1}(w). Intuitively, the larger we set q, the larger the relative price we pay for devices k with high local empirical loss F_k(w). Here, q is a tunable parameter that depends on the amount of fairness we wish to impose in the network. Setting q = 0 does not encourage any fairness beyond the classical federated learning objective. A larger q means that we emphasize devices with higher losses (lower accuracies), thus reducing the variance of the accuracy distribution and potentially inducing more fairness in accordance with Definition 1. 3.3. The solver: FedAvg-style q-Fair Federated Learning (q-FedAvg). We first propose a fair but less efficient method, q-FedSGD, to illustrate the main techniques we use to solve the q-FFL objective. We then provide a more efficient counterpart, q-FedAvg, by considering key properties of federated algorithms such as local updating schemes. Hyperparameter tuning: q and step-sizes. In devising a method to solve q-FFL, we begin by noting that it is important to first determine how to set q.
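As a minimal sketch (not the released code) of the q-FFL objective just defined, the following assumes the per-device empirical losses F_k(w) have already been computed and shows how the (q+1)-power weighting emphasizes the devices with the highest loss.

```python
import numpy as np

def qffl_objective(local_losses, p, q):
    """local_losses: array of F_k(w); p: array of weights p_k summing to 1; q >= 0."""
    local_losses = np.asarray(local_losses, dtype=float)
    p = np.asarray(p, dtype=float)
    return np.sum(p / (q + 1.0) * local_losses ** (q + 1.0))

losses = np.array([0.2, 0.9, 0.4])   # hypothetical per-device losses
p = np.array([0.5, 0.2, 0.3])
print(qffl_objective(losses, p, q=0.0))  # reduces to the standard weighted objective
print(qffl_objective(losses, p, q=5.0))  # heavily weights the device with the highest loss
```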
In practice, q can be tuned based on the desired amount of fairness. It is therefore common to train a family of objectives for different q values so that a practitioner can explore the tradeoff between accuracy and fairness for the application at hand. Nevertheless, to optimize q-FFL in a scalable fashion, we rely on gradient-based methods, where the stepsize inversely depends on the Lipchitz constant of the function's gradient, which is often unknown and selected via grid search BID10 ). As we intend to optimize q-FFL for various values of q, the Lipchitz constant will change as we change q-requiring step-size tuning for all values of q. This can quickly cause the search space to explode. To overcome this issue, we propose estimating the local Lipchitz constant of the gradient for the family of q-FFL by using the Lipchitz constant we infer on q = 0. This allows us to dynamically adjust the step-size for the q-FFL objective, avoiding the manual tuning for each q. In Lemma 2 we formalize the relation between the Lipschitz constant, L, for q = 0 and q > 0.Lemma 2. If the non-negative function f (·) has a Lipchitz gradient with constant L, then for any q ≥ 0 and at any point w, DISPLAYFORM2 is an upper-bound for the local Lipchitz constant of the gradient of DISPLAYFORM3 Furthermore, the gradient DISPLAYFORM4 See proof in Appendix B.A first approach: q-FedSGD. In our first fair algorithm qFedSGD, we solve Objective using mini-batch SGD on a subset of devices at each round, and apply the above to each selected device to obtain local Lipchitz constants for gradients of local functions F k. By averaging those estimates, we obtain an estimate for the Lipchitz constant for the gradient of q-FFL. Then, the step-size (inverse of this estimate) is applied, like other gradient based algorithms; see Algorithm 3 in Appendix A for more details. Algorithm 1 q-FedAvg (proposed method) DISPLAYFORM5 Server chooses a subset S t of K devices at random (each device k is chosen with prob. p k) Server sends w t to all chosen devices 5:Each device k updating w t for E epochs of SGD with step size η to obtainw to run some number of local updates and then apply the updates in the gradient computation of q-FFL. The details of our method (q-FedAvg) are given in Algorithm 1. Note that when q=0, q-FFL corresponds to the normal objective in federated learning (Equation FORMULA0), and q-FedAvg is also reduced to FedAvg where no fairness modification is introduced. We first describe our experimental setup, then demonstrate the improved fairness of the q-FFL objective by comparing q-FFL with several baselines, and finally show the efficiency of q-FedAvg compared with q-FedSGD.Experiment setups. We explore both convex and nonconvex models on four federated datasets curated from prior work in federated learning (; BID4 BID4 . Full details of the datasets and models are given in Appendix D. Throughout the experiments, we show on the Vehicle dataset. Similar on all datasets are provided in Appendix C. Fairness of q-FFL. We verify that the proposed objective q-FFL leads to more fair solutions (according to Definition 1) for federated data, compared with FedAvg and two other baselines that are likely to impose fairness in federated networks. Compare with FedAvg. In Figure 1 (left), we compare the final testing accuracy distributions of two objectives (q=0 and a tuned value of q=5) averaged across 5 random shuffles of Vehicle. 
We observe that the objective with q=5 in more centered (i.e., fair) testing accuracy distributions with lower variance. We further report the worst and best 10% testing accuracies and the variances of accuracies in Table 1. We see that the average testing accuracy remains almost unchanged with the proposed objective despite significant reductions in variance. See similar on other datasets in Figure 2 and TAB2 in Appendix C. Compare with weighing each device equally. We compare q-FFL with a heuristic that samples devices uniformly and report testing accuracy in Figure 1 (middle). A table with the statistics of accuracy distribution on all datasets is given in the appendix in Table 3. While the'weighing each device equally' heuristic tends to outperform our method in training accuracy distributions (Figure 5 and TAB8 in Appendix D.3), our method produces more fair solutions in terms of testing accuracies. One explanation for this is that uniform sampling is a static method and can easily overfit to devices with very few data points, whereas q-FFL has better generalization properties due to its dynamic nature. FORMULA0 ). Middle: Fairness of q-FFL q>0 compared with the uniform sampling baseline. Right: q-FedAvg converges faster than q-FedSGD. Table 1. Statistics of the testing accuracy distribution for q-FFL on Vehicle. By setting q > 0, the variance of the final accuracy distribution decreases, and the worst 10% accuracy increases, while the overall accuracy remains fairly constant. Avg. Worst 10% Best 10% Var. q = 0 87.3% 43.0%95.7% 291 q = 5 87.7% 69.9%94.0% 48performance of the device with the highest loss. This is the only work we are aware of that aims to address fairness issues in federated learning. See Appendix D.2 for details of our AFL implementation. We also observe that q-FFL outperforms AFL when q is set appropriately TAB4, Appendix D). We note that q is also tunable depending on the amount of fairness desired. Interestingly, we observe q-FFL converges faster compared with AFL (see Figure 7 in Appendix D.3) in terms of communication rounds. Choosing q. A natural question is determine how q should be tuned in the q-FFL objective. The framework is flexible in that it allows one to choose q to tradeoff between reduced variance of the accuracy distribution and a high average accuracy. In particular, a reasonable approach in practice would be to run Algorithm 1 with multiple q's in parallel to obtain multiple final global models, and then let each device select amongst these based on performance on the validation data. We show benefits of this device-specific strategy in TAB9 in Appendix D.3.Efficiency of q-FedAvg. Finally, we show the efficiency of q-FedAvg by comparing Algorithm 1 with its nonlocal-updating baseline q-FedSGD (Algorithm 3) with the same objective (q > 0). At each communication round, q-FedAvg runs one epoch of local updates on each selected device, while q-FedSGD runs gradient descent using all local training data on that device. In Figure 1 (We summarize the FedAvg algorithm proposed in below. Server chooses a subset S t of K devices at random (each device k is chosen with probability p k) Server sends w t to all chosen devices Each device k updates w t for E epochs of SGD on F k with step-size η to obtain w t+1 kEach chosen device k sends w t+1 k back to the server Server aggregates the w's as w DISPLAYFORM0 We summarize our proposed method q-FedSGD below. 
Algorithm 3 q-FedSGD DISPLAYFORM1 Server chooses a subset S t of K devices at random (each device k is chosen with prob. p k) Server sends w t to all chosen devices 5:Each device computes: DISPLAYFORM0 Each chosen device k sends ∆ Server aggregates the computes w t+1 as: DISPLAYFORM1 Proof. At any point w, we can compute ∇ 2 f (w) DISPLAYFORM0 As a , DISPLAYFORM1 Fairness of q-FFL. We demonstrate the improved fairness of q-FFL on all the four datasets in Figure 2 and TAB2.Comparison with uniform sampling. We compare q-FFL with uniform sampling schemes and report testing accuracy on all datasets in Figure 3. A table with the final accuracies and variances is given in Table 3. While the'weighing each device equally' heuristic tends to outperform our method in training accuracy distributions (Figure 5 and TAB8), our Figure 2. q-FFL leads to fairer test accuracy distributions. With q > 0, the distributions shift towards the center as low accuracies increase at the cost of decreasing high accuracies on some devices. Setting q=0 corresponds to the original objective (Equation FORMULA0). The selected q values for q > 0 on the four datasets, as well as distribution statistics, are shown in TAB2. Figure 3. q-FFL (q > 0) compared with uniform sampling. In terms of testing accuracy, our objective produces more fair solutions than uniform sampling. Distribution statistics are provided in Table 3. Table 3. More statistics indicating the ing fairness of q-FFL compared with the uniform sampling baseline. Again, we observe that the testing accuracy of the worst 10% devices tends to increase, and the variance of the final testing accuracies is smaller. Figure 4. Fix an objective (i.e., using the same q) for each dataset, q-FedAvg (Algorithm 1) compared with q-FedSGD (Algorithm 3). We can see that our method adopting local updating schemes converges faster in terms of communication rounds on most datasets.method produces more fair solutions in terms of testing accuracies. One explanation for this is that uniform sampling is a static method and can easily overfit to devices with very few data points, whereas q-FFL has better generalization properties due to its dynamic nature. Comparison with weighing each device adversarially. We show the of comparing q-FFL with AFL on the two datasets in TAB4. q-FFL outperforms AFL in terms of increasing the lowest accuracies. In addition, q-FFL is more flexible as the parameter q enables the trade-off between increasing the worst accuracies and decreasing the best accuracies. Efficiency of q-FedAvg. In Figure 4, we show that on most datasets, q-FedAvg converges faster than q-FedSGD in terms of communication rounds due to its local updating scheme. We note here that number of rounds is a reasonable metric for comparison between these methods as they process the same amount of data and perform equivalent amount of communication at each round. Our method is also lightweight, and can be easily integrated into existing implementations of federated learning algorithms such as TensorFlow Federated (TFF). We provide full details on the datasets and models used in our experiments. The statistics of four federated datasets are summarized in TAB6. We report total number of devices, total number of samples, and mean and deviation in the sizes of total data points on each device. Additional details on the datasets and models are described below.• Synthetic: We follow a similar set up as that in and impose additional heterogeneity. 
The model is y = argmax(softmax(W x + b)), x ∈ R 60, W ∈ R 10×60, b ∈ R 10, and the goal is to learn a global W and b. Samples (X k, Y k) and local models on each device k satisfies DISPLAYFORM0, where the covariance matrix Σ is diagonal with Σ j,j = j −1.2. Each element in v k is drawn from TAB2 Fair Resource Allocation in Federated Learning N (B k, 1), B k ∼ N. There are 100 devices in total and the number of samples on each devices follows a power law.• Vehicle 1: We use the same Vehicle Sensor (Vehicle) dataset as , modelling each sensor as a device. Each sample has a 100-dimension feature and a binary label indicating whether this sample is on an AAV-type or DWtype vehicle. We train a linear SVM. We tune the hyperparameters in SVM and report the best configuration.• Sent140: It is a collection of tweets from Sentiment140 BID10 ) (Sent140). The task is text sentiment analysis which we model as a binary classification problem. The model takes as input a 25-word sequence, embeds each word into a 300-dimensional space using pretrained Glove , and outputs a binary label after two LSTM layers and one densely-connected layer.• We implement a non-stochastic version of AFL where all devices are selected and updated each round and do a grid search on the AFL hyperparameters, γ w and γ λ. In order to draw a fair comparison, we modify Algorithm 1 by sampling all devices and letting each of them run gradient descent at each round. We use the same public datasets (Adult and Fashion MNIST) as in . We randomly split data on each local device into 80% training set, 10% testing set, and 10% validation set. We tune q from {0.001, 0.01, 0.1, 1, 2, 5, 10, 15} on the validation set and report accuracy distributions on the testing set. For each dataset TAB2 Fair Resource Allocation in Federated Learning Table 6. Average testing accuracy under q-FFL objectives. We show that the ing solutions of q=0 and q¿0 objectives have approximately the same accuracies both with respect to all data points and with respect to all devices. Objective Accuracy w.r. Figure 5. q-FFL (q > 0) compared with uniform sampling in training accuracy. We see that in most cases uniform sampling has higher (and more fair) training accuracies due to the fact that it is overfitting to devices with few samples.we repeat this process for five randomly selected train/test/validation splits, and report the mean and standard deviation across these five runs where applicable. For Synthetic, Vehicle, Sent140, and Shakespeare, optimal 2 q values are 1, 5, 1, and 0.001 respectively. We randomly sample 10 devices each round. We tune the learning rate on FedAvg and use the same learning rate for all experiments of that dataset. The learning rates for Synthetic, Vehicle, Sent140, and Shakespeare are 0.1, 0.01, 0.03, and 0.8 respectively. When running AFL methods, we search for a best γ w and γ λ such that AFL achieves the highest testing accuracy on the device with the highest loss within a fixed number of rounds. For Adult, we use γ w = 0.1 and γ λ = 0.1; for Fashion MNIST, we use γ w = 0.001 and γ λ = 0.01. We use the same γ w as step sizes for q-FedAvg on Adult and Fashion MNIST. In TAB4, q 1 = 0.01, q 2 = 2 for q-FFL on Adult and q 1 = 5, q 2 = 15 for q-FFL on Fashion MNIST. The number of local epochs is fixed to 1 whenever we do local updates. Average testing accuracy with respect to devices. We have shown that q-FFL leads to more fair accuracy distributions while maintaining approximately the same testing accuracies in Section 4. 
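The synthetic dataset described above can be generated along the following lines (a sketch; the variances of u_k and B_k and the exact power-law mechanism for device sizes are not fully specified in the text, so we fix them to illustrative values):

```python
import numpy as np

def synthetic_device(rng, n_samples, dim=60, n_classes=10):
    """One device of the synthetic benchmark (sketch). The heterogeneity variances of
    u_k and B_k are elided above, so we simply fix them to 1 here."""
    u_k = rng.normal(0.0, 1.0)                        # device-specific model mean
    W = rng.normal(u_k, 1.0, size=(n_classes, dim))
    b = rng.normal(u_k, 1.0, size=n_classes)
    B_k = rng.normal(0.0, 1.0)
    v_k = rng.normal(B_k, 1.0, size=dim)              # device-specific feature mean
    cov = np.diag(np.arange(1, dim + 1, dtype=float) ** -1.2)  # Sigma_{j,j} = j^{-1.2}
    X = rng.multivariate_normal(v_k, cov, size=n_samples)
    y = np.argmax(X @ W.T + b, axis=1)                # argmax of softmax = argmax of logits
    return X, y

rng = np.random.default_rng(0)
# 100 devices whose sample counts roughly follow a power law (capped for illustration).
sizes = np.clip(rng.zipf(2.0, size=100) * 10, 10, 1000)
devices = [synthetic_device(rng, int(n)) for n in sizes]
print(len(devices), devices[0][0].shape)
```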
Note that we report average testing accuracy with respect to all data points in Table 1 and 2. We observe similar on average accuracy with respect to all devices between q = 0 and q > 0 objectives, as shown in Table 6.Efficiency of q-FFL compared with AFL. One added benefit of q-FFL is that it leads to faster convergence than AFL even when we use non-local-updating methods for both objectives. In Figure 7, we show that when fixing the final testing accuracy for the single worst device, q-FFL converges faster than AFL. As the number of devices increases (from Fashion MNIST to Vehicle), the performance gap between AFL and q-FFL becomes larger because AFL introduces larger variance TAB2 Fair Resource Allocation in Federated Learning Figure 6. The convergence speed of q-FFL compared with FedAvg. We plot the distance to the highest accuracy achieved versus communication rounds. Although q-FFL with q > 0 is a more difficult optimization problem, for the q values we choose that could lead to more fair , the convergence speed is comparable to that of q = 0.Choosing q. We solve q-FFL with q ∈ {0, 0.001, 0.01, 0.1, 1, 2, 5, 10} in parallel. After training, each device selects the best ing model based on the validation data and tests the performance of the model using testing set. We report the in terms of testing accuracy in TAB9. Using this strategy, accuracy variance is reduced and average accuracy is increased. However, this will induce more local computation and additional communication load in each round. But this does not increase the number of communication rounds. Convergence speed of q-FFL. In Section 4, we show that our solver q-FedAvg using local updating schemes converges significantly faster than q-FedSGD. A natural question one might ask is: will the q-FFL (q > 0) objective slows the convergence compared with FedAvg? We empirically investigate this on real datasets. We use q-FedAvg to solve q-FFL, and compare it with FedAvg. As demonstrated in Figure 6, the q values we are choosing that in more fair solutions do not significantly slowdown convergence. 52.1% ±.3% 42.1% ± 2.1% 69.0% ± 4.4% 54 ± 27 multiple q's 52.0 ± 1.5% % 41.0% ± 4.3% 72.0% ± 4.8% 72 ± 32
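The device-specific strategy for choosing q can be written in a few lines (a sketch with our own names and toy numbers; val_acc and test_acc hold, for each candidate q and each device, that device's validation and test accuracy under the corresponding global model):

```python
import numpy as np

def device_specific_q_selection(val_acc, test_acc):
    """val_acc, test_acc: arrays of shape (num_q_values, num_devices).
    Each device picks the q that maximizes its own validation accuracy; we report
    its test accuracy under that pick."""
    best_q_idx = np.argmax(val_acc, axis=0)
    picked_test = test_acc[best_q_idx, np.arange(test_acc.shape[1])]
    return best_q_idx, picked_test

qs = [0.0, 0.001, 0.01, 0.1, 1.0, 2.0, 5.0, 10.0]
rng = np.random.default_rng(0)
val = rng.uniform(0.4, 0.9, size=(len(qs), 20))        # toy accuracies for 20 devices
test = np.clip(val + rng.normal(0.0, 0.02, size=val.shape), 0.0, 1.0)
idx, acc = device_specific_q_selection(val, test)
print([qs[i] for i in idx][:5], float(acc.mean()), float(acc.var()))
```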
We propose a novel optimization objective that encourages fairness in heterogeneous federated networks, and develop a scalable method to solve it.
We propose a novel autoencoding model called Pairwise Augmented GANs. We train a generator and an encoder jointly and in an adversarial manner. The generator network learns to sample realistic objects. In turn, the encoder network at the same time is trained to map the true data distribution to the prior in latent space. To ensure good reconstructions, we introduce an augmented adversarial reconstruction loss. Here we train a discriminator to distinguish two types of pairs: an object with its augmentation and the one with its reconstruction. We show that such adversarial loss compares objects based on the content rather than on the exact match. We experimentally demonstrate that our model generates samples and reconstructions of quality competitive with state-of-the-art on datasets MNIST, CIFAR10, CelebA and achieves good quantitative on CIFAR10. Deep generative models are a powerful tool to sample complex high dimensional objects from a low dimensional manifold. The dominant approaches for learning such generative models are variational autoencoders (VAEs) and generative adversarial networks (GANs) BID12. VAEs allow not only to generate samples from the data distribution, but also to encode the objects into the latent space. However, VAE-like models require a careful likelihood choice. Misspecifying one may lead to undesirable effects in samples and reconstructions (e.g., blurry images). On the contrary, GANs do not rely on an explicit likelihood and utilize more complex loss function provided by a discriminator. As a , they produce higher quality images. However, the original formulation of GANs BID12 lacks an important encoding property that allows many practical applications. For example, it is used in semi-supervised learning, in a manipulation of object properties using low dimensional manifold BID7 and in an optimization utilizing the known structure of embeddings BID11.VAE-GAN hybrids are of great interest due to their potential ability to learn latent representations like VAEs, while generating high-quality objects like GANs. In such generative models with a bidirectional mapping between the data space and the latent space one of the desired properties is to have good reconstructions (x ≈ G(E(x))). In many hybrid approaches BID30 BID34 BID36 BID3 BID33 as well as in VAE-like methods it is achieved by minimizing L 1 or L 2 pixel-wise norm between x and G(E(x)). However, the main drawback of using these standard reconstruction losses is that they enforce the generative model to recover too many unnecessary details of the source object x. For example, to reconstruct a bird picture we do not need an exact position of the bird on an image, but the pixel-wise loss penalizes a lot for shifted reconstructions. Recently, improved ALI model BID8 by introducing a reconstruction loss in the form of a discriminator which classifies pairs (x, x) and (x, G(E(x))). However, in such approach, the discriminator tends to detect the fake pair (x, G(E(x))) just by checking the identity of x and G(E(x)) which leads to vanishing gradients. In this paper, we propose a novel autoencoding model which matches the distributions in the data space and in the latent space independently as in BID36. To ensure good reconstructions, we introduce an augmented adversarial reconstruction loss as a discriminator which classifies pairs (x, a(x)) and (x, G(E(x))) where a(·) is a stochastic augmentation function. 
This enforces the DISPLAYFORM0 discriminator to take into account content invariant to the augmentation, thus making training more robust. We call this approach Pairwise Augmented Generative Adversarial Networks (PAGANs).Measuring a reconstruction quality of autoencoding models is challenging. A standard reconstruction metric RMSE does not perform the content-based comparison. To deal with this problem we propose a novel metric Reconstruction Inception Dissimilarity (RID) which is robust to content-preserving transformations (e.g., small shifts of an image). We show qualitative on common datasets such as MNIST BID19, CIFAR10 BID17 and CelebA BID21. PAGANs outperform existing VAE-GAN hybrids in Inception Score BID31 and Fréchet Inception Distance BID14 except for the recently announced method PD-WGAN BID10 on CIFAR10 dataset. Let us consider an adversarial learning framework where our goal is to match the true distribution p * (x) to the model distribution p θ (x). As it was proposed in the original paper BID12, the model distribution p θ (x) is induced by the generator G θ: z − → x where z is sampled from a prior p(z). To match the distributions p * (x) and p θ (x) in an adversarial manner, we introduce a discriminator DISPLAYFORM0 It takes an object x and predicts the probability that this object is sampled from the true distribution p * (x). The training procedure of GANs BID12 ) is based on the minimax game of two players: the generator G θ and the discriminator D ψ. This game is defined as follows DISPLAYFORM1 where V (θ, ψ) is a value function for this game. The optimal discriminator D ψ * given fixed generator G θ is DISPLAYFORM2 DISPLAYFORM3 The solution of the reconstruction matching problem ensures that reconstructions G θ (E ϕ (x)) correspond to the source object x up to defined random augmentations a(x). In PAGANs model we introduce the minimax game for training the adversarial distance between the reconstructions and augmentations of the source object x. We consider the discriminator D ψ which takes a pair (x, y) and classifies it into one of the following classes:• the real class: pairs (x, y) from the distribution p * (x)r(y|x), i.e. the object x is taken from the true distribution p * (x) and the second y is obtained from the x by the random augmentation a(x); • the fake class: pairs (x, y) from the distribution DISPLAYFORM0 i.e. x is sampled from p * (x) then z is generated from the conditional distribution q ϕ (z|x) by the encoder E ϕ (x) and y is produced by the generator G ϕ (z) from the conditional model distribution p θ (y|z).Then the minimax problem is DISPLAYFORM1 where DISPLAYFORM2 Let us prove that such minimax game will match the distributions r(y|x) and p θ,ϕ (y|x). At first, we find the optimal discriminator: Proposition 1. Given a fixed generator G θ and a fixed encoder E ϕ, the optimal discriminator D ψ * is DISPLAYFORM3 Proof. Given in Appendix A.1.Then we can prove that given an optimal discriminator the value function V (θ, ϕ, ψ) is equivalent to the expected Jensen-Shanon divergence between the distributions r(y|x) and p θ,ϕ (y|x). Proposition 2. The minimization of the value function V under an optimal discriminator D ψ * is equivalent to the minimization of the expected Jensen-Shanon divergence between r(y|x) and p θ,ϕ (y|x), i.e. DISPLAYFORM4 Proof. Given in Appendix A.2.If r(y|x) = δ x (y) then the optimal discriminator D ψ * (x, y) will learn an indicator I{x = y} as was proved in. 
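A minimal PyTorch sketch of the pairwise reconstruction discriminator loss described above (module and variable names are placeholders; the pair is concatenated width-wise, the configuration the experiments later settle on): the pair (x, a(x)) is labeled real and the pair (x, G(E(x))) fake.

```python
import torch
import torch.nn.functional as F

def pair_discriminator_loss(D_pair, x, x_aug, x_rec):
    """Adversarial reconstruction loss for the pair discriminator (sketch).
    D_pair maps a width-wise concatenated image pair to a single real/fake logit."""
    real_pair = torch.cat([x, x_aug], dim=-1)           # (x, a(x))     -> "real"
    fake_pair = torch.cat([x, x_rec.detach()], dim=-1)  # (x, G(E(x)))  -> "fake"
    real_logits, fake_logits = D_pair(real_pair), D_pair(fake_pair)
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
```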
As a consequence, the objectives of the generator and the encoder are very unstable and have vanishing gradients in practice. On the contrary, if the distribution r(y|x) is non-degenerate as in our model then the value function V (θ, ϕ, ψ) will be well-behaved and much more stable which we observed in practice. We obtain that for the generator and the encoder we should optimize the sum of two value functions:• the generator's objective: DISPLAYFORM0 • the encoder's objective: DISPLAYFORM1 Draw N samples from the dataset and the prior DISPLAYFORM2 Compute generator loss DISPLAYFORM3 Gradient update on discriminator networks DISPLAYFORM4 Gradient update on generator-encoder networks until convergenceIn practice in order to speed up the training we follow BID12 and use more stable objectives replacing log(1 − D(·)) with − log(D(·)). See Figure 1 for the description of our model and Algorithm 1 for an algorithmic illustration of the training procedure. We can straightforwardly extend the definition of PAGANs model to f -PAGANs which minimize the f -divergence and to WPAGANs which optimize the Wasserstein-1 distance. More detailed analysis of these models is placed in Appendix C. Recent papers on VAE-GAN hybrids explore different ways to build a generative model with an encoder part. One direction is to apply adversarial training in the VAE framework to match the variational posterior distribution q(z|x) and the prior distribution p(z) BID23 or to match the marginal q(z) and p(z) BID22 BID33. Another way within the VAE model is to introduce the discriminator as a part of a data likelihood BID18 BID3. Within the GANs framework, a common technique is to regularize the model with the reconstruction loss term BID5 BID30 BID34.Another principal approach is to train the generator and the encoder BID8 simultaneously in a fully adversarial way. These methods match the joint distributions p * (x)q(z|x) and p θ (x|z)p(z) by training the discriminator which classifies the pairs (x, z). ALICE model introduces an additional entropy loss for dealing with the non-identifiability issues in ALI model. approximated the entropy loss with the cycle-consistency term which is equivalent to the adversarial reconstruction loss. The model of BID27 puts ALI to the VAE framework where the same joint distributions are matched in an adversarial manner. As an alternative, BID34 train generator and encoder by optimizing the minimax game without the discriminator. Optimal transport approach is also explored, BID10 introduce an algorithm based on primal and dual formulations of an optimal transport problem. In PAGANs model the marginal distributions in the data space p * (x) and p θ (x) and in the latent space p(z) and q(z) are matched independently as in BID36. Additionally, the augmented adversarial reconstruction loss is minimized by fooling the discriminator which classifies the pairs (x, a(x)) and (x, G(E(x))). In this section, we validate our model experimentally. At first, we compare PAGAN with other similar methods that allow performing both inference and generation using Inception Score and Fréchet Inception Distance. Secondly, to measure reconstruction quality, we introduce Reconstruction Inception Dissimilarity (RID) and prove its usability. In the last two experiments we show the importance of the adversarial loss and augmentations. 
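For completeness, a sketch of the corresponding generator/encoder step of Algorithm 1, using the more stable -log D(.) objectives mentioned above (D_x, D_z and D_pair are placeholders for the data-space, latent-space and pairwise discriminators; E(x) is taken to return a latent sample from q_ϕ(z|x)):

```python
import torch
import torch.nn.functional as F

def generator_encoder_loss(G, E, D_x, D_z, D_pair, x, z_prior):
    """Generator/encoder objective (sketch): fool the data-space, latent-space,
    and pairwise reconstruction discriminators simultaneously."""
    def fool(logits):  # the more stable -log D(.) objective
        return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    x_fake = G(z_prior)                    # match p_theta(x) to p*(x)
    z_fake = E(x)                          # match q(z) to the prior p(z)
    x_rec = G(E(x))                        # reconstruction of x
    pair = torch.cat([x, x_rec], dim=-1)   # the pair (x, G(E(x)))
    return fool(D_x(x_fake)) + fool(D_z(z_fake)) + fool(D_pair(pair))
```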
For the architecture choice we used deterministic DCGAN 1 generator and discriminator networks provided by pfnet-research 2, the encoder network has the same architecture as the discriminator except for the output dimension. The encoder's output is a factorized normal distribution. Thus DISPLAYFORM0 ϕ (x)I) where µ ϕ, σ ϕ are outputs of the encoder network. The discriminator D(z) architecture is chosen to be a 2 layer MLP with 512, 256 hidden units. We also used the same default hyperparameters as provided in the repository and applied a spectral normalization following BID24. For the augmentation a(x) defined in Section 3 we used a combination of reflecting 10% pad and the random crop to the same image size. The prior distribution p(z) is chosen to be a standard distribution N (0, I). To evaluate Inception Score and Fréchet Inception Distance we used the official implementation provided in tensorflow 1.10.1 BID0.To optimize objectives FORMULA2, FORMULA2, we need to have a discriminator working on pairs (x, y). This can be done using special network architectures like siam networks BID4 or via an image concatenation. The latter approach can be implemented in two concurrent ways: concatenating channel or widthwise. Empirically we found that the siam architecture does not lead to significant improvement and concatenating width wise to be the most stable. We use this configuration in all the experiments. To see whether our method provides good quality samples from the prior, we compared our model to related works that allow an inverse mapping. We performed our evaluations on CIFAR10 dataset since quantitative metrics are available there. Considering Fréchet Inception Distance (FID), our model outperforms all other methods. Inception Score shows that PAGANs significantly better than others except for recently announced PD-WGAN. Quantitative are given in TAB0. For S-VAE we report IS that is reproduced using officially provided code and hyperparameters 3. Plots with samples and reconstructions for CIFAR10 dataset are provided in Figure 2. Additional visual for more datasets can be found in Appendix E.3. BID33 87.7 4.18 ± 0.04 ALI 5.34 ± 0.04 AGE BID34 39.51 5.9 ± 0.04 ALICE 6.02 ± 0.03 S-VAE BID6 6.055 α-GANs BID30 6.2 AS-VAE BID28 6.3 PD-WGAN, λ mix = 0 BID10 33.0 6.70 ± 0.09 PAGAN (ours) 32.84 6.56 ± 0.06 The traditional approach to estimate the reconstruction quality is to compute RMSE distance from source images to reconstructed ones. However, this metric suffers from focusing on exact reconstruction and is not content aware. RMSE penalizes content-preserving transformations while allows such undesirable effect as blurriness which degrades visual quality significantly. We propose a novel metric Reconstruction Inception Dissimilarity (RID) which is based on a pre-trained classification network and is defined as follows: DISPLAYFORM0 where p(y|x) is a pre-trained classifier that estimates the label distribution given an image. Similar to BID31 we use a pre-trained Inception Network BID32 to calculate softmax outputs. Low RID indicates that the content did not change after reconstruction. To calculate standard deviations, we use the same approach as for IS and split test set on 10 equal parts 4. Moreover RID is robust to augmentations that do not change the visual content and in this sense is much better than RMSE. To compare new metric with RMSE, we train a vanilla VAE with resnet-like architecture on CI-FAR10. 
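The augmentation a(x) from this setup (10% reflection pad followed by a random crop back to the original size) can be expressed, for example, with torchvision transforms; the snippet below is our own sketch for 32x32 CIFAR10-sized inputs:

```python
import torch
import torchvision.transforms as T

def make_augmentation(image_size=32, pad_fraction=0.10):
    """a(x): reflect-pad by roughly 10% of the image size, then randomly crop back."""
    pad = max(1, int(round(pad_fraction * image_size)))
    return T.Compose([
        T.Pad(pad, padding_mode="reflect"),
        T.RandomCrop(image_size),
    ])

augment = make_augmentation()
x = torch.rand(3, 32, 32)        # dummy CIFAR10-sized image tensor
x_aug = augment(x)               # content preserved, spatial layout jittered
print(x_aug.shape)               # torch.Size([3, 32, 32])
```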
We compute RID for its reconstructions and real images with the augmentation (mirror 10% pad + random crop). In TAB1 we show that RMSE for VAE is better in comparison to augmented images (AUG), but we are not satisfied with its reconstructions (see Figure 12 in Appendix E.4), Figure 3 provides even more convincing . RID allows a fair comparison, for VAE it is dramatically higher (44.33) than for AUG (1.57). Value 1.57 for AUG says that KL divergence is close to zero and thus content is almost not changed. We also provide estimated RID and RMSE for AGE that was publicly available 5. From TAB1 we see that PAGANs outperform AGE which reflects that our model has better reconstruction quality. To prove the importance of an adversarial loss, we experiment replacing adversarial loss with the 4 Split is done sequentially without shuffling 5 Pretrained AGE: https://github.com/DmitryUlyanov/AGE TAB2. IS and FID suggest that our model without adversarial loss performed worse in generation. Reconstruction quality significantly dropped considering RID. Visual in Appendix E.1 confirm our quantitative findings. In ALICE model an adversarial reconstruction loss was implemented without an augmentation. As we discussed in Section 1 its absence leads to undesirable effects. Here we run an experiment to show that our model without augmentation performs worse. Quantitative provided in TAB2 illustrate that our model without an augmentation fails to recover both good reconstruction and generation properties. Visual comparisons can be found in Appendix E.2. Using the obtained from the last two experiments we conclude that adversarial reconstruction loss works significantly better with augmentation. Experiments checking augmentation effects (see Appendix B for details) conclude the following. A good augmentation: 1) is required to be non-deterministic, 2) should preserve the content of source image, 3) should be hard to use pixel-wise comparison for discriminator. In this paper, we proposed a novel framework with an augmented adversarial reconstruction loss. We introduced RID to estimate reconstructions quality for images. It was empirically shown that this metric could perform content-based comparison of reconstructed images. Using RID, we proved the value of augmentation in our experiments. We showed that the augmented adversarial loss in this framework plays a key role in getting not only good reconstructions but good generated images. Some open questions are still left for future work. More complex architectures may be used to achieve better IS and RID. The random shift augmentation may not the only possible choice, and other smart choices are also possible. A.1 PROOF OF PROPOSITION 1 (OPTIMAL DISCRIMINATOR) Proposition 1. Given a fixed generator G θ and a fixed encoder E ϕ, the optimal discriminator D ψ * is DISPLAYFORM0 Proof. For fixed generator and encoder, the value function V (ψ) with respect to the discriminator is DISPLAYFORM1 Let us introduce new variables and notations DISPLAYFORM2 Then DISPLAYFORM3 Using the of the paper BID12 we obtain DISPLAYFORM4 A.2 PROOF OF PROPOSITION 2Proposition 2. The minimization of the value function V under an optimal discriminator D ψ * is equivalent to the minimization of the expected Jensen-Shanon divergence between r(y|x) and p θ,ϕ (y|x), i.e. DISPLAYFORM5 Proof. As in the paper BID12 we rewrite the value function V (θ, ϕ) for the optimal discriminator D ψ * as follows DISPLAYFORM6 APPENDIX B CHOICE OF AUGMENTATION Augmentation choice might be problem specific. 
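The exact RID expression is not reproduced here, but from its description (a pre-trained classifier's label distributions on a source image and on its reconstruction, exponentiated analogously to IS, with low values meaning unchanged content) a plausible implementation is the sketch below; the exp of the average KL(p(y|x) || p(y|G(E(x)))) is our reading, not a quoted definition:

```python
import numpy as np

def reconstruction_inception_dissimilarity(p_real, p_rec, eps=1e-12):
    """RID sketch: p_real[i] and p_rec[i] are the pre-trained classifier's softmax
    outputs for the i-th source image and its reconstruction.
    RID = exp(mean_i KL(p_real[i] || p_rec[i])), so identical predictions give 1."""
    p = np.clip(np.asarray(p_real, dtype=float), eps, 1.0)
    q = np.clip(np.asarray(p_rec, dtype=float), eps, 1.0)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=1)
    return float(np.exp(kl.mean()))

p = np.array([[0.9, 0.1], [0.2, 0.8]])
print(reconstruction_inception_dissimilarity(p, p))        # -> 1.0 (content unchanged)
print(reconstruction_inception_dissimilarity(p, p[::-1]))  # clearly larger
```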
Thereby we additionally study different augmentations and provide an intuition how to choose the right transformation. Theory suggests to pick up a stochastic augmentation. The practical choice should take into account the desired properties of reconstructions. A random shift of an image by a small margin is sufficient to create good quality reconstructions. However, this shift should not be large because it may inherit augmentation artifacts. This can be spotted beforehand just looking at pairs (x, a(x)). Once these pairs are not satisfactory, model reconstructions would be bad as well. In this experiment, we investigate the effects caused by the padding and random crop augmentation. We choose different padding size (comparatively to the original image size) and plot FID, RID and IS metrics. The provide the intuition to choose padding size (see Figure 4). Padding should be chosen to maintain visual content while making impossible to compare augmented and original images by nearly element-wise comparison. Larger padding cause undesirable effects in reconstructions that are captured by RID (see Figure 5). Visual quality of samples, on the other hand is slightly better with more aggressive augmentation considering FID metric, what is explained by more robust training due to less mode collapse problem. We also checked two different augmentation types: Gaussian blur and random contrast (see Figures 6, 7). Both augmentations led to highly unstable training and did not yield satisfactory (IS was 2.15 and 4.18 respectively). Therefore we conclude that a good augmentation is better to change spatial image structure preserving content (as padding does) what will force the discriminator to take content into account. • is required to be non-deterministic • should preserve the content of source image • should be hard to use pixel-wise comparison for discriminator APPENDIX C EXTENDING PAGANS C.1 f -DIVERGENCE PAGANS f -GANs BID26 are the generalization of GAN approach. BID26 introduces the model which minimizes the f -divergence D f BID1 between the true distribution p * (x) and the model distibution p θ (x), i.e. it solves the optimization problem DISPLAYFORM0 where f: R + − → R is a convex, lower-semicontinuous function satisfying f = 0.The minimax game for f -GANs is defined as DISPLAYFORM1 where V (θ, ψ) is a value function and f * is a Fenchel conjugate of f BID25. For fixed parameters θ, the optimal T ψ * (x) is f p * (x) DISPLAYFORM2. Then the value function V (θ, ψ *) for optimal parameters ψ * equals to f -divergence between the distributions p * and p θ BID25, i.e. DISPLAYFORM3
We propose a novel autoencoding model with an augmented adversarial reconstruction loss. We introduce a new metric for content-based assessment of reconstructions.
We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables. Inspired by dropout, a popular tool for regularization and model ensemble, we assign sparse priors to the weights in deep neural networks (DNN) in order to achieve automatic “dropout” and avoid over-fitting. By alternatively sampling from posterior distribution through stochastic gradient Markov Chain Monte Carlo (SG-MCMC) and optimizing latent variables via stochastic approximation (SA), the trajectory of the target weights is proved to converge to the true posterior distribution conditioned on optimal latent variables. This ensures a stronger regularization on the over-fitted parameter space and more accurate uncertainty quantification on the decisive variables. Simulations from large-p-small-n regressions showcase the robustness of this method when applied to models with latent variables. Additionally, its application on the convolutional neural networks (CNN) leads to state-of-the-art performance on MNIST and Fashion MNIST datasets and improved resistance to adversarial attacks. Bayesian deep learning, which evolved from Bayesian neural networks (; BID4, provides an alternative to point estimation due to its close connection to both Bayesian probability theory and cutting-edge deep learning models. It has been shown of the merit to quantify uncertainty BID6, which not only increases the predictive power of DNN, but also further provides a more robust estimation to enhance AI safety. Particularly, BID5 BID3 described dropout as a variational Bayesian approximation. Through enabling dropout in the testing period, the randomly dropped neurons generate some amount of uncertainty with almost no added cost. However, the dropout Bayesian approximation is variational inference (VI) based thus it is vulnerable to underestimating uncertainty. MCMC, known for its asymptotically accurate posterior inference, has not been fully investigated in DNN due to its unscalability in dealing with big data and large models. Stochastic gradient Langevin dynamics (SGLD) , the first SG-MCMC algorithm, tackled this issue by adding noise to a standard stochastic gradient optimization, smoothing the transition between optimization and sampling. Considering the pathological curvature that causes the SGLD methods inefficient in DNN models, BID15 proposed combining adaptive preconditioners with SGLD (pSGLD) to adapt to the local geometry and obtained state-of-the-art performance on MNIST dataset. To avoid SGLD's random-walk behavior, BID3 proposed using stochastic gradient Hamiltonian Monte Carlo (SGHMC), a second-order Langevin dynamics with a large friction term, which was shown to have lower autocorrelation time and faster convergence BID2. used SGHMC with GANs BID8 ) to achieve a fully probabilistic inference and showed the Bayesian GAN model with only 100 labeled images was able to achieve 99.3% testing accuracy in MNIST dataset.;; provided theoretical interpretations of SGLD from the perspective of non-convex optimization, echoing the empirical fact that SGLD works well in practice. When the number of predictors exceeds the number of observations, applying the spike-and-slab priors is particularly powerful and efficient to avoid over-fitting by assigning less probability mass on We denote the decaying learning rate at time k by (k), the entire data by DISPLAYFORM0, where d i = (x i, y i), the log of posterior by L(β), ∇ as the gradient of any function in terms of β. 
The minibatch of data B is of size n with indices S = {s 1, s 2, ..., s n}, where s i ∈ {1, 2, ..., N}. Stochastic gradient ∇L(β) from a mini-batch of data B randomly sampled from D is used to approximate the true gradient ∇L(β):∇L(β) = ∇ log P(β) + N n i∈S ∇ log P(d i |β).The stochastic gradient Langevin dynamics (no momentum) is formed as follows: DISPLAYFORM1 where τ > 0 denotes the temperature, G is a positive definite matrix to precondition the dynamics. It has been shown that SGLD asymptotically converges to a stationary distribution π(β|D) ∝ e τ L(β) . As τ increases, the algorithm tends towards optimization with underestimated uncertainty. Another variant of SG-MCMC, SGHMC BID3 ), proposes to use second-order Langevin dynamics to generate samples: DISPLAYFORM2 where r is the momentum item, M is the mass,B is an estimate of the error variance from the stochastic gradient, C is a user-specified friction term to counteracts the noisy gradient. Dropout has been proven successful, as it alleviates over-fitting and provides an efficient way of making bagging practical for ensembles of countless sub-networks. Dropout can be interpreted as assigning the Gaussian mixture priors on the neurons BID7. To mimic Dropout in our Bayesian CNN models, we assign the spike-and-slab priors on the most fat-tailed weights in FC1 FIG0. From the Bayesian perspective, the proposed robust algorithm distinguishes itself from the dropout approach in treating the priors: our algorithm keeps updating the priors during posterior inference, rather than fix it. The inclusion of scaled mixture priors in deep learning models were also studied in BID1; BID15 with encouraging . However, to the best of our knowledge, none of the existing SG-MCMC methods could deal with complex posterior with latent variables. Intuitively, the Bayesian formulation with model averaging and the spike-and-slab priors is expected to obtain better predictive performance through averages from a "selective" group of "good" submodels, rather than averaging exponentially many posterior probabilities roughly. For the weight priors of the rest layers (dimension u), we just assume they follow the standard Gaussian distribution, while the biases follow improper uniform priors. Similarly to the hierarchical prior in the EM approach to variable selection (EMVS) (Rořková and), we assume the weights β ∈ R p in FC1 follow the spike-and-slab mixture prior DISPLAYFORM0 where DISPLAYFORM1 for each j and 0 < v 0 < v 1. By introducing the latent variable γ j = 0 or 1, the mixture prior is represented as DISPLAYFORM2 The interpretation is: if γ j = 0, then β j is close to 0; if γ j = 1, the effect of β j on the model is intuitively large. The likelihood of this model given a mini-batch of data {( DISPLAYFORM3 where ψ(x i ; β) can be a mapping for logistic regression or linear regression, or a mapping based on a series of nonlinearities and affine transformations in the deep neural network. In the classification formulation, y i ∈ {1, . . ., K} is the response value of the i-th example. In addition, the variance σ 2 follows an inverse gamma prior DISPLAYFORM4 The i.i.d. Bernoulli prior is used since there is no structural information in the same layer. DISPLAYFORM5 Finally, our posterior density follows DISPLAYFORM6 The EMVS approach is efficient in identifying potential sparse high posterior probability submodels on high-dimensional regression (Rořková and) and classification problem . 
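As a point of reference, the SGLD transition referenced above (taking the identity preconditioner G = I and temperature τ = 1, so that the iterates target e^{L(β)}) can be sketched as follows; the function names are ours, and the minibatch log-likelihood gradient is rescaled by N/n as in the stochastic gradient definition:

```python
import numpy as np

def sgld_step(beta, grad_log_prior, grad_log_lik_batch, N, n, lr, rng):
    """One SGLD step (sketch, G = I, tau = 1). The minibatch log-likelihood gradient
    is rescaled by N/n so the stochastic gradient is unbiased for the full posterior."""
    grad = grad_log_prior(beta) + (N / n) * grad_log_lik_batch(beta)
    noise = rng.normal(0.0, np.sqrt(lr), size=beta.shape)
    return beta + 0.5 * lr * grad + noise

# Toy usage: sample a 1-d standard normal "posterior" (flat prior, N(0,1) likelihood).
rng = np.random.default_rng(0)
beta, samples = np.zeros(1), []
for _ in range(5000):
    beta = sgld_step(beta, lambda b: 0.0, lambda b: -b, N=1, n=1, lr=0.01, rng=rng)
    samples.append(beta.copy())
print(float(np.mean(samples)), float(np.var(samples)))   # roughly 0 and 1
```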
These characteristics are helpful for large neural network computation, thus we refer the stochastic version of the EMVS algorithm as Expectation Stochastic-Maximization (ESM). Due to the existence of latent variables, optimizing π(β, σ 2, δ|B) directly is difficult. We instead iteratively optimize the "complete-data" posterior log π(β, σ 2, δ, γ|B), where the latent indicator γ is treated as "missing data".More precisely, the ESM algorithm is implemented by iteratively increasing the objective function DISPLAYFORM0 where E γ|· denotes the conditional expectation DISPLAYFORM1 ) at the k-th iteration, we first compute the expectation of Q, then alter (β, σ, δ) to optimize it. For the conjugate spike-slab hierarchical prior formulation, the objective function Q is of the form DISPLAYFORM2 where DISPLAYFORM3 spike-and-slab priors in the "sparse" layer, e.g. FC1 DISPLAYFORM4 Gaussian priors in other layers DISPLAYFORM5 and DISPLAYFORM6 The physical meaning of E γ|· γ j in Q 2 is the probability ρ j, where ρ ∈ R p, of β j having a large effect on the model. Formally, we have DISPLAYFORM0 where DISPLAYFORM1 Bernoulli prior enables us to use P(γ j = 1|δ DISPLAYFORM2 The other conditional expectation comes from a weighted average κ j, where κ ∈ R p . DISPLAYFORM3 Since there is no closed-form optimal solution for β here, to optimize Q 1 with respect to β, we use Adam , a popular algorithm with adaptive learning rates, to train the model. In order to optimize Q 1 with respect to σ, by denoting DISPLAYFORM0 as V, following the formulation in and Rořková and we have: DISPLAYFORM1 To optimize Q 2, a closed-form solution can be derived from Eq. and Eq.. DISPLAYFORM2 Algorithm 1 SGLD-SA with spike and slab priors DISPLAYFORM3 Initialize: DISPLAYFORM4 end for The EMVS algorithm is designed for linear regression models, although the idea can be extended to nonlinear models. However, when extending to nonlinear models, such as DNNs, the M-step will not have a closed-form update anymore. A trivial implementation of the M-step will likely cause a local-trap problem. To tackle this issue, we replace the E-step and the M-step by SG-MCMC with the prior hyperparameters tuned via stochastic approximation BID0: DISPLAYFORM0 where DISPLAYFORM1 is the gradient-like function in stochastic approximation (see details in Appendix A.2), g θ (·) is the mapping detailed in Eq., Eq., Eq. and Eq. to derive the optimal θ based on the current β, the step size ω (k) can be set as A(k + Z) −α with α ∈ (0.5, 1]. The interpretation of this algorithm is that we sample β (k+1) fromL(β (k), θ (k) ) and adaptively optimize θ (k+1) from the mapping g θ (k). We expect to obtain an augmented sequence as follows: DISPLAYFORM2 We show the (local) L 2 convergence rate of SGLD-SA below and present the details in Appendix B.Theorem 1 (L 2 convergence rate). For any α ∈ and any compact subset Ω ∈ R 2p+2, under assumptions in Appendix B.1, the algorithm satisfies: there exists a constant λ such that DISPLAYFORM3 where t(Ω) = inf{k : DISPLAYFORM4 Corollary 1. For any α ∈ and any compact subset Ω ∈ R 2p+2, under assumptions in Appendix B.2, the distribution of β (k) in FORMULA1 converges weakly to the invariant distribution e L(β,θ *) as → 0.The key to guaranteeing the consistency of the latent variable estimators is from stochastic approximation and the fact of DISPLAYFORM5 The non-convex optimization of SGLD , in particular the ability to escape shallow local optima , ensures the robust optimization of the latent variables. 
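The two conditional expectations above have simple closed forms under the Gaussian spike-and-slab prior; the sketch below follows the standard EMVS E-step (we assume ρ_j is the posterior inclusion probability of β_j and κ_j = ρ_j/v_1 + (1 − ρ_j)/v_0, which is our reading of the elided equations):

```python
import numpy as np
from scipy.stats import norm

def e_step(beta, sigma, delta, v0, v1):
    """EMVS-style E-step (sketch): rho_j = P(gamma_j = 1 | beta_j, sigma, delta) and
    kappa_j = E[1 / v_{gamma_j}] = rho_j / v1 + (1 - rho_j) / v0."""
    slab = delta * norm.pdf(beta, loc=0.0, scale=sigma * np.sqrt(v1))
    spike = (1.0 - delta) * norm.pdf(beta, loc=0.0, scale=sigma * np.sqrt(v0))
    rho = slab / (slab + spike)
    kappa = rho / v1 + (1.0 - rho) / v0
    return rho, kappa

beta = np.array([2.1, 0.03, -1.7, 0.001])
rho, kappa = e_step(beta, sigma=1.0, delta=0.5, v0=1e-3, v1=100.0)
print(np.round(rho, 3))   # weights with large |beta_j| get inclusion probability near 1
```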
Furthermore, the mappings of Eq., Eq., Eq. and Eq. all satisfy the assumptions on g in a compact subset Ω (Appendix B), which enable us to apply theorem 1 to SGLD-SA. Because SGHMC proposed by BID3 is essentially a second-order Langevin dynamics and yields a stationary distribution given an accurate estimation of the error variance from the stochastic gradient, the property of SGLD-SA also applies to SGHMC-SA. Corollary 2. For any α ∈ and any compact subset Ω ∈ R 2p+2, under assumptions in Appendix B.2, the distribution of β (k) from SG-MCMC-SA converges weakly to the invariant distribution π(β, ρ *, κ *, σ *, δ * |D) as → 0. The posterior average given decreasing learning rates can be approximated through the weighted DISPLAYFORM0 to avoid over-emphasizing the tail end of the sequence and reduce the variance of the estimator.; BID2 showed a theoretical optimal learning rate (k) ∝ k −1/3 for SGLD and k −1/5 for SGHMC to achieve faster convergence for posterior average, which are used in Sec. 4.1 and Sec. 4.2 respectively. 4.1 SIMULATION OF LARGE-P-SMALL-N REGRESSION SGLD-SA can be applied to the (logistic) linear regression cases, as long as u = 0 in Eq.. We conduct the linear regression experiments with a dataset containing n = 100 observations and p = 1000 predictors. N p (0, Σ) is chosen to simulate the predictor values X (training set) where DISPLAYFORM0 Response values y are generated from Xβ + η, where β = (β 1, β 2, β 3, 0, 0, ..., 0) and η ∼ N n (0, 3I n). To make the simulation in Rořková and more challenging, we assume DISPLAYFORM1 We introduce some hyperparameters, but most of them are uninformative, e.g. ν ∈ {10 −3, 1, 10 3} makes little difference in the test set performance. Sensitivity analysis shows that three hyperparameters are important: v 0, a and σ, which are used to identify and regularize the over-fitted space. We fix τ = 1, λ = 1, ν = 1, v 1 = 100, δ = 0.5, b = p and set a = 1. The learning rates for SGLD-SA and SGLD are set to (k) = 0.001 × k To implement SGLD-SA, we perform stochastic gradient optimization by randomly selecting 50 observations and calculating the corresponding gradient in each iteration. We simulate 500, 000 samples from the posterior distribution and at the same time keep optimizing the latent variables. EMVS is implemented with β directly optimized each time. We also simulate a group of the test set with 1000 observations (display 50 in FIG5 (e)) in the same way as generating the training set to evaluate the generalizability of our algorithm. Tab.1 shows that EMVS frequently fails given bad initializations, while SGLD-SA is fairly robust to the hyperparameters. In addition, from, we can see SGLD-SA is the only algorithm among the three that quantifies the uncertainties of β 1, β 2 and β 3 reasonably and always gives more accurate posterior average (FIG5); by contrast, the estimated response values y from SGLD is close to the true values in the training set (FIG5), but are far away from them in the testing set (2(e)), indicating the over-fitting problem of SGLD without proper regularization. For the simulation of SGLD-SA in logistic regression to demonstrate the advantage of SGLD-SA over SGLD and ESM, we leave the in Appendix C. We implement all the algorithms in Pytorch and run the experiments on GeForce GTX 1080 GPUs. The first DNN we use is a standard 2-Conv-2-FC CNN model (FIG0) of 670K parameters (see details in Appendix D.1). 
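Putting the pieces together, a compact sketch of the SGLD-SA loop (our own names; g_map stands for the closed-form updates of ρ, κ, σ and δ, the SA step size follows ω^(k) = A(k + Z)^{−α}, the SGLD step size decays as k^{−1/3}, and the final estimate is the step-size-weighted posterior average):

```python
import numpy as np

def sgld_sa(grad_log_post, g_map, beta0, theta0, n_iter, lr0,
            A=1.0, Z=1000, alpha=0.75, rng=None):
    """SGLD-SA outer loop (sketch): alternate an SGLD draw of beta given the current
    latent variables theta with a stochastic-approximation update of theta.
    g_map(beta, theta) stands for the closed-form updates of (rho, kappa, sigma, delta);
    the returned estimate is the step-size-weighted posterior average of the beta draws."""
    rng = rng or np.random.default_rng(0)
    beta, theta = np.array(beta0, dtype=float), np.array(theta0, dtype=float)
    weighted_sum, weight_total = np.zeros_like(beta), 0.0
    for k in range(1, n_iter + 1):
        lr = lr0 * k ** (-1.0 / 3.0)                     # SGLD step size ~ k^{-1/3}
        beta = (beta + 0.5 * lr * grad_log_post(beta, theta)
                + rng.normal(0.0, np.sqrt(lr), size=beta.shape))
        omega = A * (k + Z) ** (-alpha)                  # SA step size omega_k
        theta = (1.0 - omega) * theta + omega * g_map(beta, theta)
        weighted_sum += lr * beta
        weight_total += lr
    return weighted_sum / weight_total, theta
```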
The first set of experiments is to compare methods on the same model without using other complications, such as data augmentation (DA) or batch normalization (BN) BID12. We refer to the general CNN without dropout as Vanilla, and the one with a 50% dropout rate applied to the green neurons (FIG0) as Dropout. Vanilla and Dropout models are trained with Adam with Pytorch default parameters (learning rate 0.001). We use SGHMC as a benchmark method as it is also sampling-based and has a close relationship with the popular momentum-based optimization approaches in DNN. SGHMC-SA differs from SGHMC in that SGHMC-SA applies the spike-and-slab priors to the FC1 layer while SGHMC just uses the standard normal priors. The hyperparameters v 1 = 1, v 0 = 1 × 10 −3 and σ = 0.1 in SGHMC-SA are used to regularize the over-fitted space, and a, b are set to p to obtain a moderate "sparsity" to resemble dropout; the step size is ω (k) = 0.1 × (k + 1000) DISPLAYFORM0. We use training batch size 1000 and a thinning factor 500 to avoid a cumbersome system, and the posterior average is applied to each Bayesian model. Temperatures are tuned to achieve better results (Appendix D.2). The four CNN models are tested on the classical MNIST and the newly introduced Fashion MNIST (FMNIST) datasets. Performance of these models is shown in Tab. 2. Our SGHMC-SA outperforms SGHMC on both datasets. We notice that the posterior averages from SGHMC-SA and SGHMC obtain much better performance than Vanilla and Dropout. Without using either DA or BN, SGHMC-SA achieves 99.60%, which even outperforms some state-of-the-art models, such as stochastic pooling (99.53%), Maxout Network (99.55%) BID9 and pSGLD (99.55%) BID15. In F-MNIST, SGHMC-SA obtains 93.01% accuracy, outperforming all other competing models. To further test the maximum performance of SGHMC-SA, we apply DA and BN to the following experiments (see details in Appendix D.3) and refer to the datasets with DA as aMNIST and aFMNIST. All these experiments are conducted using a 2-Conv-BN-3-FC CNN of 490K parameters. Using this model, we obtain 99.75% on aMNIST (300 epochs) and 94.38% on aFMNIST (1000 epochs). These results are noticeable because posterior model averaging is essentially conducted on a single Bayesian neural network. We also conduct experiments based on the ensemble of five networks and refer to them as aMNIST-5 and aFMNIST-5 in Tab. 2. We achieve 99.79% on aMNIST-5 using 5 small Bayesian neural networks, each with 2 thinning samples (4 thinning samples in aFMNIST-5), which is comparable with the state-of-the-art performance. Continuing with the setup in Sec. 4.2, the third set of experiments focuses on evaluating model robustness. We expect less robust models to perform considerably well on a certain dataset due to over-tuning; however, as the degree of adversarial attacks increases, their performance decreases sharply. In contrast, more robust models should be less affected by these adversarial attacks.
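Before reporting the attack results below, we give a sketch of the one-step Fast Gradient Sign attack used in this section (we assume the standard form x_adv = clip(x + ζ · sign(∇_x L(x, y)), 0, 1); the helper names are ours):

```python
import torch

def fgsm_attack(model, loss_fn, x, y, zeta):
    """One-step FGSM (sketch): move x along the sign of the input gradient and clip
    the adversarial image back to the assumed valid pixel range [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = (x_adv + zeta * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Usage sketch: evaluate a trained classifier at increasing attack strengths.
# for zeta in [0.1, 0.2, 0.3, 0.4, 0.5]:
#     x_adv = fgsm_attack(model, torch.nn.CrossEntropyLoss(), x, y, zeta)
#     report_accuracy(model, x_adv, y)
```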
We apply the Fast Gradient Sign method BID10 to generate the adversarial examples with one single gradient step as in's study: DISPLAYFORM0 where ζ ranges from 0.1, 0.2,..., 0.5 to control the different levels of adversarial attacks. Similar to the setup in the adversarial experiments by BID16, we normalize the adversarial images by clipping to the range. As shown in FIG6 and FIG6, there is no significant difference among all the four models in the early phase. As the degree of adversarial attacks arises, the images become vaguer as shown in FIG6 and FIG6. In this scenario the performance of Vanilla decreases rapidly, reflecting its poor defense against adversarial attacks, while Dropout performs better than Vanilla. But Dropout is still significantly worse than the sampling based methods SGHMC-SA and SGHMC. The advantage of SGHMC-SA over SGHMC becomes more significant when ζ > 0.25.In the case of ζ = 0.5 in MNIST where the images are hardly recognizable, both Vanilla and Dropout models fail to identify the right images and their predictions are as worse as random guesses. However, SGHMC-SA model achieves roughly 11% higher than these two models and 1% higher than SGHMC, which demonstrates the strong robustness of our proposed SGHMC-SA. Overall, SGHMC-SA always yields the most robust performance. We propose a mixed sampling-optimization method called SG-MCMC-SA to efficiently sample from complex DNN posteriors with latent variables and prove its convergence. By adaptively searching and penalizing the over-fitted parameter space, the proposed method improves the generalizability of deep neural networks. This method is less affected by the hyperparameters, achieves higher prediction accuracy over the traditional SG-MCMC methods in both simulated examples and real applications and shows more robustness towards adversarial attacks. Interesting future directions include applying SG-MCMC-SA towards popular large deep learning models such as the residual network BID11 on CIFAR-10 and CIFAR-100, combining active learning and uncertainty quantification to learn from datasets of smaller size and proving posterior consistency and the consistency of variable selection under various shrinkage priors concretely. The Fokker-Planck equation (FPE) can be formulated from the time evolution of the conditional distribution for a stationary random process. Denoting the probability density function of the random process at time t by q(t, β), where β is the parameter, the stochastic dynamics is given by DISPLAYFORM0 Let q(β) = lim t→∞ q(t, β). If lim t→∞ ∂ t q(t, β) = 0, then DISPLAYFORM1. In other words, lim t→∞ q(t, β) ∝ e L(β), i.e. q(t, β) gradually converges to the Bayesian posterior e L(β). Robbins-Monro algorithm is the first stochastic approximation algorithm to deal with the root finding problem. Given the random output of H(θ, β) with respect to β, our goal is to find θ * such that DISPLAYFORM0 where E θ * denotes the expectation with respect to the distribution of β given parameter θ *. To implement the Robbins-Monro Algorithm, we can generate iterates of the form as follows: DISPLAYFORM1 Note that in this algorithm, H(θ, β) is the unbiased estimator of h(θ), that is DISPLAYFORM2 If there exists an antiderivative Q(θ, β) that satisfies H(θ, β) = and E θ [Q(θ, β)] is concave, it is equivalent to solving the stochastic optimization problem max θ∈Θ E θ [Q(θ, β)]. In contrast to Eq. 
and Eq., the general stochastic approximation algorithm is intent on solving the root of the integration equation DISPLAYFORM0 where θ ∈ Θ, β ∈ B, for a subset B ∈ B, the transition kernel Π θ (β (k), B), which converges to the invariant distribution f θ (β), satisfies that DISPLAYFORM1. The stochastic approximation algorithm is an iterative recursive algorithm consisting of two steps: DISPLAYFORM2 is not equal to 0 but decays to 0 as k → ∞.To summarise, H(θ, β) is a biased estimator of h(θ) in finite steps, but as k → ∞, the bias decreases to 0. In the SG-MCMC-SA algorithm FORMULA1, DISPLAYFORM3 B CONVERGENCE ANALYSIS The stochastic gradient Langevin Dynamics with a stochastic approximation adaptation (SGLD-SA) is a mixed half-optimization-half-sampling algorithm to handle complex Bayesian posterior with latent variables, e.g. the conjugate spike-slab hierarchical prior formulation. Each iteration of the algorithm consists of the following steps: Sample β (k+1) using SGLD based on the current θ (k), i.e. DISPLAYFORM0 Optimize θ (k+1) from the following recursion DISPLAYFORM1 where g θ (k) (·) is some mapping to derive the optimal θ based on the current β. DISPLAYFORM2. In this formulation, our target is to find θ * such that h(θ DISPLAYFORM3 as k → ∞, this algorithm falls to the category of the general stochastic approximation. To provide the local L 2 upper bound for SGLD-SA, we first lay out the following assumptions:Assumption 1 (Step size and Convexity). {ω (k) } k∈N is a positive decreasing sequence of real numbers such that DISPLAYFORM0 There exist constant δ > 0 and θ DISPLAYFORM1 with additionally DISPLAYFORM2 Then for any α ∈ and suitable A and B, a practical ω (k) can be set as DISPLAYFORM3 Assumption 2 (Existence of Markov transition kernel). For any θ ∈ Θ, in the mini-batch sampling, there exists a family of noisy Kolmogorov operators {Π θ} approximating the operator (infinitesimal) of the Ito diffusion, such that every Kolmogorov operator Π θ corresponds to a single stationary distribution f θ, and for any Borel subset A of β, we have DISPLAYFORM4 Assumption 3 (Compactness). For any compact subset Ω of Θ, we only consider θ ∈ Ω such that DISPLAYFORM5 Note that the compactness assumption of the latent variable θ is not essential, the assumption that the variable is in the compact domain is not only reasonable, but also simplifies our proof. In addition, there exists constants C 1 (Ω) and C 2 (Ω) so that DISPLAYFORM6 Assumption 4 (Solution of Poisson equation). For all θ ∈ Θ, there exists a function µ θ on β that solves the Poisson equation DISPLAYFORM7 For any compact subset Ω of Θ, there exist constants C 3 (β, Ω) and C 4 (β, Ω) such that for all θ, θ ∈ Ω, DISPLAYFORM8 Remark: For notation simplicity, we write C 1 (Ω) as C 1, C 2 (Ω) as C 2,... in the following context. Lemma 1 is a restatement of Lemma 25 (page 447) from BID0. Lemma 1. Suppose k 0 is an integer which satisfies with DISPLAYFORM9 Then for any k > k 0, the sequence {Λ K k} k=k0,...,K defined below is increasing and bounded by 2ω DISPLAYFORM10 Lemma 2 is an extension of Lemma 23 (page 245) from BID0 Lemma 2. There exist λ 0 and k 0 such that for all λ ≥ λ 0 and k ≥ k 0, the sequence u DISPLAYFORM11 From FORMULA2, we can denote a positive constant ∆ + as lim k→∞ inf 2δω DISPLAYFORM12. Then can be simplified as DISPLAYFORM13 There exist λ 0 and k 0 such that for all λ > λ 0 and k > k 0, holds. Note that in practical case when C 1 is small, finding a suitable λ 0 will not be a problem. 
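Before stating the convergence result, a minimal sketch of one SGLD-SA iteration described above may help fix ideas. Because the exact update equations appear only as DISPLAYFORM placeholders in this text, the generic SGLD step and the Robbins-Monro form of the stochastic-approximation update are assumptions here, as is the concrete mapping g.

import numpy as np

def sgld_sa_step(beta, theta, grad_log_post, g_map, eps, omega, rng):
    """One iteration of SGLD-SA (sketch, not the paper's exact recursion).

    beta          : network parameters being sampled
    theta         : latent (spike-and-slab) variables being optimized
    grad_log_post : stochastic gradient of the log-posterior L(beta, theta) on a mini-batch
    g_map         : mapping g_theta(beta) giving the optimal theta for the current beta
                    (its concrete form is not reproduced in this excerpt)
    eps, omega    : SGLD step size and stochastic-approximation step size
    """
    # 1) SGLD update of beta given the current theta
    noise = rng.normal(size=beta.shape) * np.sqrt(2.0 * eps)
    beta = beta + eps * grad_log_post(beta, theta) + noise
    # 2) Robbins-Monro style update of theta given the new beta
    theta = (1.0 - omega) * theta + omega * g_map(beta, theta)
    return beta, theta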
Theorem 1 (L 2 convergence rate). Suppose that Assumptions 1-4 hold, for any compact subset Ω ∈ Θ, the algorithm satisfies: there exists a constant λ such that DISPLAYFORM14 where t(Ω) = inf{k : DISPLAYFORM15 Proof. Denote T (k) = θ (k) − θ *, with the help of FORMULA2 and Poisson equation FORMULA2, we deduce that DISPLAYFORM16 First of all, according to FORMULA1 and FORMULA2, we have DISPLAYFORM17 DISPLAYFORM18 Conduct the decomposition of D3 similar to Theorem 24 (p.g. 246) from BID0 and Lemma A.5 . DISPLAYFORM19 (ii) From FORMULA3 and FORMULA3 respectively, we deduce that DISPLAYFORM20 Thus there exists C 4 = 2C 4 C 0 such that DISPLAYFORM21 (iii) D3-3 can be further decomposed to D3-3a and D3-3b DISPLAYFORM22 ) with a constant C 3 = 4C 0 C 3 which satisfies that DISPLAYFORM23 Finally, add all the items D1, D2 and D3 together, for some C 1 = C 3 + C 4, we have DISPLAYFORM24 Moreover, from FORMULA3 and FORMULA3, there exists a constant C 5 such that DISPLAYFORM25 Lemma 3 is an extension of Lemma 26 (page 248) from BID0.Lemma 3. Let {u (k) } k≥k0 as a sequence of real numbers such that for all k ≥ k 0, some suitable constants C 1 and C 2 DISPLAYFORM26 and assume there exists such k 0 that DISPLAYFORM27 Then for all k > k 0, we have DISPLAYFORM28 Proof of Theorem 1 (Continued). From Lemma 2, we can choose λ 0 and k 0 which satisfy the conditions and DISPLAYFORM29 From Lemma 3, it follows that for all k > k 0 DISPLAYFORM30 From and the increasing property of Λ k j in Lemma 1, we have DISPLAYFORM31 Therefore, given the sequence u (k) = λ 0 ω (k) that satisfies conditions FORMULA3, FORMULA5 and Lemma 3, for any k > k 0, from FORMULA1 and FORMULA2, we have DISPLAYFORM32 where λ = λ 0 + 6C 5. In addition to the previous assumptions, we make one more assumption on the stochastic gradients to guarantee that the samples converge to the posterior conditioned on the optimal latent variables: Assumption 5 (Gradient Unbiasedness and Smoothness). For all β ∈ B and θ ∈ Θ, the mini-batch of data B, the stochastic noise ξ, which comes from∇L(β, θ) − ∇L(β, θ), is a white noise and independent with each other. In addition, there exists a constant l ≥ 2 such that the following conditions hold: DISPLAYFORM0 For all θ, θ ∈ Θ, there exists a constant M > 0 such that the gradient is M-smooth: DISPLAYFORM1 Corollary 1. For all α ∈, under assumptions 1-5, the distribution of β (k) converges weakly to the invariant distribution e L(β,θ *) as → 0.Proof. The proof framework follows from section 4 of. In the context of stochastic noise ξ (k), we ignore the subscript of and only consider the case of τ = 1. Since θ DISPLAYFORM2 converges to θ * in SGLD-SA and the gradient is M-smooth FORMULA5, we transform the stochastic gradient from ∇L(DISPLAYFORM3, therefore Eq. FORMULA2 can be written as DISPLAYFORM4 Using Eq. FORMULA3, the characteristic function of DISPLAYFORM5 Then the characteristic function of DISPLAYFORM6 Rewrite β (k+1) as β (k+), the characteristic function of β (t+) is the characteristic function of DISPLAYFORM7 With the fact exp(x) = 1 + x + O(x 2), we can get DISPLAYFORM8 Therefore, For any integrable function f, set F as the Fourier transform defined by DISPLAYFORM9 DISPLAYFORM10 The inverse Fourier transform of F[f (x)] and the l-th order derivatives of f (l) (x) is DISPLAYFORM11 Combine Eq. FORMULA5, Eq. FORMULA6 and Eq. 
FORMULA1, we arrive at the following simplified equation: DISPLAYFORM12 Since DISPLAYFORM13 Finally, we have proved that the distribution of β (k) converges weakly to the invariant distribution e L(β,θ *) as → 0. Now we conduct experiments on binary logistic regression. The setup is similar to before, except that n is set to 500, Σ i,j = 0.3 |i−j| and η ∼ N (0, I/2). We set the learning rates in SGLD-SA and SGLD to decay polynomially as 0.01 × k −α. FIG11 demonstrates that the posterior distribution of SGLD-SA is significantly better than that of SGLD. As shown in FIG11, SGLD-SA is the best method to regulate the over-fitted space and provides the most reasonable posterior mean. Table 3 illustrates that the predictive power of SGLD-SA is overall better than that of the other methods and robust to different initializations. FIG11 shows the over-fitting problem of SGLD when p > n in logistic regression: the algorithm fails to regulate the over-fitted space. We observe that SGLD-SA is able to resist over-fitting and always yields reproducible results.
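A minimal sketch of how such correlated logistic-regression data can be simulated is given below. The true coefficient vector and the exact role of η follow the earlier setup, which is not included in this excerpt, so the sparse-truth choice and the additive noise on the logits are assumptions.

import numpy as np
from scipy.linalg import toeplitz

def simulate_logistic(n=500, p=1000, rho=0.3, n_true=10, seed=0):
    """Simulate correlated binary logistic-regression data (sketch)."""
    rng = np.random.default_rng(seed)
    cov = toeplitz(rho ** np.arange(p))                 # Sigma_{i,j} = 0.3^{|i-j|}
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    beta = np.zeros(p)
    beta[:n_true] = rng.normal(0.0, 2.0, size=n_true)   # assumed sparse ground truth
    logits = X @ beta + rng.normal(0.0, np.sqrt(0.5), size=n)  # eta ~ N(0, I/2), assumed additive
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
    return X, y, beta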
The first DNN we use is a standard 2-Conv-2-FC CNN: it has two convolutional layers with a 2 × 2 max pooling after each layer and two fully-connected layers. The filter size in the convolutional layers is 5 × 5 and the feature maps are set to 32 and 64, respectively (BID13). The fully-connected (FC) layers have 200 hidden nodes and 10 outputs. We use the rectified linear unit (ReLU) as the activation function between layers and employ a cross-entropy loss. The second DNN is a 2-Conv-BN-3-FC CNN: it has two convolutional layers with a 2 × 2 max pooling after each layer and three fully-connected layers with batch normalization applied to the first FC layer. The filter size in the convolutional layers is 4 × 4 and the feature maps are both set to 64. We use 256 × 64 × 10 fully-connected layers. In practice, we observe that a suitable temperature setting helps to improve the classification accuracy. For example, by setting τ = 100 in the second DNN (see Appendix D.1) we obtain 99.70% on aMNIST. To account for the scale difference of weights in different layers, we apply different temperatures to different layers based on the standard deviations of the gradients in each layer and obtain the results in Tab. 2. The MNIST dataset is augmented by randomCrop: randomly crop each image with size 28 and padding 4; random rotation: randomly rotate each image by a degree in [−15°, +15°]; normalization: normalize each image with empirical mean 0.1307 and standard deviation 0.3081. The FMNIST dataset is augmented by randomCrop: same as MNIST; randomHorizontalFlip: randomly flip each image horizontally; normalization: same as MNIST.
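The augmentation pipeline above maps directly onto standard torchvision transforms; a sketch using torchvision's built-in MNIST/FashionMNIST loaders (an implementation detail not specified in the text) is:

from torchvision import datasets, transforms

mnist_aug = transforms.Compose([
    transforms.RandomCrop(28, padding=4),          # randomCrop: size 28, padding 4
    transforms.RandomRotation(15),                 # rotate by a degree in [-15, +15]
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),    # empirical mean / std
])

fmnist_aug = transforms.Compose([
    transforms.RandomCrop(28, padding=4),          # same crop as MNIST
    transforms.RandomHorizontalFlip(),             # extra horizontal flip for FMNIST
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),    # same normalization as MNIST
])

train_mnist = datasets.MNIST("data", train=True, download=True, transform=mnist_aug)
train_fmnist = datasets.FashionMNIST("data", train=True, download=True, transform=fmnist_aug)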
a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables
1,332
scitldr
Activity of populations of sensory neurons carries stimulus information in both the temporal and the spatial dimensions. This poses the question of how to compactly represent all the information that the population codes carry across all these dimensions. Here, we developed an analytical method to factorize a large number of retinal ganglion cells' spike trains into a robust low-dimensional representation that captures efficiently both their spatial and temporal information. In particular, we extended previously used single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity. On data recorded from retinal ganglion cells with strong pre-stimulus baseline, we showed that in situations were the stimulus elicits a strong change in firing rate, our extensions yield a boost in stimulus decoding performance. Our thus suggest that taking into account the baseline can be important for finding a compact information-rich representation of neural activity. Populations of neurons encode sensory stimuli across the time dimension (temporal variations), the space dimension (different neuron identities), or along combinations of both dimensions BID1 BID16 BID17 BID10 BID19. Consequently, understanding the neural code requires characterizing the firing patterns along these dimensions and linking them to the stimuli BID0 BID9 BID17 BID18 BID12. There are many methods for compactly representing neural activity along their most relevant dimensions. These methods include Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Factor Analysis (FA) BID2 BID3 BID13 BID20. Recently, a particularly promising tensor decomposition method was introduced that provides a compact representation of single trial neuronal activity into spatial and temporal dimensions and their combination in the given trial. The method is based on non-negative matrix factorization (NMF) BID14 BID6 BID21 which imposes non-negativity constraints on the extracted components leading to a parts-based, low dimensional, though flexible representation of the data, only assuming non-negativity of the model components. Though space-by-time NMF yielded robust decoding performance with a small number of parameters and good biological interpretability of its basis functions on data recorded from salamander retinal ganglion cells, the method does have a potential shortcoming: it cannot explicitly discount, and is partly confounded by, baseline activity that is not relevant for the neural response to a sensory stimulus. Although these non-negative tensor factorizations performed well on salamander retinal ganglion cells, which have almost non-existent spontaneous activity, it is not clear how well the method would perform on data with considerable spontaneous activity, which might require to explicitly correct for the pre-stimulus baseline. One way to reduce the baseline would be to subtract it from the stimulus-elicited response. This, however, would in negative activities that cannot be modeled using a decomposition with full non-negativity constraints such as space-by-time NMF. In this study, we thus propose a variant of space-by-time NMF that discounts the baseline activity by subtracting the pre-stimulus baseline from each trial and then decomposes the baseline-corrected activity using a tri-factorization that finds non-negative spatial and temporal modules, and signed activation coefficients. 
We explored the benefits that this method provides on data recorded from mouse and pig retinal ganglion cells and showed that baseline-corrected space-by-time NMF improves decoding performance on data with non-negligible baselines and stimulus response changes. We consider data that are composed of trials of spiking activity recorded simultaneously from a population of neurons in response to sensory stimuli. Each trial thus has a temporal component (when is a neuron firing) and a spatial component (which neuron is firing). We aim to find a decomposition into spatial and temporal firing patterns and into coefficients that represent the strength of spatial and temporal firing combinations within a given trial. Before decomposing the data, we discretize neural activity by binning the spike trains into time intervals (chosen to maximize decoding performance, c.f. supplementary Fig. S1) and counting the number of spikes in each bin. We then apply two different tensor decomposition methods that separate the spatial and temporal dimensions. We describe both methods in the following sections. Following, we decomposed neural activity into spatial and temporal patterns and their activation coefficients. The decomposition of a trial s takes the following form: DISPLAYFORM0 where R s denotes the population spike count matrix on trial s across T time bins and N recorded neurons, B tem denotes a (T × P)-matrix whose columns are the temporal modules, B spa is a (L × N)-matrix whose rows are the spatial modules and H s is a (P × L)-coefficient matrix that contains the weights of each combination of spatial and temporal modules. Note that the spatial and temporal modules are trial-independent whereas the activation coefficients H s are trial-dependent. The main goal of the above decomposition is to factorize the input signal into invariant spatial and temporal patterns across trials such that the factorization minimizes the total reconstruction error. Following, we used Space-by-Time Non-negative Matrix Factorization (SbT-NMF, corresponding to the Tucker-2 tensor decomposition with non-negativity constraints) to find the factors of Eq. 1. The algorithm decomposes the input tensor R into non-negative temporal modules B tem, non-negative spatial modules B spa and non-negative activation coefficient using multiplicative update rule to minimize the Frobenius norm of the difference between the input data and the reconstruction data DISPLAYFORM1 On data recorded from salamander retinal ganglion cells the algorithm was shown to provide lowdimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. To discount baseline activity from neural data, here we propose a novel decomposition algorithm termed Baseline-Corrected Space-by-Time Non-negative Matrix Factorization (BC-SbT-NMF) that first subtracts the baseline from the neural activity and then factorizes the activity into spatial and temporal modules and activation coefficients. Contrary to the original population spike counts, the baseline-corrected data are not necessarily non-negative anymore after baseline subraction. Our decomposition method therefore faces the problem of factorizing signed data. For this purpose, our factorization algorithm decomposes the neural activity into non-negative spatial and temporal modules and signed activation coefficients, corresponding to a Tucker-2 tensor decomposition with non-negativity constrained factor matrices and unconstrained core. 
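To make the notation concrete, the sketch below bins one trial of population spiking into the T × N count matrix R_s and reconstructs it from spatial and temporal modules as in Eq. 1; array shapes follow the definitions above, and the function names are illustrative only.

import numpy as np

def bin_spike_trains(spike_times, n_neurons, trial_len, bin_size):
    """spike_times: list of arrays, one per neuron, with spike times in one trial (s)."""
    n_bins = int(np.round(trial_len / bin_size))
    edges = np.arange(n_bins + 1) * bin_size
    R = np.zeros((n_bins, n_neurons))
    for i, st in enumerate(spike_times):
        R[:, i], _ = np.histogram(st, bins=edges)   # spike counts per time bin
    return R                                        # (T x N) trial matrix

def reconstruct_trial(B_tem, H_s, B_spa):
    """Eq. 1: R_s ~= B_tem (T x P) @ H_s (P x L) @ B_spa (L x N)."""
    return B_tem @ H_s @ B_spa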
The method is illustrated in Figure 1. Each trial consists of spatio-temporal binned spike counts of a population of neurons. For each neuron, we subtract the pre-stimulus firing rate baseline in the same trial from its activity. We then decompose the baseline-corrected trials into spatio-temporal modules representing common firing patterns and corresponding activation coefficients representing the activity of a particular pattern in a trial. The spatio-temporal patterns in turn are factorized into spatial modules representing coactive neurons and temporal modules representing temporal population activity patterns. Figure 1: Baseline-corrected space-by-time non-negative matrix factorization. On the left, two trials with spike counts recorded from 5 neurons and 9 time points each are illustrated as a spike count word matrix. The pre-stimulus baseline is subtracted from the spike count word matrix in order to discount the baseline. Then, the signed corrected activity is decomposed into non-negative spatial modules describing coactive neurons, non-negative temporal modules describing the temporal response profile of the population and signed activation coefficient. Each pair of spatial and temporal modules can be combined to form a spatio-temporal module. The algorithm estimates modules using iterative update rules to minimize the Frobenius norm of the difference between input data and the reconstructed data. Our derivation follows the derivation of semi-NMF presented in BID7, but extends the derivation to a trial-based tri-factorization where two of the factors are non-negative and trial-independent and one factor is signed and trialdependent. Relaxation of the constraints for the activation coefficients yields spatial and temporal modules that are less sparse. To counteract this, we also included L 1 -regularization for the spatial and temporal modules in our derivation following.Our objective function takes the form: DISPLAYFORM0 where λ ≥ 0 denotes the regularization parameter. For simplicity, we used the same parameter for the spatial and the temporal modules, but one could also have separate parameters. We derived iterative update steps for B spa, B tem and H to minimize our objective function. We started with the derivation of an iterative update step for B spa. To cast the tensor factorization of the (T × N × S) tensor R as a matrix tri-factorization, we first reshaped R by concatenating all slices R s along the first tensor dimension. This yields a data matrix R spa of size (ST × N). Analogously, we represented the (P × L × S) tensor H of activation coefficients as a (SP × L) matrix H spa.This procedure allows us to formulate a single Lagrangian with regard to B spa: DISPLAYFORM1 where the Lagrangian multipliers β ij enforce non-negativity constraints [B spa] ij ≥ 0. With the zero-gradient condition, we get: DISPLAYFORM2 The complementary slackness condition then yields: DISPLAYFORM3 In this equation, the parts R spa B tem H spa and H spa B tem B tem H spa are signed. To derive update rules for the non-negative spatial modules, we therefore need to separate positive and negative parts of the equation. For a matrix A, we denoted its positive and negative parts as A DISPLAYFORM4 With this definition, the identity A = A + − A − holds. Separating the positive and negative parts of Eq. 6 by means of this identity, we obtained: DISPLAYFORM5 At convergence, we have B DISPLAYFORM6 spa. 
Hence, we obtain the following update step for B spa: DISPLAYFORM7 To derive the update step for B tem, we analogously reshaped R by concatenating all slices R s along the second tensor dimension to get a data matrix R tem of size (T × SN) and we represented the (P × L × S) tensor H of activation coefficients as a (P × SL) matrix H tem. The analogous derivation steps as for the spatial modules then yield the update step for B tem: DISPLAYFORM8 Finally, we updated the activation coefficients H on a trial-by-trial basis. The activation coefficients are signed. For this reason, we can easily obtain the optimal activation coefficient matrices by inverting the module matrices. For each s ∈ {1, . . ., S}, we let FORMULA0 where (·) −1 denotes the Moore-Penrose pseudo-inverse. DISPLAYFORM9 The complete baseline-corrected space-by-time NMF algorithm takes the following form:1. For each neuron i, calculate the pre-stimulus firing rate b i and subtract it from the corresponding elements of the data tensor: ∀i ∈ {1, . . ., N}: R:,i,: → R:,i,: DISPLAYFORM10 with non-negative random numbers uniformly distributed between 0 and 1, and H (P × L × S) with random numbers uniformly distributed between -1 and 1. 3. Given H, B tem and the data matrix DISPLAYFORM11 (b) Update B spa using Eq.8 4. Given H, B spa and the data matrix DISPLAYFORM12 (b) Update B tem using Eq.9.5. Given B tem and B spa:(a) For all s ∈ 1,..., S, update H s using Eq.10. is below a given tolerance, stop. Otherwise, go to step 3.We provide MATLAB code for this algorithm at the following GitHub repository (TO BE ADDED TO THE FINAL VERSION UPON ACCEPTANCE). To evaluate the performance of the factorization algorithms to extract an informative low-dimensional representation of the neural data, we used multi class linear discriminant analysis (LDA) applied to the activation coefficients H s (non-negative in the case of space-by-time NMF and signed in the case of baseline-corrected space-by-time NMF) as predictors. For each experiment, we randomly separated trials into two sets of equal size: training set and test set and then applied the decomposition methods to the training set to obtain non-negative spatial and temporal modules and related activation coefficient. For fixed spatial and temporal modules, we then also applied the decomposition methods to the corresponding test set trials to compute activation coefficient given the training set modules. Finally, for each experiment and each decomposition method, we trained LDA classifiers on the training set activation coefficients and evaluated the test set decoding performance on the test set activation coefficients. The decomposition methods have three free parameters: the number of spatial modules, the number of temporal modules and the L 1 -regularization parameter λ. We decomposed the data and evaluated decoding performance for each possible combination of the module number parameters, where we used as the maximum number of spatial module the number of recorded neurons and we used as the maximum number of temporal module the number of time bins per trial, and at first setting the regularization parameter to zero. For each experiment and each decomposition method, we then selected the pair of spatial and temporal module numbers that maximized decoding performance. Whenever we found more than one pair that reached optimal decoding performance, we selected the pair with the minimum sum of module numbers. 
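Two steps of the algorithm above translate directly into code: the per-neuron baseline subtraction (step 1) and the trial-wise coefficient update via Moore-Penrose pseudo-inverses (step 5, Eq. 10). The sketch below (in Python rather than the MATLAB referenced above) assumes the data tensor has shape (T, N, S); the multiplicative module updates of Eqs. 8-9 are omitted because their exact expressions are not reproduced in this text.

import numpy as np

def subtract_baseline(R, baseline_rates, bin_size):
    """Step 1: subtract each neuron's pre-stimulus firing rate (Hz) from its counts."""
    Rc = R.astype(float).copy()                        # (T, N, S): time x neurons x trials
    Rc -= (baseline_rates * bin_size)[None, :, None]   # expected baseline counts per bin
    return Rc

def update_coefficients(Rc, B_tem, B_spa):
    """Step 5 / Eq. 10: H_s = pinv(B_tem) @ R_s @ pinv(B_spa) for every trial s."""
    Bt_pinv = np.linalg.pinv(B_tem)                    # (P, T)
    Bs_pinv = np.linalg.pinv(B_spa)                    # (N, L)
    return np.stack([Bt_pinv @ Rc[:, :, s] @ Bs_pinv
                     for s in range(Rc.shape[2])], axis=2)   # (P, L, S)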
For these optimal module numbers, we then explored the effect of λ and did not find an increase in decoding performance for any non-zero λ FIG3 ). For this reason, we report all for λ = 0. We used multi-electrode arrays to record activity from populations of retinal ganglion cells of two mouse retinas (retinas 1 and 2) and one pig retina (retina 3) in scotopic light-level conditions. The experimental procedures were in accordance with international standards for animal research and were approved by the local ethical commitee and the local authorities, with experimental procedures following what detailed in a previous publication (DETAILS OF ETHICAL APPROVAL AND CITATION WILL BE PROVIDED UPON ACCEPTANCE). We recorded simultaneously from 30, 43 and 56 neurons, respectively, using the following five stimulation protocols: Natural movies, dance and mouse (NMD, NMM): Two kinds of black and white movies were used for visual stimulation. The first one was a natural movie from the viewpoint of the mouse and the second movie was a clip showing a dancing couple. The dance movie was projected 44 times for one retina while the movie from the viewpoint of mouse was projected 44 times for the two other retinas. Full-field high and low contrast flicker (HC, LC): Screen-brightness values were picked from a Gaussian distribution with mean 30 and standard deviation of 9 for high contrast and 3.5 for low contrast flicker with a range of values between 0 and 60 for hight contrast and a range of values between 20 and 40 for low contrast flicker. Both stimuli were projected 10 times for three retina. Full-field steps of light (SoL): Homogeneous illuminated screen with the following brightness sequence was presented: gray (pixel value 30) -white (pixel value 50) -gray (pixel value 30) -black (pixel value 10) -gray (pixel value 30). This stimulus was projected 10 times for three retinas. To define stimulus classes, we cut each continuous stimulus into 1 second intervals and associated a running number with each interval. We then randomly divided each dataset by putting half of the trials of each class into a training trials to compute the decompositions modules and to train the decoder, and the other half of the trials into a test set to evaluate the decoding performance. To evaluate the benefits of taking the pre-stimulus baseline firing rate into account when looking for trial-invariant spatial and temporal firing patterns, we first discretized spike train with the time interval that gives highest decoding performance and then decoded retinal ganglion cell responses with and without explicitly taking the baseline into account. More precisely, we obtained lowdimensional representations of mammalian ganglion cell activity using space-by-time NMF and baseline-corrected space-by-time NMF and then used an LDA-classifier to decode the visual stimuli from these representations (c.f. Section Methods). We first explored the low-dimensional representations of the neural populations responses that SbT-NMF and BC-SbT-NMF found. Both methods identified non-negative spatial and temporal modules. The activation coefficients, however, were non-negative only in the case of SbT-NMF, and signed in the case of BC-SbT-NMF. FIG0 shows example modules obtained from both decomposition methods. We scaled the vector length of each module to unit length to facilitate visual comparison of the modules. 
The spatial modules, describing simultaneous firing of groups of neurons, show ensembles of neurons that were coactive during flicker stimulation FIG0. The temporal modules, describing temporal population activity profiles, show elongated temporal periods of population activity over the 1 second intervals relative to the onset time of the stimulus class FIG0. Notably, the SbT-NMF modules are considerably sparser than the BC-SbT-NMF modules, but otherwise have somewhat similar shapes. These matrices form compact dimensionality-reduced representations of the trials: the number of coefficients per trials (10 × 10) is much smaller than the original number of population spike counts per trial (43 × 100). Visually, one can appreciate that the two trials of a class are more similar (vertical similarity) than trials between classes (horizontal similarity). In the next section, we will make this statement more formal by training a classifier on the activation coefficients and evaluating decoding performance on a separate test set. We applied an LDA decoder on single trial activation coefficients to evaluate how much information the coefficients carried about the visual stimuli. This allowed us to understand how well the decomposition methods could identify low-dimensional representations of the retinal ganglion cell activity that preserved the relevant features. To avoid overfitting, we evaluated decoding performance on a separate test set. In addition, we also used spatiotemporal PCA, ICA and orthogonal Tucker-2 to obtain other low-dimensional representations of single trial activity subject to different constraints. However, we found that average test set decoding performances of each of these methods were below that of BC SbT-NMF and SbT-NMF (see supplementary FIG0). FIG3 and b show the test set classification performance that we obtained for SbT-NMF and BC-SbT-NMF evaluated in five stimulus conditions (NND/NMM, HC, SoL and LC, see Section Methods) on a total of three retinas. Both algorithms captured more information for the movie and high contrast stimuli (NND/NNM, HC; shown in FIG3 panel a, decoding performance greater than 50% on all datasets) than for the step of light and low contrast (SoL, LC; shown in FIG3 panel b, decoding performance below 30% on all datasets) stimuli. Within these categories, sometimes SbT-NMF achieved higher decoding performance and other times, BC-SbT-NMF performed better. To understand the conditions under which one or the other decomposition method performs better, we investigated the stimulus-elicited change in firing rate from the pre-stimulus baseline. When evaluating the decoding performance as a function of this change in firing rate, we found that BC-SbT-NMF tended to perform better for high rate changes FIG3 ). We quantified this FIG3 ) by calculating the difference in decoding performance between BC-SbT-NMF and SbT-NMF for low rate changes (change < 0.7 Hz) and for high rate changes (change ≥ 0.7 Hz). This split separated the rate changes into two clusters (see FIG3).For low rate changes, we found that SbT-NMF leaned towards performance than BC-SbT-NMF. The difference in decoding performance, however, was not significant (one-tailed t-test, p = 0.0775). For high rate changes, on the other hand, we found a significant performance increase for BC-SbT-NMF compared to SbT-NMF (one-tailed t-test, p = 0.0438). 
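The rate-change split and one-tailed tests just described can be sketched as follows; the test directions are inferred from the reported trends (SbT-NMF favored below 0.7 Hz, BC-SbT-NMF above), and SciPy >= 1.6 is assumed for the alternative argument.

import numpy as np
from scipy import stats

def rate_change_ttests(perf_bc, perf_sbt, rate_change, threshold=0.7):
    """One-tailed tests on the BC-SbT-NMF minus SbT-NMF decoding-performance
    differences, split by stimulus-elicited rate change (sketch)."""
    diff = np.asarray(perf_bc) - np.asarray(perf_sbt)
    rate_change = np.asarray(rate_change)
    low = diff[rate_change < threshold]
    high = diff[rate_change >= threshold]
    p_low = stats.ttest_1samp(low, 0.0, alternative='less').pvalue     # SbT-NMF better?
    p_high = stats.ttest_1samp(high, 0.0, alternative='greater').pvalue  # BC-SbT-NMF better?
    return p_low, p_high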
In summary, these show that the BC-SbT-NMF decomposition method can achieve a significant improvement in decoding performance when there is a substantial change from pre-stimulus baseline rate to stimulus-elicited firing rate. Here we introduced a novel computational approach to decompose single trial neural population spike trains into a small set of trial-invariant spatial and temporal firing patterns and into a set of activation coefficients that characterize single trials in terms of the identified patterns. To this end, we extended space-by-time non-negative matrix factorization to discount the neuronal pre-stimulus baseline activity. Subtraction of the baseline required the introduction of signed activation coefficients into the decomposition algorithm. This extension considerable widens the scope of applicability of the algorithm as it opens the possibility to decompose data that are inherently signed. Our method inherits many the advantages of the original space-by-time NMF decomposition such as yielding low-dimensional representations of neural activity that compactly carry stimulus information from both the spatial and temporal dimension. Using non-negativity constraints for the spatial and temporal modules, we could also retain the ability of space-by-time NMF to identify a partsbased representation of the concurrent spatial and temporal firing activity of the population. The factorization into space and time further still allows the quantification of the relative importance of these different dimensions on a trial-by-trial basis. Recently, introduced another tensor decomposition algorithm with the capacity to factorize signed data. Their algorithm differs from ours in that it introduces additional constraints for the spatial and temporal modules (cluster-NMF). Our algorithm, on the other hand, introduces no additional constraints, thereby facilitating the comparison with the original space-by-time NMF algorithm. In fact, our extension actually relaxes the non-negativity constraint for the activation coefficients without giving up the parts-based representation of the spatial and temporal modules. This made it possible to pinpoint the reason for the increase in performance as the introduction of the baseline-correction. While BC-SbT-NMF outperformed SbT-NMF overall on tasks with strong baseline activity, we also found that in a few cases, SbT-NMF performed better than BC-SbT-NMF. Previous studies showed that there is an effect of the baseline firing rate on the response BID5 BID8. In these situations, the baseline might have an advantageous effect on the representation of neural responses and could lead to better decoding performance of SbT-NMF that we observed in some cases. One possibility to take this effect into account would be to devise a joint factorization-decoding framework that explicitly introduces the baseline into the optimization framework. While this is beyond the scope of the current work, we believe that development of such a framework is a promising direction for future research. In order to evaluate decoding performance, we applied LDA classification to the single trial activation coefficients to predict the stimulus identity and also to compare decoding performance of our baseline correction extension with the original space-by-time NMF decomposition. 
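A minimal sketch of this LDA decoding step, using scikit-learn; flattening each trial's coefficient matrix into a feature vector is an assumption about how the coefficients are fed to the classifier.

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def decode_stimuli(H_train, y_train, H_test, y_test):
    """LDA decoding of stimulus classes from single-trial activation coefficients.
    H_* have shape (P, L, S); each trial's (P x L) coefficient matrix is flattened,
    since the coefficients are the only trial-dependent factors."""
    X_train = H_train.reshape(-1, H_train.shape[2]).T   # (S_train, P*L)
    X_test = H_test.reshape(-1, H_test.shape[2]).T
    clf = LinearDiscriminantAnalysis()
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)                    # fraction of correctly decoded trials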
Specifically, we could show that our baseline-corrected version of space-by-time NMF increases decoding performance significantly when the difference between pre-stimulus baseline activity and stimulus-elicited rate was moderate to high. Importantly, this rate-change criterion makes it possible to select the best decomposition method (SbT-NMF vs. BC-SbT-NMF) following a simple data screening by means of the rate change. On our data, we obtained a relative difference in decoding performance on the order of 19.18% when picking the right method in this way and comparing to the inferior method. The requirement for such a rate change to perform well can be understood when considering the baseline-corrected activity. Without a substantial change from pre-stimulus to stimulus-elicited rate, most of the baseline-corrected activity will be close to zero. The Frobenius norm that is at the core of our objective function puts emphasis on high values and will be sensitive to outliers whenever most of the activity is close to zero. In this situation, our update rules are strongly affected by noise, thereby decreasing cross-validated decoding performance. In practical terms, this new method is expected to improve decoding performance when there is a large sensory-evoked response but the differences in responses across different sensory stimuli are of the order of spontaneous activity. In that case, the discounting of the spontaneous levels of firing would help to better discriminate among different stimuli based on neural responses. While the original space-by-time NMF algorithm could in principle identify spatial and temporal modules that fully account for the implicit baseline, the performance gain of our extension suggests that in practice, the original method cannot completely do so. Additional modules increase the model complexity and the number of parameters the method needs to fit, which lowers decoding performance. The discount of the baseline provides an elegant way to avoid this unnecessary complication. FIG3: L 1 -regularization on BC-SbT-NMF spatial and temporal modules obtained from step of light stimuli (SoL). (a) Decoding performance of BC-SbT-NMF regularized with 0, 1, 10, 100, 1000, 10000 and 100000 sparsity constraints. (b-h) Examples of spatial and temporal modules identified by BC-SbT-NMF regularized with λ = 0 (b), λ = 1 (c), λ = 10 (d), λ = 100 (e), λ = 1000 (f), λ = 10000 (g) and λ = 100000 (h).
We extended single-trial space-by-time tensor decomposition based on non-negative matrix factorization to efficiently discount pre-stimulus baseline activity that improves decoding performance on data with non-negligible baselines.
1,333
scitldr
We propose a framework for extreme learned image compression based on Generative Adversarial Networks (GANs), obtaining visually pleasing images at significantly lower bitrates than previous methods. This is made possible through our GAN formulation of learned compression combined with a generator/decoder which operates on the full-resolution image and is trained in combination with a multi-scale discriminator. Additionally, if a semantic label map of the original image is available, our method can fully synthesize unimportant regions in the decoded image such as streets and trees from the label map, therefore only requiring the storage of the preserved region and the semantic label map. A user study confirms that for low bitrates, our approach is preferred to state-of-the-art methods, even when they use more than double the bits. Image compression systems based on deep neural networks (DNNs), or deep compression systems for short, have become an active area of research recently. These systems (e.g. BID6 BID33) are often competitive with modern engineered codecs such as WebP (WebP), JPEG2000 BID37 ) and even BPG (Bellard) (the state-of-the-art engineered codec). Besides achieving competitive compression rates on natural images, they can be easily adapted to specific target domains such as stereo or medical images, and promise efficient processing and indexing directly from compressed representations BID41. However, deep compression systems are typically optimized for traditional distortion metrics such as peak signal-to-noise ratio (PSNR) or multi-scale structural similarity (MS-SSIM) BID44. For very low bitrates (below 0.1 bits per pixel (bpp)), where preserving the full image content becomes impossible, these distortion metrics lose significance as they favor pixel-wise preservation of local (high-entropy) structure over preserving texture and global structure. To further advance deep image compression it is therefore of great importance to develop new training objectives beyond PSNR and MS-SSIM. A promising candidate towards this goal are adversarial losses BID13 which were shown recently to capture global semantic information and local texture, yielding powerful generators that produce visually appealing high-resolution images from semantic label maps BID43.In this paper, we propose and study a generative adversarial network (GAN)-based framework for extreme image compression, targeting bitrates below 0.1 bpp. We rely on a principled GAN formulation for deep image compression that allows for different degrees of content generation. In contrast to prior works on deep image compression which applied adversarial losses to image patches for artifact suppression BID33 BID12, generation of texture details BID25, or representation learning for thumbnail images BID35, our generator/decoder operates on the full-resolution image and is trained with a multi-scale discriminator BID43.We consider two modes of operation (corresponding to unconditional and conditional GANs BID13 BID31), namely• generative compression (GC), preserving the overall image content while generating structure of different scales such as leaves of trees or windows in the facade of buildings, and • selective generative compression (SC), completely generating parts of the image from a semantic label map while preserving user-defined regions with a high degree of detail. We emphasize that GC does not require semantic label maps (neither for training, nor for deployment). 
A typical use case for GC are bandwidth constrained scenarios, where one wants to preserve the full image as well as possible, while falling back to synthesized content instead of blocky/blurry blobs for regions for which not sufficient bits are available to store the original pixels. SC could be applied in a video call scenario where one wants to fully preserve people in the video stream, but a visually pleasing synthesized serves the purpose as well as the true . In the GC operation mode the image is transformed into a bitstream and encoded using arithmetic coding. SC requires a semantic/instance label map of the original image which can be obtained using off-the-shelf semantic/instance segmentation networks, e.g., PSPNet and Mask R-CNN BID18, and which is stored as a vector graphic. This amounts to a small, image dimension-independent overhead in terms of coding cost. On the other hand, the size of the compressed image is reduced proportionally to the area which is generated from the semantic label map, typically leading to a significant overall reduction in storage cost. For GC, a comprehensive user study shows that our compression system yields visually considerably more appealing than BPG (Bellard) (the current state-of-the-art engineered compression algorithm) and the recently proposed autoencoder-based deep compression (AEDC) system. In particular, our GC models trained for compression of general natural images are preferred to BPG when BPG uses up to 95% and 124% more bits than those produced by our models on the Kodak (Kodak) and RAISE1K BID11 data set, respectively. When constraining the target domain to the street scene images of the Cityscapes data set BID9, the reconstructions of our GC models are preferred to BPG even when the latter uses up to 181% more bits. To the best of our knowledge, these are the first showing that a deep compression method outperforms BPG on the Kodak data set in a user study-and by large margins. In the SC operation mode, our system seamlessly combines preserved image content with synthesized content, even for regions that cross multiple object boundaries, while faithfully preserving the image semantics. By partially generating image content we achieve bitrate reductions of over 50% without notably degrading image quality. Deep image compression has recently emerged as an active area of research. The most popular DNN architectures for this task are to date auto-encoders BID6 BID2 BID41 and recurrent neural networks (RNNs) BID39. These DNNs transform the input image into a bit-stream, which is in turn losslessly compressed using entropy coding methods such as Huffman coding or arithmetic coding. To reduce coding rates, many deep compression systems rely on context models to capture the distribution of the bit stream BID6 BID40 BID33. Common loss functions to measure the distortion between the original and decompressed images are the mean-squared error (MSE) BID6 BID2 BID41, or perceptual metrics such as MS-SSIM BID40 BID33. Some authors rely on advanced techniques including multiscale decompositions BID33, progressive encoding/decoding strategies BID39, and generalized divisive normalization (GDN) layers BID6 a).Generative adversarial networks (GANs) BID13 have emerged as a popular technique for learning generative models for intractable distributions in an unsupervised manner. 
Despite stability issues BID34 BID3 BID28, they were shown to be capable of generating more realistic and sharper images than prior approaches and to scale to resolutions of 1024 × 1024px BID22 for some datasets. Another direction that has shown great progress are conditional GANs BID13 BID31, obtaining impressive for image-to-image translation BID43 BID27 on various datasets (e.g. maps to satellite images), reaching resolutions as high as 1024 × 2048px BID43.Arguably the most closely related work to ours is BID33, which uses an adversarial loss term to train a deep compression system. However, this loss term is applied to small image patches and its purpose is to suppress artifacts rather than to generate image content. Furthermore, it uses a non-standard GAN formulation that does not (to the best of our knowledge) have an interpretation in terms of divergences between probability distributions, as in BID13 BID32. We refer to Sec. 6.1 and Appendix A for a more detailed discussion. BID35 use a GAN framework to learn a generative model over thumbnail images, which is then used as a decoder for thumbnail image compression. Other works use adversarial training for compression artifact removal (for engineered codecs) BID12 and single image super-resolution BID25. Finally, related to our SC mode, spatially allocating bitrate based on saliency of image content has a long history in the context of engineered compression algorithms, see, e.g.,, BID36 BID15 BID16. Generative Adversarial Networks: Given a data set X, Generative Adversarial Networks (GANs) can learn to approximate its (unknown) distribution p x through a generator G(z) that tries to map samples z from a fixed prior distribution p z to the distribution p x. The generator G is trained in parallel with a discriminator D by searching (using stochastic gradient descent (SGD)) for a saddle point of a mini-max objective DISPLAYFORM0 where G and D are DNNs and f and g are scalar functions. The original paper BID13 uses the "Vanilla GAN" objective with f (y) = log(y) and g(y) = log(1 − y). This corresponds to G minimizing the Jensen-Shannon (JS) Divergence between the (empirical) distribution of x and G(z). The JS Divergence is a member of a more generic family of f -divergences, and BID32 show that for suitable choices of f and g, all such divergences can be minimized with. In particular, if one uses f (y) = (y − 1) 2 and g(y) = y 2, one obtains the LeastSquares GAN BID28 (which corresponds to the Pearson χ 2 divergence), which we adopt in this paper. We refer to the divergence minimized over G as DISPLAYFORM1 Conditional Generative Adversarial Networks: For conditional GANs (cGANs) BID13 BID31, each data point x is associated with additional information s, where (x, s) have an unknown joint distribution p x,s. We now assume that s is given and that we want to use the GAN to model the conditional distribution p x|s. In this case, both the generator G(z, s) and discriminator D(z, s) have access to the side information s, leading to the divergence DISPLAYFORM2 Deep Image Compression: To compress an image x ∈ X, we follow the formulation of BID2 where one learns an encoder E, a decoder G, and a finite quantizer q. The encoder E maps the image to a latent feature map w, whose values are then quantized to L levels {c 1, . . ., c L} ⊂ R to obtain a representationŵ = q(E(x)) that can be encoded to a bitstream. The decoder then tries to recover the image by forming a reconstructionx = G(ŵ). 
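As a concrete illustration of the quantizer q, the sketch below snaps every entry of the feature map to its nearest center; the operation is piecewise constant, which is why a differentiable relaxation (discussed next) is needed during training. The straight-through variant shown is a common workaround and an assumption here, not necessarily the relaxation the paper adopts.

import torch

def quantize(w, centers):
    """Hard nearest-center quantization w_hat = q(E(x)) (sketch).
    centers: 1-D tensor with the L quantization levels {c_1, ..., c_L}."""
    dist = (w.unsqueeze(-1) - centers.view(*([1] * w.dim()), -1)).abs()
    idx = dist.argmin(dim=-1)            # index of the nearest center per feature value
    return centers[idx]                  # piecewise constant -> zero gradient almost everywhere

def quantize_ste(w, centers):
    """Straight-through estimator: hard values forward, identity gradient backward."""
    w_hat = quantize(w, centers)
    return w + (w_hat - w).detach()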
To be able to backpropagate through the non-differentiable q, one can use a differentiable relaxation of q, as in.The average number of bits needed to encodeŵ is measured by the entropy H(ŵ), which can be modeled with a prior BID2 or a conditional probability model. The trade-off between reconstruction quality and bitrate to be optimized is then DISPLAYFORM3 where d is a loss that measures how perceptually similarx is to x. Given a differentiable estimator of the entropy H(ŵ), the weight β controls the bitrate of the model (large β pushes the bitrate down). However, since the number of dimensions dim(ŵ) and the number of levels L are finite, the entropy is bounded by (see, e.g., BID10) DISPLAYFORM4 It is therefore also valid to set β = 0 and control the maximum bitrate through the bound (i.e., adjusting L and/or dim(ŵ) through the architecture of E). While potentially leading to suboptimal bitrates, this avoids to model the entropy explicitly as a loss term. The proposed GAN framework for extreme image compression can be viewed as a combination of (conditional) GANs and learned compression. With an encoder E and quantizer q, we encode the image x to a compressed representationŵ = q(E(x)). This representation is optionally concatenated with noise v drawn from a fixed prior p v, to form the latent vector z. The decoder/generator G then tries to generate an imagê x = G(z) that is consistent with the image distribution p x while also recovering the specific encoded image x to a certain degree (see inset Fig.). Using z = [ŵ, v], this can be expressed by our saddle-point objective for (unconditional) generative compression, DISPLAYFORM0 where λ > 0 balances the distortion term against the GAN loss and entropy terms. Using this formulation, we need to encode a real image,ŵ = E(x), to be able to sample from pŵ. However, this is not a limitation as our goal is to compress real images and not to generate completely new ones. Since the last two terms of do not depend on the discriminator D, they do not affect its optimization directly. This means that the discriminator still computes the same f divergence L GAN as in, so we can write as DISPLAYFORM1 We note that equation has completely different dynamics than a normal GAN, because the latent space z containsŵ, which stores information about a real image x. A crucial ingredient is the bitrate limitation on H(ŵ). If we allowŵ to contain arbitrarily many bits by setting β = 0 and letting L and dim(ŵ) be large enough, E and G could learn to near-losslessly recover x from G(z) = G(q(E(x))), such that the distortion term would vanish. In this case, the divergence between p x and p G(z) would also vanish and the GAN loss would have no effect. By constraining the entropy ofŵ, E and G will never be able to make d fully vanish. In this case, E, G need to balance the GAN objective L GAN and the distortion term λE[d(x, G(z))], which leads to G(z) on one hand looking "realistic", and on the other hand preserving the original image. For example, if there is a tree for which E cannot afford to store the exact texture (and make d small) G can synthesize it to satisfy L GAN, instead of showing a blurry green blob. In the extreme case where the bitrate becomes zero (i.e., H(ŵ) → 0, e.g., by setting β = ∞ or dim(ŵ) = 0),ŵ becomes deterministic. In this setting, z is random and independent of x (through the v component) and the objective reduces to a standard GAN plus the distortion term, which acts as a regularizer. 
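A minimal sketch of the resulting training losses, written with the least-squares choices f(y) = (y − 1)² and g(y) = y² from Sec. 3 and an MSE distortion term; the sign conventions follow the common LSGAN implementation rather than the paper's exact equation, and the entropy term is dropped since β = 0 is used below.

import torch.nn.functional as F

def d_loss(D, x, x_hat):
    """Least-squares discriminator loss: real images pushed towards 1, reconstructions towards 0."""
    return ((D(x) - 1) ** 2).mean() + (D(x_hat) ** 2).mean()

def eg_loss(D, x, x_hat, lam=10.0):
    """Encoder/generator loss: least-squares GAN term plus weighted distortion d(x, x_hat).
    lam follows the lambda = 10 MSE coefficient quoted in the training details."""
    gan_term = ((D(x_hat) - 1) ** 2).mean()
    distortion = F.mse_loss(x_hat, x)
    return gan_term + lam * distortion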
We refer to the setting in as generative compression (GC), where E, G balance reconstruction and generation automatically over the image. As for the conditional GANs described in Sec. 3, we can easily extend GC to a conditional case. Here, we also consider this setting, where the additional information s for an image x is a semantic label map of the scene, but with a twist: Instead of feeding the semantics to E, G and D, we only give them to the discriminator D during training. This means that no semantics are needed to encode or decode images with the trained models (since E, G do not depend on s). We refer to this setting as GC (D +). For GC and its conditional variant described in the previous section, E, G automatically navigate the trade-off between generation and preservation over the entire image, without any guidance. Here, we consider a different setting, where we guide the network in terms of which regions should be preserved and which regions should be synthesized. We refer to this setting as selective generative compression (SC) (an overview of the network structure is given in Fig. 8 in Appendix C).For simplicity, we consider a binary setting, where we construct a single-channel binary heatmap m of the same spatial dimensions asŵ. Regions of zeros correspond to regions that should be fully synthesized, whereas regions of ones correspond to regions that should be preserved. However, since our task is compression, we constrain the fully synthesized regions to have the same semantics s as the original image x. We assume the semantics s are separately stored, and thus feed them through a feature extractor F before feeding them to the generator G. To guide the network with the semantics, we mask the (pixel-wise) distortion d, such that it is only computed over the region to be preserved. Additionally, we zero out the compressed representationŵ in the regions that should be synthesized. Provided that the heatmap m is also stored, we then only encode the entries ofŵ corresponding to the preserved regions, greatly reducing the bitrate needed to store it. At bitrates whereŵ is normally much larger than the storage cost for s and m (about 2kB per image when encoded as a vector graphic), this approach can in large bitrate savings. We consider two different training modes: Random instance (RI) which randomly selects 25% of the instances in the semantic label map and preserves these, and random box (RB) which picks an image location uniformly at random and preserves a box of random dimensions. While the RI mode is appropriate for most use cases, the RB can create more challenging situations for the generator as it needs to integrate the preserved box seamlessly into the generated content. The architecture for our encoder E and generator G is based on the global generator network proposed in BID43, which in turn is based on the architecture of BID21. We present details in Appendix C.For the entropy term βH(ŵ), we adopt the simplified approach described in Sec. 3, where we set β = 0, use L = 5 centers C = {−2, 1, 0, 1, 2}, and control the bitrate through the upper bound DISPLAYFORM0 For example, for GC, with C = 2 channels, we obtain 0.0181bpp. 2 We note that this is an upper bound; the actual entropy of H(ŵ) is generally smaller, since the learned distribution will neither be uniform nor i.i.d, which would be required for the bound to hold with equality. 
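The quoted 0.0181 bpp can be recovered from the bound H(ŵ) ≤ dim(ŵ)·log₂(L) if one assumes the encoder downsamples the image by a factor of 16 in each spatial dimension, so that dim(ŵ) = (W/16)·(H/16)·C; this downsampling factor is an assumption consistent with the quoted number, not stated in this excerpt.

import math

def max_bpp(channels, levels=5, downsample=16):
    """Upper bound on bits per pixel: dim(w_hat) * log2(L) / (W * H)."""
    return channels * math.log2(levels) / downsample ** 2

print(max_bpp(2))   # -> 0.0181..., matching the figure quoted for C = 2
print(max_bpp(4))   # -> 0.0363 bpp bound for a C = 4 bottleneck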
When encoding the channels ofŵ to a bit-stream, we use an arithmetic encoder where frequencies are stored for each channel separately and then encode them in a static (non-adaptive) 1 If we assume s is an unknown function of x, another view is that we feed additional features (s) to D. Ours, 0.035bpp, 21.8dB BPG, 0.039bpp, 26.0dB MSE bl., 0.035bpp, 24.0dB DISPLAYFORM1 Figure 2: Visual example of images produced by our GC network with C = 4 along with the corresponding for BPG, and a baseline model with the same architecture (C = 4) but trained for MSE only (MSE bl.), on Cityscapes. The reconstruction of our GC network is sharper and has more realistic texture than those of BPG and the MSE baseline, even though the latter two have higher PSNR (indicated in dB for each image) than our GC network. In particular, the MSE baseline produces blurry reconstructions even though it was trained on the Cityscapes data set, demonstrating that domain-specific training alone is not enough to obtain sharp reconstructions at low bitrates.manner, similar to BID2. In our experiments, this leads to 8.8% smaller bitrates compared to the upper bound. By using a context model and adaptive arithmetic encoding, we could reduce the bitrate further, either in a post processing step (as in BID33 BID6), or jointly during training (as in)-which led to ≈ 10% savings in these prior works. For the distortion term we adopt d(x,x) = MSE(x,x) with coefficient λ = 10. Furthermore, we adopt the feature matching and VGG perceptual losses, L FM and L VGG, as proposed in BID43 with the same weights, which improved the quality for images synthesized from semantic label maps. These losses can be viewed as a part of d(x,x). However, we do not mask them in SC, since they also help to stabilize the GAN in this operation mode (as in BID43). We refer to Appendix D for training details. Data sets: We train GC models (without semantic label maps) for compression of diverse natural images using 188k images from the Open Images data set BID24 and evaluate them on the widely used Kodak image compression data set (Kodak) as well as 20 randomly selected images from the RAISE1K data set BID11. To investigate the benefits of having a somewhat constrained application domain and semantic information at training time, we also train GC models with semantic label maps on the Cityscapes data set BID9, using 20 randomly selected images from the validation set for evaluation. To evaluate the proposed SC method (which requires semantic label maps for training and deployment) we again rely on the Cityscapes data set. Cityscapes was previously used to generate images form semantic label maps using GANs.Baselines: We compare our method to the HEVC-based image compression algorithm BPG (Bellard) (in the 4:2:2 chroma format) and to the AEDC network from. BPG is the current state-of-the-art engineered image compression codec and outperforms other recent codecs such as JPEG2000 and WebP on different data sets in terms of PSNR (see, e.g.). We train the AEDC network (with bottleneck depth C = 4) on Cityscapes exactly following the procedure in ) except that we use early stopping to prevent overfitting (note that Cityscapes is much smaller than the ImageNet dataset used in). The so-obtained model has a bitrate of 0.07 bpp and gets a slightly better MS-SSIM than BPG at the same bpp on the validation set. 
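The gap between the log2(L) upper bound and the bitrate actually achieved by the static per-channel arithmetic coder comes from the non-uniform symbol statistics. The sketch below estimates the per-channel entropy that such a coder can exploit; it ignores the (small) arithmetic-coding overhead and the cost of storing the frequency tables, and the synthetic symbol distribution is only for illustration.

import numpy as np

def per_channel_entropy(symbols):
    """symbols: int array of shape (N, H, W, C) with values in {0, ..., L-1}.
    Returns the average bits/symbol a static per-channel entropy coder would need."""
    n, h, w, c = symbols.shape
    bits = 0.0
    for ch in range(c):
        counts = np.bincount(symbols[..., ch].ravel())
        p = counts[counts > 0] / counts.sum()
        bits += -(p * np.log2(p)).sum()
    return bits / c   # average bits per symbol across channels

# Synthetic, non-uniform symbols over L = 5 centers:
rng = np.random.default_rng(0)
syms = rng.choice(5, size=(10, 32, 64, 4), p=[0.1, 0.2, 0.4, 0.2, 0.1])
print(per_channel_entropy(syms), "vs. uniform bound", np.log2(5))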
To investigate the effect of the GAN term in our total loss, we train a baseline model with an MSE loss only (with the same architecture as GC and the same training parameters, see Sec. D in the Appendix), referred to as "MSE baseline".Ours 0.0341bpp BPG 0.102bpp Ours 0.0339bpp BPG 0.0382bpp Figure 3: Visual example of images from RAISE1k produced by our GC network with C = 4 along with the corresponding for BPG.User study: In the extreme compression regime realized by our GC models, where texture and sometimes even more abstract image content is synthesized, common reconstruction quality measures such as PSNR and MS-SSIM arguably lose significance as they penalize changes in local structure rather than assessing preservation of the global image content (this also becomes apparent by comparing reconstructions produced by our GC model with those obtained by the MSE baseline and BPG, see Fig. 2). Indeed, measuring PSNR between synthesized and real texture patches essentially quantifies the variance of the texture rather than the visual quality of the synthesized texture. To quantitatively evaluate the perceptual quality of our GC models in comparison with BPG and AEDC (for Cityscapes) we therefore conduct a user study using Amazon Mechanical Turk (AMT). 3 We consider two GC models with C = 4, 8 trained on Open Images, three GC (D +) models with C = 2, 4, 8 trained on Cityscapes, and BPG at rates ranging from 0.045 to 0.12 bpp. Questionnaires are composed by combining the reconstructions produced by the selected GC model for all testing images with the corresponding reconstruction produced by the competing baseline model side-byside (presenting the reconstructions in random order). The original image is shown along with the reconstructions, and the pairwise comparisons are interleaved with 3 probing comparisons of an additional uncompressed image from the respective testing set with an obviously JPEG-compressed version of that image. 20 randomly selected unique users are asked to indicate their preference for each pair of reconstructions in the questionnaire, ing in a total of 480 ratings per pairing of methods for Kodak, and 400 ratings for RAISE1K and Cityscapes. For each pairing of methods, we report the mean preference score as well as the standard error (SE) of the per-user mean preference percentages. Only users correctly identifying the original image in all probing comparisons are taken into account for the mean preference percentage computation. To facilitate comparisons for future works, we will release all images used in the user studies. Semantic quality of SC models: The issues with PSNR and MS-SSIM for evaluating the quality of generated content described in the previous paragraph become even more severe for SC models as a large fraction of the image content is generated from a semantic label map. Following image translation works BID43, we therefore measure the capacity of our SC models to preserve the image semantics in the synthesized regions and plausibly blend them with the preserved regions-the objective SC models are actually trained for. Specifically, we use PSPNet BID46 and compute the mean intersection-over-union (IoU) between the label map obtained for the decompressed validation images and the ground truth label map. For reference we also report this metric for baselines that do not use semantic label maps for training and/or deployment. 6.1 GENERATIVE COMPRESSION Fig. 
4 shows the mean preference percentage obtained by our GC models compared to BPG at different rates, on the Kodak and the RAISE1K data set. In addition, we report the mean preference percentage for GC models compared to BPG and AEDC on Cityscapes. Example validation images for side-by-side comparison of our method with BPG for images from the Kodak, RAISE1K, and Cityscapes data set can be found in Figs. 1, 3, and 2, respectively. Furthermore, we perform extensive visual comparisons of all our methods and the baselines, presented in Appendix F. Figure 4: User study evaluating our GC models on Kodak, RAISE1K (top) and Cityscapes (bottom). For Kodak and RAISE1K, we use GC models trained on Open Images, without any semantic label maps. For Cityscapes, we used GC (D +), using semantic label maps only for D and only during training. The standard error is computed over per-user mean preference percentages. The blue arrows visualize how many more bits BPG uses when > 50% users still prefer our . Our GC models with C = 4 are preferred to BPG even when images produced by BPG use 95% and 124% more bits than those produced by our models for Kodak and RAISE1K, respectively. Notably this is achieved even though there is a distribution shift between the training and testing set (recall that these GC models are trained on Open Images). The gains of domain-specificity and semantic label maps (for training) becomes apparent from the on Cityscapes: Our GC models with C = 2 are preferred to BPG even when the latter uses 181% more bits. For C = 4 the gains on Cityscapes are comparable to those obtained for GC on RAISE1K. For all three data sets, BPG requires between 21 and 49% more bits than our GC models with C = 8. The GC models produce images with much finer detail than BPG, which suffers from smoothed patches and blocking artifacts. In particular, the GC models convincingly reconstruct texture in natural objects such as trees, water, and sky, and is most challenged with scenes involving humans. AEDC and the MSE baseline both produce blurry images. We see that the gains of our models are maximal at extreme bitrates, with BPG needing 95-181% more bits for the C = 2, 4 models on the three datasets. For C = 8 gains are smaller but still very large (BPG needing 21-49% more bits). This is expected, since as the bitrate increases the classical compression measures (PSNR/MS-SSIM) become more meaningful-and our system does not employ the full complexity of current state-of-the-art systems, as discussed next. State-of-the-art on Kodak: We give an overview of relevant recent learned compression methods and their differences to our GC method and BPG in Table 1 in the Appendix. BID33 also used GANs (albeit a different formulation) and were state-of-the-art in MS-SSIM in 2017, while the concurrent work of is the current state-of-the-art in image compression in terms of classical metrics (PSNR and MS-SSIM) when measured on the Kodak dataset (Kodak). Notably, all methods except ours (BPG, Rippel et al., and Minnen et al.) employ adaptive arithmetic coding using context models for improved compression performance. Such models could also be implemented for our system, and have led to additional savings of 10% in. Since Rippel et al. and Minnen et al. have only released a selection of their decoded images (for 3 and 4, respectively, out of the 24 Kodak images), and at significantly higher bitrates, a comparison with a user study is not meaningful. Instead, we try to qualitatively put our into context with theirs. In Figs. 
12-14 in the Appendix, we compare qualitatively to BID33. We can observe that even though BID33 use 29-179% more bits, our models produce images of comparable or better quality. In FIG1, we show a qualitative comparison of our to the images provided by the concurrent work of, as well as to BPG (Bellard) on those images. First, we see that BPG is still visually competitive with the current state-of-the-art, which is consistent with moderate 8.41% bitrate savings being reported by in terms of PSNR. Second, even though we use much fewer bits compared to the example images available from, for some of them (Figs. 15 and 16) our method can still produce images of comparable visual quality. Given the dramatic bitrate savings we achieve according to the user study (BPG needing 21-181% more bits), and the competitiveness of BPG to the most recent state-of-the-art, we conclude that our proposed system presents a significant step forward for visually pleasing compression at extreme bitrates. Sampling the compressed representations: In FIG1 we explore the representation learned by our GC models (with C = 4), by sampling the (discrete) latent space ofŵ. When we sample uniformly, and decode with our GC model into images, we obtain a "soup of image patches" which reflects the domain the models were trained on (e.g. street sign and building patches on Cityscapes). Note that we should not expect these outputs to look like normal images, since nothing forces the encoder outputŵ to be uniformly distributed over the discrete latent space. However, given the low dimensionality ofŵ (32 × 64 × 4 for 512 × 1024px Cityscape images), it would be interesting to try to learn the true distribution. To this end, we perform a simple experiment and train an improved Wasserstein GAN (WGAN-GP) BID14 onŵ extracted from Cityscapes, using default parameters and a ResNet architecture. 4 By feeding our GC model with samples from the WGAN-GP generator, we easily obtain a powerful generative model, which generates sharp 1024 × 512px images from scratch. We think this could be a promising direction for building high-resolution generative models. In FIG5 in the Appendix, we show more samples, and samples obtained by feeding the MSE baseline with uniform and learned code samples. The latter yields noisier "patch soups" and much blurrier image samples than our GC network. Figure 6: Mean IoU as a function of bpp on the Cityscapes validation set for our GC and SC networks, and for the MSE baseline. We show both SC modes: RI (inst.), RB (box). D + annotates models where instance semantic label maps are fed to the discriminator (only during training); EDG + indicates that semantic label maps are used both for training and deployment. The pix2pixHD baseline BID43 was trained from scratch for 50 epochs, using the same downsampled 1024 × 512px training images as for our method. The heatmaps in the lower left corners show the synthesized parts in gray. We show the bpp of each image as well as the relative savings due to the selective generation. In FIG2 we present example Cityscapes validation images produced by the SC network trained in the RI mode with C = 8, where different semantic classes are preserved. More visual for the SC networks trained on Cityscapes can be found in Appendix F.7, including obtained for the RB operation mode and by using semantic label maps estimated from the input image via PSPNet.Discussion: The quantitative evaluation of the semantic preservation capacity (Fig. 
6) reveals that the SC networks preserve the semantics somewhat better than pix2pixHD, indicating that the SC networks faithfully generate texture from the label maps and plausibly combine generated with preserved image content. The mIoU of BPG, AEDC, and the MSE baseline is considerably lower than that obtained by our SC and GC models, which can arguably be attributed to blurring and blocking artifacts. However, it is not surprising as these baseline methods do not use label maps during training and prediction. In the SC operation mode, our networks manage to seamlessly merge preserved and generated image content both when preserving object instances and boxes crossing object boundaries (see Appendix F.7). Further, our networks lead to reductions in bpp of 50% and more compared to the same networks without synthesis, while leaving the visual quality essentially unimpaired, when objects with repetitive structure are synthesized (such as trees, streets, and sky). In some cases, the visual quality is even better than that of BPG at the same bitrate. The visual quality of more complex synthesized objects (e.g. buildings, people) is worse. However, this is a limitation of current GAN technology rather than our approach. As the visual quality of GANs improves further, SC networks will as well. Notably, the SC networks can generate entire images from the semantic label map only. Finally, the semantic label map, which requires 0.036 bpp on average for the downscaled 1024 × 512px Cityscapes images, represents a relatively large overhead compared to the storage cost of the preserved image parts. This cost vanishes as the image size increases, since the semantic mask can be stored as an image dimension-independent vector graphic. We proposed and evaluated a GAN-based framework for learned compression that significantly outperforms prior works for low bitrates in terms of visual quality, for compression of natural images. Furthermore, we demonstrated that constraining the application domain to street scene images leads to additional storage savings, and we explored combining synthesized with preserved image content with the potential to achieve even larger savings. Interesting directions for future work are to develop a mechanism for controlling spatial allocation of bits for GC (e.g. to achieve better preservation of faces; possibly using semantic label maps), and to combine SC with saliency information to determine what regions to preserve. In addition, the sampling experiments presented in Sec. 6.1 indicate that combining our GC compression approach with GANs to (unconditionally) generate compressed representations is a promising avenue to learn high-resolution generative models. When encoding the channels ofŵ to a bit-stream, we use an arithmetic encoder where frequencies are stored for each channel separately and then encode them in a static (non-adaptive) manner, similar to BID2. In our experiments, this leads to 8.8% smaller bitrates compared to the upper bound. We compress the semantic label map for SC by quantizing the coordinates in the vector graphic to the image grid and encoding coordinates relative to preceding coordinates when traversing object boundaries (rather than relative to the image frame). The so-obtained bitstream is then compressed using arithmetic coding. To ensure fair comparison, we do not count header sizes for any of the baseline methods throughout. 
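The semantic-preservation metric discussed earlier in this section, the mean IoU between the PSPNet segmentation of the reconstruction and the ground-truth label map, can be computed as below. This is a generic mIoU sketch; skipping classes that are absent from both maps is one possible convention, as the exact averaging used by the authors is not specified in the text.

import numpy as np

def mean_iou(pred, gt, num_classes):
    """pred, gt: integer label maps of the same shape
    (e.g. PSPNet output for the decompressed image vs. the ground-truth map)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))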
For the GC, the encoder E convolutionally processes the image x and optionally the label map s, with spatial dimension W × H, into a feature map of size W /16 × H /16 × 960 (with 6 layers, of which four have 2-strided convolutions), which is then projected down to C channels (where C ∈ {2, 4, 8} is much smaller than 960). This in a feature map w of dimension W /16 × H /16 × C, which is quantized over L centers to obtain the discreteŵ. The generator G projectsŵ up to 960 channels, processes these with 9 residual units BID17 at dimension W /16 × H /16 × 960, and then mirrors E by convolutionally processing the features back to spatial dimensions W × H (with transposed convolutions instead of strided ones).Similar to E, the feature extractor F for SC processes the semantic map s down to the spatial dimension ofŵ, which is then concatenated toŵ for generation. In this case, we consider slightly higher bitrates and downscale by 8× instead of 16× in the encoder E, such that dim(ŵ) = W /8 × H /8 × C. The generator then first processesŵ down to W /16 × H /16 × 960 and then proceeds as for GC.For both GC and SC, we use the multi-scale architecture of BID43 for the discriminator D, which measures the divergence between p x and p G(z) both locally and globally. We adopt the notation from BID43 to describe our encoder and generator/decoder architectures and additionally use q to denote the quantization layer (see Sec. 3 for details). The output of q is encoded and stored.• Encoder GC: c7s1-60,d120,d240,d480,d960,c3s1-C,q• Encoders SC:-Semantic label map encoder: c7s1-60,d120,d240,d480,d960-Image encoder: c7s1-60,d120,d240,d480,c3s1-C,q,c3s1-480,d960The outputs of the semantic label map encoder and the image encoder are concatenated and fed to the generator/decoder.• Generator/decoder: c3s1-960,R960,R960,R960,R960,R960,R960,R960, R960, R960,u480,u240,u120,u60,c7s1- DISPLAYFORM0 Figure 8: Structure of the proposed SC network. E is the encoder for the image x and the semantic label map s. q quantizes the latent code w toŵ. The subsampled heatmap multipliesŵ (pointwise) for spatial bit allocation. G is the generator/decoder, producing the decompressed imagex, and D is the discriminator used for adversarial training. F extracts features from s. We employ the ADAM optimizer BID23 ) with a learning rate of 0.0002 and set the mini-batch size to 1. Our networks are trained for 150000 iterations on Cityscapes and for 280000 iterations on Open Images. For normalization we used instance normalization BID42, except in the second half of the Open Images training, we train the generator/decoder with fixed batch statistics (as implemented in the test mode of batch normalization BID19), since we found this reduced artifacts and color shift. To train GC models (which do not require semantic label maps, neither during training nor for deployment) for compression of diverse natural images, we use 200k images sampled randomly from the Open Images data set BID24 ) (9M images). The training images are rescaled so that the longer side has length 768px, and images for which rescaling does not in at least 1.25× downscaling as well as high saturation images (average S > 0.9 or V > 0.8 in HSV color space) are discarded (ing in an effective training set size of 188k). We evaluate these models on the Kodak image compression dataset (Kodak) (24 images, 768 × 512px), which has a long tradition in the image compression literature and is still the most frequently used dataset for comparisons of learned image compression methods. 
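The Open Images preprocessing described above (rescale the longer side to 768 px, discard images that would be downscaled by less than 1.25x, and discard high-saturation images with mean S > 0.9 or mean V > 0.8 in HSV) can be sketched as follows. Details such as the resampling filter and whether the HSV statistics are computed before or after rescaling are assumptions of this sketch.

from PIL import Image
import numpy as np

def preprocess(path, target_long_side=768, min_downscale=1.25):
    img = Image.open(path).convert("RGB")
    long_side = max(img.size)
    if long_side < min_downscale * target_long_side:
        return None                        # rescaling would not downscale by at least 1.25x
    scale = target_long_side / long_side
    img = img.resize((round(img.size[0] * scale), round(img.size[1] * scale)), Image.BICUBIC)
    hsv = np.asarray(img.convert("HSV"), dtype=np.float32) / 255.0
    if hsv[..., 1].mean() > 0.9 or hsv[..., 2].mean() > 0.8:
        return None                        # discard high-saturation / high-value images
    return img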
Additionally, we evaluate our GC models on 20 randomly selected images from the RAISE1K data set BID11, a real-world image dataset consisting of 8156 high-resolution RAW images (we rescale the images such that the longer side has length 768px). To investigate the benefits of having a somewhat constrained application domain and semantic labels at training time, we also train GC models with semantic label maps on the Cityscapes data set BID9 (2975 training and 500 validation images, 34 classes, 2048 × 1024px resolution) consisting of street scene images and evaluate it on 20 randomly selected validation images (without semantic labels). Both training and validation images are rescaled to 1024 × 512px resolution. To evaluate the proposed SC method (which requires semantic label maps for training and deployment) we again rely on the Cityscapes data set. Cityscapes was previously used to generate images form semantic label maps using GANs. The preprocessing for SC is the same as for GC. In the following Sections, F.1, F.2, F.3, we show the first five images of each of the three datasets we used for the user study, next to the outputs of BPG at similar bitrates. Secs. F.4 and F.5 provide visual comparisons of our GC models with BID33 and, respectively, on a subset of images form the Kodak data set. In Section F.6, we show visualizations of the latent representation of our GC models. Finally, Section F.7 presents additional visual for SC. Figure 12: Our model loses more texture but has less artifacts on the knob. Overall, it looks comparable to the output of BID33, using significantly fewer bits. Ours, 0.0651bpp Rippel et al., 0.0840bpp (+29%) Figure 13: Notice that compared to BID33, our model produces smoother lines at the jaw and a smoother hat, but proides a worse reconstruction of the eye. Ours, 0.0668bpp Rippel et al., 0.0928bpp (+39%) Figure 14: Notice that our model produces much better sky and grass textures than BID33, and also preserves the texture of the light tower more faithfully. GC (C = 4) MSE (C = 4) Figure 20: We train the same architecture with C = 4 for MSE and for generative compression on Cityscapes. When uniformly sampling the (discrete) latent spaceŵ of the models, we see stark differences between the decoded images G(ŵ). The GC model produces patches that resemble parts of Cityscapes images (street signs, buildings, etc.), whereas the MSE model outputs looks like low-frequency noise. We experiment with learning the distribution ofŵ = E(x) by training an improved Wasserstein GAN BID14. When sampling form the decoder/generator G of our model by feeding it with samples from the improved WGAN generator, we obtain much sharper images than when we do the same with an MSE model. Reconstructions obtained by our SC network using semantic label maps estimated from the input image via PSPNet. We collect here additional for the discussion with the reviewers, so that they are easily found. We will integrate these into the paper. In Table 2 we compute the PSNR on the Cityscapes test set, when varying the entropy constraint (i.e. changing C), and the two extremes (a) when MSE is only optimized and (b) when the GAN loss is only optimized. The first three rows shows as the entropy constraint is relaxed, the network can more easily optimize the distortion term leading to a higher PSNR. The fourth row shows that when optimizing for MSE only (see FIG2 for a qualitative example) we obtain superior PSNR (but at the expense of visual quality with blurry images). 
The last row shows that when the distortion losses are turned off (λ = 0), the network does not optimize reconstruction at all. Here we observe that the GAN "collapses" and outputs repetitive textures (see FIG1), suggesting that the distortion losses are crucial for the stability of training. In FIG2 we show the loss curves when training our GC, C = 8 model on OpenImages BID24. We note that the loss fluctuates heavily across iterations due to the small batch size (one), but the smoothed losses are stable. For all our experiments, both on Cityscapes and OpenImages, we kept the weights of the losses and the ratio between discriminator/generator iterations constant, and at no point did our (GC and SC) models collapse during training for either dataset.

Table 2: Effect of the GAN loss, the distortion losses and the entropy constraint on the PSNR (in dB) of the trained model on the Cityscapes dataset.
  Our GC, C = 2, H(ŵ) < 0.018 bpp: 21.46
  Our GC, C = 4, H(ŵ) < 0.036 bpp: 23.17
  Our GC, C = 8, H(ŵ) < 0.072 bpp: 24.93
  MSE bl., C = 4, H(ŵ) < 0.036 bpp: 25.91
  GC, λ = 0, C = 8, H(ŵ) < 0.072 bpp: 11.65

Figure 27: Convergence plots for the distortion losses from training our GC (C = 8) model on OpenImages.
GAN-based extreme image compression method using less than half the bits of the SOTA engineered codec while preserving visual quality
1,334
scitldr
In learning to rank, one is interested in optimising the global ordering of a list of items according to their utility for users. Popular approaches learn a scoring function that scores items individually (i.e. without the context of other items in the list) by optimising a pointwise, pairwise or listwise loss. The list is then sorted in the descending order of the scores. Possible interactions between items present in the same list are taken into account in the training phase at the loss level. However, during inference, items are scored individually, and possible interactions between them are not considered. In this paper, we propose a context-aware neural network model that learns item scores by applying a self-attention mechanism. The relevance of a given item is thus determined in the context of all other items present in the list, both in training and in inference. Finally, we empirically demonstrate significant performance gains of the self-attention based neural architecture over Multi-Layer Perceptron baselines. This effect is consistent across popular pointwise, pairwise and listwise losses on datasets with both implicit and explicit relevance feedback. Learning to rank (LTR) is an important area of machine learning research, lying at the core of many information retrieval (IR) systems. It arises in numerous industrial applications like search engines, recommender systems, question-answering systems, and others. A typical machine learning solution to the LTR problem involves learning a scoring function, which assigns real-valued scores to each item of a given list, based on a dataset of item features and human-curated or implicit (e.g. clickthrough logs) relevance labels. Items are then sorted in the descending order of scores. Performance of the trained scoring function is usually evaluated using an IR metric like Mean Reciprocal Rank (MRR), Normalised Discounted Cumulative Gain (NDCG) or Mean Average Precision (MAP). In contrast to other classic machine learning problems like classification or regression, the main goal of a ranking algorithm is to determine relative preference among a group of items. Scoring items individually is only a proxy for the actual learning to rank task. Users' preference for a given item on a list depends on other items present in the same list: an otherwise preferable item might become less relevant in the presence of other, more relevant items. Common learning to rank algorithms attempt to model such inter-item dependencies at the loss level. That is, items in a list are still scored individually, but the effect of their interactions on evaluation metrics is accounted for in the loss function, which usually takes the form of a pairwise (RankNet, LambdaLoss) or a listwise (ListNet, ListMLE) objective. For example, in LambdaMART the gradient of the pairwise loss is rescaled by the change in NDCG of the list which would occur if a pair of items was swapped (a reference NDCG implementation is sketched below).
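Since NDCG is the metric used throughout the paper, a small reference implementation helps fix conventions. The sketch below uses the common exponential-gain formulation of DCG; whether the paper uses this gain or a linear gain, and how it treats lists with no relevant items (here: NDCG set to 0), are assumptions of this sketch.

import numpy as np

def dcg_at_k(relevances, k):
    rel = np.asarray(relevances, dtype=float)[:k]
    gains = 2.0 ** rel - 1.0                              # exponential gain
    discounts = np.log2(np.arange(2, rel.size + 2))       # log2(rank + 1)
    return float((gains / discounts).sum())

def ndcg_at_k(scores, labels, k=5):
    """NDCG@k for one list: sort labels by model scores, normalise by the ideal ordering."""
    order = np.argsort(scores)[::-1]
    ideal = dcg_at_k(sorted(labels, reverse=True), k)
    return dcg_at_k(np.asarray(labels)[order], k) / ideal if ideal > 0 else 0.0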
Pointwise objectives, on the other hand, do not take such dependencies into account. In this work, we propose a learnable, context-aware, self-attention based scoring function, which allows for modelling of interitem dependencies not only at the loss level but also in the computation of items' scores. Self-attention is a mechanism first introduced in the context of natural language processing. Unlike RNNs, it does not process the input items sequentially but allows the model to attend to different parts of the input regardless of their distance from the currently processed item. We adapt the Transformer, a popular self-attention based neural machine translation architecture, to the ranking task. We demonstrate that the obtained ranking model significantly improves performance over Multi-Layer Perceptron (MLP) baselines across a range of pointwise, pairwise and listwise ranking losses. Evaluation is conducted on MSLR-WEB30K, the benchmark LTR dataset with multi-level relevance judgements, as well as on clickthrough data coming from Allegro.pl, a large-scale e-commerce search engine. We provide an open-source Pytorch implementation of our self-attentive context-aware ranker available at url_removed. The rest of the paper is organised as follows. In Section 2 we review related work. In Section 3 we formulate the problem solved in this work. In Section 4 we describe our self-attentive ranking model. Experimental and their discussion are presented in Section 5. In Section 6 we conduct an ablation study of various hyperparameters of our model. Finally, a summary of our work is given in Section 7. Learning to rank has been extensively studied and there is a plethora of resources available on classic pointwise, pairwise and listwise approaches. We refer the reader to for the overview of the most popular methods. What the majority of LTR methods have in common is that their scoring functions score items individually. Inter-item dependencies are (if at all) taken into account at the loss level only. Previous attempts at modelling context of other items in a list in the scoring function include: • a pairwise scoring function and Groupwise Scoring Function (GSF), which incorporates the former work as its special case. However, the proposed GSF method simply concatenates feature vectors of multiple items and passes them through an MLP. To desensitize the model to the order of concatenated items, Monte-Carlo sampling is used, which yields an unscalable algorithm, • a seq2slate model uses an RNN combined with a variant of Pointer Networks in an encoder-decoder type architecture to both encode items in a context-aware fashion and then produce the optimal list by selecting items one-by-one. Authors evaluate their approach only on clickthrough data (both real and simulated from WEB30K). A similar, simpler approach known as Deep Listwise Context Model (DLCM) was proposed in: an RNN is used to encode a set of items for re-ranking, followed by a single decoding step with attention, • in, authors attempt to capture inter-item dependencies by adding so-called delta features which represent how different given item is from items surrounding it in the list. It can be seen as a simplified version of a local self-attention mechanism. Authors evaluate their approach on proprietary search logs only, • authors of formulate the problem of re-ranking of a list of items as that of a whole-list generation. 
They introduce ListCVAE, a variant of Conditional Variational Auto-Encoder which learns the joint distribution of items in a list conditioned on users' relevance feedback and uses it to directly generate a ranked list of items. Authors claim NDCG unfairly favours greedy ranking methods and thus do not use that metric in their evaluation, • similarly to our approach, Pei et al. use the self-attention mechanism to model inter-item dependencies. Their approach, however, was not evaluated on a standard WEB30K dataset and the only loss functions considered was ListNet. Our proposed solution to the problem of context-aware ranking makes use of the self-attention mechanism. It was first introduced as intra-attention in and received more attention after the introduction of the Transformer architecture. Our model can be seen as a special case of the encoder part of the Transformer. We compare the proposed approach with those of the aforementioned methods which provided an evaluation on WEB30K in terms of NDCG@5. These include GSF of and DLCM of. We outperform both competing methods. In this section, we formulate problem at hand in learning to rank setting. Let X be the training set. It consists of pairs (x, y) of a list x of d f -dimensional real-valued vectors x i together with a list y of their relevance labels y i (multi-level or binary). Note that lists x in the training set may be of varying length. The goal is to find a scoring function f which maximises an IR metric of choice (e.g. NDCG) on the test set. Since IR metrics are rank based (thus, nondifferentiable), the scoring function f is trained to minimise the average of a surrogate loss l over the training data. while controlling for overfitting (e.g. by using dropout in the neural network based scoring function f or adding L 1 or L 2 penalty term to the loss function l ). Thus, two crucial choices one needs to make when proposing a learning to rank algorithm are that of a scoring function f and loss function l. As discussed earlier, typically, f scores elements x i ∈ x individually to produce scores f (x i), which are then input to loss function l together with ground truth labels y i. In subsequent sections, we describe our construction of context-aware scoring function f which is able to model interactions between items x i in a list x. Our model is generic enough to be applicable with any of standard pointwise, pairwise or listwise loss. We thus experiment with a variety of popular ranking losses l. In this section, we describe the architecture of our self-attention based ranking model. We modify the Transformer architecture to work in the ranking setting and obtain a scoring function which, when scoring a single item, takes into account all other items present in the same list. The key component of our model is the self-attention mechanism introduced in. The attention mechanism can be described as taking the query vector and pairs of key and value vectors as input and producing a vector output. The output of the attention mechanism for a given query is a weighted sum of the value vectors, where weights represent how relevant to the query is the key of the corresponding value vector. Self-attention is a variant of attention in which query, key and value vectors are all the same -in our case, they are vector representations of items in the list. 
The goal of the self-attention mechanism is to compute a new, higher-level representation for each item in a list, by taking a weighted sum over all items in a list according to weights representing the relevance of these items to the query item. There are many ways in which one may compute the relevance of key vectors to query vectors. We use the variant of selfattention known as Scaled Dot-Product Attention. Suppose Q is a d model -dimensional matrix representing all items (queries) in the list. Let K and V be the keys and values matrices, respectively. Then As described in, it is beneficial to perform the self-attention operation multiple times and concatenate the outputs. To avoid growing the size of the ing output vector, matrices Q, K and V are first linearly projected H times to Each of H computations of linear projection of Q, K, V, followed by self-attention mechanism is referred to as a single attention head. Note that each head has its own learnable projection matrices. The outputs of each head are concatenated and once again linearly projected, usually to the vector space of the same dimension as that of input matrix Q. Similarly to the Transformer, our model also uses multiple attention heads. Transformer architecture was designed to solve a neural machine translation (NMT) task. In NMT, the order of input tokens should be taken into account. Unlike RNNs, self-attention based encoder has no way of discerning the order of input tokens. Authors of the original Transformer paper proposed to solve the problem by using either fixed or learnable positional encodings. The ranking problem can be viewed as either ordering a set of (unordered) items or as re-ranking, where the input list has already been sorted according to a weak ranking model. In the former case, the use of positional encodings is not needed. In the latter, they may boost model's performance. We experiment with both ranking and re-ranking settings and when positional encodings are used, we test the fixed encodings variant 1. Details can be found in Section 5. We adapt the Transformer model to the ranking setting as follows. Items on a list are treated as tokens and item features as input token embeddings. We denote the length of an input list as l and the number of features as d f. Each item is first passed through a shared fully connected layer of size d f c. Next, hidden representations are passed through an encoder part of Transformer architecture with N encoder blocks, H heads and hidden dimension d h. Recall that an encoder block in the Transformer consists of a multi-head attention layer with a skip-connection to the input, followed by layer normalisation, time-distributed feed-forward layer and another skip connection followed by layer normalisation. Dropout is applied before performing summation in residual blocks. Finally, after N encoder blocks, a fully-connected layer shared across all items in the list is used to compute a score for each item. The model can be seen as an encoder part of the Transformer with extra linear projection on the input. By using self-attention in the encoder, we ensure that in the computation of a score of a given item, hidden representation of all other items were accounted for. Obtained scores, together with ground truth labels, can provide input to any ranking loss of choice. If the loss is a differentiable function of scores (and thus, of model's parameters), one can use SGD to optimise it. 
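A compact sketch of the pieces just described, Scaled Dot-Product Attention and the context-aware scoring function (shared input projection, N encoder blocks with H heads, shared per-item output layer), is given below in PyTorch (the paper states that a PyTorch implementation is released, but this is not that code). The hyperparameter values, the use of nn.TransformerEncoderLayer as a stand-in for the encoder block, and the omission of padding masks and positional encodings are all simplifications and assumptions of this sketch.

import torch
import torch.nn as nn

def scaled_dot_product_attention(Q, K, V):
    # relevance of each key to each query, scaled by sqrt(d_k) as in the Transformer
    scores = Q @ K.transpose(-2, -1) / K.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ V

class ContextAwareRanker(nn.Module):
    """Shared input projection -> N Transformer encoder blocks (self-attention over
    all items in the list) -> shared linear head giving one score per item."""
    def __init__(self, n_features, d_fc=128, n_heads=2, n_blocks=4, d_ff=256, dropout=0.4):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_fc)
        block = nn.TransformerEncoderLayer(d_model=d_fc, nhead=n_heads,
                                           dim_feedforward=d_ff, dropout=dropout,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=n_blocks)
        self.score = nn.Linear(d_fc, 1)

    def forward(self, x):                   # x: (batch, list_length, n_features)
        h = self.encoder(self.input_proj(x))
        return self.score(h).squeeze(-1)    # (batch, list_length) item scores

# scores = ContextAwareRanker(n_features=136)(torch.randn(8, 30, 136))  # e.g. WEB30K features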
We thus obtain a general, context-aware model for scoring items on a list that can readily be used with any differentiable ranking loss. Learning to rank datasets come in two flavours: they can have either multi-level or binary relevance labels. Usually, multi-level relevance 1 We found learnable positional encodings to yield similar . labels are human-curated, whereas binary labels are derived from clickthrough logs and are considered implicit feedback. We evaluate our context-aware ranker on both types of data. For the first type, we use the popular WEB30K dataset, which consists of more than 30,000 queries together with lists of associated search . Every search is encoded as a 136-dimensional real-valued vector and has associated with it a relevance label on the scale from 0 (irrelevant) to 4 (most relevant). We standardise the features before inputting them into a learning algorithm. The dataset comes partitioned into five folds with roughly the same number of queries per fold. We perform 5-fold cross-validation by training our models on three folds, validating on one and testing on the final fold. All reported are averages across five folds together with the standard deviation of . Since lists in the dataset are of unequal length, we pad or subsample to equal length for training, but use full length (i.e. pad to maximum length present in the dataset) for validation and testing. For a dataset with binary labels, we use clickthrough logs of Allegro.pl, a large scale e-commerce search engine. The search engine already has a ranking model deployed, which is trained using XGBoost with rank:pairwise loss. We thus treat learning on this dataset as a re-ranking problem and use fixed positional encodings in context-aware scoring functions. This lets the models leverage items' positions returned by the base ranker. The search logs consist of 1M lists, each of length at most 60. Nearly all lists (95%) have only one relevant item with label 1; remaining items were not clicked and are deemed irrelevant (label 0). Each item in a list is represented by a 45-dimensional, real-valued vector. We do not perform cross-validation on this set, but we use the usual train, validation and test splits of the data. To evaluate the performance of the proposed context-aware ranking model, we use several popular ranking losses. Pointwise losses used are RMSE of predicted scores and ordinal loss (with minor modification to make it suitable for ranking). For pairwise losses, we use NDCGLoss 2++ (one of the losses of LambdaLoss framework) and its special cases, RankNet and LambdaRank. Listwise losses used consist of ListNet and ListMLE. Below, we briefly describe all of the losses used. For a more thorough treatment, please refer to the original papers. Throughout, X denotes the training set, x denotes an input list of items, s = f (x) is a vector of scores obtained via the ranking function f and y is the vector of ground truth relevancy labels. The simplest baseline is a pointwise loss, in which no interaction between items is taken into account. We use RMSE loss: In practice, we used sigmoid activation function on the outputs of the scoring function f and rescaled them by multiplying by maximum relevance value (e.g. 4 for WEB30K). The self-attentive scoring function was modified to return four outputs and each output was passed through a sigmoid activation function. Thus, each neuron of the output predicts a single relevancy level, but by the reformulation of ground truth, their relative order is maintained, i.e. 
if, say, label 2 is predicted, label 1 should be predicted as well (although it is not strictly enforced and model is allowed to predict label 2 without predicting label 1). The final loss value is the mean of binary cross-entropy losses for each relevancy level. During inference, the outputs of all output neurons are summed to produce the final score of an item. LambdaLoss, RankNet and LambdaRank. We used NDCGLoss2++ of, formulated as follows: where and H (π |s) is a hard assignment distribution of permutations, i.e. H (π |s) = 1 and H (π |s) = 0 for all π π whereπ is the permutation in which all items are sorted by decreasing scores s. Fixed parameter µ is set to 10.0. By removing the exponent in l(s, y) formula we obtain the RankNet loss function, weighing each score pair identically. Similarly, we may obtain differently weighted RankNet variants by changing the formula in the exponent. To obtain a LambdaRank formula, replace the exponent with ListNet loss is given by the following formula: In binary version, softmax of ground truth y is omitted for singleclick lists and replaced with normalisation by the number of clicks for multiple-click lists. ListMLE is given by: where and y(i) is the index of object which is ranked at position i. We train both our context-aware ranking models and MLP models on both datasets, using all loss functions discussed in Section 5.2 2. We also train XGBoost models with rank:pairwise loss similar to the production model of the e-commerce search engine for both datasets. Hyperparameters of all models (number of encoder blocks, number of attention heads, dropout, etc.) are tuned on the validation set of Fold 1 for each loss separately. MLP models are constructed to have a similar number of parameters to context-aware ranking models. For optimisation of neural network models, we use Adam optimiser with the learning rate tuned separately for each model. Details of hyperparameters used can be found in Appendix A. In Section 6 we provide an ablation study of the effect of various hyperparameters on the model's performance. On WEB30K, models' performance is evaluated using NDCG@5 3, which is the usual metric reported for this dataset. Results are reported in Table 1. On e-commerce search logs, we report a relative percentage increase in NDCG@60 over production XGBoost model, presented in Table 2. We observe consistent and significant performance improvement of the proposed self-attention based model over MLP baseline across all types of loss functions considered. In particular, for ListNet we observe a 7.9% performance improvement over MLP baseline on WEB30K. Note also that the best performing MLP model is outperformed even by the worst-performing self-attention based model on both datasets. We thus observe that incorporating context-awareness into the model architecture has a more pronounced effect on the performance of the model than varying the underlying loss function. Surprisingly, ordinal loss outperforms more established and better-studied losses like ListNet, ListMLE or NDCGLoss 2++ on multi-level relevancy data. Another surprising finding is a good performance of models trained with RMSE loss, especially as compared to models trained to optimise RankNet and ListMLE. For comparison with the current state-ofthe-art, we provide on WEB30K reported in other works in Table 3. For models with multiple variants, we cite the best reported in the original work. In all tables, boldface is the best value column-wise. 
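For reference, minimal PyTorch versions of the losses that can be written down unambiguously are sketched below: the ordinal loss as reformulated above, top-1 ListNet, and ListMLE (both in their standard definitions). NDCGLoss 2++ and the LambdaRank weighting are omitted because their exact form cannot be recovered from this text; the mean-over-lists reduction and the logits-based formulation are assumptions of this sketch.

import torch
import torch.nn.functional as F

def ordinal_loss(logits, labels, n_levels=4):
    """Ordinal reformulation described above: output neuron k predicts 1[label > k];
    the loss is the mean binary cross-entropy over levels (inference score = sum of sigmoids)."""
    targets = torch.stack([(labels > k).float() for k in range(n_levels)], dim=-1)
    return F.binary_cross_entropy_with_logits(logits, targets)

def listnet_loss(scores, labels):
    """Top-1 ListNet: cross-entropy between softmax of (float) labels and softmax of scores."""
    return -(F.softmax(labels, dim=-1) * F.log_softmax(scores, dim=-1)).sum(-1).mean()

def listmle_loss(scores, labels):
    """ListMLE: negative log-likelihood of the ground-truth ordering under the
    Plackett-Luce model induced by the predicted scores."""
    order = labels.argsort(dim=-1, descending=True)               # ground-truth ranking y(i)
    s = scores.gather(-1, order)
    log_denom = torch.logcumsumexp(s.flip(-1), dim=-1).flip(-1)   # logsumexp over items ranked at i or below
    return (log_denom - s).sum(-1).mean()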
All experiments on WEB30K described above were conducted in the ranking setting -input lists of items were treated as unordered, thus positional encoding was not used. To verify the effect of positional encoding on the model's performance, we conduct the following experiments on WEB30K. To avoid information leak, training data 4 is divided into five folds and five XGBoost models are trained, each on four folds. Each model predicts scores for the remaining fold, and the entire dataset is sorted according to these scores. Finally, we train the same models 5 as earlier on the sorted dataset, but use fixed positional encoding. Results are presented in Table 4. As expected, the models are able to learn positional information and demonstrates improved performance over the plain ranking setting. To gauge the effect of various hyperparameters of self-attention based ranker on its performance, we performed the following ablation study. We trained the context-aware ranker with the ordinal loss on Fold 1 of WEB30K dataset and experimented with a different number N of encoder blocks, H attention heads, length l of longest list used in training, dropout rate p dr op and size d h of hidden dimension. Results are summarised in Table 5. Baseline model (i.e. the best performing context-aware ranker trained with ordinal loss) had the following values of hyperparameters: N = 4, H = 2, l = 240, p dr op = 0.4 and d h = 512. We observe that a high value of dropout is essential to prevent overfitting. Even though it is better to use multiple attention heads as opposed to a single attention head, using too many in performance degradation. Notice that increasing hidden dimension yields better performance than one reported in Table 1, however, this comes at a price of a large increase in the number of parameters and thus longer training times. Finally, stacking multiple encoder blocks increases performance. However, we did not test the effect of stacking more than 4 encoder blocks due to GPU memory constraints. In this work, we addressed the problem of constructing a contextaware scoring function for learning to rank. We adapted the selfattention based Transformer architecture from the neural machine translation literature to propose a new type of scoring function for LTR. We demonstrated considerable performance gains of proposed neural architecture over MLP baselines across different losses and types of data, both in ranking and re-ranking setting. These experiments provide strong evidence that the gains are due to the ability of the model to score items simultaneously. As a of our empirical study, we observed the strong performance of models trained to optimise ordinal loss function. Such models outperformed models trained with well-studied losses like LambdaLoss or LambdaMART, which were previously shown to provide tight bounds on IR metrics like NDCG. On the other hand, we observed the surprisingly poor performance of models trained to optimise RankNet and ListMLE losses. In future work, we plan to investigate the reasons for both good and poor performance of the aforementioned losses, in particular, the relation between ordinal loss and NDCG. Above we provide hyperparameters used for all models reported in Table 1. Models trained on WEB30K were trained for 100 epochs with the learning rate decayed by 0.1 halfway through the training. On e-commerce search logs, we trained the models for 10 epochs and decayed the learning rate by 0.1 after 5-th epoch. 
The meaning of the columns in Table 6 is as follows: d f c is the dimension of the linear projection done on the input data before passing it to the context-aware ranker, N is the number of encoder blocks, H is the number of attention heads, d h is the hidden dimension used throughout computations in encoder blocks, lr is the learning rate, p drop is the dropout probability and l is the list length (lists of items were either padded or subsampled to that length). The last column shows the number of learnable parameters of the model. In Table 7, Hidden dimensions column gives dimensions of subsequent layers of MLP models. The remaining columns have the same meaning as in the previous table.
Learning to rank using the Transformer architecture.
1,335
scitldr
We propose an active learning algorithmic architecture, capable of organizing its learning process in order to achieve a field of complex tasks by learning sequences of primitive motor policies: Socially Guided Intrinsic Motivation with Procedure Babbling (SGIM-PB). The learner can generalize over its experience to continuously learn new outcomes, by choosing actively what and how to learn guided by empirical measures of its own progress. In this paper, we are considering the learning of a set of interrelated complex outcomes hierarchically organized. We introduce a new framework called "procedures", which enables the autonomous discovery of how to combine previously learned skills in order to learn increasingly more complex motor policies (combinations of primitive motor policies). Our architecture can actively decide which outcome to focus on and which exploration strategy to apply. Those strategies could be autonomous exploration, or active social guidance, where it relies on the expertise of a human teacher providing demonstrations at the learner's request. We show on a simulated environment that our new architecture is capable of tackling the learning of complex motor policies, to adapt the complexity of its policies to the task at hand. We also show that our "procedures" increases the agent's capability to learn complex tasks. Recently, efforts in the robotic industry and academic field have been made for integrating robots in previously human only environments. In such a context, the ability for service robots to continuously learn new skills, autonomously or guided by their human counterparts, has become necessary. They would be needed to carry out multiple tasks, especially in open environments, which is still an ongoing challenge in robotic learning. Those tasks can be independent and self-contained but they can also be complex and interrelated, needing to combine learned skills from simpler tasks to be tackled efficiently. The range of tasks those robots need to learn can be wide and even change after the deployment of the robot, we are therefore taking inspiration from the field of developmental psychology to give the robot the ability to learn. Taking a developmental robotic approach BID13, we combine the approaches of active motor skill learning of multiple tasks, interactive learning and strategical learning into a new learning algorithm and we show its capability to learn a mapping between a continuous space of parametrized outcomes (sometimes referred to as tasks) and a space of parametrized motor policies (sometimes referred to as actions). Classical techniques based on Reinforcement Learning BID19 BID21 still need an engineer to manually design a reward function for each particular task. Intrinsic motivation, which triggers curiosity in human beings according to developmental psychology BID8, inspired knowledge-based algorithms BID17. The external reward is replaced by a surprise function: the learner is driven by unexpected outcomes BID4. However those approaches encounter limitations when the sensorimotor dimensionality increases. Another approach using competence progress measures successfully drove the learner's exploration through goal-babbling BID2 BID20.However when the dimension of the outcome space increases, these methods become less efficient BID3 due to the curse of dimensionality, or when the reachable space of the robot is small compared to its environment. 
To enable robots to learn a wide range of tasks, and even an infinite number of tasks defined in a continuous space, heuristics such as social guidance can help by driving its exploration towards interesting and reachable space fast. Combining intrinsically motivated learning and imitation BID16 has bootstrapped exploration by providing efficient human demonstrations of motor policies and outcomes. Also, such a learner has been shown to be more efficient if requesting actively a human for help when needed instead of being passive, both from the learner or the teacher perspective BID6. This approach is called interactive learning and it enables a learner to benefit from both local exploration and learning from demonstration. Information could be provided to the robot using external reinforcement signals , actions BID11, advice operators BID0, or disambiguation among actions BID7. Another advantage of introducing imitation learning techniques is to include non-robotic experts in the learning process BID7. One of the key element of these hybrid approaches is to choose when to request human information or learn in autonomy such as to diminish the teacher's attendance. Approaches enabling the learner to choose either what to learn (which outcome to focus on) or how to learn (which strategy to use such as imitation) are called strategic learning BID12. They aim at giving an autonomous learner the capability to self-organize its learning process. Some work has been done to enable a learner to choose on which task space to focus. The SAGG-RIAC algorithm BID2, by self-generating goal outcomes fulfills this objective. Other approaches focused on giving the learner the ability to change its strategy BID1 and showed it could be more efficient than each strategy taken alone. Fewer studies have been made to enable a learner to choose both its strategy and its target outcome. The problem was introduced and studied in BID12, and was implemented for an infinite number of outcomes and policies in continuous spaces by the SGIM-ACTS algorithm BID14. This algorithm is capable of organizing its learning process, by choosing actively both which strategy to use and which outcome to focus on. It relies on the empirical evaluation of its learning progress. It could choose among autonomous exploration driven by intrinsic motivation and low-level imitation of one of the available human teachers to learn more efficiently. It showed its potential to learn on a real high dimensional robot a set of hierarchically organized tasks BID9, which is why we consider it to learn complex motor policies. In this article, we tackle the learning of complex motor policies, which we define as combinations of simpler policies. We describe it more concretely as a sequence of primitive policies. A first approach to learning complex motor policies is to use via-points BID21. Via-points enable the definition of complex motor policies. Those via-points are in the robot motor policy space. When increasing the size of the complex policy (by chaining more primitive actions together), we can tackle more complex tasks. However, this would increase the difficulty for the learner to tackle simpler tasks which would be reachable using less complex policies. Enabling the learner to decide autonomously the complexity of the policy necessary to solve a task would allow the approach to be adaptive, and suitable to a greater number of problems. 
Options BID22 are a different way to tackle this problem by offering temporally abstract actions to the learner. However, each option is built to reach one particular task, and they have only been tested for discrete tasks and actions, in which a small number of options were used, whereas our proposed learner must be able to create an unlimited number of complex policies. As we aim at learning a hierarchical set of interrelated complex tasks, our algorithm could also benefit from this task hierarchy (as BID10 did for learning tool use with simple primitive policies only), and try to reuse previously acquired skills to build more complex ones. BID5 showed that building complex actions made of lower-level actions according to the task hierarchy can bootstrap exploration by reaching interesting outcomes more rapidly. In this paper, we would like to enable a robot learner to achieve a wide range of tasks that can be interrelated and complex. We allow the robot to use sequences of actions of undetermined length to achieve these tasks. We propose an algorithm for the robot to learn which sequence of actions to use to achieve any task in the infinite field of tasks. The learning algorithm has to face the unlearnability of infinite task and policy spaces, and the curse of dimensionality of sequences of high-dimensional policy spaces. Thus, we extend SGIM-ACTS so as to learn complex motor policies of unlimited size. We develop a new framework called "procedures" (see Section 2.2) which proposes to combine known policies according to their outcomes. Combining these, we developed a new algorithm called Socially Guided Intrinsic Motivation with Procedure Babbling (SGIM-PB), capable of taking task hierarchy into account to learn a set of complex interrelated tasks using adapted complex policies. We will describe an experiment on which we have tested our algorithm, and we will present and analyze the results. Inspired by developmental psychology, we combine interactive learning and autonomous exploration in a strategic learner, whose learning process is driven by intrinsic motivation. This learner also takes task hierarchy into account to reuse its previously learned skills while adapting the complexity of its policy to the complexity of the task at hand. In this section, we formalize our learning problem and explain the principles of SGIM-PB. In our approach, an agent can perform motions through the use of policies π_θ, parametrized by θ ∈ Π, and those policies have an effect on the environment, which we call the outcome ω ∈ Ω. The agent is then to learn the mapping between Π and Ω: it learns to predict the outcome ω of each policy π_θ (the forward model M), but more importantly, it learns which policy to choose for reaching any particular outcome (an inverse model L). The outcomes ω can be of different dimensionality and are thus split into subspaces Ω_i ⊂ Ω. The policies consist not of a single primitive but of a succession of primitives (each encoded by parameters in Π) that are executed sequentially by the agent. Hence, policies are also of different dimensionality and are split into policy spaces Π_i ⊂ Π (where i corresponds to the number of primitives used in each policy of Π_i). Complex policies are represented by concatenating the parameters of each of their primitive policies in execution order. We take a trial-and-error approach and suppose that Ω is a metric space, meaning the learner has a means of evaluating a distance between two outcomes d(ω_1, ω_2).
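To make the role of the inverse model L concrete, the following is a minimal sketch (not the authors' implementation) of a memory-based inverse model: the learner stores every (policy parameters, outcome) pair it has experienced and, for a requested goal outcome, returns the stored policy whose outcome is closest under the distance d. The class and function names and the use of plain Euclidean distance are assumptions for illustration.

```python
import numpy as np

class MemoryInverseModel:
    """Memory-based inverse model: maps goal outcomes to stored policies."""

    def __init__(self):
        self.policies = []  # list of policy parameter vectors theta
        self.outcomes = []  # list of outcome vectors omega (same index)

    def add(self, theta, omega):
        # Store one experienced (policy, outcome) pair.
        self.policies.append(np.asarray(theta, dtype=float))
        self.outcomes.append(np.asarray(omega, dtype=float))

    def nearest(self, omega_goal):
        # Return the stored policy whose outcome is closest to the goal.
        omega_goal = np.asarray(omega_goal, dtype=float)
        dists = [np.linalg.norm(omega - omega_goal) for omega in self.outcomes]
        best = int(np.argmin(dists))
        return self.policies[best], self.outcomes[best], dists[best]

# Usage: store a few toy experiences, then query a goal outcome.
model = MemoryInverseModel()
rng = np.random.default_rng(0)
for _ in range(100):
    theta = rng.normal(size=4)                      # toy policy parameters
    omega = theta[:2] + 0.1 * rng.normal(size=2)    # toy outcome
    model.add(theta, omega)
theta_best, omega_best, dist = model.nearest([0.5, -0.2])
```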
As this algorithm tackles the learning of complex hierarchically organized tasks, exploring and exploiting this hierarchy could ease the learning of the more complex tasks. We define procedures as a way to encourage the robot to reuse previously learned skills and chain them to build more complex ones. More formally, a procedure is built by choosing previously known outcomes (t_1, t_2, ..., t_n ∈ Ω) and is noted (t_1, t_2, ..., t_n). Executing a procedure (t_1, t_2, ..., t_n) means building the complex policy π_θ corresponding to the succession of the policies π_θi, i ∈ [1, n] (each potentially complex as well) and executing it, where each π_θi is the policy that best reaches the corresponding t_i. As the subtasks t_i are generally unknown to the learner, the procedure is updated before execution (see Algo. 1): each t_i is replaced by the closest outcome t'_i known by the learner according to its current skill set, and the corresponding policies π_θ1 to π_θn are executed in sequence. When the agent selects a procedure to be executed, the procedure is only a way to build the complex policy that will actually be executed, so the agent does not check whether the subtasks are actually reached when executing a procedure. Algorithm 1 (Procedure modification before execution): replace each subtask t_i of the procedure by the closest outcome t'_i already reached by the learner, retrieve the corresponding policies, and concatenate them into the complex policy to execute. The SGIM-PB algorithm (see Algo. 2, FIG0) learns by episodes, in which an outcome ω_g ∈ Ω to target and an exploration strategy σ have been selected. It is an extension of SGIM-ACTS BID14 that can perform complex motor policies and size-2 procedures (sequences of 2 subtasks only). It uses the same interest model and memory-based inverse model. In an episode under the policy space exploration strategy, the learner tries to optimize the policy π_θ to produce ω_g by choosing between random exploration of policies and local optimization, following the SAGG-RIAC algorithm BID2 (Goal-Directed Policy Optimization(ω_g)). Local optimization uses local linear regression. This is a slightly modified version of the SGIM-ACTS autonomous exploration strategy. In an episode under the procedural space exploration strategy, the learner builds a size-2 procedure (t_i, t_j) so as to best reproduce the goal outcome ω_g (Goal-Directed Procedure Optimization(ω_g)). It chooses either random exploration of procedures (which builds procedures by generating two subtasks at random) when the goal outcome is far from any previously reached one, or local procedure optimization, which optimizes a procedure using local linear regression. The procedure built is then modified and executed, following Algo. 1. In an episode under the mimicry of a policy teacher strategy, the learner requests a demonstration ξ_d from the chosen teacher. ξ_d is selected by the teacher as the closest to the goal outcome ω_g in its demonstration repertoire. The learner then repeats the demonstrated policy (Mimic Policy(ξ_d)). This strategy is also directly available in the SGIM-ACTS algorithm. In an episode under the mimicry of a procedural teacher strategy, the learner requests a procedural demonstration of size 2, (t_di, t_dj), which is built by the chosen teacher according to a preset function depending on the target outcome ω_g. Then the learner tries to reproduce the demonstrated procedure by refining and executing it, following Algo. 1 (Mimic Procedure(t_di, t_dj)). In both autonomous exploration strategies, the learner uses a method, Goal-Directed Optimization, to optimize its input parameters (the procedure for procedure exploration and the policy for policy exploration) to best reach ω_g.
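Below is a small illustrative sketch of the procedure refinement step described above (Algo. 1), reusing the hypothetical MemoryInverseModel from the earlier snippet: each requested subtask is replaced by the nearest outcome the learner has already reached, and the corresponding policy parameters are concatenated into one complex policy. This is only a sketch of the idea, not the authors' code.

```python
import numpy as np

def refine_and_build(procedure, memory):
    """Replace each subtask by the closest known outcome and
    concatenate the corresponding (possibly complex) policies."""
    refined_subtasks = []
    policy_parts = []
    for t_i in procedure:
        theta_i, t_prime_i, _ = memory.nearest(t_i)  # closest reached outcome
        refined_subtasks.append(t_prime_i)
        policy_parts.append(theta_i)
    # The complex policy is the concatenation of the retrieved parameters,
    # in execution order.
    complex_policy = np.concatenate(policy_parts)
    return refined_subtasks, complex_policy

# Usage with the toy memory built earlier:
# subtasks, theta = refine_and_build([[0.5, -0.2], [0.1, 0.3]], model)
```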
This generic method works similarly in both cases: it creates random inputs if the goal outcome ω_g is far from any previously reached one, or performs local optimization using linear regression otherwise. After each episode, the learner stores the policies and modified procedures executed, along with their reached outcomes, in its episodic memory. It computes its competence in reaching the goal outcome ω_g by computing the distance d(ω_r, ω_g) with the outcome ω_r it actually reached. Then it updates its interest model by computing the interest interest(ω, σ) of the goal outcome and of each outcome reached (including outcome spaces reached but not targeted): interest(ω, σ) = p(ω) / K(σ), where K(σ) is the cost of the strategy used and the progress p(ω) is the derivative of the competence. The learning agent then uses these interest measures to partition the outcome space Ω into regions of high and low interest. For each strategy σ, the outcomes reached and the goal are added to their partition region. Once a fixed number of interest measures has been recorded in a region, it is partitioned into 2 subregions so as to maximise the difference in interest between the 2 subregions. The method used is detailed in BID15. Thus, the learning agent discovers by itself how to organise its learning process and partition its task space into unreachable regions, easy regions and difficult regions, based on empirical measures of competence and interest. The choice of strategy and goal outcome is based on the empirical progress measured in each region R_n of the outcome space Ω. ω_g and σ are chosen stochastically (with probabilities p_1, p_2, p_3 respectively), by one of the following sampling modes: • mode 1: choose σ and ω_g ∈ Ω at random; • mode 2: choose an outcome region R_n and a strategy σ with a probability proportional to its interest value, then generate ω_g ∈ R_n at random; • mode 3: choose σ and R_n as in mode 2, but generate a goal ω_g ∈ R_n close to the outcome with the highest measure of progress. When the learner computes nearest neighbours to select policies or procedures to optimize (when choosing local optimization in either of the autonomous exploration strategies, and when refining procedures), it actually uses a performance metric that takes into account the complexity of the chosen policy: perf = d(ω, ω_g) γ^n, where d(ω, ω_g) is the normalized Euclidean distance between the target outcome ω_g and the outcome ω reached by the policy, γ is a constant and n is the size of the policy (the number of primitives chained). In this section, we have formalized the problem of learning an inverse model between an infinite space of outcomes and an infinite space of policies. We have introduced the framework of procedures to allow the learning agent to learn sequences of primitive policies as task compositions. We have then proposed SGIM-PB as a learning algorithm that leverages goal-babbling for autonomous exploration, sequences to learn complex policies, and social guidance to bootstrap the learning. SGIM-PB learns to reach an ensemble of outcomes by mapping them to policies, but also to subgoal outcomes. The formalization and algorithmic architecture proposed are general and can apply to a wide range of problems.
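The following sketch illustrates the kind of interest-based stochastic selection of strategy and goal region described above. The progress-over-cost form of the interest measure and the simplified sampling details are assumptions for illustration, not the exact implementation; the mode probabilities reuse the values reported later for the experiment.

```python
import numpy as np

def interest(progress, strategy_cost):
    # Interest of an outcome under a strategy: progress discounted by the strategy cost.
    return progress / strategy_cost

def choose_goal_and_strategy(regions, rng, p_modes=(0.15, 0.65, 0.2)):
    """regions: list of dicts with keys 'interest', 'strategies', 'sample_goal'."""
    mode = rng.choice(3, p=p_modes)
    if mode == 0:
        # mode 1: random region, strategy and goal
        region = regions[rng.integers(len(regions))]
        strategy = region["strategies"][rng.integers(len(region["strategies"]))]
        return region["sample_goal"](rng, near_best_progress=False), strategy
    # modes 2 and 3: pick a region proportionally to its interest value
    weights = np.array([r["interest"] for r in regions], dtype=float)
    weights = weights / weights.sum()
    region = regions[rng.choice(len(regions), p=weights)]
    strategy = region["strategies"][rng.integers(len(region["strategies"]))]
    goal = region["sample_goal"](rng, near_best_progress=(mode == 2))  # index 2 = mode 3
    return goal, strategy

# Usage with a single toy region:
rng = np.random.default_rng(5)
toy_region = {
    "interest": interest(progress=0.3, strategy_cost=5.0),
    "strategies": ["autonomous_policy", "mimic_teacher"],
    "sample_goal": lambda rng, near_best_progress: rng.uniform(-1, 1, size=3),
}
goal, strategy = choose_goal_and_strategy([toy_region], rng)
```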
The requirements for an experimental setup are: • to define the primitive policies of the robot; • to define the different outcomes the user is interested in, by defining the variables from the sensors needed and a rough range of their values (we do not need a precise estimation, as the algorithm is robust to overestimations of these ranges, see Nguyen & Oudeyer); • a measure for the robot to assess its own performance, such as a distance, as in all intrinsic-motivation-based algorithms. This measure is used as an internal reward function. Contrary to classical reinforcement learning problems, this reward function is not fine-tuned to the specific goal at hand, but is a generic function for all the goals in the outcome space; • the environment and robot can reset to an initial state, as in most reinforcement learning algorithms. In this study, we designed an experiment with a simulated robotic arm, which can move in its environment and interact with objects in it. It can learn an infinite number of tasks, grouped into 6 hierarchically organized types of tasks. The robot is capable of performing complex policies of unrestricted size (i.e. consisting of any number of primitives), with primitive policies that are highly redundant and of high dimensionality. FIG1 shows the environmental setup (contained in a cube delimited by (x, y, z) ∈ [−1, 1]^3). The learning agent is a planar robotic arm of 3 joints with the base centred on the horizontal plane, able to rotate freely around the vertical axis (each link has a length of 0.33) and change its vertical position. The robot can grab objects in this environment by hovering its arm tip (blue in FIG1) close to them; the tip position is noted (x_0, y_0, z_0). The robot can interact with: • Floor (below z = 0.0): limits the motions of the robot; it is slightly elastic, which enables the robot to go down to z = −0.2 by forcing on it; • Pen: can be moved around and draws on the floor; it breaks if forced too much onto the floor (when z ≤ −0.3); • Joystick 1 (the left one on the figure): can be moved inside a cube-shaped area (automatically released otherwise, position normalized for this area); its x-axis position controls a video-game character's x position on the screen when grabbed by the robot; • Joystick 2 (the right one on the figure): can be moved inside a cube-shaped area (automatically released otherwise, position normalized for this area); its y-axis position controls the video-game character's y position on the screen when grabbed by the robot; • Video-game character: can be moved on the screen by using the two joysticks; its position is refreshed only at the end of a primitive policy execution for the manipulated joystick. Figure 2: Experimental setup: a robotic arm can interact with the different objects in its environment (a pen and two joysticks). Both joysticks enable the control of a video-game character (represented in the top-right corner). A grey floor limits its motions and can be drawn upon using the pen (a possible drawing is represented). The robot grabber can only handle one object. When it touches a second object, it breaks, releasing both objects. The robot always starts from the same position before executing a policy, and primitives are executed sequentially without getting back to this initial position. Whole complex policies are recorded with their outcomes, and each step of the complex policy execution is also recorded.
The motion of each of the three joints of the robot is encoded using a one-dimensional Dynamic Movement Primitive BID18, defined by the system: τ v̇ = K(g − x) − Dv − K(g − x_0)s + Kf(s), τ ẋ = v, τ ṡ = −αs, where x and v are the position and velocity of the system; s is the phase of the motion; x_0 and g are the starting and end positions of the motion; τ is a factor used to temporally scale the system (set to fix the length of a primitive execution); K and D are the spring constant and damping term, fixed for the whole experiment; α is also a constant fixed for the experiment; and f is a non-linear term used to shape the trajectory, called the forcing term. This forcing term is defined as: f(s) = (Σ_i ω_i ψ_i(s) s) / (Σ_i ψ_i(s)), where ψ_i(s) = exp(−h_i (s − c_i)^2) with centers c_i and widths h_i fixed for all primitives. There are 3 weights ω_i per DMP. The weights of the forcing term and the end positions are the only parameters of the DMP used by the robot. The starting position of a primitive is set by either the initial position of the robot (if it is starting a new complex policy) or the end position of the preceding primitive. The robot can also set its position on the vertical axis z for every primitive. Therefore a primitive policy π_θ is parametrized by θ = (a_0, a_1, a_2, z), where a_i = (ω_0^(i), ω_1^(i), ω_2^(i), g^(i)) corresponds to the DMP parameters of joint i, ordered from base to tip, and z is the fixed vertical position. When combining two or more primitive policies (π_θ0, π_θ1, ...) into a complex policy π_θ, the parameters (θ_0, θ_1, ...) are simply concatenated together from the first primitive to the last. The outcome subspaces the robot learns to reach are hierarchically organized and defined as: • Ω_0: the position (x_0, y_0, z_0) of the end effector of the robot in Cartesian coordinates at the end of a policy execution; • Ω_1: the position (x_1, y_1, z_1) of the pen at the end of a policy execution if the pen is grabbed by the robot; • Ω_2: the first (x_a, y_a) and last (x_b, y_b) points of the last drawn continuous line on the floor if the pen is functional (x_a, y_a, x_b, y_b); • Ω_3: the position (x_3, y_3, z_3) of the first joystick at the end of a policy execution if it is grabbed by the robot; • Ω_4: the position (x_4, y_4, z_4) of the second joystick at the end of a policy execution if it is grabbed by the robot; • Ω_5: the position (x_5, y_5) of the video-game character at the end of a policy execution if it was moved. To help the SGIM-PB learner, procedural teachers were available, providing procedures for every outcome subspace but Ω_0. Each teacher only gave procedures useful for its own outcome space and was aware of the task representation. They all had a cost of 5.
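To make the DMP parametrization more tangible, here is a small numerical sketch of rolling out one such one-dimensional primitive by Euler integration. The exact transformation-system form, gains and step size are assumptions for illustration; only the role of the parameters (3 forcing-term weights plus the end position g) mirrors the description above.

```python
import numpy as np

def rollout_dmp(weights, g, x0=0.0, K=100.0, D=20.0, alpha=4.0,
                tau=1.0, dt=0.01, centers=(0.9, 0.5, 0.1), width=10.0):
    """Integrate a 1-D DMP with 3 forcing-term weights; returns the trajectory."""
    x, v, s = x0, 0.0, 1.0
    c = np.array(centers)
    traj = [x]
    for _ in range(int(tau / dt)):
        psi = np.exp(-width * (s - c) ** 2)                 # Gaussian basis functions
        f = np.dot(weights, psi) * s / (psi.sum() + 1e-8)   # forcing term
        v_dot = (K * (g - x) - D * v - K * (g - x0) * s + K * f) / tau
        x_dot = v / tau
        s_dot = -alpha * s / tau
        v, x, s = v + v_dot * dt, x + x_dot * dt, s + s_dot * dt
        traj.append(x)
    return np.array(traj)

# Usage: one joint moving from 0.0 towards g = 0.5 with some shaping weights.
trajectory = rollout_dmp(weights=np.array([0.3, -0.1, 0.2]), g=0.5)
```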
The rules used to provide procedures are the following: • ProceduralTeacher1 (Ω_1): (t_1, t_0) with t_1 ∈ Ω_1 corresponding to the pen initial position and t_0 ∈ Ω_0 corresponding to the desired final pen position; • ProceduralTeacher2 (Ω_2): (t_1, t_0) with t_1 ∈ Ω_1 corresponding to the point on the z = 1.0 plane above the first point of the desired drawing, and t_0 ∈ Ω_0 corresponding to the desired final drawing point; • ProceduralTeacher3 (Ω_3): (t_3, t_0) with t_3 ∈ Ω_3 and t_3 =, and t_0 ∈ Ω_0 being the end-effector position corresponding to the desired final position of the first joystick; • ProceduralTeacher4 (Ω_4): (t_4, t_0) with t_4 ∈ Ω_4 and t_4 =, and t_0 ∈ Ω_0 being the end-effector position corresponding to the desired final position of the second joystick; • ProceduralTeacher5 (Ω_5): (t_3, t_4) with t_3 ∈ Ω_3 and t_3 = (x, 0, 0), where x corresponds to the desired x-position of the video-game character, and t_4 ∈ Ω_4 and t_4 = (0, y, 0), where y corresponds to the desired y-position of the video-game character. We also added policy teachers corresponding to the same outcome spaces to bootstrap the robot's early learning process. The strategy attached to each teacher has a cost of 10. Each teacher was capable of providing demonstrations (as policies executable by the robot) linearly distributed in its outcome space. To evaluate our algorithm, we created a benchmark dataset for each outcome space Ω_i, linearly distributed across the outcome space dimensions, for a total of 27,600 points. The evaluation consists in computing the normalized Euclidean distance between each benchmark outcome and its nearest neighbour in the learner's dataset. Then we compute the mean distance to the benchmark for each outcome space. The global evaluation is the mean evaluation over the 6 outcome spaces. This process is repeated across the learning process at predefined and regularly distributed timestamps. Then, to assess our algorithm's efficiency, we compare its results with 3 other algorithms: • SAGG-RIAC: performs autonomous exploration of the policy space Π guided by intrinsic motivation; • SGIM-ACTS: interactive learner driven by intrinsic motivation, choosing between autonomous exploration of the policy space Π and mimicry of one of the available policy teachers; • IM-PB: performs both autonomous exploration of the procedural space and of the policy space, guided by intrinsic motivation; • SGIM-PB: interactive learner driven by intrinsic motivation, choosing between autonomous exploration strategies (either of the policy space or the procedural space) and mimicry of one of the available teachers (either policy or procedural teachers). Each algorithm was run 5 times on this setup. For each run, we let the algorithm perform 25,000 iterations (complex policy executions). The value of γ for this experiment is 1.2. The probabilities for choosing the sampling modes of SGIM-PB are p_1 = 0.15, p_2 = 0.65, p_3 = 0.2. The code run for this experiment can be found here. Fig. 3 shows the global evaluation of all the tested algorithms, which corresponds to the mean error made by each algorithm in reproducing the benchmarks, with respect to the number of complete complex policies tried. The algorithms capable of performing procedures (IM-PB and SGIM-PB) have errors that drop to levels lower than their non-procedure equivalents (respectively SAGG-RIAC and SGIM-ACTS), and this from the beginning of the learning process (shown on Fig. 3).
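A sketch of the evaluation just described: for each benchmark outcome we take the distance to the nearest outcome in the learner's memory, average per outcome space, and average the per-space scores into the global score. The normalization scheme used here is a placeholder assumption.

```python
import numpy as np

def evaluate(benchmarks_per_space, reached_per_space):
    """Mean nearest-neighbour error per outcome space, plus the global mean.

    benchmarks_per_space / reached_per_space: lists of 2-D arrays, one per Omega_i.
    """
    per_space_errors = []
    for bench, reached in zip(benchmarks_per_space, reached_per_space):
        # Distance from every benchmark point to its closest reached outcome.
        dists = np.linalg.norm(bench[:, None, :] - reached[None, :, :], axis=-1)
        nearest = dists.min(axis=1)
        # Placeholder normalization by the square root of the space dimension.
        per_space_errors.append(nearest.mean() / np.sqrt(bench.shape[1]))
    return per_space_errors, float(np.mean(per_space_errors))

# Usage with toy data for two outcome spaces:
rng = np.random.default_rng(1)
bench = [rng.uniform(-1, 1, (50, 3)), rng.uniform(-1, 1, (50, 2))]
reached = [rng.uniform(-1, 1, (200, 3)), rng.uniform(-1, 1, (200, 2))]
per_space, global_score = evaluate(bench, reached)
```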
It seems that the procedures bootstrap the exploration, enabling the learner to progress further. Indeed, the autonomous learner SAGG-RIAC has significantly better performance when it can use procedures and is thus upgraded to IM-PB. We can also see that the SGIM-PB algorithm improves very quickly in global evaluation owing to the bootstrapping effect of the different teachers. It drops below the final evaluation error of SAGG-RIAC (0.17) after only 500 iterations. This bootstrapping effect comes from the mimicry teachers, as it is also observed for SGIM-ACTS, which shares the same mimicry teachers. If we look at the evaluation on each individual outcome space (Fig. 4), we can see that the learners with demonstrations (SGIM-PB and SGIM-ACTS) outperform the other algorithms, except for the outcome space Ω_5, where IM-PB is better because it practiced much more on this outcome space (500 iterations where the goal was in Ω_5, against 160 for SGIM-PB). SGIM-PB and SGIM-ACTS are much better than the other algorithms on the two joystick outcome spaces (Ω_3 and Ω_4). This is not surprising given that those outcome spaces require precise policies. Indeed, if the end-effector gets out of the area where it can control the joystick, the latter is released, potentially ruining the attempt. So on these outcome spaces, working directly on carefully crafted policies can alleviate this problem, while using procedures might be tricky, as the outcomes used don't take into account the motion trajectory but merely its final state. SGIM-PB was provided with such policies by the policy teachers. Also, if we compare the results of the autonomous learner without procedures (SAGG-RIAC) with those of the one with procedures (IM-PB), we can see that the former learns less on every outcome space but Ω_0 (which was the only outcome space reachable using single primitive policies alone and which could not benefit from using the task hierarchy), and especially on Ω_1, Ω_2 and Ω_5, which were the most hierarchical in this setup. More generally, it seems that on the highly hierarchical Ω_5, the learners with procedures were better. So the procedures helped when learning any potentially hierarchical task in this experiment. We further analyzed the results of our SGIM-PB learner. We looked into its learning process to see which pairs of teachers and target outcomes it chose (Fig. 5). It was capable of requesting demonstrations from the relevant teachers depending on the task at hand, except for the outcome space Ω_0, which had no human teachers and therefore could not find a better teacher to help it. Indeed, for the outcome space Ω_2, the procedural teacher (ProceduralTeacher2) specially built for this outcome space was chosen most often. Figure 5: Choices of teachers and target outcomes of the SGIM-PB learner. We wanted to see if our SGIM-PB learner adapts the complexity of its policies to the task at hand. We drew 1,000,000 goal outcomes for each of the Ω_0, Ω_1 and Ω_2 subspaces (chosen because they are increasingly complex) and we let the learner choose the known policy that would reach the closest outcome. Fig. 6 shows the results of this analysis. Figure 6: Number of policies selected per policy size for three increasingly complex outcome spaces by the SGIM-PB learner. As we can see on those three interrelated outcome subspaces (Fig. 6), the learner is capable of adapting the complexity of its policies to the outcome at hand.
It chooses longer policies for the Ω_1 subspace (policies of size 2 and 3, while using mostly policies of size 1 and 2 for Ω_0) and even longer ones for the Ω_2 subspace (using far more policies of size 3 than for the others). This shows that our learner is capable of correctly limiting the complexity of its policies instead of getting stuck always trying longer and longer policies. With this experiment, we show the capability of SGIM-PB to tackle the learning of a set of multiple interrelated complex tasks. It successfully discovers the hierarchy between tasks and uses complex motor policies to learn a wider range of tasks. It is capable of correctly choosing the teachers most adapted to the target outcome when they are available. Though it is not limited in the size of the policies it can execute, the learner shows it can adapt the complexity of its policies to the task at hand. The procedures greatly improved the learning capability of autonomous learners, as shown by the difference between IM-PB and SAGG-RIAC. Our SGIM-PB shows it is capable of using procedures to discover the task hierarchy and exploit the inverse models of previously learned skills. More importantly, it shows it can successfully combine the ability of SGIM-ACTS to progress quickly in the beginning (owing to the mimicry teachers) and the ability of IM-PB to progress further on highly hierarchical tasks (owing to the procedure framework). In this article, we aimed to enable a robot to learn sequences of actions of undetermined length to achieve a field of outcomes. To tackle this learning problem between a continuous high-dimensional space of outcomes and a continuous, infinite-dimensionality space of sequences of actions, we used techniques that have proven efficient in previous studies: goal-babbling, social guidance and strategic learning based on intrinsic motivation. We extended them with the procedures framework and proposed the SGIM-PB algorithm, allowing the robot to babble in the procedure space and to imitate procedural teachers. We showed that SGIM-PB can discover the hierarchy between tasks and learn to reach complex tasks while adapting the complexity of the policy. The study shows that: • procedures allow the learner to learn complex tasks, and to adapt the length of its sequences of actions to the complexity of the task; • social guidance bootstraps the learning, owing to demonstrations of primitive policies in the beginning, and then to demonstrations of procedures to learn how to compose tasks into sequences of actions; • intrinsic motivation can be used as a common criterion for active learning, for the robot to choose both its exploration strategy, its goal outcomes and the goal-oriented procedures. However, a precise analysis of the impact of each of the different strategies used by our learning algorithm could give us more insight into the roles of the teachers and of the procedure framework. Also, we aim to illustrate the potency of our SGIM-PB learner in a real-world application. We are currently designing such an experiment with a real robotic platform. Besides, the procedures are defined as combinations of any number of subtasks, but are used in the illustration experiment only as combinations of two subtasks. A next step could be to see if the learning algorithm can handle the curse of dimensionality of a larger procedure space, and explore combinations of any number of subtasks. Moreover, the algorithm can be extended to allow the robot learner to decide how to execute a procedure.
In the current version, we have proposed the "refinement process" to infer the best policy. We could make this refinement process more recursive, by allowing the algorithm to select, not only policies, but also lower-level procedures as one of the policy components.
The paper describes a strategic intrinsically motivated learning algorithm which tackles the learning of complex motor policies.
1,336
scitldr
Monte Carlo Tree Search (MCTS) has achieved impressive results on a range of discrete environments, such as Go, Mario and Arcade games, but it has not yet fulfilled its true potential in continuous domains. In this work, we introduce TPO, a tree search based policy optimization method for continuous environments. TPO takes a hybrid approach to policy optimization. Building the MCTS tree in a continuous action space and updating the policy gradient using off-policy MCTS trajectories are non-trivial. To overcome these challenges, we propose limiting the tree search branching factor by drawing only a few action samples from the policy distribution, and we define a new loss function based on the trajectories' means and standard deviations. Our approach led to some non-intuitive findings. MCTS training generally requires a large number of samples and simulations. However, we observed that bootstrapping tree search with a pre-trained policy allows us to achieve high quality results with a low MCTS branching factor and a small number of simulations. Without the proposed policy bootstrapping, continuous MCTS would require a much larger branching factor and simulation count, rendering it computationally prohibitive. In our experiments, we use PPO as our baseline policy optimization algorithm. TPO significantly improves the policy on nearly all of our benchmarks. For example, in complex environments such as Humanoid, we achieve a 2.5× improvement over the baseline algorithm. Fueled by advances in neural representation learning, the field of model-free reinforcement learning has rapidly evolved over the past few years. These advances are due in part to the advent of algorithms capable of navigating larger action spaces and longer time horizons, as well as the distribution of data collection and training across massive-scale computing resources. While learning algorithms have been continuously improving, it is undeniable that tree search methods have played a large role in some of the most successful applications of RL (e.g., AlphaZero, Mario and Arcade games). Tree search methods enable powerful exploration of the action space in a way that is guided by the topology of the search space, focusing on branches (actions) that are more promising. Although tree search methods have achieved impressive results on a range of discrete domains, they have not yet fulfilled their true potential in continuous domains. Given that the number of actions is inherently unbounded in continuous domains, traditional approaches to building the search tree become intractable from a computational perspective. In this paper, we introduce TPO, a Tree search Policy Optimization for environments with continuous action spaces. We address the challenges of building the tree and running simulations by adopting a hybrid method, in which we first train a policy using existing model-free RL methods, and then use the pre-trained policy distribution to draw actions with which to build the tree. Once the tree has been constructed, we run simulations to generate experiences using an Upper Confidence Bounds for Trees (UCT) approach. Populating the tree with action samples drawn from a pre-trained policy enables us to perform a computationally feasible search. TPO is a variation of the policy iteration method. Broadly, in these methods, the policy is iteratively updated using trajectories generated by an expert policy. Then, the newly updated policy in turn guides the expert to generate higher quality samples.
In TPO, we use tree search as an expert to generate high quality trajectories. Then, we employ the updated policy to re-populate the tree search. For tree search, we use the Monte Carlo Tree Search (MCTS) expansion and selection methods. However, it is challenging to directly infer the probability of the actions selected for rollout; unlike in discrete domains where all actions can be exhaustively explored, in continuous domains we cannot sample more than a subset of the effectively innumerable continuous action space. Furthermore, to use the trajectories generated by MCTS, we must perform off-policy optimization. To address this challenge, we define a new loss function that uses the weighted mean and standard deviation of the tree search statistics to update the pre-trained policy. For ease of implementation and scalability, we use Proximal Policy Optimization (PPO) and choose it as our policy optimization baseline. In phase 1, we perform policy-gradient-based optimization to build a target policy. In phase 2, we iteratively build an MCTS tree using the pre-trained target policy and update the target policy using rolled-out trajectories from MCTS. Both training and data collection are done in a distributed manner. Our approach led to some non-intuitive findings. MCTS training generally requires a large number of branches and simulations. For example, AlphaGo uses 1600 simulations per tree search and a branching factor of up to 362. However, we observed that if we pre-train the policy, we require far fewer simulations to generate high quality trajectories. While we do benefit from exploring a greater number of branches, especially for higher dimensional action spaces (e.g. Humanoid), we observed diminishing returns after only a small number of branches (e.g., 32) across all of the evaluated environments. Furthermore, performance quickly plateaued as we increased the number of simulations past 32. This property did not hold when we initialized tree search with an untrained policy. This is a critical advantage of our method, as it would otherwise be computationally infeasible to generate high quality trajectories using tree search. The main contributions of TPO are summarized as follows: 1. Tree search policy optimization for continuous action spaces. TPO is one of the very first techniques to integrate tree search into policy optimization for continuous action spaces. This unique integration of tree search into policy optimization yields superior performance compared to baseline policy optimization techniques for continuous action spaces. 2. Policy bootstrapping. We propose a policy bootstrapping technique that significantly improves the sample efficiency of the tree search and enables us to discretize continuous action spaces into only a few highly probable actions. More specifically, TPO performs only 32 tree search simulations, compared to the substantially larger number (1600, 50× more) in AlphaGo. In addition, TPO narrows down the number of tree expansions (actions) compared to discretization techniques such as Tang et al., which require 7-11 bins per action dimension. This number of bins translates to a prohibitively large number of actions, even in the discrete domain, for complex environments such as Humanoid, which has a 17-dimensional action space. In contrast, TPO only samples 32 actions at each simulation step across all the environments. 3. Infrastructure and results.
On the infrastructure side, we developed a distributed system (shown in Figure 1), in which both policy optimization and data collection are performed on separate distributed platforms. The policy optimization is done on a TPU-v2 using multiple cores, and MCTS search is performed on a rack of CPU nodes. A synchronous policy update and data collection approach is used to train the policy and generate trajectories. TPO readily extends to challenging and high-dimensional tasks, such as the Humanoid benchmark. Our empirical results indicate that TPO significantly improves the performance of the baseline policy optimization algorithm, achieving up to a 2.5× improvement. Policy iteration methods have shown promising results in reinforcement learning. AlphaZero, a major breakthrough in reinforcement learning, uses MCTS as an expert to train a discrete policy model for the game of Go. EXiT established theoretical grounds for policy iteration using MCTS in discrete domains. A recent work proposes some theoretical extensions to AlphaZero's MCTS method for environments with continuous action spaces. However, the authors only evaluate their method on the Pendulum OpenAI Gym environment, which has only a one-dimensional action space. It is not obvious how the approach can scale to more complex environments with much higher dimensions. While TPO also belongs to the category of policy iteration methods, it studies the rather less explored territory of using MCTS as an expert for complex environments with continuous action spaces. Policy optimization falls into two major categories: on-policy and off-policy optimization. In on-policy methods, the policy is optimized using samples generated from the current policy, whereas with off-policy algorithms we can also train the policy using experiences from other sources, such as an expert. Although on-policy reinforcement learning methods, such as TRPO, PPO, PGQL, and A3C, have been extremely successful, they are known to be sample inefficient, as they must generate fresh trajectories between each gradient update and cannot reuse these trajectories. This effect becomes more pronounced as the complexity of the task grows, requiring millions of steps to learn a policy. On the other hand, off-policy methods (e.g., DQN, DDPG) are more sample efficient, as each example is used multiple times for training. However, these methods require more extensive hyper-parameter tuning, suffering from instability and slow convergence. A few recent methods have been developed to embed differentiable planning procedures in computation graphs. MCTSnets is a deep learning model that incorporates simulation-based search with a vector embedding for expansion, evaluation, and backup. The three policies are jointly optimized with an end-to-end gradient-based approach. The authors only report results on the classic planning benchmark, Sokoban, which is a simple discrete optimization environment. Extension of this approach to continuous domains is non-obvious. SoRB is another approach that combines planning with policy optimization. However, the target problem space of SoRB is only goal-reaching tasks. Although there has been previous work on continuous MCTS, none has been conducted in the context of deep reinforcement learning. More sophisticated approaches to node expansion improve the quality of search, but TPO only uses a simple scheme in which a fixed number of actions are sampled from the policy.
We found our approach both simple to implement and effective, without the need for extensive fine-tuning or hyper-parameter optimization. For example, we found that across the evaluated OpenAI Gym environments, a sample size of 32 yields strong results. In our experiments, we chose PPO, which has been shown to yield superior performance in various large-scale problems such as Hide and Seek and Dota, to pre-train the policy used to populate the MCTS tree. However, our framework is independent of the specific choice of baseline policy and can be readily applied to other model-free RL algorithms, such as SAC or TD3. Preliminaries and notations. Policy optimization in continuous environments can be well modeled by a Markov Decision Process (MDP), defined by the tuple (S, A, p, r, γ), where S (A) is a set of states (actions) in continuous space. The state transition probability p(s_{t+1}, s_t, a_k) represents the probability density of observing the next state in time, s_{t+1} ∈ S, given the current state s_t ∈ S and the current action a_k ∈ A. Taking action a_k at state s_t, the environment moves to a new state s_{t+1} and returns a bounded reward r(s_t, a_k): S × A → [r_min, r_max]. Finally, γ is the discount factor, used to calculate the discounted cumulative expected reward. We use π_θ(a_k|s_t) to represent the neural network policy with trainable parameters θ. We use v_φ(s_t) to represent the value from a neural network with parameters φ. Previous work on Monte Carlo Tree Search (MCTS) was mostly focused on zero-sum games with discrete action spaces. In this section, we highlight our adaptation of MCTS for environments with continuous action spaces and non-zero immediate rewards. For the complete definition of MCTS, we refer the reader to prior work. To make MCTS possible, we assume the environment can be set to an arbitrary state. This assumption can be relaxed using a learned model of the environment (see §6). In this work, we use a variant of the Polynomial Upper Confidence Tree (PUCT) algorithm in TPO for MCTS. Nodes of the Monte Carlo tree are states of the environment s_t ∈ S and edges are actions a_k ∈ A. We store a set of statistics for each edge of the Monte Carlo tree: the number of visits N, the cumulative sum of action values W, and the average action value Q. We update these statistics after each MCTS simulation for the traversed edges. Each MCTS simulation starts from a root state and consists of four steps: select, expand, evaluate, and backup. Here we only overview the select and expand steps, as they are the contributions of TPO that enable tree search for continuous action spaces. 1 Select. We follow a variant of the Polynomial Upper Confidence Tree (PUCT) algorithm to select actions. For each node s_t, we select the action a* = argmax_{a_k} [Q(s_t, a_k) + U(s_t, a_k)], with U(s_t, a_k) = c_puct · π_θ(a_k|s_t) · sqrt(Σ_b N(s_t, a_b)) / (1 + N(s_t, a_k)), where c_puct is a constant exploration coefficient and π_θ(a_k|s_t) represents the prior probability assigned to each action by the policy network. To further promote exploration, we add Dirichlet noise, Dir(α) 1, to the prior probabilities of the actions at the root node. To support environments with immediate rewards, we estimate the value of an action from both its immediate reward and W(s_t, a_k), the cumulative sum of the backed-up values from its children. 2 Expand. In discrete environments such as the game of Go or Chess/Shogi, all possible actions are considered in MCTS expansion. However, in environments with continuous action spaces, the set of possible actions is innumerable.
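As an illustration of the select step over a small set of sampled actions, here is a sketch of PUCT-style scoring. It assumes the standard AlphaZero-style exploration bonus written above; the TPO-specific value estimate with immediate rewards is simplified to a plain average here.

```python
import numpy as np

def puct_select(priors, visit_counts, value_sums, c_puct=1.5):
    """Pick the index of the sampled action maximizing Q + U.

    priors:       prior probabilities pi_theta(a_k | s_t) of the sampled actions
    visit_counts: N(s_t, a_k) for each sampled action
    value_sums:   W(s_t, a_k) for each sampled action
    """
    priors = np.asarray(priors, dtype=float)
    n = np.asarray(visit_counts, dtype=float)
    w = np.asarray(value_sums, dtype=float)
    q = np.where(n > 0, w / np.maximum(n, 1), 0.0)       # average action value
    u = c_puct * priors * np.sqrt(n.sum()) / (1.0 + n)   # exploration bonus
    return int(np.argmax(q + u))

# Usage with 4 sampled actions at a node:
idx = puct_select(priors=[0.4, 0.3, 0.2, 0.1],
                  visit_counts=[3, 1, 0, 0],
                  value_sums=[2.1, 0.5, 0.0, 0.0])
```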
Hence, enumerating all the possible actions for each node is impossible. To circumvent this challenge, we propose to draw samples from the policy distribution. These sampled actions become the edges of the leaf node. After a fixed number of simulations, we select the action with the largest visit count from the current root node and advance the environment. We use Q(s_t, a_k) + U(s_t, a_k) as a tie-breaker between actions with the same visit count. We also define μ_TS(s_t) as an unbiased estimate of the mean of the tree policy at a given node, computed across each action space dimension as the visit-count-weighted average of the sampled actions: μ_TS(s_t) = Σ_k N(s_t, a_k) a_k / Σ_k N(s_t, a_k). Later in Section 4, we use this unbiased mean estimate to optimize the policy distribution. All MCTS trajectories are added to a replay buffer, which is then used to optimize the policy and value networks. Neural network architecture. Both the value network v_φ and the policy network π_θ are represented by a fully-connected multi-layer perceptron (MLP) with two hidden layers of 64 units. The output of each layer is passed through a tanh(·) non-linearity. The weights of the value and policy networks are not shared. The value network outputs a single scalar value v(s_t), whereas the policy network outputs a vector of means µ(s_t) and variances σ(s_t) of independent Normal distributions, one per action dimension. Loss functions. We first describe the loss functions for the on-policy phase and then delve into the loss functions of the proposed off-policy tree search optimization. For ease of implementation, we used PPO for on-policy optimization. The policy loss function of PPO is L^On-π(θ) = E_t[min(r_t(θ) Â_t, clip(r_t(θ), 1−ε, 1+ε) Â_t)], where r_t(θ) is the probability ratio between the new policy, π_θt(a_k|s_t), and the old policy, π_θold(a_k|s_t). PPO clips the probability ratio to be within the range [1−ε, 1+ε], which penalizes large updates to the policy. Â_t is the truncated generalized advantage estimate, computed for a trajectory of length T. To update the value network, we use a variant of PPO 2 that also applies clipping to the value function. (1 Dir(α) returns a noise vector {η_1, η_2, ..., η_n} where Σ_{i=1}^n η_i = 1; α ∈ R+ is the Dirichlet noise scaling parameter that controls the shape of the generated noise vector.) Finally, the total loss combines the policy surrogate and value function terms with a regularization coefficient c_1. In the off-policy phase, we use the same value function as PPO for the value updates, as shown in Eq. 5 and Eq. 6. For the policy updates, we define a new loss function derived from the collected MCTS statistics: L_PF(θ) is the sum of the squared-error losses between the unbiased mean estimate and the standard deviation estimate calculated from the MCTS trajectories and, respectively, the mean and standard deviation of the policy distribution at a given state s_t. The intuition for choosing this loss term for the policy updates is to incentivize moving the mean of the policy distribution towards an unbiased estimate computed from the tree search. Finally, we combine the defined TPO policy loss and value loss to obtain an objective which is maximized at each training step (Eq. 9). If the policy and value networks do not share parameters, the regularization coefficients c_1 and c_3 become irrelevant. Neither the L^On-π nor the L^Off-π loss uses an entropy bonus term. Algorithm. Algorithm 1 fleshes out the proposed TPO algorithm, which uses a combination of policy gradient and MCTS.
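To make the two policy losses concrete, below is a small NumPy sketch of the PPO clipped surrogate and of a mean/standard-deviation matching loss of the kind described for the off-policy phase. The exact functional form used by the authors for the tree-search loss is not fully specified here, so the squared-error version below is an illustrative assumption.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Negative PPO clipped surrogate (to be minimized)."""
    ratio = np.exp(logp_new - logp_old)                       # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

def tree_search_policy_loss(mu_policy, std_policy, mu_ts, std_ts):
    """Squared error between policy moments and tree-search moment estimates."""
    return np.mean((mu_policy - mu_ts) ** 2 + (std_policy - std_ts) ** 2)

# Usage with toy batches (batch of 4, action dimension 2):
rng = np.random.default_rng(2)
adv = rng.normal(size=4)
loss_on = ppo_clip_loss(rng.normal(size=4), rng.normal(size=4), adv)
loss_off = tree_search_policy_loss(rng.normal(size=(4, 2)), np.ones((4, 2)),
                                   rng.normal(size=(4, 2)), np.ones((4, 2)))
```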
In the first phase of training, we use the policy network to roll out trajectories and optimize the PPO loss function (see Eq. 7). In the second phase, we switch to an off-policy mode, in which we only use trajectories from MCTS to optimize the policy. That is, we construct the loss (see Eq. 9) on the rolled-out trajectories generated by the tree search. We treat ρ, which defines the switching ratio between policy gradient and tree search, as a hyperparameter of the TPO algorithm. The intuition behind bootstrapping tree search with a pre-trained policy is that continuous action spaces require a relatively stable policy to encourage the exploration of trajectories with higher returns, and an untrained policy cannot adequately guide the tree search. Our experiments support this hypothesis; if we start from a lightly trained policy (i.e. a small switching ratio ρ), the final performance is severely diminished. The same reasoning applies to choosing to start with policy gradient optimization in the first phase of training, followed by tree search policy optimization. We evaluate TPO on a variety of continuous environments from OpenAI Gym (Table 1). Table 2 shows the hyperparameters used for training the value and policy networks and those used for tree search. Our policy training is done on a TPU-v2 platform with eight cores, and MCTS is performed on CPU. We chose PPO for the on-policy optimization update. For each environment, we run TPO for four million time steps to estimate the gradients. To efficiently perform MCTS sampling for trajectory rollouts, we use BatchEnv, which extends the OpenAI Gym interface to multiple parallel environments. Using BatchEnv, we evaluate up to 32 actions in parallel and obtain their associated rewards. We use 32 Intel CPUs, each assigned to one environment instance, in order to parallelize simulation. We found that 32 is a reasonable choice for the maximum number of MCTS simulations per action selection, as it strikes a good balance between search latency and policy performance. Using our parallel infrastructure, the latency of action selection is between 0.2-0.3 sec when we use 32 MCTS simulations per action. Table 2 (excerpt): Number of MCTS simulations 32; Switching ratio (ρ) 0. Unless stated otherwise, in all of the experiments we report the average return of the last 100 episodes across five runs (each with a different random seed). To make a fair comparison with the baseline algorithm (PPO), we reproduce the reported results using our infrastructure. In all the environments, we use the best results obtained for PPO using our infrastructure, which in some environments (HalfCheetah-v2 and Walker2d-v2) outperform the previously reported results. In all the experiments, we use Adam as our optimizer and run training for 10 epochs per policy update. Summary of results. Table 3 depicts the performance comparison between TPO and PPO across six environments with continuous action spaces. In all the environments, TPO yields a better total return compared to PPO, generally by a significant margin. Using TPO, we observe a maximum improvement of roughly 2,900 in Ant-v2 and a minimum improvement of only 7 in Swimmer-v2. Swimmer-v2 is the simplest environment in the set of evaluated benchmarks, with an action space of only two dimensions. On the other hand, Ant-v2 is a much more complex benchmark with an action space of eight dimensions. The delta in performance improvement suggests that TPO is better able to leverage tree search exploration to improve policies in complex environments with large action spaces.
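A high-level sketch of the two-phase schedule controlled by the switching ratio ρ, as described above. The function names and the simple step-count bookkeeping are placeholders for illustration, not the authors' implementation.

```python
def run_ppo_phase(num_steps):
    # Placeholder: collect num_steps of on-policy experience with the current policy.
    return {"source": "policy", "steps": num_steps}

def run_mcts_phase(num_steps):
    # Placeholder: collect num_steps of experience from MCTS guided by the policy.
    return {"source": "mcts", "steps": num_steps}

def update_policy_and_value(trajectories, mode):
    # Placeholder: gradient update with the PPO loss ("ppo") or the
    # tree-search loss on MCTS trajectories ("tree_search").
    pass

def train_tpo(total_steps, rho=0.5, steps_per_iter=2048):
    """Phase 1 (first rho fraction of steps): PPO; Phase 2 (rest): MCTS-based updates."""
    switch_point = int(rho * total_steps)
    steps = 0
    while steps < total_steps:
        if steps < switch_point:
            trajectories = run_ppo_phase(steps_per_iter)
            update_policy_and_value(trajectories, mode="ppo")
        else:
            trajectories = run_mcts_phase(steps_per_iter)
            update_policy_and_value(trajectories, mode="tree_search")
        steps += steps_per_iter

train_tpo(total_steps=4_000_000)
```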
Figure 2 shows the training progress of TPO and PPO across six environments. To conduct this experiment, we start by pre-training with PPO. At ∼2M time steps, TPO restores checkpoints from PPO training and continues training using tree search. As such, the total return curves for TPO and PPO overlap before reaching 2M time steps. In all the environments, we observe a sharp spike in the total return curve after switching to tree search, which can be attributed to the effectiveness of the proposed tree search method for policy optimization. For complex environments, such as Humanoid-v2, we observe continuing improvement after switching to MCTS-based policy optimization. Ablation studies. To better understand the impact of the key components of TPO, we perform a set of ablation studies. Concretely, we show the results for three environments with different levels of action space complexity, namely Hopper-v2, Ant-v2, and Humanoid-v2, with action spaces of three, eight, and 17 dimensions, respectively. Switching ratio. First, we study the impact of varying the switching ratio (ρ in Algorithm 1) between 10%, 50% (the default value in TPO), and 90% across the three tasks. As shown in Figure 3, prematurely switching to tree search optimization (ρ = 10%, blue line in Figure 3) severely harms policy performance, showing no further improvement after switching to MCTS optimization. This behavior can be attributed to the fact that in tree search policy optimization for environments with continuous action spaces, it is of utmost importance to have a reasonably well-trained policy to guide the tree search toward trajectories with potentially higher return values. Switching to tree search at the very end of training (ρ = 90%, green line in Figure 3) can be damaging as well. While switching to tree search at 90% of the total training steps in Hopper-v2 does not improve the average return compared to switching after 50% of the total training steps, it slightly improves the average return in Ant-v2. That is, the point at which to switch to tree search is a hyperparameter that should be adjusted carefully for better policy performance. Number of actions per node. Increasing the number of actions per node for the environment with the largest action space, Humanoid-v2, has a significant benefit for policy performance. As shown in Figure 4c, changing the number of actions per node from eight to 32 increases the average total return by ∼1100. Number of MCTS simulations. Figure 5 shows the impact of the number of MCTS simulations (see Section 3) on the performance of the policy. While increasing the number of MCTS simulations generally improves performance, we found that increasing the branching factor (number of actions per node) had more impact. For example, for Humanoid-v2, the performance gap between 8 and 32 simulations (shown in Figure 5c) is less pronounced than the improvement from increasing the number of actions per node from 8 to 32 (shown in Figure 4c). These studies suggest that it is more beneficial to devote the tree search budget to exploring more actions per node, especially for more complex environments, rather than conducting a larger number of tree search simulations. In this paper, we have studied Monte Carlo Tree Search in continuous space for improving the performance of a baseline on-policy algorithm. Our results show that MCTS policy optimization can indeed improve the quality of the policy by choosing better actions during policy evaluation, at the cost of more samples during MCTS rollout.
We show that bootstrapping tree search with a pre-trained policy enables us to achieve high performance with a low MCTS branching factor and few simulations. On the other hand, without pre-training, we require a much larger branching factor and simulation count, rendering MCTS computationally infeasible. One future research direction is to explore techniques for improving sample efficiency and removing the need for a resettable environment. To achieve these goals, we can use a trained model of the environment in MCTS, similar to model-based reinforcement learning approaches, instead of interacting directly with the environment. Recently, MBPO showed that a model of MuJoCo environments can be trained that is accurate enough for nearly 200-step rollouts in terms of accumulated rewards. This accuracy horizon is more than enough for the shallow MCTS simulations (32 simulations) employed in TPO. As mentioned earlier, TPO assumes access to an environment that can be restarted from an arbitrary state for MCTS simulations. While this assumption can be readily satisfied in some RL problems, such as playing games, it may be harder to achieve for physical RL problems like robotics. This assumption can also be relaxed by using a learned model of the environment to replace the interactions with the real environment during MCTS simulations.
We use MCTS to further optimize a bootstrapped policy for continuous action spaces under a policy iteration setting.
1,337
scitldr
The variational autoencoder (VAE) has found success in modelling the manifold of natural images on certain datasets, allowing meaningful images to be generated while interpolating or extrapolating in the latent code space, but it is unclear whether similar capabilities are feasible for text, considering its discrete nature. In this work, we investigate the reason why unsupervised learning of controllable representations fails for text. We find that traditional sequence VAEs can learn disentangled representations through their latent codes to some extent, but they often fail to properly decode when the latent factor is being manipulated, because the manipulated codes often land in holes or vacant regions of the aggregated posterior latent space, which the decoding network is not trained to process. Both as a validation of the explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex and to perform manipulation within this simplex. Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text. Empirically, our method significantly outperforms unsupervised baselines and is competitive with strong supervised approaches on text style transfer. Furthermore, when switching the latent factor (e.g., topic) during long sentence generation, our proposed framework can often complete the sentence in a seemingly natural way -- a capability that has never been attempted by previous methods. High-dimensional data, such as images and text, are often causally generated through the interaction of many complex factors, such as lighting and pose in images or style and content in texts. Recently, VAEs and other unsupervised generative models have found success in modelling the manifold of natural images. These models often discover controllable latent factors that allow manipulation of the images through conditional generation from interpolated or extrapolated latent codes, often with impressive quality. On the other hand, while various attributes of text such as sentiment and topic can be discovered in an unsupervised way, manipulating the text by changing these learned factors has not been possible with unsupervised generative models, to the best of our knowledge. Cífka et al. observed that text manipulation is generally more challenging compared to images, and the successes of these models cannot be directly transferred to texts. Controllable text generation aims at generating realistic text with control over various attributes, including sentiment, topic and other high-level properties. Besides being a scientific curiosity, the possibility of unsupervised controllable text generation could help in a wide range of applications, e.g., dialogue systems. Existing promising progress all relies on supervised learning from annotated attributes to generate text in a controllable fashion. The high cost of labelling large training corpora with attributes of interest limits the usage of these models, as pre-existing annotations often do not align with some downstream goal. Even if cheap labels are available, for example, review scores as a proxy for sentiment, the control is limited to the variation defined by the attributes. In this work, we examine the obstacles that prevent sequence VAEs from performing well in unsupervised controllable text generation.
We empirically discover that manipulating the latent factors for typical semantic variations often leads to latent codes that reside in some low-density region of the aggregated posterior distribution. In other words, there are vacant regions in the latent code space not being considered by the decoding network, at least not at convergence. As a result, the decoding network is unable to process such manipulated latent codes, yielding unpredictable generation of low quality. In order to mitigate the latent vacancy problem, we propose to constrain the posterior mean to a learned probability simplex and to only perform manipulation within the probability simplex. Two regularizers are added to the original objective of the VAE. The first enforces an orthogonal structure of the learned probability simplex; the other encourages this simplex to be filled without holes. Besides confirming that latent vacancy is indeed a cause of failure in previous sequence VAEs, this is also the first successful attempt towards unsupervised learning of controllable representations for text, to the best of our knowledge. Experimental results on text style transfer show that our approach significantly outperforms unsupervised baselines, and is competitive with strong supervised approaches across a wide range of evaluation metrics. Our proposed framework also enables finer-grained and more flexible control over text generation. In particular, we can switch the topic in the middle of sentence generation, and the model will often still find a way to complete the sentence in a natural way. The variational autoencoder (VAE) is a generative model defined by a prior p(z) and a conditional distribution p_θ(x|z). The VAE is trained to optimize a tractable variational lower bound of log p_θ(x): L(θ, φ; x) = E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x) || p(z)), where q_φ(z|x) is a variational distribution parameterized by an encoding network with parameters φ, and p_θ(x|z) denotes the decoding network with parameters θ. This objective tries to minimize the reconstruction error to generate the data, and at the same time regularizes q_φ(z|x) towards the prior p(z). In this paper, p(z) is chosen as N(0, I). For text modelling, the input x is some observed text. Both the encoding and decoding networks are usually recurrent neural networks. Note that during learning, the decoding network p_θ only learns to decode conditioned on z sampled from q_φ(z|x). In other words, the decoding network only learns to process z sampled from the aggregated posterior distribution q_φ(z) = E_{x∼p_d(x)} q_φ(z|x), where p_d(x) is the data distribution. If q_φ(z) has regions of low density, there is no guarantee that p_θ would decode well in such regions. This is an important intuition that will become central to our analysis in Sec. 3. In this section, we take a deeper look into the aggregated posterior latent space of a sequence VAE trained on text, and provide justification for the alternative solution we propose in Section 4. As pointed out in prior work, one of the motivations to apply VAEs on text is to allow generation of sentences conditioned on extrinsic features by controlling the latent codes. Without annotated labels, no previous methods have successfully learned controllable latent factors, as mentioned in Sec. 1. To understand what is missing, we conduct exploratory experiments using the VAE for unsupervised sentiment manipulation.
We use the Yelp restaurant reviews dataset and the same data split as prior work. We train a β-VAE with a latent space of 80 dimensions, an LSTM encoder, and an LSTM decoder. Details about this experiment are described in Appendix A.1. By inspecting the accuracy on the validation set, we find that there exists one dimension of the latent code achieving higher than 90% sentiment classification accuracy by its value alone, while the other latent dimensions reach accuracies around 50%. Further details can be found in Appendix A.2. This means that this latent dimension is an effective sentiment indicator. Similar phenomena have been observed in large-scale language models. However, the direct influence on the generative process observed in that setting does not carry over to the VAE. When we try to perform sentiment manipulation by modifying this latent dimension, the decoding network fails to generate the desired outputs most of the time, as evidenced by the poor quantitative evaluation in Table 1 and the poor samples shown in Appendix A.3. One possible reason for the failure is that the decoding network is never trained on codes like the manipulated ones. This is the case if the aggregated posterior has holes or regions of low density, and the manipulated codes fall into such vacant regions. Supposing the aggregated posterior latent space has a shape as shown in Fig. 1, directly manipulated latent codes will fall outside the aggregated posterior latent space for most input samples. Such latent codes are never seen by the model during training and have low density under the aggregated posterior distribution, leading to unpredictable behaviour during decoding. To verify the hypothesis illustrated in Fig. 1, we empirically estimate the density of sentiment-manipulated codes under the aggregated posterior distribution of our trained VAE. Here, we approximate the data distribution p_d(x) with the empirical distribution over all the training samples. As a result, the estimated aggregated posterior distribution is a large mixture of Gaussians. For all 1000 test samples, we move the dimension of the code capturing sentiment from µ − 2σ to µ + 2σ, where µ and σ are the mean and standard deviation estimated over all the training samples, and measure the averaged negative log-likelihood (NLL) under the aggregated posterior distribution. As depicted in Fig. 3 (a), the NLL plotted as the blue dotted curve rises sharply when moving away from µ, even though only one dimension of the code is changing, indicating the existence of vacancy in the aggregated posterior latent space. In addition, we draw the histogram of all the test samples' NLL under their original latent codes and the modified ones in Fig. 3 (b). The histogram shows that there is a large divergence in NLL between the original latent codes and the modified ones. Also, the modified latent codes form two separate modes, confirming the irregular shape of the aggregated posterior latent space. In order to resolve this issue, the approach proposed in this work is to constrain the posterior in such a way that manipulation only happens within a learned simplex, as depicted in Fig. 2. In this constrained subspace, the phenomenon of low-density holes in the aggregated posterior is significantly reduced, as Fig. 3 (a) and (c) empirically show that there is little change in NLL between original and modified codes. The details of our method are presented in the next section.
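The density check described above can be sketched as follows, again assuming PyTorch. The per-sample posterior means and log-variances collected from the trained encoder are taken as given, and all function and variable names are illustrative.

```python
# Minimal sketch (PyTorch, illustrative): NLL of a (possibly manipulated) latent code
# under the aggregated posterior, approximated as a uniform mixture of the per-sample
# posterior Gaussians of the training set. Variable names are not from the paper.
import math
import torch

def aggregated_posterior_nll(z, train_mu, train_logvar):
    """z: (z_dim,); train_mu, train_logvar: (n_train, z_dim)."""
    var = train_logvar.exp()
    # log N(z; mu_i, diag(var_i)) for every training sample i
    log_comp = -0.5 * (((z - train_mu) ** 2) / var
                       + train_logvar
                       + math.log(2 * math.pi)).sum(dim=1)        # (n_train,)
    # log q(z) = log (1/n) sum_i N(z; mu_i, var_i)
    log_q = torch.logsumexp(log_comp, dim=0) - math.log(train_mu.size(0))
    return -log_q

# Example: sweep the sentiment dimension of one test code from mu-2*sigma to mu+2*sigma
# (dim_idx, mu_d, and sigma_d would come from the trained model and training statistics).
def sweep_dimension(z, dim_idx, mu_d, sigma_d, train_mu, train_logvar, steps=21):
    nlls = []
    for a in torch.linspace(mu_d - 2 * sigma_d, mu_d + 2 * sigma_d, steps):
        z_mod = z.clone()
        z_mod[dim_idx] = a
        nlls.append(aggregated_posterior_nll(z_mod, train_mu, train_logvar).item())
    return nlls
```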
The experiments conducted in Sec. 3 validate the existence of vacancy in the aggregated posterior latent space. One potential way to resolve the problem is to better match the aggregated posterior with the prior. However, for unsupervised learning of controllable representations for text, these previous methods have not shown success; prior work has only attempted supervised text style transfer, and negative results have also been reported for the AAE. Another way to resolve the vacancy issue is to directly enforce that the aggregated posterior itself has no vacant region anywhere we would like to perform latent code manipulation. We propose to map the posterior Gaussian mean to a constrained space, more specifically a learned probability simplex, where we can encourage the constrained latent space to be filled without vacancy, and to perform manipulation only within this simplex. As illustrated in Fig. 2, we add an additional mapping function as part of the encoding network which maps the mean of the Gaussian posterior to the constrained space. Two regularization terms are introduced later to ensure that the learned simplex is not degenerate and that this subspace is well filled. In addition, we separately model the relevant factors that we wish to control and the irrelevant factors by splitting z into two parts, z^(1) and z^(2), following prior work. The first part captures the relevant factors that are dominant in the data without an inductive bias from external signals, while the second part learns to encode the remaining local information that is useful for reconstructing the source sentences. As a result, q_φ(z|x) is decomposed into q_φ1(z^(1)|x) q_φ2(z^(2)|x), where φ = φ_1 ∪ φ_2. With diagonal covariances, the KL divergence term in Eq. 1 splits into two separate KL terms. In practice, we use an MLP encoding network to parametrize z^(1), with some sentence representation as the input (e.g., averaged GloVe embeddings over the input tokens), and an LSTM encoding network to parametrize z^(2). We only constrain the posterior of z^(1); z^(2) is optimized in the same way as in the traditional VAE. We now describe how to map the mean µ of the Gaussian posterior for z^(1) ∈ R^N to a constrained latent space. We would like to constrain the mean µ to have the following structure:

µ = Σ_{i=1}^{K} p_i e_i,  Σ_{i=1}^{K} p_i = 1,  p_i ≥ 0,  ⟨e_i, e_j⟩ = 0 for i ≠ j,   (2)

where the e_i are vectors representing the relevant factors, p_i is the proportion of the i-th relevant factor encoded in z^(1), and K is a hyperparameter indicating the number of relevant factors to discover. In other words, the mean of the Gaussian posterior of z^(1) is constrained to lie inside a K-dimensional probability simplex in R^N whose vertices are represented by the orthogonal basis vectors e_i, i = 1, ..., K. Given the outputs of the MLP encoder, h and log σ^2, we learn an additional mapping function π which maps h to the constrained posterior space and can be treated as part of the encoding network:

p = softmax(W h + b),  µ = π(h) = E p,

where E = [e_1, ..., e_K] is a learnable embedding matrix representing the bases, W is a learnable weight matrix, and b is a learnable bias vector. As a result, the constrained posterior is parametrized by µ and log σ^2 as a Gaussian distribution N(µ, diag(σ^2)). With the mapping function alone, the proposed VAE suffers from posterior collapse, a well-known problem where the model ignores the latent code z during training.
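A minimal sketch of this constrained-posterior mapping is given below, assuming PyTorch; the hidden size of the sentence representation and the initialization of the basis matrix are illustrative assumptions, while the 16-dimensional z^(1) and K = 3 follow the setup described later in the text.

```python
# Minimal sketch (PyTorch, illustrative) of the constrained-posterior mapping:
# an MLP summary vector h is mapped to simplex weights p = softmax(Wh + b),
# and the posterior mean is mu = E p, a point inside the simplex spanned by
# the learned basis vectors e_1..e_K. Sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplexPosterior(nn.Module):
    def __init__(self, h_dim=100, z_dim=16, num_factors=3):
        super().__init__()
        self.to_p = nn.Linear(h_dim, num_factors)                      # W, b
        self.E = nn.Parameter(torch.randn(z_dim, num_factors) * 0.1)   # basis e_1..e_K
        self.to_logvar = nn.Linear(h_dim, z_dim)

    def forward(self, h):
        p = F.softmax(self.to_p(h), dim=-1)      # (batch, K), a point on the simplex
        mu = p @ self.E.t()                      # (batch, z_dim) = sum_i p_i e_i
        logvar = self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar, p

# Controlled generation then amounts to picking a point in the simplex directly,
# e.g. a vertex: mu = model.E[:, k] for the k-th discovered factor.
```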
Further complicating matters is the fact that there is an abundance of signals for predicting the next token in the text, but the signals indicating high-level semantics are quite sparse. It is thus unlikely that VAEs can capture useful relevant factors from raw text without collapse. For these reasons, we enforce orthogonality in the learned basis vectors as defined in Eq. 2, which introduces a natural recipe to prevent posterior collapse for z^(1). Note that the KL divergence between q_φ1(z^(1)|x) and p(z^(1)) is

D_KL(q_φ1(z^(1)|x) ‖ p(z^(1))) = (1/2) (µ^⊤µ + Σ_d σ_d^2 − Σ_d log σ_d^2 − N).

With orthogonality in the basis vectors, the first term in the above equation can be factorized into µ^⊤µ = Σ_i p_i^2 e_i^⊤ e_i. To encourage orthogonality in the basis vectors, a regularization term is added to the objective function:

L_REG = ‖E^⊤E − α I‖,

where I is the identity matrix and α is a hyperparameter. When L_REG = 0, e_i^⊤ e_i = α. In this case, µ^⊤µ = α Σ_i p_i^2 reaches its minimum α/K when p is a uniform distribution. The proof can be found in Appendix C. In practice, L_REG quickly decreases to around 0, ensuring that the KL term will never fully collapse under the structural constraint. When it comes to controlled generation, one can choose a vertex or any desired point in the probability simplex, as illustrated in Fig. 2. Note that the constrained posterior also means that the aggregated posterior can never match the isotropic Gaussian prior. In other words, we achieve good controlled text generation potentially at the cost of poor uncontrolled generation from the prior; such generation is not the focus of the current work, and the issue could potentially be resolved by selecting or learning a better prior. Constraining the posterior inside a certain space does not guarantee that this space will be filled after training. In order to prevent vacancy within the simplex, we want the probability distribution over the relevant factors p to cover as much of the constrained latent space as possible. We introduce a reconstruction error on the structured latent code in order to push p away from a uniform distribution. For each input sentence, we randomly sample m sentences from the training data as negative samples. By applying the same encoding process, we get the structured latent code µ^(−)_i for each negative sample. Our goal is to make the raw latent code h similar to the restructured latent code µ while different from the latent codes µ^(−)_i of the negative samples, so that p is generally different for each input sample. The structured reconstruction loss L_S-REC is formulated as a margin loss over these positive and negative pairs, and our final objective function combines the VAE lower bound of Eq. 1 with the two regularization terms L_REG and L_S-REC.

5 RELATED WORK

Learning disentangled representations is an important step towards better representation learning, which can be useful for (semi-)supervised learning of downstream tasks, transfer, and few-shot learning. VAEs have achieved promising results for unsupervised learning of disentangled representations, and several variations of VAEs have been proposed to achieve better disentanglement. However, most recent progress in this direction has been restricted to the domain of images. In order to perform controllable text generation, previous methods either assume annotated attributes or multiple text datasets with different known styles. The requirement of labelled data largely restricts the capabilities and the applications of these models. Instead, all our proposed framework needs is raw text without any annotated attribute.
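Returning to the orthogonality regularizer introduced in Sec. 4 above, here is a minimal sketch, assuming PyTorch; the choice of squared Frobenius norm, the example value of α, and the basis shape are assumptions made for illustration, since the text only specifies that L_REG = 0 should imply e_i^⊤ e_i = α and e_i^⊤ e_j = 0.

```python
# Minimal sketch (PyTorch, illustrative) of the orthogonality regularizer on the
# learned basis E = [e_1, ..., e_K]: drive E^T E towards alpha * I so that the basis
# vectors are orthogonal with equal squared norm alpha.
import torch

def orthogonality_reg(E, alpha=1.0):
    """E: (z_dim, K) matrix whose columns are the simplex vertices e_i."""
    K = E.size(1)
    gram = E.t() @ E                                   # (K, K), entries e_i^T e_j
    target = alpha * torch.eye(K, device=E.device)
    return ((gram - target) ** 2).sum()                # squared Frobenius norm

# Example: one term of the total objective (ELBO + L_REG + the structured
# reconstruction margin loss), with weighting left unspecified here.
E = torch.randn(16, 3, requires_grad=True)
l_reg = orthogonality_reg(E, alpha=1.0)
l_reg.backward()
```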
The dominant underlying relevant factors in the given corpus will be discovered and disentangled by our unsupervised method, which can in turn be used for controlled generation. To demonstrate the effectiveness of our approach, we compare it to unsupervised baselines based on traditional VAEs, considering both the density under the aggregated posterior distribution and the performance on sentiment manipulation. Following evaluation protocols in text style transfer, we also compare our method to strong supervised approaches. Furthermore, we showcase our system's ability for finer-grained style discovery and transition, which has not been attempted in the literature. In this section, our proposed framework is referred to as CP-VAE (Constrained Posterior VAE). Detailed configurations, including the hyperparameters, model architecture, training regimes, and decoding strategy, are found in Appendix B. Experimental setup: We use the same experimental setting and dataset as mentioned in Sec. 3. The 80D latent code is split into 16 and 64 dimensions for z^(1) and z^(2) respectively. The sentence representation used for z^(1) is the averaged GloVe embeddings over the input tokens, and K is chosen as 3. To decide which basis vector corresponds to which sentiment, we sample 10 positive and 10 negative sentences from the development set, pass them to the encoder, and choose the basis vector with the highest average p_i in p = softmax(W h + b), yielding v_p as the positive basis and v_n as the negative basis. If v_p and v_n are chosen to be the same vector, we choose the index with the second highest p_i for v_p. To perform sentiment manipulation, we fix z^(1) to be the chosen basis vector, that is, v_p or v_n. Comparisons on density under the aggregated posterior distribution: First, we linearly interpolate between the two discovered basis vectors v_p and v_n and estimate the averaged NLL under the aggregated posterior distribution in the same way as introduced in Sec. 3. The green solid curve in Fig. 3 (a) shows that the NLL of CP-VAE is relatively stable over the whole range of the interpolation. In Fig. 3 (c), the original latent codes and the modified ones largely overlap with each other. Both observations validate the effectiveness of CP-VAE in resolving the latent vacancy problem, leading to significant improvements on unsupervised sentiment manipulation, as seen below. Comparisons with metrics on text style transfer: For quantitative evaluation, we adopt automatic evaluation metrics used in text style transfer, including classification accuracy (AC), BLEU score (BL), GLEU score (GL), and language model perplexity (PL), whose definitions are elaborated in the next section. We also report D_KL(q_φ1(z^(1)|x) ‖ p(z^(1))) (KL) for z^(1) of CP-VAE. As shown in Tab. 1, CP-VAE performs significantly better than the β-VAE in terms of accuracy, BLEU and GLEU. The lower perplexity of the β-VAE is due to mode collapse, which produces very short pivot sentences such as "great!". These results match our observations from the experiments on density under the aggregated posterior distribution, confirming that latent vacancy prevents effective manipulation of the latent codes. We also conduct an ablation study by removing L_REG and L_S-REC from the objective. The results demonstrate that both terms are crucial to the success of CP-VAE. Without L_REG, CP-VAE experiences posterior collapse for z^(1). As a result, v_p and v_n collide with each other, leading to failure in disentangled representation learning.
Since we choose K as 3, it is convenient to visualize the samples during training via p in the learned probability simplex, as shown in Fig. 4. We can see that the whole simplex is mostly covered with samples with the help of L_S-REC. Without L_S-REC, the decoding network fails to recognize the basis vectors due to the poor coverage of the probability simplex, causing the model to lose most of its transferring ability. For the following comparisons, we run CP-VAE with two different sources of sentence representations for z^(1), averaged GloVe embeddings and BERT features, denoted as CP-G(loVe) and CP-B(ert) respectively. The remaining settings are as described in the section above. On the two datasets (Yelp and Amazon), we compare to three adversarially trained models: StyleEmbedding (SE), MultiDecoder (MD), and CrossAligned (CA), and to two state-of-the-art models based on a "delete, transform, and generate" framework: DeleteAndRetrieve (D&R) and Blind-GenerativeStyleTransformer (B-GST). Evaluation protocols: Four different automatic evaluation metrics are used to measure different aspects of transfer quality, following prior work. To measure transferring ability, we use pre-trained CNN-based classifiers achieving 98% and 84% accuracy on the test sets of Yelp and Amazon respectively. To measure content preservation, we use the BLEU score between the transferred sentences and the source sentences. To measure fluency, we fine-tune OpenAI GPT-2 with 345 million parameters on the same training-dev-test split to obtain the perplexity of generated sentences. The fine-tuned language models achieve perplexities of 26.6 and 34.5 on the test sets of Yelp and Amazon respectively. In addition, it has been argued that the Generalized Language Evaluation Understanding metric (GLEU) has a better correlation with human judgement; here, we use a publicly available implementation of GLEU to calculate the GLEU score. Result analysis: As observed in previous work, accuracy, BLEU score and perplexity do not correlate well with human evaluations; therefore, it is important not to consider them in isolation. Tab. 2 shows that our proposed approaches obtain scores on these metrics similar to those of the human reference sentences (second row), indicating that the sentences generated by our proposed approaches are reasonable when the combination of these metrics is considered. As seen in previous work and verified in Sec. 6.1, GLEU strikes a balance between target style match and content retention and correlates well with human evaluations. From Tab. 2, CP-VAE consistently outperforms the three adversarially trained models on GLEU by a noticeable margin and achieves competitive results compared to the recent state-of-the-art models. By checking the samples generated from the models as shown in Tab. 3, B-GST, the current state-of-the-art, is more consistent with the source sentence, which can be expected, since it only makes the edits necessary to flip the sentiment. CP-VAE tends to generate more diverse content which may occasionally be less relevant, but the overall quality is reasonable considering it is trained without label information. More samples can be found in Appendix E.

Tab. 3 samples: "they are cheerios, afterall, and we love the original kind." HUMAN: "because it is made of cast iron, scorching is maximized." / "they are cheerios, and we love them." B-GST: "because it s cheaply made of cast iron, is useless." / "they are sturdy, afterall, sturdy and we love the original." / "because it s made of cast iron, vomitting." / "they are ripe, tastier, and we love them." / "because it s made of cast iron, limp." / "they are divine, fluffier, and we love them."

Table 4: Two pairs of samples generated without and with topic transition.
The first sentence in the pair is generated with a topic fixed throughout the generation; while the second sentence is generated with topic transition, the generated outputs after switching are marked as bold. World throughout A federal judge on Friday ordered a federal appeals court to overturn a federal appeals court ruling that the Visa and MasterCard credit card associations violated federal antitrust law by barring the names of the state. A federal judge on Friday ordered a federal appeals court to overturn a decision by the Supreme Court to overturn a decision by the Federal Communications Commission to block the company's antitrust case against Microsoft Corp. Sports throughout NEW YORK (Reuters) -Roger Federer, the world's No. 1 player, will miss the rest of the season because of a sore quadriceps. Sports to Business NEW YORK (Reuters) -Roger Federer, the world's No. 1 player, will miss the rest of the year because of a bid-rigging scandal. To further explore the potential of CP-VAE, we conduct the following exploratory experiments. We use the AG news dataset constructed by , which contains four topic categories which are World, Sports, Business and Sci/Tech, with the title and description fields. Here, we drop the title and just use the description field to train CP-VAE and set K = 10. All four topics are automatically discovered by CP-VAE and identified as described in Sec. 6.1. We also compare the of our identified topics to standard baselines for unsupervised topic modelling, the details can be found in Appendix D. We choose a basis vector discovered by our model and generate a few tokens. Then, we switch the basis vector and continue the generation until the end-of-seq token is generated. Generated samples are shown in Table 4. We see that our model learns to transition from one topic to another in a natural and fluent way within the same sentence. Several observations can be made based on these samples: it is good at detecting name entities and replacing them with the name entities related to the chosen topic; there is no hard restriction on when to switch the topic; the model will determine an appropriate way to do the transition by itself. Such observations confirm that CP-VAE possesses a filled constrained latent space which make the latent code robust to manipulation across different time steps, which can be effectively reflected in the generation process. Due to space limitations, we put more samples in Appendix F. In this work, we investigate latent vacancy as an important problem in unsupervised learning of controllable representations when modelling text with VAEs. To mitigate this, we propose to constrain the posterior within a learned probability simplex, achieving the first success towards controlled text generation without supervision. For the β-VAE we used for the exploratory experiments, we use a LSTM encoding network and a LSTM decoding network. For the encoding network, the input size is 256, and the hidden size is 1,024. For the decoding network, the input size is 256, the hidden size is 1,024, and dropouts with probability 0.5 are applied on after the embedding layer and the LSTM layer in the decoding network. β is chosen as 0.35, the dimension for the latent code is 80, and the batch size is 32. We use SGD with learning rate 1.0 to update the parameters for both the encoding and the decoding network. We train the model until the reconstruction loss stops decreasing. 
First, we normalize the value of each latent code by subtracting the mean estimated over all the training samples. Then we use the polarity of each latent code to classify the sentiment in the validation set. The one with the highest accuracy is identified as the latent factor indicating the sentiment. A.3 SAMPLES GENERATED FROM β-VAE Table 5: Samples of generated sentences from β-VAE on Yelp. Positive to Negative Negative to Positive SRC this place is super yummy! but it probably sucks too! β-VAE this place is perfect for all of us or so long and over priced! thank you! SRC i will be going back and enjoying this great place there is definitely not enough room in that part of the venue. β-VAE i will be going back and recommending this place to anyone who lives in the valley! there is great. Following manipulation strategies have been attempted: fixing the relevant factor to µ + 2σ and µ − 2σ; fixing the relevant factor to µ − σ and µ − σ; fixing the relevant factor to the maximum value and the minimum value of the relevant factor appearing in the training samples; calculating a latent vector based on 10 manually constructed parallel sentences with opposite sentiment while keeping other factors unchanged. However, none of these four strategies is effective considering the generation . We report the with the first strategy in the paper, since it performs the best considering the accuracy and the BLEU score. Across all the datasets, we use Adam with learning rate 0.001 to update the parameters for the encoding network, while SGD with learning rate 1.0 to update the parameters for the decoding network. The batch size is chosen to be 32. Dropouts with drop probability 0.5 are applied on applied on after the embedding layer and the LSTM layer in the decoding network. We train the model until the reconstruction loss stops decreasing. For the structured part z z z, we use β-VAE setting β as 0.2 across all the datasets. For the unstructured part z z z, different strategies are employed for each dataset: • Yelp: β-VAE setting β as 0.35. • Amazon: β-VAE setting β as 0.35. • AG-News: KL annealing, from 0.1 to 1.0 in 10 epochs. The hyperparameters are chosen by checking L VAE, KL, and the generated outputs on the development set for Yelp and AG-News. Amazon follows the same setting as Yelp without extra tuning. For decoding, we use beam search with a beam size of 5. C PROOF OF MINIMALIZATION OF EQ. 5 The problem can be formulated as an optimization problem as follows: By introducing a Lagrange multiplier λ, the Lagrange function is defined as In order to find the optimal point, we require that which shows that all p i are equal. By using the constraint i p i = 1, we find p i = Experimental setup: We use the AG news dataset for this task constructed by . It contains four topic categories which are World, Sports, Business and Sci/Tech, with the title and description fields. For each category, there are 30, 000 training samples and 1, 900 test samples. In this paper, we drop the title and just use the description field. We compare our approach to two standard baselines for unsupervised topic modelling: LDA , a standard implementation of LDA is used for this baseline 4; k-means. To show the power of our approach beyond the pre-trained sentence representations, we perform k-means clustering directly on the sentence representations. Following , we assign each inferred topic to one of the gold-standard topics with the optimal mapping and report the precision (a.k.a. purity), recall (a.k.a. 
collocation) and F 1 score. The number of topics is chosen to be 10. The reported for the baselines and our model are the average over 10 runs. Quantitative : The are shown in Table 7. We can see that our approach achieves comparable to LDA while significantly outperforming k-means in all four categories, indicating that our approach can go beyond just clustering on pre-trained sentence representations. Table 8: Sentiment manipulation from positive to negative SRC this was the best i have ever had! B-GST this was the worst place i have ever had! CP-G this was the worst pizza i have ever had! CP-B this was the worst i have ever had! SRC friendly and welcoming with a fun atmosphere and terrific food. B-GST the hummus is ridiculously bland and bland. CP-G rude and unorganized with a terrible atmosphere and coffee. CP-B the hummus is ridiculously greasy and tasteless. SRC i ordered the carne asada steak and it was cooked perfectly! B-GST i ordered the carne asada steak and it was just as bad! CP-G i ordered the carne asada steak and it was n't cooked and it was lacking. CP-B i ordered the carne asada burrito and it was mediocre. SRC the owner is a hoot and the facility is very accommodating. B-GST the owner is a jerk and the facility is very outdated. CP-G the owner is a hoot and the facility is empty and the layout is empty. the owner is a riot and the facility is very clean. SRC i will be going back and enjoying this great place! B-GST i wo n't be going back and this place is horrible! CP-G i will be going back and eat this pizza hut elsewhere. CP-B i will be going back and hated the worst dining experience. Table 9: Sentiment manipulation from negative to positive SRC there is definitely not enough room in that part of the venue. B-GST there is plenty enough seating in that part of the venue. there is definitely an authentic dinner in that part. there is definitely a nice theatre in that part. SRC but it probably sucks too! B-GST but it tastes great too! CP-G but it's truly fun and insanely delicious. CP-B but it's probably wonderful when u! SRC always rude in their tone and always have shitty customer service! B-GST always in tune with their tone and have great customer service. CP-G always great with their birthdays and always excellent music. CP-B always accommodating and my dog is always on family. SRC i was very sick the night after. B-GST i was very happy the night after. CP-G i was very pleased with the night. CP-B i was very happy with the night. SRC this is a horrible venue. B-GST this is a wonderful venue. CP-G this is a great place for celebrating friends. this is a great place for beginners. Table 10: Sentiment manipulation from positive to negative SRC most pizza wheels that i ve seen are much smaller. B-GST most pizza dough that i ve seen are much better. CP-G most pizza wheels that i ve seen are much more good and are much quality. CP-B most pizza wheels that i ve seen are much better than are much better SRC however, this is an example of how rosle got it right. B-GST however, this game is an example of how rosle loves it. CP-G however, this is an example of how toxic... sad... obviously. however, this is an example of how cheap. similar. cheap advice. cheap advice. similar. SRC auto shut off after num num hours, which is a good feature. B-GST auto shuts off after num num hours, which is a shame. CP-G whipped mask off after num num hours, which is slimy, which is disgusting. auto shut off after num num hours, which is a stupid idea, which seems to be bad. 
SRC that said, the mic did pic up everything it could. B-GST that said, the game took up everything it could. CP-G that said, the shampoo did nt smell him well. stopped cleaning everything. ended up smelling sick CP-B that said, the mic did not fit everything on well, let me down it weren t cleaning SRC i also prefered tha blade weight and thickness of the wustof! B-GST i also like the blade weight and of the wustof. CP-G i also disliked the blade weight and thickness of the materials. i also slammed the blade weight and thickness of the wide. Table 11: Sentiment manipulation from negative to positive SRC the quality is declined quickly by heat exposure. B-GST the water is quickly drained by head exposure. CP-G the quality is utilitarian so grinding or sandwiches. the quality is priceless quickly by heat rises. SRC the directions were easy to follow but the quality of the easel was pathetic. B-GST the directions were easy to follow but the quality of the product was excellent. CP-G the directions were easy to follow but the quality is good for the quality and is CP-B the directions were easy to follow but the quality is what the quality is like the best quality of SRC multiplayer is just as bad, though thankfully not worse. B-GST quality is just as good, though thankfully not perfect. besides it is just good, though. those.. usually... usually... multiplayer is just as bad, though somebody s also so far not so far but no problem. SRC another energy product that simply wastes our money. B-GST another energy product that simply saves our money. another energy product that simply glides your pasta. another energy product that simply wastes this money. SRC i received the wrong color and it shreds easily. B-GST i received the color and it works easily. CP-G i low the new color and it closes easily. CP-B i received the wrong color and it pours easily from dishwasher and dries easily on garlic easily. said on Tuesday it had developed a prototype micro fuel cell aimed at protecting the world's largest oil producer.
why previous VAEs on text cannot learn controllable latent representation as on images, as well as a fix to enable the first success towards controlled text generation without supervision
1,338
scitldr
In this paper we developed a hierarchical network model, called Hierarchical Prediction Network (HPNet) to understand how spatiotemporal memories might be learned and encoded in a representational hierarchy for predicting future video frames. The model is inspired by the feedforward, feedback and lateral recurrent circuits in the mammalian hierarchical visual system. It assumes that spatiotemporal memories are encoded in the recurrent connections within each level and between different levels of the hierarchy. The model contains a feed-forward path that computes and encodes spatiotemporal features of successive complexity and a feedback path that projects interpretation from a higher level to the level below. Within each level, the feed-forward path and the feedback path intersect in a recurrent gated circuit that integrates their signals as well as the circuit's internal memory states to generate a prediction of the incoming signals. The network learns by comparing the incoming signals with its prediction, updating its internal model of the world by minimizing the prediction errors at each level of the hierarchy in the style of {\em predictive self-supervised learning}. The network processes data in blocks of video frames rather than a frame-to-frame basis. This allows it to learn relationships among movement patterns, yielding state-of-the-art performance in long range video sequence predictions in benchmark datasets. We observed that hierarchical interaction in the network introduces sensitivity to memories of global movement patterns even in the population representation of the units in the earliest level. Finally, we provided neurophysiological evidence, showing that neurons in the early visual cortex of awake monkeys exhibit very similar sensitivity and behaviors. These findings suggest that predictive self-supervised learning might be an important principle for representational learning in the visual cortex. While the hippocampus is known to play a critical role in encoding episodic memories, the storage of these memories might ultimately rest in the sensory areas of the neocortex BID27. Indeed, a number of neurophysiological studies suggest that neurons throughout the hierarchical visual cortex, including those in the early visual areas such as V1 and V2, might be encoding memories of object images and of visual sequences in cell assemblies BID54 BID14 BID52 BID2 BID21. As specific priors, these memories, together with the generic statistical priors encoded in receptive fields and connectivity of neurons, serve as internal models of the world for predicting incoming visual experiences. In fact, learning to predict incoming visual signals has also been proposed as an objective that drives representation learning in a recurrent neural network in a self-supervised learning paradigm, where the discrepancy between the model's prediction and the incoming signals can be used to train the network using backpropagation, without the need of labeled data BID9 BID46 BID42 BID34 BID21.In computer vision, a number of hierarchical recurrent neural network models, notably PredNet BID24 and PredRNN++, have been developed for video prediction with state-of-the-art performance. PredNet, in particular, was inspired by the neuroscience principle of predictive coding BID31 BID39 BID21 BID7 BID11. It learns a LSTM (long short-term memory) model at each level to predict the prediction errors made in an earlier level of the hierarchical visual system. 
Because the error representations are sparse, the computation of PredNet is very efficient. However, the model builds a hierarchical representation to model and predict its own errors, rather than learning a hierarchy of features of successive complexities and scales to model the world. The lack of a compositional feature hierarchy hampers its ability in long range video predictions. Here, we proposed an alternative hierarchical network architecture. The proposed model, HPNet (Hierarchical Prediction Network), contains a fast feedforward path, instantiated currently by a fast deep convolutional neural network (DCNN) that learns a representational hierarchy of features of successive complexity, and a feedback path that brings a higher order interpretation to influence the computation a level below. The two paths intersect at each level through a gated recurrent circuit to generate a hypothetical interpretation of the current state of the world and make a prediction to explain the bottom-up input. The gated recurrent circuit, currently implemented in the form of LSTM, performs this prediction by integrating top-down, bottom-up, and horizontal information. The discrepancy between this prediction and the bottom-up input at each level is called prediction error, which is fed back to influence the interpretation of the gated recurrent circuits at the same level as well as the level above. To facilitate the learning of relationships between movement patterns, HPNet processes data in the unit of a spatiotemporal block that is composed of a sequence of video frames, rather than frame by frame, as in PredNet and PredRNN++. We used a 3D convolutional LSTM at each level of the hierarchy to process these spatiotemporal blocks of signals BID1, which is a key factor underlying HPNet's better performance in long range video prediction. In the paper, we will first demonstrate HPNet's effectiveness in predictive learning and its competency in long range video prediction. Then we will provide neurophysiological evidence showing that neurons in the early visual cortex of the primate visual system exhibit the same sensitivity to memories of global movement patterns as units in the lowest modules of HPNet. Our suggest that predictive self-supervised learning might indeed be an important strategy for representation learning in the visual cortex, and that HPNet is a viable computational model for understanding the computation in the visual cortical circuits. Our objective is to develop a hierarchical cortical model for predictive learning of spatiotemporal memories that is competitive both for video prediction, and for understanding the learning principles and the computational mechanisms of the hierarchical visual system. In this regard, our model is similar conceptually to Ullman's counter-stream model , Mumford's analysis by synthesis framework BID32, and Hawkin's hierarchical spatiotemporal memory model (HTM) BID15 for hierarchical cortical processing. At a conceptual level, it can also be considered as a deep learning implementation of hierarchical Bayesian inference model of the visual cortex BID22 BID5 BID18.HPNet integrates ideas of predictive coding BID32 BID39 BID24 and associative coding BID28 BID13. It differs from the predictive coding models BID39 BID24 in that it learns a hierarchy of feature representations in the feedforward path to model features in the world as in normal deep convolutional neural networks (DCNN). 
PredNet, on the other hand, builds a hierarchy to model successive prediction errors of its own prediction of the world. PredNet is efficient because its convolution is operated on sparse prediction error codes, but we believe lacking a hierarchical feature representation limits its ability to model relationships among more global and abstract movement concepts for longer range video prediction. We believe having a fast bottom-up hierarchy of spatiotemporal features of successive scale and abstraction will allow the system to see further into the future and make better prediction. A key difference between the genre of predictive learning models (HPNet, PredNet) and the earlier predictive coding models implemented by Kalman filters BID39 or associative coding models implemented by interactive activation BID28 BID13 is that the synthesis of expectation is not done simply by the feedback path, via weight matrix multiplication, but by local gated recurrent circuits at each level. This key feature makes this genre of predictive learning models more powerful and competent in solving real computer vision problems. The idea of predictive learning, using incoming video frames as self-supervising teaching labels to train recurrent networks, can be traced back to BID9. Recently, there has been active exploration of self-supervised learning in computer vision BID35 BID34 BID12 BID42 BID37 BID47, particularly in the area of video prediction research BID17 BID43 BID53 BID33 BID46 BID51. The large variety of models can be roughly grouped into three categories: autoencoders, DCNN, and hierarchy of LSTMs. Some models also involve feedforward and feedback paths, where the feedback paths have been implemented by deconvolution, autoencoder networks, LSTM or adversary networks BID10 BID24 BID48 BID11. Some other models, such as variational autoencoders, allowed multiple hypotheses to be sampled BID0 BID6.PredRNN++ is the state-of-the-art hierarchical model for video prediction. It consists of a stack of LSTM, with the LSTM at one level providing feedforward input directly to the LSTM at the next level, and ultimately predicting the next video frame at its top level. Thus, its hierarchical representation is more similar to an autoencoder, with the intermediate layers modeling the most abstract and global spatiotemporal memories of movement patterns and the subsequent layers representing the unfolding of the feedback path into a feedforward network with its top-layer's output providing the prediction of the next frame. PredRNN++ does not claim neural plausibility, but it offers state-of-the-art performance for benchmark performance evaluation, with documented comparisons to other approaches. Recent single-unit recording experiments in the inferotemporal cortex (IT) of monkeys have shown that neurons responded significantly less to predictable sequences than to novel sequences BID29 BID30 BID38, suggesting that neural activities might signal prediction errors. The novel neurophysiolgical experiment we presented here demonstrated similar prediction suppression effects in the early visual cortex of monkeys for well-learned videos, suggesting neuronal sensitivity to memories of global movement patterns and scene context in the earliest visual areas. This is consistent with other recent studies that showed neurons in mouse V1 might be able to encode some forms of spatiotemporal memories in their recurrent circuits BID14 BID52 BID3. HPNet is composed of a stack of Cortical Modules (CM). 
Each CM can be considered as a visual area along the ventral stream of the primate visual system, such as V1, V2, V4 and IT. We used four Cortical Modules in our experiment. The network contains a feedforward path, realized as a deep convolutional neural network (DCNN), and a stack of Long Short-Term Memory (LSTM) modules that link the feedforward path and the feedback path together. Figure 1 (a) shows two CMs stacked on top of each other. The feedforward path performs convolution on the input spatiotemporal block I_l with a kernel to produce R_l, where l indicates the CM level. R_l is then down-sampled to provide the input I_{l+1} to CM_{l+1} for another round of convolution in the feedforward path. I_{l+1} also goes into LSTM_{l+1} (the LSTM in CM_{l+1}). At each CM level l, the bottom-up input I_l is compared with the prediction P_l generated from the interpretation output H_l of LSTM_l. The prediction error signal is transformed by a convolution into E_l, which is fed back to both LSTM_l and LSTM_{l+1} to influence their generation of new hypotheses H_l and H_{l+1}. To make the timing relationship between the different interacting variables more explicit, we now use k to indicate the time step or, equivalently, the video input frame. LSTM_l at step k integrates the bottom-up feature input to recover the representation R_l^k at the current frame. This allows the network to maintain a full higher-order representation at all times in the next layer while enjoying the benefit of fast computation on sparse input. In the scheme of BID36, the first frame I^{k=0} was convolved with a set of dense convolution kernels and the subsequent frames were convolved with a set of sparse convolution kernels. For parsimony and neural plausibility, we used the same set of sparse kernels for processing both the first full frame and the subsequent temporal-difference frames, at the expense of incurring some inaccuracy in our prediction of the first few frames. The input data of our network model is a sequence of video frames, or a spatiotemporal block. For our implementation, each block contains 5 video frames. If we consider that each frame corresponds roughly to 25 ms, this would translate into 125 ms, in the range of the length of the temporal kernel of a cortical neuron. Our convolution kernel is three-dimensional, processing the video in spatiotemporal blocks. The block can slide in time with a temporal stride of one frame or with a stride as large as the length of the block d. The LSTM is a 3D convolutional LSTM BID1 because of the 3D convolution and the spatiotemporal blocks. The convolutional LSTM BID41, in which the Hadamard product in the LSTM is replaced by a convolution, has greatly improved the performance of LSTMs in many applications. Earlier video prediction models (e.g. PredNet, PredRNN) processed video sequences frame by frame, as shown in FIG1 (d). We experimented with different data units and approaches. In the Frame-to-Frame (F-F) approach, an input frame is used to generate one predicted future frame (FIG1). In the Block-to-Frame (B-F) approach (FIG1), a block of input frames is used to generate one predicted future frame. This approach is time consuming, but provides more accurate near-range predictions. For longer-range predictions, we found using a spatiotemporal block to predict a spatiotemporal block, i.e. the Block-to-Block (B-B) approach (FIG1), to be the most effective, because the LSTM learns the relationship between movement segments in the sequences.
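The 3D convolutional LSTM mentioned above can be sketched roughly as follows, assuming PyTorch. The gate structure shown is that of a standard ConvLSTM with 3D convolutions substituted for the matrix products; the paper's exact gate equations (given in Appendix A), kernel size, and channel counts may differ, so everything here is illustrative.

```python
# Minimal sketch (PyTorch, illustrative): a 3D convolutional LSTM cell operating on
# spatiotemporal blocks of shape (batch, channels, depth=frames, height, width).
# Kernel size and channel counts are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class ConvLSTM3dCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        pad = kernel // 2
        # One 3D convolution produces all four gates (i, f, o, g) at once.
        self.gates = nn.Conv3d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=pad)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state                                   # hidden and cell states
        z = self.gates(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(z, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)                  # update cell state
        h = o * torch.tanh(c)                          # new hidden state / output
        return h, (h, c)

# Example block: batch of 2, 3 input channels, 5 frames, 64x64 pixels.
cell = ConvLSTM3dCell(in_ch=3, hid_ch=16)
x = torch.randn(2, 3, 5, 64, 64)
h0 = torch.zeros(2, 16, 5, 64, 64)
c0 = torch.zeros(2, 16, 5, 64, 64)
h, (h1, c1) = cell(x, (h0, c0))
```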
The details of our 3D convolutional LSTM algorithm are specified in Appendix A. The entire network is trained by minimizing a loss function that is the weighted sum of all the prediction errors over time steps and CM levels,

L_train = Σ_k λ_k Σ_l (λ_l / n_l) ‖E_l^k‖,

with the prediction at each level given by P_l^k = SATLU(spconv(H_l^k)), where x_t is the input sequence, H_l^k is the output of LSTM_l, P_l^k is the prediction, SATLU is a saturating non-linearity set at the maximum pixel value, SATLU(x; p_max) := min(p_max, x), spconv is sparse convolution, λ_k and λ_l are weighting factors over time and CM level respectively, n_l is the number of units in the l-th CM level, and d is the number of frames in each spatiotemporal block. The full algorithm is shown in Algorithm 1. In this section, we first evaluate the performance of our model in video prediction using two benchmark datasets: synthetic sequences from the Moving-MNIST database and the KTH real-world human movement database. We then investigate the representations in the model to understand how the recurrent network structures have impacted the feedforward representation. We finally compare the temporal activities of units in the network model with those of neurons in the visual cortex of monkeys during video sequence learning, to evaluate the plausibility of HPNet. Since, for video prediction, PredNet is the most neurally plausible model and PredRNN++ provides state-of-the-art computer vision performance, we compare HPNet's performance with these two network models. Because these two models work on a frame-to-frame basis, we implemented three versions of our network for comparison: Frame-to-Frame (F-F), where we set the spatiotemporal block size to one frame and used a 2D convLSTM instead of a 3D convLSTM to predict the next frame based on the current frame; Block-to-Frame (B-F), where we used a sliding block window to predict the next frame based on the current block of frames; and Block-to-Block (B-B), where the next spatiotemporal block was predicted from the current spatiotemporal block (FIG1). We trained all five networks using 40-frame sequences extracted from the two databases in the same way as described in BID24. We then compared their performance in predicting the next 20 frames when only the first 20 frames were given. The test sequences were drawn from the same dataset but were not in the training set. The common practice in PredNet and PredRNN++ for predicting future frames when input is no longer available is to feed the prediction from the last time step back in as the next input and use it to generate the prediction for the following time step. All models tested have four modules (layers). All three versions of our model and PredNet used the same number of feature channels in each layer, optimized by grid search separately for the Moving-MNIST and the KTH datasets. For PredRNN++, we used the same architecture and feature channel numbers provided by its authors. All kernel sizes are either 3×3 (for F-F) or 3×3×3 (for B-F and B-B) for all five models. The input image frame's spatial resolution is 64×64. The models were trained and tested on GeForce GTX TITAN X GPUs. We evaluated the prediction performance with two quantitative metrics: the Mean-Squared Error (MSE) and the standard Structural Similarity Index Measure (SSIM) BID50, computed over the last 20 frames between the predicted frames and the actual frames.
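The training loss described above, a weighted sum of prediction errors over time steps and CM levels, can be sketched as follows, assuming PyTorch; the use of an L1 error norm and the example weight values are illustrative assumptions, not the paper's exact choices.

```python
# Minimal sketch (PyTorch, illustrative): training loss as a weighted sum of
# prediction errors over time steps (weights lam_k) and CM levels (weights lam_l),
# normalized by the number of units per level, as described in the text.
import torch

def hpnet_loss(errors, lam_k, lam_l):
    """errors[k][l]: prediction-error tensor E_l^k for time step k and CM level l."""
    total = 0.0
    for k, per_level in enumerate(errors):
        for l, e in enumerate(per_level):
            n_l = e.numel()                      # number of units in level l
            total = total + lam_k[k] * lam_l[l] * e.abs().sum() / n_l
    return total

# Example with 2 time steps and 2 levels of dummy error tensors.
errors = [[torch.randn(1, 8, 5, 32, 32), torch.randn(1, 16, 5, 16, 16)]
          for _ in range(2)]
loss = hpnet_loss(errors, lam_k=[1.0, 1.0], lam_l=[1.0, 0.1])
```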
The values of SSIM range from -1 to 1, with a larger value indicating greater similarity between the predicted frames and the actual future frames. We randomly chose subsets of digits in the Moving MNIST 2 dataset in which the video sequences contain two handwritten digits bouncing inside a frame of 64×64 pixels. We extracted 40-frame sequences at random starting frame position in the video in the same way as in BID42 (followed by PredNet and PredRNN++). This extraction process is repeated 15000 times, ing in a training set of 10000 sequences, a validation set of 2000 sequences, and a testing set of 3000 sequences. FIG2 and TAB0 compare the of different models on the Moving-MNIST dataset. There are 40 frames in total and we show the every two frames. The yellow vertical line in the middle represents the border between the first 20 and the last 20 predicted frames by various models. We can see B-F achieves better performance than B-B in short term prediction task when actual input frames are provided, but B-B outperforms B-F in the longer range prediction, reflecting learning of the relationships at the movement levels by the 3D convLSTM. B-F doing better than F-F confirmed that the spatiotemporal block data structure provides additional information for modeling movement tendency. Finally, we found that even F-F achieved better prediction than PredNet, suggesting that a feature hierarchy might be more useful than a hierarchy of predicted errors. Finally, our B-B network outperformed the state-of-the-art PredRNN++. BID40 introduced the KTH video database which contains 2391 sequences of six human actions: walking, jogging, running, boxing, hand waving, and hand clapping, performed by 25 subjects in four different scenarios. We divided video clips across all 6 action categories into a training set of 108717 sequences (persons #1-16) and a test set of 4086 sequences (persons #17- -block (B-B), block-toframe (B-F), frame-to-frame (F-F)), PredNet, and PredRNN++, respectively. k=1 to k=19 are predicted frames of the models when the input frames were available. k=21 to k=39 are the "deadreckoning" predicted frames of the model when there are no input. 25) as was done in, except we extracted 40-frame sequences. We center-cropped each frame to a 120×120 square and then re-sized it to input frame size of 64×64. TAB1 compared the of the different models on the KTH dataset, essentially reproducing all the observations we made based on the Moving-MINST dataset FIG2. B-B outperformed all tested models in the long range video prediction task. FIG5 (a) and (b) compared the video prediction performance of the different models in terms of the "dead-reckoning frames" to be predicted when only the first twenty frames were provided for the two datasets. The show that, in both cases, B-B is far more effective than B-F in long range video prediction. Hierarchical feedback in HPNet endows the representations in the earliest Cortical Modules with sensitivity to global movement and image patterns, despite these units' very localized receptive fields, particularly R in the feedforward path (FIG7). Could the neurons in the early visual areas of the mammalian hierarchical visual systems behave in a similar way, becoming sensitive to the memory of global movement patterns of familiar movies?We found this to be the case in a series of neurophysiological experiments that we have performed to study the effect of unsupervised learning of video sequences on the early visual cortical representations. 
Two monkeys, implanted with Gray-Matter semi-chronic multielectrode arrays (SC32 and SC96) over the V1 operculum with access to neurons in V1 and V2, participated in the experiment. Each experiment lasted for at least seven daily recording sessions. In each recording session, the monkey was required to fixate on a red dot on the screen for a water reward while a set of 40 video clips of natural scenes with global movement patterns was presented. One clip was presented per trial. Each clip lasted for 800 ms. A total of 40 clips were presented once each in a random interleaved fashion in a block of trials, and each block was repeated 20-25 times each day 3. Among the 40 movie clips tested every day, twenty of these were the same each day, designated as "Predicted set". Twenty of them were different each day, designated as "Unpredicted set". Each set consisted of 20 movies. The rationale for the experimental design is as follows. Given that we were recording from 30+ neurons in each session, even though the neurons have different stimulus preferences in their local receptive fields, each neuron would experience about 400 movie frames for the Predicted movie set, as well as for each of the Unpredicted movie sets. When we averaged the temporal responses of all the neurons to each of the 20-movie sets, they should be roughly the same. In the first two days of the experiment, the clips in the Predicted set were still unpredicted, hence there should have been no difference between the population averaged responses to the Predicted set and the Unpredicted set. This was indeed the case as shown in FIG8 (top row) which compared the averaged temporal responses of the neurons to the Predicted set and to the Unpredicted set for the first two days of training in one experiment. DISPLAYFORM0 Interestingly, we found that after only three days of unsupervised training, with 20-25 exposures of each familiar movie per day, the neurons started to respond significantly less to predicted movies than to novel movies in the later part of their responses, starting around 100 ms post-stimulus onset, as shown in FIG8 (b) (bottom row). The evolution of daily mean of all neurons' familiarity suppression index over days is shown as the magenta curve. As the neurons became more and more familiar with the Predicted set, the prediction suppression effect gradually increased and saturated at around the sixth and seventh days. We repeated the experiments six times in two monkeys and obtained fairly consistent . Note that the movie clips were shown in a 8 o aperture during the experiment. Given that the V1 and V2 neurons being studied have very local and small receptive fields (0.5 o to 2 o), it is rather improbable that the neurons would have remembered or adapted to the local movement patterns of the Predicted set within their receptive fields, as they would be experiencing millions of such local spatiotemporal patterns in their daily experience. Indeed, when the video clips were shown to the neurons through a smaller 3 o diameter aperture, the prediction suppression effects were much attenuated, suggesting that the neurons had indeed became sensitive to the global context of movement patterns! To check whether neurons in our network behave in the same way, we performed a similar experiment on our network, pretrained with the KTH dataset. We randomly extracted 20 sequences from the BAIR dataset BID8, resized the sequence length to 40 frames and each frame size to 64×64. 
We separated the 20 video sequences into two sets -the Predicted set and the Unpredicted set. We averaged the responses to the two movie sets respectively of each type of neurons in the network (E (prediction error units), P (prediction units), and R (representation units)) in each CM within the center 8×8 hypercolumns. Before training, the responses of each type of neurons are indeed the same for both movie sets (not shown, but similar to FIG8 . Then, we trained the network with the Predicted set for 2000 epochs. After training, all three types of units in each CM exhibited the prediction suppression effect as shown in FIG8 (c)-(h) (full details in Appendix C). We observed the prediction suppression effect in all three types of neurons in all the modules in the hierarchy, with the higher modules showing a stronger effect. It is not surprising that the prediction error neurons E would decrease their responses as the network learns to predict the familiar movies better. It is rather interesting to find the representation neurons R and the prediction neurons P also exhibit prediction suppression, even though these neurons represent features rather than prediction errors. The precise reasons remain to be determined, but the fact that all neuron types in the model exhibited the prediction suppression effect might explain why the prediction suppression effects were commonly observed in most of the randomly sampled neurons in the visual cortex (see FIG8). These findings suggest that predictive self-supervised learning might indeed be an important principle and mechanism by which the visual cortex learns its representations, and the neurophysiological observations on prediction suppression in IT (see Appendix D) and now in the early visual cortex might be explained by this class of hierarchical cortical models. In this paper, we developed a hierarchical prediction network model (HPNet), with a fast DCNN feedforward path, a feedback path and local recurrent LSTM circuits for modeling the counterstream / analysis-by-synthesis architecture of the mammalian hierarchical visual systems. HPNet utilizes predictive self-supervised learning as in PredNet and PredRNN++, but integrates additional neural constraints or theoretical neuroscience ideas on spatiotemporal processing, counter-stream architecture, feature hierarchy, prediction evaluation and sparse convolution into a new model that delivers the state-of-the-art performance in long range video prediction. Most importantly, we found that the hierarchical interaction in HPNet introduces sensitivity to global movement patterns in the representational units of the earliest module in the network and that real cortical neurons in the early visual cortex of awake monkeys exhibit very similar sensitivity to memories of global movement patterns, despite their very local receptive fields. These findings support predictive self-supervised learning as an important principle for representation learning in the visual cortex and suggest that HPNet might be a viable computational model for understanding the cortical circuits in the hierarchical visual system at the functional level. Further evaluations are needed to determine definitively whether PredNet or HPNet is a better fit to the biological reality. APPENDIX A 3D CONVOLUTIONAL LSTM Because our data are in the unit of spatitemporal block, we have to use a 3D form of the 2D convolutional LSTM. 3D convolutional LSTM has been used by BID1 in the stereo setting. 
The dimensions of the input video or of the various representations (I, E, and H) in any module are c×d×h×w, where c is the number of channels, d the number of adjacent frames, and h and w the spatial dimensions of the frame. The 3D spatiotemporal convolution kernel is m×k×k in size, where m is the kernel temporal depth and k the kernel spatial size. The spatial stride of the convolution is 1. The size of the output with n kernels is n×d×h×w. We define the inputs as X_1, ..., X_t, the cell states as C_1, ..., C_t, the outputs as H_1, ..., H_t, and the gates as i_t, f_t, o_t. Our 3D convolutional LSTM follows the standard convolutional LSTM update equations, with the 2D convolutions replaced by 3D spatiotemporal convolutions; 3D convolution is denoted by ∗ and the Hadamard product by •. The figures below supplement FIG7 of the main text of the paper; they demonstrate that as more higher-order modules are stacked up in the hierarchy, the semantic clustering into the six movement classes becomes more pronounced even in the early modules, suggesting that the hierarchical interaction has steered the feature representation into semantic clusters even in the early modules. ("Module 4-1" denotes the representation of module 1 in a 4-module network.) We use linear decoding (multi-class SVM) to assess the distinctiveness of the semantic clusters in the representations of the different modules in the different networks. The decoding results in TAB2 show that the decoding accuracy based on the representation of module 1 improves from chance (16%) to 26%, an improvement of 60%, between a 1-module HPNet and a 4-module HPNet, and that the representation of module 4 of a 4-module HPNet achieves 63% accuracy in classifying the six movement classes. This suggests that the network only needs to learn to predict unlabelled video sequences, and it automatically learns reasonable semantic representations for recognition. For comparison, we also performed decoding on the output representations of each LSTM layer in PredRNN++ and PredNet to study their representations of the six movement patterns. The results shown below indicate that the semantic clustering of the six movements is not very strong in the PredRNN++ hierarchy. We realized that this might be because PredRNN++ behaves essentially like an autoencoder: the four-layer network effectively has only two layers of feature abstraction, with layer 2 being the most semantic in the hierarchy and layers 3 and 4 representing the unfolding of the feedback path. Decoding results indicate that the hierarchical representation based on the output of the LSTM at every layer in PredNet, which serves to predict the prediction errors of the previous layer, does not contain semantic information about the global movement patterns. Figure 8: Results of video sequence learning experiments showing that prediction suppression can be observed in E, P, and R units in every module along the hierarchical network. The abscissa is time after stimulus onset, where we set each video frame to 25 ms for comparison with neural data. The ordinate is the normalized average temporal response of all the units within the center 8×8 hypercolumns, averaged across all neurons and across the 20 movies in the Predicted set (blue) and the Unpredicted set (red), respectively. Prediction suppression can be observed in all types of units, though it is more pronounced in the E and P units. HPNet readily reproduces the prediction suppression effects observed in IT neurons. 
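The update equations referenced above do not survive in this extract. As a stand-in, and before turning to the IT pairing experiment below, here is a minimal sketch of a 3D convolutional LSTM cell: it assumes the standard ConvLSTM gating (a single 3D convolution over the concatenated input and previous hidden state, split into four gates) and omits peephole terms, so it illustrates the construction rather than the exact cell used in HPNet.

```python
import torch
import torch.nn as nn

class ConvLSTM3DCell(nn.Module):
    """One step of a 3D convolutional LSTM over (channels, depth, height, width) blocks."""

    def __init__(self, in_channels, hidden_channels, kernel=(3, 3, 3)):
        super().__init__()
        padding = tuple(k // 2 for k in kernel)  # stride 1, 'same' temporal/spatial size
        # Single 3D conv producing all four gates (i, f, o, g) from [X, H_prev].
        self.gates = nn.Conv3d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel, padding=padding)
        self.hidden_channels = hidden_channels

    def forward(self, x, state=None):
        # x: (batch, in_channels, d, h, w)
        if state is None:
            shape = (x.size(0), self.hidden_channels, *x.shape[2:])
            state = (torch.zeros(shape, device=x.device),
                     torch.zeros(shape, device=x.device))
        h_prev, c_prev = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h_prev], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c_prev + i * g          # Hadamard products, as in the text
        h = o * torch.tanh(c)
        return h, (h, c)

# Example: a block of 5 adjacent 64x64 frames with 16 feature channels.
cell = ConvLSTM3DCell(in_channels=16, hidden_channels=32)
h, state = cell(torch.randn(2, 16, 5, 64, 64))
print(h.shape)  # torch.Size([2, 32, 5, 64, 64])
```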
BID29 trained monkeys to view image pairs in a fixed order for over 800 trials for each of 8 image pairs, and then compared the responses of the neurons to these images in the trained order against the responses of the neurons to the same images in novel pairings. FIG11 shows the mean responses of 81 IT neurons during the testing stage for predicted pairs and unpredicted pairs. All the stimuli were presented in both kinds of pairs. They found that the neural responses to the expected second image in a familiar sequence order are much weaker than the neural responses to the same image in an unfamiliar or unexpected sequence order. To evaluate whether HPNet can produce the same effect, we performed exactly the same experiment with 2000 epochs of training on the image pairs, with a gap of 2 frames, and our model produced the same results, with lower responses for the predicted second stimulus relative to the unpredicted second stimulus. Each stimulus sequence was presented first with 5 gray frames, followed by 10 frames of the first image in the pair, then 2 gray frames as a gap, then 10 frames of the second image in the pair. The responses of the units to the trained set and the untrained set were the same prior to training. After training, the responses to the images arranged in the trained order dropped much more after the initial transient than the responses to the same images arranged in unpredicted pairs. The results shown in FIG1 duplicate the observations in BID29: the average response of the E units is lower for the predicted pairs than for the unpredicted pairs. All three types of units in HPNet exhibit prediction suppression, though the effect is much weaker for the R units (see FIG1). BID25 also tested the prediction suppression effect, but their model could not allow any gap between the stimuli as in the experiment. Our model can handle the gap because it processes information in spatiotemporal blocks.
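The paired-image protocol just described is straightforward to restate programmatically. Below is a minimal sketch that builds one trial as an array of frames (5 gray, 10 of the first image, a 2-frame gray gap, 10 of the second image); the frame size, gray level, and images are placeholders.

```python
import numpy as np

def make_pair_sequence(img_a, img_b, gray_level=0.5):
    """Build one trial: 5 gray frames, 10 frames of the first image,
    a 2-frame gray gap, then 10 frames of the second image."""
    gray = np.full_like(img_a, gray_level)
    frames = ([gray] * 5) + ([img_a] * 10) + ([gray] * 2) + ([img_b] * 10)
    return np.stack(frames)  # shape: (27, H, W, C)

# Placeholder 64x64 RGB images standing in for a trained image pair.
img_a = np.random.rand(64, 64, 3)
img_b = np.random.rand(64, 64, 3)
predicted_trial = make_pair_sequence(img_a, img_b)                          # trained order
unpredicted_trial = make_pair_sequence(img_a, np.random.rand(64, 64, 3))    # novel pairing
print(predicted_trial.shape)
```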
A new hierarchical cortical model for encoding spatiotemporal memory and video prediction
1,339
scitldr
Saliency maps are often used to suggest explanations of the behavior of deep rein- forcement learning (RL) agents. However, the explanations derived from saliency maps are often unfalsifiable and can be highly subjective. We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and show that explanations suggested by saliency maps are often not supported by experiments. Our experiments suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool. Saliency map methods are a popular visualization technique that produce heatmap-like output highlighting the importance of different regions of some visual input. They are frequently used to explain how deep networks classify images in computer vision applications (; ; ; ; ; ; ; ; ;) and to explain how agents choose actions in reinforcement learning (RL) applications (; ; ; ; ; ; ;). Saliency methods in computer vision and reinforcement learning use similar procedures to generate these maps. However, the temporal and interactive nature of RL systems presents a unique set of opportunities and challenges. Deep models in reinforcement learning select sequential actions whose effects can interact over long time periods. This contrasts strongly with visual classification tasks, in which deep models merely map from images to labels. For RL systems, saliency maps are often used to assess an agent's internal representations and behavior over multiple frames in the environment, rather than to assess the importance of specific pixels in classifying images. Despite their common use to explain agent behavior, it is unclear whether saliency maps provide useful explanations of the behavior of deep RL agents. Some prior work has evaluated the applicability of saliency maps for explaining the behavior of image classifiers (; ;), but there is not a corresponding literature evaluating the applicability of saliency maps for explaining RL agent behavior. In this work, we develop a methodology grounded in counterfactual reasoning to empirically evaluate the explanations generated using saliency maps in deep RL. Specifically, we: C1 Survey the ways in which saliency maps have been used as evidence in explanations of deep RL agents. C2 Describe a new interventional method to evaluate the inferences made from saliency maps. C3 Experimentally evaluate how well the pixel-level inferences of saliency maps correspond to the semantic-level inferences of humans. (a) (b) (c) Figure 1: (a) A perturbation saliency map from a frame in Breakout, (b) a saliency map from the same model and frame with the brick pattern reflected across the vertical axis, and (c) a saliency map from the same model and frame with the ball, paddle and brick pattern reflected across the vertical axis. The blue and red regions represent their importance in action selection and reward estimation from the current state, respectively. The pattern and intensity of saliency around the channel is not symmetric in either reflection intervention. Consider the saliency maps generated from a deep RL agent trained to play the Atari game Breakout. The goal of Breakout is to use the paddle to keep the ball in play so it hits bricks, eliminating them from the screen. Figure 1a shows a sample frame with its corresponding saliency. The red regions represent the importance of these pixels for action selection. Note the high salience on the missing section of bricks ("tunnel") in Figure 1a. 
Creating a tunnel to target bricks at the top layers is one of the most high-profile examples of agent behavior being explained according to semantic, human-understandable concepts . Given the intensity of saliency on the tunnel in 1a, it may seem reasonable to infer that this saliency map provides evidence that the agent has learned to aim at tunnels. If this is the case, moving the horizontal position of the tunnel should lead to similar saliency patterns on the new tunnel. However, Figures 1b and 1c show that the salience pattern is not preserved. Neither the presence of the tunnel, nor the relative positioning of the ball, paddle, and tunnel, are responsible for the intensity of the saliency observed in Figure 1a. Examining how some of the technical details of reinforcement learning interact with saliency maps can help understand both the potential utility and the potential pitfalls of interpreting saliency maps. RL methods enable agents to learn how to act effectively within an environment by repeated interaction with that environment. Certain states in the environment give the agent positive or negative reward. The agent learns a policy, a mapping between states and actions according to these reward signals. The goal is to learn a policy that maximizes the discounted sum of rewards received while acting in the environment . Deep reinforcement learning uses deep neural networks to represent policies. These models enable interaction with environments requiring high-dimensional state inputs (e.g., Atari games). Consider the graphical model in Figure 2a representing the deep RL system for a vision-based game environment. The environment maintains some (usually latent) game state. Some function F produces a high-dimensional pixel representation of game state ("Pixels"). The learned network takes this pixel image and produces logits used to select an action. Temporally extended sequences of this action selection procedure in observed agent behavior. Saliency maps are produced by performing some kind of intervention M on this system and calculating the difference in logits produced by the original and modified images. The interventions used to calculate saliency for deep RL are performed at the pixel level (red node and arrow in Figure 2a). These interventions change the conditional probability distribution of "Pixels" by giving it another parent . Functionally, this can be accomplished through a variety of means, including changing the color of the pixel , adding a gray mask , blurring a small region , or masking objects with the color . The interventions M are used to simulate the effect of the absence of the pixel(s) on the network's output. Note however that these interventions change the image in a way that is inconsistent with the generative process F. They are not "naturalistic" modifications. This type of intervention produces images for which the learned network function may not be well-defined. To form explanations of agent behavior, human observers combine information from saliency maps, agent behavior, and semantic concepts. Figure 2b shows a system diagram of how these components interact. Hypotheses about the semantic features identified by the learned policy are proposed by reasoning backwards about what representation might jointly produce the observed saliency pattern and agent behavior. Counterfactual reasoning has been identified as a particularly effective way to present explanations of the decision boundaries of deep models . 
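One common instance of the pixel-level intervention M described above is perturbation saliency in the style of Greydanus et al.: blur a small region and record how much the policy output changes. A minimal sketch follows; `policy_fn` is a stand-in for a trained agent's forward pass, the Gaussian mask width and grid stride are illustrative, and saliency is evaluated only on a coarse grid of locations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perturbation_saliency(frame, policy_fn, sigma=5, stride=5):
    """frame: (H, W) grayscale observation; policy_fn: frame -> vector of action logits."""
    base = policy_fn(frame)
    blurred = gaussian_filter(frame, sigma=sigma)
    saliency = np.zeros_like(frame, dtype=float)
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    for i in range(0, frame.shape[0], stride):
        for j in range(0, frame.shape[1], stride):
            mask = np.exp(-((ys - i) ** 2 + (xs - j) ** 2) / (2.0 * sigma ** 2))
            perturbed = frame * (1 - mask) + blurred * mask
            # Saliency = half the squared change in logits caused by the perturbation.
            saliency[i, j] = 0.5 * np.sum((policy_fn(perturbed) - base) ** 2)
    return saliency

# Stand-in "policy": a fixed random linear map from pixels to 4 action logits.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 84 * 84)) / 84.0
policy_fn = lambda f: W @ f.ravel()
print(perturbation_saliency(rng.random((84, 84)), policy_fn).max())
```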
Humans use counterfactuals to reason about the enabling conditions of particular outcomes, as well as to identify situations where the outcome would have occurred even in the absence of some action or condition . Saliency maps provide a kind of pixel-level counterfactual, but if the goal is to explain agent behavior according to semantic concepts, interventions at the pixel level may not be sufficient. Since many semantic concepts may map to the same set of pixels, it may be difficult to identify the functional relationship between changes in pixels and changes in network output according to semantic concepts or game state . Researchers may be interpreting differences in network outputs as evidence of differences in semantic concepts. However, changes in pixels do not guarantee changes in semantic concepts or game state. In terms of changes to pixels, semantic concepts, and game state, we distinguish among three classes of interventions: distortion, semantics-preserving, and fat-hand (see Table 1). Semantics-preserving and fat-hand interventions are defined with respect to a specific set of semantic concepts. Fat-hand interventions change game state in such a way that the semantic concepts of interest are also altered. The pixel-level manipulations used to produce saliency maps primarily in distortion interventions, though some saliency methods (e.g., object-based) may conceivably produce semanticspreserving or fat-hand interventions as well. As pixel-level interventions are not guaranteed to produce changes in semantic concepts, counterfactual evaluations applying semantics-preserving interventions may be a more appropriate approach for precisely testing hypotheses of behavior. To assess how saliency maps are used to make inferences regarding agent behavior in practice, we surveyed recent conference papers in deep RL. We focused our pool of papers on those that use saliency maps to generate explanations or make claims regarding agent behavior. Our search criteria consisted of examining papers that cited work that first described any of the following four types of saliency maps: Jacobian Saliency. extended gradient-based saliency maps to deep RL by computing the Jacobian of the output logits with respect to a stack of input images. Perturbation Saliency. generate saliency maps by perturbing the original input image using a Gaussian blur of the image and measure changes in policy from removing information from a region. Object Saliency. use template matching, a common computer vision technique , to detect (template) objects within an input image and measure salience through changes in Q-values for masked and unmasked objects. Attention Saliency. Most recently, attention-based saliency mapping methods have been proposed to generate interpretable saliency maps . From a set of 90 papers, we found 46 claims drawn from 11 papers that cited and used saliency maps as evidence in their explanations of agent behavior. The full set of claims are given in Appendix C. We found three categories of saliency map usage, summarized in Table 2. First, all claims interpret salient areas as a proxy for agent focus. For example, a claim about a Breakout agent notes that the network is focusing on the paddle and little else . Second, 87% of the claims in our survey propose hypotheses about the features of the learned policy by reasoning backwards about what representation might jointly produce the observed saliency pattern and agent behavior. 
These types of claims either develop an a priori explanation of behavior and verify it using saliency, or they propose an ad hoc explanation after observing saliency to reason about how the agent is using salient areas. One a priori claim notes that the displayed score is the only differing factor between two states and verify from saliency that the agent is focusing on these pixels . An ad hoc claim about a racing game notes that the agent is recognizing a time-of-day cue from the color and acting to prepare for a new race . Finally, only 7% (3 out of 46) of the claims attempt to empirically verify the explanations generated from saliency maps. One of these attempts to corroborate the interpreted saliency behavior by obtaining additional saliency samples from multiple runs of the game. The other two attempt to manipulate semantics in the pixel input to assess the agent's response by, for example, adding an additional object to verify a hypothesis about memorization . In the course of the survey, we also observed several more qualitative characteristics of how saliency maps are routinely used. Subjectivity. Recent critiques of machine learning have already noted a worrying tendency to conflate speculation and explanation . Saliency methods are not designed for formalizing an abstract human-understandable concept such as "aiming" in Breakout, and they do not provide a means to quantitatively compare semantically meaningful consequences of agent behavior. This leads to subjectivity in the drawn from saliency maps. Unfalsiability. One hallmark of a scientific hypothesis or claim is that it is falsifiable . If a claim is false, its falsehood should be identifiable from some conceivable experiment or observation. One of the most disconcerting practices identified in the survey is the presentation of unfalsifiable interpretations of saliency map patterns. An example: "A diver is noticed in the saliency map but misunderstood as an enemy and being shot at" . It is unclear how we might falsify an abstract concept such as "misunderstanding". Assessment of Learned Representations. Evidence indicates that humans rely on causal relationships between objects and their categorization into abstract classes to learn complex processes, such as video games . Our survey suggests researchers infer that: salient regions map to learned representations of semantic concepts (e.g., ball, paddle), and the relationships between the salient regions map to high-level behaviors (e.g., channel-building, aiming). The researchers expectations impose a strong bias on both the existence and nature of these mappings. Our survey indicates that many researchers use saliency maps as an explanatory tool to infer the representations and processes behind an agent's behavior. However, the extent to which such inferences are valid has not been empirically evaluated under controlled conditions. In this section, we show how to generate falsifiable hypotheses from saliency maps and propose an intervention-based approach to verify the hypotheses generated from saliency maps. We intervene on game state to produce counterfactual semantic conditions. This provides a medium through which we might assess the relationship between saliency and learned semantic representations. Building Falsifiable Hypotheses from Saliency Maps. Though saliency maps may not relate directly to semantic concepts, they may still be an effective tool for exploring hypotheses about agent behavior. 
Claims or explanations informed by saliency maps have three components: semantic concepts, learned representations, and behavior. Let X be a subset of the semantic concepts that can be inferred from the input image. Semantic concepts are often identified visually from the pixel output as the game state is typically latent. Let B represent behavior, or aggregate actions, over temporally extended sequences of frames, and let R be a representation that is a function of some pixels that the agent learns during training. As noted in Section 3, researchers often attempt to infer representations from saliency patterns and agent behavior. To create scientific claims from saliency maps, we recommend using a relatively standard pattern which facilitates objectivity and falsifiability: Concept set X is salient =⇒ Agent has learned representation R ing in behavior B. Consider the Breakout brick reflection example presented in Section 2. The hypothesis introduced ("the agent has learned to aim at tunnels") can be reconstructed as: bricks are salient =⇒ agent has learned to identify a partially complete tunnel ing in maneuvering the paddle to hit the ball toward that region. Forming hypotheses in this format constructs falsifiable claims amenable to empirical analysis. Counterfactual Evaluation of Claims. As previously established in Figure 2, the learned representation and pixel input share a causal relationship with saliency maps generated over a sequence of frames. Given that the representation learned is static, the relationship between the learned representation and saliency should be invariant under different manipulations of pixel input. We use this property to assess saliency under counterfactual conditions. We generate counterfactual conditions by intervening on the RL environment. Prior work has focused on manipulating the pixel input. However, this does not modify the underlying latent game state. Instead, we intervene directly on game state. In the do-calculus formalism , this shifts the intervention node in Figure 2a to game state, which leaves the generative process of the pixel image F intact. We employ TOYBOX, a set of fully parameterized implementation of Atari games , to generate interventional data under counterfactual conditions. The interventions are dependent on the mapping between semantic concepts and learned representations in the hypotheses. Given a mapping between concept set X and a learned representation R, any intervention would require meaningfully manipulating the state in which X resides to assess the saliency on X under the semantic treatment applied. Saliency on x ∈ X is defined as the average saliency over a bounding-box around x. 1 Since the learned policies should be semantically invariant under manipulations of the RL environment, by intervening on state, we can verify whether the counterfactual states produce expected patterns of saliency on the associated concept set X. If the counterfactual saliency maps reflect similar saliency patterns, this provides stronger evidence that the observed saliency indicates the agent has learned representation R corresponding to semantic concept set X. We conduct three case studies to evaluate hypotheses about the relationship between semantic concepts and semantic processes formed from saliency maps. Each case study uses observed saliency maps to identify hypotheses in the format described in Section 4. 
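The object-level saliency measurement used throughout the case studies reduces to averaging a saliency map inside a bounding box around the concept of interest. A minimal sketch, with the map and box coordinates as placeholders:

```python
import numpy as np

def box_saliency(saliency_map, center, box_h, box_w):
    """Average saliency inside a (box_h x box_w) box centered on a concept,
    e.g. a 7x7 box around an enemy sprite or a 25x15 box around the score."""
    cy, cx = center
    y0, y1 = max(cy - box_h // 2, 0), min(cy + box_h // 2 + 1, saliency_map.shape[0])
    x0, x1 = max(cx - box_w // 2, 0), min(cx + box_w // 2 + 1, saliency_map.shape[1])
    return float(saliency_map[y0:y1, x0:x1].mean())

saliency_map = np.random.rand(210, 160)        # placeholder Atari-sized saliency map
print(box_saliency(saliency_map, center=(12, 80), box_h=25, box_w=15))
```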
The hypotheses were generated by watching multiple episodes and noting atypical, interesting or popular behaviors from saliency maps. Using TOYBOX allows us to produce counterfactual states and to generate saliency maps in these altered states. The case studies are conducted on two Atari games, Breakout and Amidar. 2 The deterministic nature of both games allows some stability in the way we interpret the network's action selection. Each map is produced from an agent trained with A2C using a CNN-based OpenAI Baselines implementation with default hyperparameters (see Appendix B for more details). Our choice of model is arbitrary. The emphasis of this work is on methods of explanation, not the explanations themselves. Case Study 1: Breakout Brick Translation. Here we evaluate the behavior from Section 2: Hypothesis 1: bricks are salient =⇒ agent has learned to identify a partially complete tunnel ing in maneuvering the paddle to hit the ball toward that region We intervene on the state by translating the brick configurations horizontally. We expect salience will be nearly invariant to the horizontal translation of the brick configuration. Figure 3a depicts saliency after intervention. Salience on the tunnel is less pronounced under left translation, and more pronounced under right translation. Since the paddle appears on the right, we additionally move the ball and paddle to the far left (Figure 3b). Temporal association (e.g. formation of a tunnel followed by higher saliency) does not generally imply causal dependence. In this case at least, tunnel formation and salience appear to be confounded by location or, at least, the dependence of these phenomena are highly dependent on location. Case Study 2: Amidar Score. Amidar is a Pac-Man-like game in which an agent attempts to completely traverse a series of passages while avoiding enemies. The yellow sprite that indicates the location of the agent is almost always salient in Amidar. Surprisingly, the displayed score is salient as often as the yellow sprite throughout the episode with varying levels of intensity. This can lead to multiple hypotheses about the agent's learned representation: the agent has learned to associate increasing score with higher reward; due to the deterministic nature of Amidar, the agent has created a lookup table that associates its score and its actions. We can summarize these hypotheses as follows: Hypothesis 2: score is salient =⇒ agent has learned to {use score as a guide to traverse the board} ing in {successfully following similar paths in games}. To evaluate hypothesis 2, we designed four interventions on score: • intermittent reset: modify the score to 0 every x ∈ timesteps. • random varying: modify the score to a random number between timesteps. • fixed: select a score from and fix it for the whole game. • decremented: modify score to be 3000 initially and decrement score by d ∈ at every timestep. Figures 4a and 4b show the of intervening on displayed score on reward and saliency intensity, measured as the average saliency over a 25x15 bounding box, respectively for the first 1000 timesteps of an episode. The mean is calculated over 50 samples. If an agent died before 1000 timesteps, the last reward was extended for the remainder of the timesteps and saliency was set to zero. Using reward as a summary of agent behavior, different interventions on score produce different agent behavior. Total accumulated reward differs over time for all interventions, typically due to early agent death. 
However, salience intensity patterns of all interventions follow the original trajectory very closely. Different interventions on displayed score cause differing degrees of degraded performance (Figure 4a) despite producing similar saliency maps (Figure 4b), indicating that agent behavior is underdetermined by salience. Specifically, the salience intensity patterns are similar for the Interventions on displayed score in differing levels of degraded performance but produce similar saliency maps, suggesting that agent behavior as measured by rewards is underdetermined by salience. control, fixed, and decremented scores, while the non-ordered score interventions in degraded performance. Figure 4c indicates only very weak correlations between the difference-in-reward and difference-in-saliency-under-intervention as compared to the original trajectory. Correlation coefficients range from 0.041 to 0.274, yielding insignificant p-values for all but one intervention. See full in Appendix E.1, Table 6. Similar trends are noted for Jacobian and perturbation saliency methods in Appendix E.1. The existence of a high correlation between two processes (e.g., incrementing score and persistence of saliency) does not imply causation. Interventions can be useful in identifying the common cause leading to the high correlation. Case Study 3: Amidar Enemy Distance. Enemies are salient in Amidar at varying times. From visual inspection, we observe that enemies close to the player tend to have higher saliency. Accordingly, we generate the following hypothesis: Hypothesis 3: enemy is salient =⇒ agent has learned to {look for enemies close to it} ing in {successful avoidance of enemy collision}. Without directly intervening on the game state, we can first identify whether the player-enemy distance and enemy saliency is correlated using observational data. We collect 1000 frames of an episode of Amidar and record the Manhattan distance between the midpoints of the player and enemies, represented by 7x7 bounding boxes, along with the object salience of each enemy. Figure 5a shows the distance of each enemy to the player over time with saliency intensity represented by the shaded region. Figure 5b shows the correlation between the distance to each enemy and the corresponding saliency. Correlation coefficients and significance values are reported in Table 3. It is clear that there is no correlation between saliency and distance of each enemy to the player. Given that statistical dependence is almost always a necessary pre-condition for causation, we expect that there will not be any causal dependence. To further examine this, we intervene on enemy positions of salient enemies at each timestep by moving the enemy closer and farther away from the player. Figure 5c contains these . Given Hypothesis 3, we would expect to see an increasing trend in saliency for enemies closer to the player. However, the size of the effect is close to 0 (see Table 3). In addition, we find no correlation in the enemy distance experiments for the Jacobian or perturbation saliency methods (included in Appendix E.2). Conclusion. Spurious correlations, or misinterpretations of existing correlation, can occur between two processes (e.g. correlation between player-enemy distance and saliency), and human observers are susceptible to identifying spurious correlations . Spurious correlations can sometimes be identified from observational analysis without requiring interventional analysis. 
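The observational analysis in Case Study 3 is simple to reproduce given per-frame logs. The sketch below computes Manhattan distances between bounding-box midpoints and Pearson correlations against the logged enemy saliency; the logged arrays here are random placeholders rather than real game data.

```python
import numpy as np
from scipy.stats import pearsonr

def manhattan_distance(player_xy, enemy_xy):
    """Manhattan distance between the midpoints of the 7x7 bounding boxes."""
    return abs(player_xy[0] - enemy_xy[0]) + abs(player_xy[1] - enemy_xy[1])

# Placeholder logs for 1000 frames and 5 enemies.
rng = np.random.default_rng(0)
player = rng.integers(0, 160, size=(1000, 2))
enemies = rng.integers(0, 160, size=(1000, 5, 2))
enemy_saliency = rng.random((1000, 5))          # object saliency per enemy per frame

for e in range(enemies.shape[1]):
    dists = np.array([manhattan_distance(p, q) for p, q in zip(player, enemies[:, e])])
    r, p_val = pearsonr(dists, enemy_saliency[:, e])
    print(f"enemy {e + 1}: r = {r:+.3f}, p = {p_val:.3f}")
```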
Thinking counterfactually about the explanations generated from saliency maps facilitates empirical evaluation of those explanations. The experiments above show some of the difficulties in drawing from saliency maps. These include the tendency of human observers to incorrectly infer association between observed processes, the potential for experimental evidence to contradict seemingly obvious observational , and the challenges of potential confounding in temporal processes. One of the main from this evaluation is that saliency maps are an exploratory tool rather than an explanatory tool. Saliency maps alone cannot be reliably used to infer explanations and instead require other supporting tools. This can include combining evidence from saliency maps with other explanation methods or employing a more experimental approach to evaluation of saliency maps such as the approach demonstrated in the case studies above. The framework for generating falsifiable hypotheses suggested in Section 4 can assist with designing more specific and falsifiable explanations. The distinction between the components of an explanation, particularly the semantic concept set X, learned representation R and observed behavior B, can further assist in experimental evaluation. Generalization of Proposed Methodology. The methodology presented in this work can be easily extended to other vision-based domains in deep RL. Particularly, the framework of the graphical model introduced in Figure 2a applies to all domains where the input to the network is image data. An extended version of the model for Breakout can be found in Appendix 7. We propose intervention-based experimentation as a primary tool to evaluate the hypotheses generated from saliency maps. Yet, alternative methods can identify a false hypothesis even earlier. For instance, evaluating statistical dependence alone can help identify some situations in which causation is absent (e.g., Case Study 3). We also employ TOYBOX in this work. However, limited forms of evaluation may be possible in non-intervenable environments, though they may be more tedious to implement. For instance, each of the interventions conducted in Case Study 1 can be produced in an observation-only environment by manipulating the pixel input . Developing more experimental systems for evaluating explanations is an open area of research. This work analyzes explanations generated from feed-forward deep RL agents. Yet, given that the proposed methodology is not model dependent, aspects of the approach will carry over to recurrent deep RL agents. The proposed methodology would not work for repeated interventions on recurrent deep RL agents due to their capacity for memorization. Prior work has proposed alternatives to the use of saliency maps to support explanation of deep RL agents. Toybox was developed to support behavioral experiments for deep RL systems . Evaluation and Critiques of Saliency Maps. Prior work in the deep network literature has evaluated and critiqued saliency maps. and demonstrate the utility of saliency maps by adding random variance in input. provide a theoretical justification of saliency and hypothesize that there exists a correlation between gradients-based saliency methods and model interpretation. and present evaluation of existing saliency methods for image classification. Others have critiqued network attention as a means of explanation ). 
We conduct a survey of uses of saliency maps, propose a methodology to evaluate saliency maps, and examine the extent to which the agent's learned representations can be inferred from saliency maps. We investigate how well the pixel-level inferences of saliency maps correspond to the semantic concept-level inferences of human-level interventions. Our show saliency maps cannot be trusted to reflect causal relationships between semantic concepts and agent behavior. We recommend saliency maps to be used as an exploratory tool, not explanatory tool. A SALIENCY METHODS Figure 6 shows example saliency maps for Amidar of the three saliency methods evaluated in this work, namely perturbation, object and Jacobian. Figure 6: Examples of (a) perturbation saliency method ; (b) object saliency method ; and (c) Jacobian saliency method , on Amidar. We use the OpenAI Baselines' implementation of an A2C model to train the RL agents on Breakout and Amidar. The model uses the CNN architecture proposed by. Each agent is trained for 40 million iterations using RMSProp with default hyperparameters (We conducted a survey of recent literature to assess how saliency maps are used to interpret agent behavior in deep RL. We began our search by focusing on work citing the following four types of saliency maps: Jacobian , perturbation , object and attention . Papers were selected if they employed saliency maps to create explanations regarding agent behavior. This ed in selecting 46 claims from 11 papers. These 11 papers have appeared at ICML, NeurIPS, AAAI, ArXiv, OpenReview and a thesis. There are several model-specific saliency mapping methods that we excluded from our survey. Perturbation Y N "Notice that both actor and critic tend to monitor the area above the ship. This may be useful for determining whether the ship is protected from enemy fire or has a clear shot at enemies." Perturbation Y N "We found that the agent enters and exits a'tunneling mode' over the course of a single frame. Once the tunneling location becomes salient, it remains so until the tunnel is finished." "Once the agent finished the first screen it is presented with another one, distinguished only by the score that was accumulated in the first screen. Therefore, an agent might encounter problems with generalizing to the new screen if it over-fits the score pixels. Figure 15 shows the saliency maps of different games supporting our claim that DQN is basing its estimates using these pixels. We suggest to further investigate this, for example, we suggest to train an agent that does not receive those pixels as input." There is a very notable difference between the policy saliency between the two models, where the former one only pays limited attention to the road and almost no attention to the engine indicator, the opposite from fA3C-LSTM. Explicitly,it means masking any regions from the input does not cause much perturbation to the policy when trained with continuous space as targets, likely because the real consequence from a small change in action, e.g. no braking (a3= 0) versus braking (a3= 0.3), can be very substantial but numerically too subtle for the network to capture during optimization on the continuous spectrum. " "Analyzing the visualizations on Seaquest, we make an interesting observation. When maximizing the Q-value for the actions, in many samples we see a low or very low oxygen meter. In these cases the submarine would need to ascend to the surface to avoid suffocation. 
Although the up action is the only sensible choice in this case, we also obtain visualized low oxygen states for all other actions. This implies that the agent has not understood the importance of resurfacing when the oxygen is low. We then run several roll outs of the agent and see that the major cause of death is indeed suffocation and not collision with enemies. " The most dominant pattern we observe is that the model learns to attend to task-relevant things in the scene. In most ATARI games that usually means that the player is one of the foci of attention, as well as enemies, power-ups and the score itself (which is an important factor in the calculating the value function). " N N " Figure 4 shows a examples of this in Ms Pacman and Alien in the both games the model scans through possible paths, making sure there are no enemies or ghosts ahead. We observe that when it does see a ghost, another path is produced or executed in order to avoid it." Pacman Attention Y N "In many games we observe that the agent learns to place trip-wires at strategic points in space such that if a game object crosses them a specific action is taken. For example, in Space Invaders two such trip wires are following the player ship on both sides such that if a bullet crosses one of them the agent immediately evades them by moving towards the opposite direction." Space Invaders Attention Y N "Another example is Breakout where we can see it working in two stages. First the attention is spread out around the general area of the ball, then focuses into a localized line. Once the ball crosses that line the agent moves towards the ball." Breakout Attention Y N "As can be seen, the system uses the two modes to make its decisions, some of the heads are content specific looking for opponent cars. Some are mixed, scanning the horizon for incoming cars and when found, tracking them, and some are location based queries, scanning the area right in front of the player for anything the crosses its path (a trip-wire which moves with the player)." Comparing the attention agent to the baseline agent, we see that the attention agent is sensitive to more focused areas along the possible future trajectory. The baseline agent is more focused on the area immediately in front of the player (for the policy saliency) and on the score, while the attention agent focuses more specifically on the path the agent will follow (for the policy) and on possible future longer term paths (for the value). " Note how it focuses attention on the upper part of the screen after destroying multiple bricks from the top. This attention does not go away after the ball moves elsewhere (not shown in the images).We speculate that this is how the agent models tunneling:rather than having a high-level concept of digging a tunnel,it simply strikes wherever it has managed to strike already." The causal graphical model in Figure 2a can be extended to different domains in RL where the input to the model is an image. This requires extending game state to include the underlying state variables. Figure 7 shows the extension for Breakout using an A2C model. Note, game state includes the underlying state variables for Breakout and logits were split into action logits and value as outputted by A2C. Table 6: Numeric representing Pearson's correlation coefficient and p-value for the differences in reward and saliency (object, perturbation and Jacobian) from the original trajectory for each intervention. 
Results show small correlation coefficients (r), suggesting that there is only a weak relationship between the differences in reward and saliency for the interventions. We also evaluated Pearson's correlation between the differences in reward and saliency with respect to the original trajectory for all three methods (see Table 6). The results support the correlation plots in Figures 4c, 8c and 9c. We further evaluated the relationship between player-enemy distance and saliency (Section 5) in Amidar on perturbation and Jacobian saliency maps. These results are presented in Figures 10 and 11, respectively. Jacobian saliency performed the worst in the intervention-based experiment, suggesting that player-enemy distance has no impact on saliency. Regression analyses between distance and perturbation saliency and between distance and Jacobian saliency can be found in Tables 7 and 8, respectively. The results support the lack of correlation between the observational and interventional distributions. Note that enemy 1 is more salient throughout the game compared to the other four enemies, resulting in a larger interventional sample size (Figures 11b and c). The results indicate a very small strength of effect (slope) for both observational and interventional data and a small correlation coefficient (r), suggesting that there is, at best, only a very weak causal dependence of saliency on distance-to-player.
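The Table 6 analysis reduces to correlating, per intervention, the per-timestep differences in reward and in saliency relative to the original trajectory. A minimal sketch with placeholder logs standing in for the recorded runs:

```python
import numpy as np
from scipy.stats import pearsonr

def intervention_correlation(reward_orig, saliency_orig, reward_int, saliency_int):
    """Pearson correlation between difference-in-reward and difference-in-saliency,
    both measured relative to the original, un-intervened trajectory."""
    d_reward = np.asarray(reward_int) - np.asarray(reward_orig)
    d_saliency = np.asarray(saliency_int) - np.asarray(saliency_orig)
    return pearsonr(d_reward, d_saliency)

# Placeholder per-timestep logs (1000 steps).
rng = np.random.default_rng(1)
reward_orig, saliency_orig = rng.random(1000).cumsum(), rng.random(1000)
reward_fixed, saliency_fixed = rng.random(1000).cumsum(), rng.random(1000)
r, p = intervention_correlation(reward_orig, saliency_orig, reward_fixed, saliency_fixed)
print(f"fixed-score intervention: r = {r:.3f}, p = {p:.3f}")
```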
Proposing a new counterfactual-based methodology to evaluate the hypotheses generated from saliency maps about deep RL agent behavior.
1,340
scitldr
One of the unresolved questions in deep learning is the nature of the solutions that are being discovered. We investigate the collection of solutions reached by the same network architecture, with different random initialization of weights and random mini-batches. These solutions are shown to be rather similar - more often than not, each train and test example is either classified correctly by all the networks, or by none at all. Surprisingly, all the network instances seem to share the same learning dynamics, whereby initially the same train and test examples are correctly recognized by the learned model, followed by other examples which are learned in roughly the same order. When extending the investigation to heterogeneous collections of neural network architectures, once again examples are seen to be learned in the same order irrespective of architecture, although the more powerful architecture may continue to learn and thus achieve higher accuracy. This pattern of remains true even when the composition of classes in the test set is unrelated to the train set, for example, when using out of sample natural images or even artificial images. To show the robustness of these phenomena we provide an extensive summary of our empirical study, which includes hundreds of graphs describing tens of thousands of networks with varying NN architectures, hyper-parameters and domains. We also discuss cases where this pattern of similarity breaks down, which show that the reported similarity is not an artifact of optimization by gradient descent. Rather, the observed pattern of similarity is characteristic of learning complex problems with big networks. Finally, we show that this pattern of similarity seems to be strongly correlated with effective generalization. The recent success of deep networks in solving a variety of classification problems effectively, in some cases reaching human-level precision, is not well understood. One baffling is the incredible robustness of the learned models: using variants of Stochastic Gradient Descent (SGD), with random weight initialization and random sampling of mini-batches, different solutions are obtained. While these solutions typically correspond to different parameter values and possibly different local minima of the loss function, nevertheless they demonstrate similar performance reliably. To advance our understating of this issue, we are required to compare different network instances. Most comparison approaches (briefly reviewed in Appendix A) are based on deciphering the internal representations of the learned models (see ; ; ; ;). We propose a simpler and more direct approachcomparing networks by their classifications of the data. To this end, we represent each network instance by 2 binary vectors which capture the train and test classification accuracy. Each vector's dimension corresponds to the size of the train/test dataset; each element is assigned 1 if the network classifies the corresponding data point correctly, and 0 otherwise. Recall the aforementioned empirical observation -different neural network instances, obtained by repeatedly training the same architecture with SGD while randomly sampling its initial weights, achieve similar accuracy. At the very least, this observation predicts that the test-based vector representation of different networks should have similar L 1 /L 2 norms. 
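The binary-vector representation described above is trivial to construct from model predictions. A minimal sketch, with toy stand-in "networks" in place of trained models:

```python
import numpy as np

def correctness_vector(predict_fn, xs, ys):
    """Represent a trained network by a binary vector over a dataset:
    1 where the network classifies the example correctly, 0 otherwise."""
    preds = np.array([predict_fn(x) for x in xs])
    return (preds == np.asarray(ys)).astype(np.int8)

# Stand-in: three "networks" that mostly agree, evaluated on 10 labeled examples.
ys = np.array([0, 1, 1, 0, 2, 2, 1, 0, 2, 1])
nets = [lambda x, k=k: ys[x] if (x + k) % 7 else (ys[x] + 1) % 3 for k in range(3)]
vectors = np.stack([correctness_vector(net, range(10), ys) for net in nets])
print(vectors)                            # shape: (n_networks, n_examples)
print(np.linalg.norm(vectors, axis=1))    # similar L2 norms ~ similar accuracy
```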
But there is more: it has been recently shown that features of deep networks capture perceptual similarity reliably and consistently, similarly across different instances and different architectures . These seem to suggest that our proposed representation vectors may not only have a similar norm, but should also be quite similar as individual vectors. But similar in what way? In this paper, we analyze collections of deep neural networks classifiers, where the only constraint is that the instances are trained on the same classification problem, and investigate the similarity between them. Using the representation discussed above, we measure this similarity by two scores, consistency score and consensus score, as defined in §2. Like other comparison approaches (see Appendix A), our analysis reveals a high level of similarity between trained networks. Interestingly, it reveals a stronger sense of similarity than previously appreciated: not only is the accuracy of all the networks in the collection similar, but so is the pattern of classification. Specifically, at each time point during the learning process (or in each epoch), most of the data points in both the train and test sets are either classified correctly by all the networks, or by none at all. As shown in §3, these are independent of choices such as optimization method, hyperparameter values, the detailed architecture, or the particular dataset. They can be replicated for a fixed test set even when each instance in the collection sees a different train set, as long as the training data is sampled from the same distribution. Moreover, the same pattern of similarity is observed for a wide range of test data, including out-of-sample images of new classes, randomly generated images, or even artificial images generated by StyleGAN . These are also reproduce-able across domains, and were reproduced using BiLSTM with attention for text classification. We may therefore conclude that different network instances compute similar classification functions, even when being trained with different training samples. It is in the dynamic of learning, where the of our analysis seem to go significantly beyond what has been shown before, revealing an even more intriguing pattern of similarity between trained NN instances. Since deep NNs are almost always trained using gradient descent, each network can be represented by a time series of train-based and test-based representation vectors, one per epoch. We find that network instances in the collection do not only show the same pattern of classification at the end of the training, but they also evolve in the same way across time and epochs, gradually learning to correctly or incorrectly classify the same examples in the same order. When considering bigger classification problems such as the classification of ImageNet with big modern CNN architectures, a more intricate pattern of dynamics is evident: to begin with, all networks wrongly classify most of the examples, and correctly classify a minority of the examples. The learning process is revealed by examples moving from one end (100% false classification) to the other end (100% correct classification), which implies two things: (i) the networks learn to correctly classify examples in the same order; (ii) the networks agree on the examples they misclassify throughout. As shown in §4, these hold regardless of the network' architecture. 
To drive this point home we compare a variety of public domain architectures such as VGG19 , AlexNet , DenseNet and ResNet-50 . In all cases, different architectures may learn at a different pace and achieve different generalization accuracy, but they still learn in the same order. Thus all networks start by learning roughly the same examples, but the more powerful networks may continue to learn additional examples as learning proceeds. A related phenomenon is observed when extending the analysis to simpler learning paradigms, such as deep linear networks, SVM, and KNN classifiers. Our empirical study extends to cases where these robust patterns of similarity break down, see §5. For example, when randomly shuffling the labels in a known benchmark , the agreement between different classifiers disappear. This stands in agreement with , where it is shown that networks that generalize are more similar than those that memorize. Nevertheless, the similarity in learning dynamic is not an artifact of learnability, or the fact that the networks have converged to solutions with similar accuracy. To see this we constructed a test case where shallow CNNs are trained to discriminate an artificial dataset of images of Gabor patches (see Appendix C). Here it is no longer true that different network instances learn in the same order; rather, each network instance follows its own path while converging to the final model. The similarity in learning dynamic is likewise not an artifact of using gradient descent. To see this we use SGD to train linear classifiers to discriminate vectors sampled from two largely overlapping Gaussian distributions. Once again, each classifier follows its own path while converging to the same optimal solution. Given some neural network architecture f, and a labeled dataset denotes a single data point and y i ∈ [K] its corresponding label, we define and analyze the consistency of f when repeatedly trained on X from scratch. Let S E denote the set of different extents (total epochs of X) used to train f, where |S E | = E. ∀e ∈ S E we create a collection of N instances of f, denoted F e = {f e 1, ..., f e N}. Each instance f e i is initialized independently using Xavier initialization , then trained with SGD on randomly sampled mini-batches for e epochs. We measure the consistency of architecture f by comparing the predictions of the different instances in each collection F e, and analyze this consistency throughout the entire learning process as it changes with e. Thus for epoch e, we define the consistency score of an example (x, y) as follows: The consistency score c e (x, y) measures the classifiers' agreement when x is correctly classified. However, it does not take into account the classifiers' agreement when it is not. We therefore define in addition the consensus score, a complementary score that measures the consensus of the classifiers -the largest number of classifiers that classify each example by the same label: We say that example e (x j, y j). We say that example (x i, y i) is learned at least as fast as example (x j, y j) if from some epoch e and onward, example (x i, y i) is easier than example (x j, y j). This is formally given by: ∃e ∈ S E: ∀e > e c e (x i, y i) ≥ c e (x j, y j). When all the classifiers in F e are identical, the consistency score of each example is either 0 or 1, and its consensus is 1. 
This results in a perfect bi-modal distribution of consistency scores; we quantify bi-modality using the following measure, suggested by Pearson, for an RV X: kurtosis(X) − skewness²(X) − 1; the lower this measure is, the more bi-modal X is. On the other hand, when all classifiers independently choose class labels, the distribution of both scores is expected to resemble a Gaussian, centered around the average accuracy of the classifiers for the consistency score, and around a slightly higher value for the mean consensus score. It follows that the higher the mean consensus is, and the more bi-modal-like the distribution of consistency scores is around 0 and 1, the more similar the set of classifiers is. Throughout this paper we empirically investigate the distribution of the two scores defined above, using different architectures and datasets. We also examine the effects of other factors, such as resampling the train data, or classifying out-of-sample test data. For the most part, we focus on CNNs trained on visual classification tasks (although the results are reproduced in other domains), and analyze the distribution of consistency scores throughout the entire learning process. In this section, we investigate collections of classifiers obtained from a single NN architecture f. Same training set. We start with the simplest condition, where all instances in collection F are obtained by training with the same training set X, with different initial conditions and with independently sampled mini-batches. When using datasets of natural images, during learning the consistency among all networks in F is high, see Figs. 1a,b and Fig. 2. Upon initialization, all networks are effectively i.i.d. random variables, and therefore the distribution of consistency scores is approximately normal around random chance (1/K for K labels). After a few epochs (in many cases a single epoch is enough, see Appendix D), the consistency distribution changes dramatically, transforming to a bi-modal distribution peaking around 0 and 1. This abrupt distribution change is robust, and rather striking: from a state where most of the examples are being classified correctly by roughly 1/K of the networks, now most of the examples are being misclassified by all the networks, while a small fraction is being correctly classified by all the networks. When learning proceeds, the improvement in accuracy is reflected in a shift of points from the peak at 0 to the peak at 1, and the distribution remains bi-modal, see Figs. 1a,b and Fig. 2. The data is learned in a specific order which is insensitive to the network initialization and the sampling of the mini-batches. This is true for both the train and test sets. It indicates that the networks capture similar functions in corresponding epochs: they classify correctly the same examples, and also consistently misclassify the same examples. Had the learning of the different network instances progressed independently, we would have seen a different dynamic: as illustrated in Fig. 1c, the distribution of consistency scores of independent learners remains Gaussian in all epochs, where the Gaussian's mean slowly shifts while tracking the improved accuracy. The consistency score is not affected by how similar the misclassifications are. For this purpose we have the consensus score, which measures consistency regardless of whether the label is true or false: a consensus score of 1 indicates that all networks have classified the datapoint in the same way, regardless of whether it is correct or incorrect. Fig. 
3a shows the distribution of consensus scores for the cases depicted in Fig. 1, showing that indeed all the network instances classify examples in almost the same way, even when they misclassify. Had the learning of the different network instances progressed independently, the dynamic would have been different as shown in We investigated a variety of public domain architectures, including AlexNet, DenseNet and ResNet-50 for ImageNet, VGG19 and a stripped version of VGG (denoted st-VGG) for CIFAR-10 and CIFAR-100, and several different handcrafted networks for other data sets (see details in Appendix B). The can be replicated when changing the following hyper-parameters: learning rate, optimizer, batch size, dropout, L2-regularization, width, length and depth of layers, number of layers, number and size of kernels, and activation functions. These hyper-parameters differ across the experiments detailed both in the main paper and in Appendix D. We now extend the analysis of the previous section to include networks instances which are generated by different architectures. In §4.3 we discuss comparisons with other classifiers. We start by directly extending the previous analysis to two collections generated by two different architectures. Each architecture is characterized by its own learning pace, therefore it makes little sense to compare consistency epoch-wise. Instead, we first match epochs between the two collections: in a matched pair of epochs [e 1, e 2], by definition the mean error of the first collection in epoch e 1 is equivalent to the mean error of the second collection in epoch e 2 4. For each pair of epochs, we merge the corresponding collections of the two architectures and compute the consistency of the merged collection. We call this score cross-consistency. In The distribution of the cross-consistency score in Fig. 6a,b implies that the two architectures, ResNet-50 and AlexNet, learn the data in the same order when trained on the same dataset. We wish to measure this implication directly. To this end, we measure for a given point and a given collection, how fast the point has been learned by all the instances in the collection. Note that the sooner a point is being learned by all network instances, the higher its average consistency should be when computed over all epochs. We therefore call a point's average consistency its accessibility score, and correlate this score across two collections to compare the order of learning. We start by correlating the accessibility score of two collections generated by the same architecture. When comparing two collections of ResNet-50, the correlation is almost 1 (r = 0.99, p ≤ 10 (Fig. 25). Convolutional Neural Networks where the internal operations are limited to linear operators define an important class of CNNs, as their linearity is often exploited in the theoretical investigation of deep learning. It is natural to wonder, therefore, whether the bi-modal behavior observed in general CNNs also occurs in this case. The answer is in the affirmative. We train 100 st-VGG networks on the small-mammals dataset (see Appendix C). By replacing all the activation layers by the identity operator, and changing the max-pooling into average-pooling, a linear CNN architecture is obtained. The performance of these linear networks is weaker (0.43 average accuracy) than the original non-linear networks (0.56 average accuracy). 
Still, the distribution of the consistency scores throughout the entire learning process is bi-modal (maximum Pearson: 0.055), and this bi-modality is even more pronounced than the bi-modality in the non-linear case (maximum Pearson: 0.22). The bi-modal dynamics of st-VGG can be seen in the top row of Linear networks converge in just a few epochs, hence ruling the cross-consistency score uninformative in this case. Nevertheless, we still observe that linear and non-linear networks learn examples in roughly the same order, as shown in Fig. 7b. To understand Fig. 7b, we define for each epoch the set of "easiest" points, including all the points whose corresponding consistency score is larger than 0.9. For each epoch, there are two such sets: C Up to now, we investigated a variety of neural network architectures, revealing a common learning pattern. Can we see the same commonalities with other classification methods and paradigms? First, we consider boosting based on linear classifiers as weak learners, because the training of both neural networks and AdaBoost share a dynamic aspect: in neural networks training accuracy increases with time due to the use of GD, while in AdaBoost accuracy increases over time due to the accumulation of weak learners. We find that in both dynamics, there are commonalities in the learning order of examples. Next, we consider other machine learning paradigms, including SVM, KNN classifier, perceptron, decision tree, random forest and Gaussian naïve Bayes. Interestingly, we still find a strong correlation with the order of learning as defined above, in that these classifiers tend to fit those examples which the neural networks learn first. These are discussed in full in Appendix F. In Section 3 we discussed the characteristic bi-modal distribution of consistency scores, illustrated in Fig. 1, which has appeared in all the experiments presented until now, in both the train and test sets. We have already seen that this bi-modality weakens as the similarity between the distributions of train and test data is reduced (see Fig. 5). In this section, we investigate the circumstances under which the bi-modal distribution of consistency scores is no longer seen. Learning to see Gabor patches. The bi-modal distribution of consistency scores through all stages of learning is not an inherent property of neural networks. We demonstrate this point using a dataset of artificial images, consisting of Gabor patches: the dataset contains 12 overlapping classes which differ from each other in orientation, parity and color (see Appendix C). 100 instances of st-VGG have been trained to classify this data. Now the distribution of consistency scores, shown in Fig. 8, is no longer bi-modal. Rather, the distribution is approximately normal most of the time. As learning proceeds, the mean of the distribution slowly shifts towards 1, and the width of the distribution seems to expand. At convergence, the models have reached similar performance, and the bi-modal characteristics partially re-appears on the test data. Random labels. Bi-modality seems to be associated with successful generalization. To see this, we take the small-mammals dataset, and reshuffle the labels such that every image is assigned a random label (following). In this case, training accuracy can reach 100% when disabling dropout (that acts as a regularizer), which indicates that the networks can memorize the data. 
Interestingly, the distribution of consistency scores in no longer bi-modal, with minimum Pearson score of 1.07 on train set and 1.35 on the test set during the entire learning process. Rather, the distribution in each epoch resembles a Gaussian centered around the mean accuracy of the networks, see Fully connected networks. Bi-modality is not an artifact of using gradient descent for optimization. This can be seen in the following analysis. Specifically, we consider a fully connected neural network architecture, with 2 intermediate layers, ELU and dropout. The networks are trained to discriminate points sampled from two largely overlapping Gaussian distributions in high dimension. The dynamic distribution of consistency scores is shown in Fig. 10, and resembles the distribution of independent networks shown in Fig. 1c. While the final solutions of the networks are similar, the order in which examples are being learned when employing SGD optimization is different. We empirically show that neural networks learn similar classification functions. More surprisingly with respect to earlier work, the learning dynamics is also similar, as they seem to learn similar functions also in intermediate stages of learning, before convergence. This is true for a variety of architectures, including different CNN architectures and LSTMs, irrespective of size and other hyper-parameters of the learning algorithms. We have verified this pattern of using many different CNN architectures, including most of those readily available in the public domain, and many of the datasets of natural images which are in common use when evaluating deep learning. The similarity of network instances is measured in the way they classify examples, including known (train) and new examples. Typically, the similarity over test data is as pronounced as it is over train data, as long as the train and test examples are sampled from the same distribution. We show that this similarity extends also to out of sample test data, but it seems to decrease as the gap between the distribution of the train data and the test data is increased. This pattern of similarity crosses architectural borders: while different architectures may learn at a different speed, the data is learned in the same order. Thus all architectures which reach a certain error rate seem to classify, for the most part, the same examples in the same manner. We also see that stronger architectures, which reach a lower generalization error, seem to start by first learning the examples that weaker architectures classify correctly, followed by the learning of some more difficult examples. This may suggest that the order in which data is learned is an internal property of the data. We also discuss cases where this similarity breaks down, indicating that the observed similarity is not an artifact of using stochastic gradient descent. Rather, the observed pattern of similarity seems to characterize the learning of complex problems with big networks. Curiously, the deeper the network is and the more non-linearities it has, and even though the model has more learning parameters, the progress of learning in different network instances becomes more similar to each other. Un-intuitively, this suggests that in a sense the number of degrees of freedom in the learning process is reduced, and that there are fewer ways to learn the data. This effect seems to force different networks, as long they are deep enough, to learn the dataset in the same way. 
This counter-intuitive joins other non-intuitive , like the theoretical that a deeper linear neural network converges faster to the global optimum than a shallow network . We also show that the observed pattern of similarity is strongly correlated with effective generalization. What does it tell us about the generalization of neural networks, a question which is considered by many to be poorly understood? Neural networks can memorize an almost limitless number of examples, it would seem. To achieve generalization, most training protocols employ some regularization mechanism which does not allow for unlimited data memorization. As a , the network fits only the train and test examples it would normally learn first, which are, based on our analysis, also the "easier" (or more typical) examples. We hypothesize that this may explain why a regularized network discovers robust solutions, with little variability among its likely instances. , and they can learn any arbitrary complex function . This extended capacity can indeed be reached, and neural networks can memorize datasets with randomly assigned labels . Nevertheless, the dominant hypothesis today is that in natural datasets they "prefer" to learn an easier hypothesis that fits the data rather than memorize it all . Our work is consistent with a hypothesis which requires fewer assumptions, see Section 6. The direct comparison of neural representations is regarded to be a hard problem, due to a large number of parameters and the many underlying symmetries. Many non-direct approaches are available in the literature: compare subsets of similar features across multiple networks, which span similar low dimensional spaces, and show that while single neurons can vary drastically, some features are reliably learned across networks. proposed the SVCCA method, which can compare layers and networks efficiently, with an amalgamation of SVD and CCA. They showed that multiple instances of the same converged network are similar to each other and that networks converge in a bottom-up way, from earlier layers to deeper ones. builds off the of , further showing that networks which generalize are more similar than ones which memorize, and that similarity grows with the width of the network. In various machine learning methods such as curriculum learning , self-paced learning and active learning , examples are presented to the learner in a specific order . Although conceptually similar, here we analyze the order in which examples are learned, while the aforementioned methods seek ways to alter it. Likewise, the design of effective initialization methods is a striving research area (; ;). Here we do not seek to improve these methods, but rather analyze the properties of a collection of network instances generated by the same initialization methodology. In addition to the public domain architectures described in §3, we also experimented with some handcrafted networks. Such networks are simpler and faster to train, and are typically used to investigate the learning of less commonly used datasets, such as the small-mammals dataset and tiny ImageNet. Below we list all the architectures used in this paper. st-VGG. A stripped version of VGG which we used in many experiments. It is a convolutional neural network, containing 8 convolutional layers with 32, 32, 64, 64, 128, 128, 256, 256 filters respectively. The first 6 layers have filters of size 3 × 3, and the last 2 layers have filters of size 2 × 2. 
Every second layer there is followed by 2 × 2 max-pooling layer and a 0.25 dropout layer. After the convolutional layers, the units are flattened, and there is a fully-connected layer with 512 units followed by 0.5 dropout. When training with random labels, we removed both dropout layers to enable proper training, as suggested in. The batch size we used was 100. The output layer is a fully connected layer with output units matching the number of classes in the dataset, followed by a softmax layer. We trained the network using the SGD optimizer, with cross-entropy loss. When training st-VGG, we used a learning rate of 0.05 which decayed by a factor of 1.8 every 20 epochs. Small st-VGG. To compare st-VGG with another architecture, we created a smaller version of it: we used another convolutional neural network, containing 4 convolutional layers with 32, 32, 64, 64 filters respectively, with filters of size 3 × 3. Every second layer there is followed by 2 × 2 max-pooling and a 0.25 dropout layer. After the convolutional layers, the units are flattened, and there is a fully-connected layer with 128 units followed by 0.5 dropout. The output layer is a fully connected layer with output units matching the number of classes in the dataset, followed by a softmax layer. We trained the network using the SGD optimizer, with cross-entropy loss. We trained this network with the same learning rate and batch size as st-VGG. MNIST architecture. When experimenting with the MNIST dataset, we used some arbitrary small architecture for simplicity, as most architectures are able to reach over 0.99 accuracy. The architecture we used had 2 convolutional layers, with 32 and 64 filters respectively of size 3 × 3. After the convolutions, we used 2 × 2 max-pooling, followed by 0.25 dropout. Finally, we used a fully connected layer of size 128 followed by 0.5 dropout and Softmax. We used a learning rate of 1 for 12 epochs, using AdaDelta optimizer and a batch size of 100. Fully connected architecture. When experimenting with fully connected networks, we used a 4 layers network, which simply flattened the data, followed by 2 fully connected layers with 1024 units, followed byn an output layer with softmax. We used 0.5 dropout after each fully connected layer. Since these networks converge fast, a wide range of learning rates can be used. Specifically, we used 0.04. We experimented with a wide range of numbers of fully connected layers, reaching similar . BiLSTM with Attention. When experimenting on textual data we used a GloVe embeddings, a layer of BiLSTM of size 300, 0.25 dropout and recurrent dropout, an attention layer, a fully connected layer of size 256 with 0.25 dropout and a last fully connected layer to extract output. The networks were optimized using Adam optimization with a learning rate of 0.005 and a batch size of 256. Small Mammals. The small-mammals dataset used in the paper is the relevant super-class of the CIFAR-100 dataset. It contains 2500 train images divided into 5 classes equally, and 500 test images. Each image is of size 32 × 32 × 3. This dataset was chosen due to its small size, which allowed for efficient experimentation. All the observed in this dataset were reproduced on large, public domain datasets, such as CIFAR-100, CIFAR-10, and ImageNet. Insect. Similarly to the small mammals dataset, the relevant super-class of CIFAR-100. Fish. Similarly to the small mammals dataset, the relevant super-class of CIFAR-100. Cats and Dogs. The cats and dogs dataset is a subset of CIFAR-10. 
It uses only the 2 relevant classes, to create a binary problem. Each image is of size 32 × 32 × 3. The dataset is divided to 20000 train images (10000 per class) and 2000 test images (1000 per class). Gabor. The Gabor dataset used in the paper, is a dataset we created which contains 12 classes of Gabor patches. Each class contains 100 images of Gabor patches which vary in size and orientation. Classes differ from each other in 3 parameters: 1) Parity -there is a different class for odd and even Gabor patches (corresponding to the use of sine or cosine respectively). 2) RGB channel -each class depicts the Gabor patch in a single RGB channel. 3) Orientation -each class can have one of the following base orientations: 45 •. The orientation of each class varies by ±30 •, making some of the classes non-separable, while some classes are. Code for creating this dataset will be published upon acceptance. Gaussian. The Gaussian dataset used in the fully connected case, is a 2-classes dataset. One class is sampled from a multivariate Gaussian with mean 0 and Σ = I, while the other class is sampled from a multivariate Gaussian with mean 0.1 and Σ = I. Other choices for the mean and variance yield similar . Each sampled vector was of dimension 3072, and then reshaped to 32 × 32 × 3 to resemble the shape of CIFAR images. Each class contained 2500 train images and 500 test images. VGGFace2 subset. We created a classification task for face recognition, using a subset of 10 classes from VGGFace2. We chose the classes containing the largest number of images. We chose 600 images from each class arbitrarily to be the train set, while the remaining points (between 89 and 243) served as the test set. Each image was resized to 64 × 64 × 3, using center cropping while maintaining aspect ratio. Stack Overflow. The data from Stack Overflow is publicly shared and used for tutorials. It contains 39K training samples and 1K test samples, each tagged with one of 20 programming languages as the language the question asks about. Each question must be regarded more as a paragraph than a sentence. Many words, terms and symbols are expected to be domain-dependent, and therefore under-represented in the embeddings. Similar qualitative were obtained in all the experiments with natural datasets. To maintain a fair comparison across epochs, the for each shown epoch e (effectively epoch extent) were obtained by independently training a different set of N networks from scratch for e epochs. The specific set of epochs S E, where |S E | = 7, that was used in each plot was determined arbitrarily, to evenly span all sections of learning. All the networks in all test cases converged before the final epoch plotted. Induced class hierarchy. The ranking of training examples induced by the consistency scores typically induces a hierarchical structure over the different classes as well. To see this, we train 100 instances of st-VGG on the small-mammals dataset, and calculate for each image the most frequent class label assigned to it by the collection of networks. In Fig. 27 we plot the histogram of the consistency score (as in Fig. 1), but this time each image is assigned a color, which identifies its most frequent class label (1 of 5 colors). It can be readily seen that at the beginning of learning, only images from 2 classes reach a consistency score of 1. As learning proceeds, more class labels slowly emerge. This suggests that classes are learned in a specific order, across all networks. 
Moreover, we can see a pattern in the erroneous label assignments, which suggests that the classifiers initially use fewer class labels, and only become more specific later on in the learning process. Dynamics of individual image consistency. We now focus on consistency scores of individual images, as they evolve throughout the entire learning process. For most examples, the score may climb up from random (0.2) to 1 in 1 epoch, it may dip down to 0 and then go up to 1 after a few epochs, or it may go rapidly down to 0. Either, the score remains 1 or 0. These patterns are shown in Fig. 29, and support the bi-modality we report above. The duration in which a certain example maintains a consistency score 0 correlates with the order of learning: the longer it has 0 consistency, the more difficult it is. A minority of the training examples exhibit different patterns of learning. For example, a few images (the green curve in Fig. 29) begin with a high consistency score (near 1), but after a few epochs their score drops to 0 and remains there. and small st-VGG (N = 100), and finally randomly picked super-classes of CIFAR-100, specifically "aquatic-mammals", "insects" and "household furniture" with st-VGG (N = 100). The number of instances N is chosen according to our computational capabilities. However, in all cases, picking much smaller N suffice to yield the same qualitative . In addition to hyper-parameters which may differ between various architectures, we also experimented with changing the hyper-parameters of st-VGG trained on the small-mammals dataset, always observing the same qualitative . All experiments used N = 100 instances. Specifically, we tried a large range of learning rates, learning rate decay, SGD and Adam optimizers, large range of batch sizes, dropout and L2-regularization. Cross architectures diversity. In addition to the in section 4, the same qualitative were obtained for all 2 architectures we trained on the same unmodified dataset. We conducted the following experiments: ImageNet dataset: ResNet-50 vs DenseNet, AlexNet vs DenseNet. Aquatic-mammals and small-mammals super-classes of CIFAR-100: st-VGG vs small st-VGG, Tiny ImageNet: st-VGG vs small st-VGG, CIFAR-10 and CIFAR-100: VGG19 vs st-VGG. All of which yielding similar to the ones analyzed in section 4. Boosting linear classifiers. We use AdaBoost with k weak linear classifiers, trained on the small-mammals dataset. As commonly observed, adding more classifiers (increasing k) improves the final performance. In Fig. 30a we plot accuracy for 3 groups of examples -easy, intermediate and difficult, where grouping is based on the consistency score of the CNN described above. The accuracy of boosting over easy examples is significantly higher than the general accuracy and does not improve as we increase k. This suggests that most of the easy examples can be classified linearly, and are learned first by both boosting and our CNNs. On the other hand, accuracy when classifying difficult examples is worse than the general accuracy and slowly decreases with k. Other classifiers. We train several classifiers on the small-mammals dataset: SVM (accuracy: 0.48), KNN classifier (accuracy: 0.36), perceptron (accuracy: 0.35), decision tree (accuracy: 0.31), random forest (accuracy: 0.48) and Gaussian naïve Bayes classifier (accuracy: 0.38). All classifiers under-perform the CNN architecture described above (accuracy: 0.56). 
As in the case of boosting, for each method, most of the examples which are classified correctly are among the first to be learned by the CNN architecture. In other words, for all these methods, which achieve low accuracy, it is the easier examples which are being learned for the most part. This is illustrated in Fig. 31, demonstrating the phenomenon that during the learning process, most of the examples that are being learned by other paradigms are the ones already learned by neural networks.
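To make this comparison concrete, a small sketch (ours, not the authors' code) combines the accessibility score defined earlier with a scikit-learn classifier and checks whether the examples the weak learner classifies correctly are also those with high accessibility, i.e. those learned first by the network collection:

from sklearn.svm import LinearSVC

def accessibility(consistency_per_epoch):
    # consistency_per_epoch: (n_epochs, n_examples) consistency scores of the
    # network collection; the average over epochs is the accessibility score
    # (higher = the example is learned sooner by all instances).
    return consistency_per_epoch.mean(axis=0)

def compare_with_weak_learner(X_tr, y_tr, X_te, y_te, access_te):
    # Fit a simple classifier (here a linear SVM, one of the paradigms listed
    # above) and compare the mean accessibility of the test examples it gets
    # right against those it gets wrong; inputs are numpy arrays.
    clf = LinearSVC().fit(X_tr.reshape(len(X_tr), -1), y_tr)
    correct = clf.predict(X_te.reshape(len(X_te), -1)) == y_te
    return access_te[correct].mean(), access_te[~correct].mean()

A clearly higher mean accessibility for the correctly classified group reproduces the qualitative pattern described above for boosting and the other classifiers.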
Most neural networks approximate the same classification function, even across architectures, through all stages of learning.
1,341
scitldr
We propose a unified framework for building unsupervised representations of individual objects or entities (and their compositions), by associating with each object both a distributional as well as a point estimate (vector embedding). This is made possible by the use of optimal transport, which allows us to build these associated estimates while harnessing the underlying geometry of the ground space. Our method gives a novel perspective for building rich and powerful feature representations that simultaneously capture uncertainty (via a distributional estimate) and interpretability (with the optimal transport map). As a guiding example, we formulate unsupervised representations for text, in particular for sentence representation and entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools underlying the framework are Wasserstein distances and Wasserstein barycenters (and, hence the title!). One of the main driving factors behind the recent surge of interest and successes in natural language processing and machine learning has been the development of better representation methods for data modalities. Examples include continuous vector representations for language, convolutional neural network (CNN) based text representations (BID4), or other neural architectures such as RNNs and LSTMs (BID14; Collobert and Weston, 2008), all sharing one core idea: to map input entities to dense vector embeddings lying in a low-dimensional latent space where the semantics of the inputs are preserved. While existing methods represent each entity of interest (e.g., a word) as a single point in space (e.g., its embedding vector), we here propose a fundamentally different approach. We represent each entity based on the histogram of contexts (co-occurring with it), with the contexts themselves being points in a suitable metric space. This allows us to cast the distance between histograms associated with the entities as an instance of the optimal transport problem (Monge, 1781). For example, in the case of words as entities, the resulting framework intuitively seeks to minimize the cost of moving the set of contexts of a given word to the contexts of another. Note that the contexts here can be words, phrases, sentences, or general entities co-occurring with our objects to be represented, and these objects could further be any type of events extracted from sequence data, including e.g. products such as movies or web advertisements (BID8), nodes in a graph (BID9), or other entities. Any co-occurrence structure will allow the construction of the histogram information, which is the crucial building block for our approach. A strong motivation for our proposed approach comes from the domain of natural language, where the entities (words, phrases or sentences) generally carry multiple senses, depending on the contexts in which they appear. Hence, it is important that we consider representations that are able to effectively capture such inherent uncertainty and polysemy, and we will argue that histograms (or probability distributions) over embeddings allow us to capture more of this information than point-wise embeddings alone. We will call the histogram the distributional estimate of our object of interest, while we refer to the individual embeddings of single contexts as point estimates.
Next, for the sake of clarity, we discuss the framework in the concrete use-case of text representations, when the contexts are just words, by employing the well-known Positive Pointwise Mutual Information (PPMI) matrix to compute the histogram information for each word. With the power of optimal transport, we show how this framework can be of significant use for a wide variety of important tasks in NLP, including word and sentence representations as well as hypernymy (entailment) detection, and can be readily employed on top of existing pre-trained embeddings for the contexts. The connection to optimal transport at the level of words and contexts paves the way to make better use of its vast toolkit (like Wasserstein distances, barycenters, etc.) for applications in NLP, which in the past has primarily been restricted to document distances (; BID16 .We demonstrate that building the required histograms comes at almost no additional cost, as the co-occurrence counts are obtained in a single pass over the corpus. Thanks to the entropic regularization introduced by , Optimal Transport distances can be computed efficiently in a parallel and batched manner on GPUs. Lastly, the obtained transport map FIG0) also provides for interpretability of the suggested framework. Most of the previous work in building representations for natural language has been focused towards vector space models, in particular, popularized through the groundbreaking work in Word2vec and GloVe . The key idea in these models has been to map words which are similar in meaning to nearby points in a latent space. Based on which, many works (a; ;) have suggested specializing the embeddings to capture some particular information required for the task at hand. One of the problems that still persists is the inability to capture, within just a point embedding, the various semantics and uncertainties associated with the occurrence of a particular word BID15.A recent line of work has proposed the view to represent words with Gaussian distributions or mixtures of Gaussian distributions (b;), or hyperbolic cones BID5 for this purpose. Also, a concurrent work from has suggested using elliptical distributions endowed with a Wasserstein metric. While these already provide richer information than typical vector embeddings, their form restricts what could be gained by allowing for arbitrary distributions. In addition, hyperbolic embeddings (; BID5 are so far restricted to supervised tasks (and even elliptical embeddings to a most extent), not allowing unsupervised representation learning as in the focus of the paper here. To this end, we propose to associate with each word a distributional and a point estimate. These two estimates together play an important role and enable us to make use of optimal transport. Amongst the few explorations of optimal transport in NLP, i.e., document distances (; BID16, document clustering , bilingual lexicon induction , or learning an orthogonal Procrustes mapping in Wasserstein distance BID7, the focus has been on transporting words directly. For example, the Word Mover's Distance casts finding the distance between documents as an optimal transport problem between their bag of words representation. Our approach is different as we consider the transport over contexts instead, and use it to propose a representation for words. This enables us to establish any kind of distance (even asymmetric) between words by defining a suitable underlying cost on the movement of contexts, as we show for the case of entailment. 
Another benefit of defining this transport over contexts is the added flexibility to extend the representation for sentences (or arbitrary length text) by utilizing the idea of Wasserstein barycenters, which to the best of our knowledge has never been considered in the past. Lastly, the proposed framework is not specific to words or sentences but holds for building unsupervised representations for any entity and composition of entities, where a co-occurrence structure can be devised between entites and their contexts. Optimal Transport (OT) provides a way to compare two probability distributions defined over a space G, given an underlying distance on this space (or more generally a cost of moving one point to another). In other terms, it lifts distance between points to distance between distributions. Below, we give a short yet formal description on optimal transport for the discrete case. Let's consider an empirical probability measure of the form µ = n i=1 a i δ(x i) where X = (x 1, . . ., x n) ∈ G n, δ(x) denotes the Dirac (unit mass) distribution at point x ∈ G, and (a 1, . . ., a n) lives in the probability simplex DISPLAYFORM0 If the ground cost of moving from point x i to y j is denoted by M ij, then the Optimal Transport distance between µ and ν is the solution to the following linear program. OT(µ, ν; M):= min DISPLAYFORM1 Here, the optimal T ∈ R n×m is referred to as the transportation matrix: T ij denotes the optimal amount of mass to move from point x i to point y i. Intuitively, OT is concerned with the problem of moving goods from factories to shops in such a way that all the demands are satisfied and the overall transportation cost is minimal. When G = R d and the cost is defined with respect to a metric D G over G (i.e., M ij = D G (x i, y j) p for any i, j), OT defines a distance between empirical probability distributions. This is the p-Wasserstein distance, defined as DISPLAYFORM2 In most cases, we are only concerned with the case where p = 1 or 2.The cost of exactly solving OT problem scales at least in O(n 3 log(n)) (n being the cardinality of the support of the empirical measure) when using network simplex or interior point methods. we consider the entropy regularized Wasserstein distance, W λ p (µ, ν). The above problem can then be solved efficiently using Sinkhorn iterations, albeit at the cost of some approximation error. The regularization strength λ ≥ 0 controls the accuracy of approximation and recovers the true OT for λ = 0. The cost of the Sinkhorn algorithm is only quadratic in n at each iteration. Further on in our discussion, we will make use of the notion of averaging in the Wasserstein space. More precisely the Wasserstein barycenter, introduced by , is a probability measure that minimizes the sum of (p-th power) Wasserstein distances to the given measures. Formally, given N measures {ν 1, . . ., ν N} with corresponding weights η = {η 1, . . ., η N} ∈ Σ N, the Wasserstein barycenter can be written as follows: DISPLAYFORM3 We similarly consider the regularized barycenter B λ p, using entropy regularized Wasserstein distances W λ p in the above minimization problem, following. Employing the method of iterative Bregman projections , we obtain an approximation of the solution at a reasonable computational cost. In this section, we elaborate on both the distributional and the point estimate that we attach to each word, as mentioned in the introduction. 
A common method in NLP to empirically estimate the probability p(w|c) of occurrence of a word w in some context c, is to compute the number of times the word w co-occurs with context c relative to the total number of times context c appears in the corpus. The context c could be a particular word, phrase, sentence or other definitions of co-occurrence of interest. Distributional Estimate. For a word w, its distributional estimate is built from a histogram over the set of contexts C, and an embedding of these contexts into a space G.A natural way to build this histogram is to maintain a co-occurrence matrix between words in our vocabulary and all possible contexts, such that its each entry indicates how often a word and context occur in an interval (or window) of a fixed size L. Then, the bin values ((H w) c ) c∈C of the histogram (H w) for a word w, can be viewed as the row corresponding to w in this co-occurrence matrix. In Section 5, we discuss how to reduce the number of bins in the histogram, and possible modifications of the co-occurrence matrix to improve associations. The simplest embedding of contexts is into the space of one-hot vectors of all the possible contexts. However, this induces a lot of redundancy in the representation and the distance between contexts does not reflect their semantics. A classical solution would be to instead find a dense lowdimensional embedding of contexts that captures the semantics, possibly using techniques such as SVD or deep neural networks. We denote by V = (v c) c∈C an embedding of the contexts into this low-dimensional space G ⊂ R d, which we refer to as the ground space. (We will consider prototypical cases of how this metric can be obtained in Sections 6 and 7.)Combining the histogram H w and the embedding V, we represent the word w by the following empirical distribution: DISPLAYFORM0 Recall that δ(v c) denotes the Dirac measure at the position v c of the context c. We refer to this representation (Eq.) as the distributional estimate of the word. Together with its distributional estimate, the word w also has an associated point estimate v w when it occurs in the sense of a context, in the form of its position (or embedding) in the ground space. This is what we mean by attaching the distributional and point estimate to each word. Distance. If we equip the ground space G with a meaningful metric D G, then we can subsequently define a distance between the representations of two words w i and w j, as the solution to the following optimal transport problem: DISPLAYFORM1 Intuitively, two words are similar in meaning if the contexts of one word can be easily or cheaply transported to the contexts of the other word, with this ease of transportation being measured by D G. This idea still remains in line with the distributional hypothesis BID11 ) that words in similar contexts have similar meanings, but provides a unique way to quantify it. Interpretation. In fact, both of these estimates are closely tied together and required to serve as an effective representation. For instance, if we only have the distributional estimates, then we may have Here, we pick four contexts at random from a list of top 20 contexts (in terms of PPMI) for the two histograms. Then using the regularized Wasserstein distance (as in Eq. ), we plot the obtained transportation matrix (or commonly called transport map) T as above. 
Note how'ivory' adjusts its movement towards'skin' (as in skin color) to allow'poaching' to be easily moved to'endangered' as going to other contexts of'mammal' is costly for'poaching', thus capturing a global perspective.two words such as'tennis' and'football' which occur in the contexts of {court, penalty, judge} and {stadium, foul, referee} respectively. While these contexts are mutually disjoint, they are quite close in meaning. Now there could be a third word such as'law' which occurs in the exact same contexts as tennis. So considering the distributional estimate alone, without making use of the point estimates of context, would lead us to have a smaller distance between tennis and law as compared to tennis and football. Whereas, if we only considered the point estimates, then we would lose much of the uncertainty associated about the contexts in which they occur, except for maybe the restricted information of neighboring points in the ground space. This is made clear in a related illustration shown in Figure 2.The family of problems where such a representation can be used is not restricted to entities pertaining to NLP: the framework can be similarly used in any domain where a co-occurrence structure exists between entities and their contexts. For instance, in the case of movie recommendation where users correspond to the entities and movies to the contexts. Lastly, this connection with optimal transport allows us to utilize its rich theoretical and algorithmic toolkit towards important problems in NLP. In the next section, we discuss a concrete framework of how this can be applied and in Section 6 and 7, we detail how the tasks of sentence representation and hypernymy detection can be effectively carried out with this framework. For the sake of brevity, we present the framework for the case where contexts consist of single words. Making associations better. Let's say that a word is considered to be a context word if it appears in a symmetric window of size L around the target word (the word whose distributional estimate we seek). Now, the co-occurrence matrix is between the words of our vocabulary, with rows and columns indicating target words and context words respectively. While each entry of this matrix reflects the co-occurrence count, it may not suggest a strong association between the target and its context. For instance, in the sentence "She prefers her coffee to be brewed fast than being perfect", there is a stronger association between'coffee' and'brewed' rather than between'coffee' and'her', although the co-occurrence counts alone might imply the opposite. Hence, to handle this we consider the well-known Positive Pointwise Mutual Information (PPMI) matrix , whose entries are as follows: DISPLAYFORM0 The PPMI entries are non-zero when the joint probability of the target and context words cooccurring together is higher than the probability when they are independent. Typically, these probabilities are estimated from the co-occurrence counts #(w, c) in the corpus and lead to DISPLAYFORM1 where, #(w) = c #(w, c), #(c) = w #(w, c) and |Z| = w c #(w, c). Also, it is known that PPMI is biased towards infrequent words and assigns them a higher value. A common solution is to smoothen 2 the context probabilities by raising them to an exponent of α lying between 0 and 1. Levy and Goldberg (2014b) have also suggested the use of the shifted PPMI (SPPMI) matrix where the shift 3 by log(s) acts like a prior on the probability of co-occurrence of target and context pairs. 
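For concreteness, the smoothed and shifted PPMI computation just described can be sketched as follows (a minimal dense-matrix illustration; the variable names and the assumption that the full co-occurrence matrix fits in memory are ours, and real vocabularies would call for sparse matrices):

import numpy as np

def sppmi(counts, alpha=0.75, shift=1.0):
    # counts: (n_words, n_contexts) co-occurrence counts #(w, c).
    # Returns max(log(p(w,c) / (p(w) * p_alpha(c))) - log(shift), 0), where
    # context probabilities are smoothed by raising counts to the exponent alpha.
    total = counts.sum()
    p_wc = counts / total
    p_w = counts.sum(axis=1, keepdims=True) / total
    ctx = counts.sum(axis=0, keepdims=True) ** alpha
    p_c = ctx / ctx.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = -np.inf   # zero co-occurrence gets zero SPPMI below
    return np.maximum(pmi - np.log(shift), 0.0)

# The bin values of a word's histogram are then its normalized SPPMI row,
# e.g. H_w = sppmi_row / sppmi_row.sum(), as in the equation referenced below.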
These variants of PPMI enable us to extract better semantic associations from the cooccurrence matrix. Finally, we define DISPLAYFORM2 Hence, the bin values for our histogram in Eq. are formed as: DISPLAYFORM3 Computational considerations. The view of optimal transport between histograms of contexts introduced in Eq. offers a pleasing interpretation (see FIG0 . However, it might still be a computationally intractable in its current formulation. Indeed the number of possible contexts can be as large as the size of vocabulary (if the contexts are just single words) or even exponential (if contexts are considered to be phrases, sentences and otherwise). For instance, even with the use of SPPMI matrix, which also helps to sparsify the co-occurrences, the cardinality of the support the word histograms still varies from 10 3 to 5 × 10 4 context words, when considering a vocabulary of size around 2 × 10 5. This is a problem because the Sinkhorn algorithm for regularized optimal transport (Section 3), scales roughly quadratically in the histogram size and the ground cost matrix can also become prohibitive to store in memory, for the range of histogram sizes mentioned. One possible fix is to instead consider a few representative contexts in this ground space. The hope is that with the dense low-dimensional embeddings and a meaningful metric between them, we may not require as many contexts as needed before. For instance, this can be achieved by clustering the contexts with respect to metric D G. Besides the computational gain, the clustering will lead us to consider this transport between more abstract contexts. This will although come at the loss of some interpretability. Another alternative for dealing with this computational issue could be to consider stochastic optimal transport techniques BID6, where the intuition would be to randomly sample a subset of contexts while considering this transport. But we leave that direction for a future work. Now, consider that we have obtained K contexts, each representing some part C k of the set of contexts C. The histogram for word w with respect to these contexts can then be written as DISPLAYFORM4 Hereṽ k ∈Ṽ denotes the point estimate of the k th representative context, and (H w) k are the new bin values for the histogram similar to that in Eq., but with respect to these parts, DISPLAYFORM5 Furthermore, in certain cases 4, it can be important to measure the relative portion of C k's SPPMI (Eq. FORMULA12) that has been used towards a word w. Otherwise the process of making the histogram unit sum in Eq. will misrepresent the actual underlying contribution (check Eq. in Appendix A 4 when the SPPMI contributions towards the partitions (or clusters) have a large variance. for more details): DISPLAYFORM6 Summary. While we detailed the case of context as single words, this framework can be extended in a similar manner to take into account other contexts such as bi-grams, tri-grams, n-grams or other abstract semantic concepts. Building this suggested representation comes at almost free cost during the typical learning of point-estimates for an NLP task, as the co-occurrence counts can simply be maintained while going through the corpus. GloVe even constructs the co-occurrence matrix explicitly as a precursor to learning the point-estimates. Traditionally, the goal of this task is to develop a representation for sentences, that captures the semantics conveyed by it. 
Most unsupervised representations proposed in the past rely on the composition of vector embeddings for the words, through either additive, multiplicative, or other ways (; ;). We propose to represent sentences as probability distributions to better capture the inherent uncertainty and polysemy. Our belief is that the meaning of a sentence can be understood as a concept that best explains the simultaneous occurrence of the words in it. We hypothesize that a sentence, S = (w 1, w 2, . . ., w N), can be efficiently represented via the Wasserstein barycenter (see Eq.) of distributional estimates of the words in the sentence, i.e., DISPLAYFORM0 which is itself again a distribution over G. Yet another interesting property is the nonassociativity 5 of the barycenter operation. This can be utilized to take into account the order of the words in a sentence. For now, we restrict our focus on exploring how well barycenters of words taken all at once can represent sentences and this direction is left for future work. Interestingly, the classical weighted averaging of point-estimates (BID3 BID1 BID0 BID2 . The objective here is to predict how similar or dissimilar are two sentences in their meanings. Since with barycenter representation as in Eq. FORMULA13, each sentence is also a histogram of contexts, we can again make use of optimal transport to define the distance between two sentences S 1 and S 2, DISPLAYFORM1 As a ground metric, we consider the Euclidean distance between the point estimates of words. This point estimate for a word is its embedding in the context space and can be obtained with the help of Word2vec or GloVe . For, this task we train the word embeddings on the Toronto Book Corpus via GloVe and in the process also gain the distributional estimates of words for free. Since the word embeddings in these methods are constructed so that similar meaning words are close in cosine similarity, we find the representative points by performing K-means clustering with respect to this similarity. We benchmark our performance against SIF (Smooth Inverse Frequency) method from Arora et al. FORMULA12 who regard it as a "simple but toughto-beat baseline", as well as against the common Bag of Words (BoW) averaging. For this experiment, we use SIF's publicly available implementation 6 and perform the evaluation using SentEval 6 https://github.com/PrincetonML/SIF . Table 1 shows that we always beat BoW and SIF with weighted averaging on all tasks. Further, we perform better than the best variant of SIF (which in addition removes the 1 st principal component) on 3 out of 5 tasks. Also, on the other two tasks we still perform competitively and achieve an overall gain over their best variant with K = 300 clusters (refer to TAB5 in Appendix A for detailed ). Note that, the hyperparameters for SIF are taken to be the best ones separately for each task. Whereas we used the same set of hyperparameters for all the above tasks and the PPMI specific hyperparameters haven't been tuned much, but this should not give us an edge over them. In our comparison, we do not include methods such as Sent2vec , as they are specifically trained to work well on the given task of sentence representation, and such an approach for training remains outside the scope of current work. Our approach for representing barycenters does not require any additional training and is still able to match and outperform strong baselines for the task of semantic similarity. This highlights the efficacy of proposed representation. 
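To illustrate how such sentence barycenters and distances can be computed in practice, here is a hedged sketch using the Python Optimal Transport (POT) library that the implementation details later in the text mention; it assumes word histograms have already been built over K representative contexts with a pairwise ground-cost matrix, and the helper names, weights, and regularization value are illustrative rather than the authors' code:

import numpy as np
import ot  # Python Optimal Transport (POT)

def sentence_barycenter(word_hists, ground_cost, reg=0.1, weights=None):
    # word_hists: (K, n_words) column-stacked histograms of the words in the
    # sentence over K representative contexts; ground_cost: (K, K) pairwise
    # costs between the representative context embeddings.
    # Returns the entropy-regularized Wasserstein barycenter (a K-bin histogram).
    return ot.bregman.barycenter(word_hists, ground_cost, reg, weights=weights)

def histogram_distance(hist_a, hist_b, ground_cost, reg=0.1):
    # Regularized OT cost between two histograms on the same support; works for
    # word-word as well as sentence-sentence comparisons.
    return ot.sinkhorn2(hist_a, hist_b, ground_cost, reg)

# Example ground cost from context embeddings V (K x d):
# ground_cost = ot.dist(V, V)        # squared Euclidean by default
# ground_cost /= ground_cost.max()   # normalization helps Sinkhorn stability

Because a sentence barycenter is again a histogram over the same support as the word histograms, the same distance routine serves both levels of comparison.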
In linguistics, hypernymy is a relation between words (or sentences) where the semantics of one word (the hyponym) are contained within that of another word (the hypernym). A simple form of this relation is the is-a relation, e.g., cat is an animal. Hypernymy is a special case of the more general concept of lexical entailment which may be broadly defined as any semantic relations between two lexical items where the meaning of one is implied by the meaning of the other. Detecting lexical entailment relations is relevant for numerous tasks in NLP. Given a database of lexical entailment relations, e.g., containing Roger Federer is a tennis player might help a question answering system an- Table 2: Comparison between entailment vectors and optimal transport / Wasserstein based entailment measure (WE). The scores are AP@all (%). The hyperparameter α refers to the smoothing exponent and s to the shift in the PPMI computation. More datasets are presented in TAB4 Table 3: Comparison between entailment vectors, optimal transport / Wasserstein based entailment measure (WE) and other state-of-the-art methods. GE+C and GE+KL are Gaussian embeddings with cosine similarity and negative KL-divergence. The scores for GE+C, GE+KL, and DIVE + C·∆S are taken from as we use the same evaluation setup. The scores are again AP@all (%).swering the question " Who is Switzerland's most successful tennis player?". First distributional approaches to detect hyponymy were unsupervised and exploited different linguistic properties of hypernymy (; ; ;). While most of these methods are count-based, word embedding based methods (; ; BID13 have become more popular in recent years. Other approaches represent words by Gaussian distributions and use KL-divergence as a measure of entailment (a;). Especially for tasks like hypernymy detection, these methods have proven to be powerful as they not only capture the semantics but also the uncertainty about various concepts in which the word appears. Using the framework presented in Section 4, we define a measure of entailment as the optimal transport cost (see Eq. ) between associated distributions under a suitable ground cost. For this purpose, we rely on a model that was recently proposed by BID13 BID12 which explicitly models what information is known about a word by interpreting each entry of the embedding as the degree to which a certain feature is present. Based on the logical definition of entailment they derive an approximate inference procedure and an operator measuring the degree of entailment between two so-called entailment vectors defined as follows: DISPLAYFORM0, where the sigmoid function σ and log are applied component-wise on the embeddings v y, v x. Thus, our choice for the ground cost D on the basis of this entailment operator is D DISPLAYFORM1 This asymmetric and not necessarily positive ground cost illustrates that our framework can be flexibly used with an arbitrary cost function defined on the ground space. Evaluation. In total, we evaluated our method on 9 standard datasets: BLESS , EVALution , Lenci/Benotto , Weeds , Henderson 7 (, Baroni , Kotlerman , Levy and Turney . As an evaluation metric, we use average precision AP@all . For comparison we also report the performance of the entailment embeddings that were trained as described in BID12 8. Following we pushed any OOV (out-ofvocabulary) words in the test data to the bottom of the list, effectively assuming that the word pairs do not have a hypernym relation. 
Table 2 compares the performance of entailment embeddings and the optimal transport measure based on the ground cost defined above. Our method yields significant improvements over the entailment embeddings by BID12 on almost all of the datasets. Only on the Baroni dataset does our method perform worse, but it nevertheless still achieves performance similar to other state-of-the-art methods. This confirms the findings of prior work: there is no single hypernymy scoring function that performs best on all datasets. Furthermore, on some datasets (EVALution, LenciBenotto, Weeds, Turney) we even outperform or match state-of-the-art performance (cf. Table 3), by simply using our framework together with the ground cost defined above. Notably, our method is not specific to the entailment vectors by BID12. It can be used with any embedding vectors and ground cost measuring the degree of entailment, without requiring any additional training. A more accurate ground cost or better embedding vectors might further improve the performance. Furthermore, our training dataset (Wikipedia with 1.7B tokens) and our vocabulary of only 80,000 words are rather small compared to the datasets used in prior work. We expect to obtain even better results by using a larger vocabulary on a larger corpus. To sum up, we advocate for associating both a distributional and a point estimate as a representation for each entity. We show how this allows us to use optimal transport over the set of contexts associated with these entities, in problems with a co-occurrence structure. Further, the framework extends beyond words and sentences to any entities and contexts that share a co-occurrence structure. In particular, when β = 1, we recover the equation for histograms as in Section 5, and β = 0 would imply normalization with respect to cluster sizes. We make use of the Python Optimal Transport (POT) library for performing the computation of Wasserstein distances and barycenters on CPU. For a more efficient GPU implementation, we built a custom implementation using PyTorch. We also implement a batched version of the barycenter computation, which to the best of our knowledge has not been done in the past. The batched barycenter computation relies on viewing the computations in the form of block-diagonal matrices. As an example, this batched mode can compute around 200 barycenters in 0.09 seconds, where each barycenter is computed from 50 histograms (of size 100), and it usually gives a speedup of about 10x. For all our computations involving optimal transport, we typically use λ around 0.1 and make use of log or median normalization, as common in POT, to stabilize the Sinkhorn iterations. Clustering: For clustering, we make use of kmcuda's efficient implementation of the K-Means algorithm on GPUs (a small CPU sketch of this step is given below). We plan to make all our code (for all these parts) and our pre-computed histograms (for the mentioned datasets) publicly available on GitHub soon. Detailed results of the sentence representation and hypernymy detection experiments are listed on the following pages.
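Since only the clustering tool is named above, the following CPU sketch (scikit-learn instead of kmcuda, purely for illustration) shows how representative contexts and the reduced histograms of Section 5 could be obtained; the unit-norm trick for cosine-based K-means and all names are our assumptions:

import numpy as np
from sklearn.cluster import KMeans

def representative_contexts(context_embeddings, sppmi_matrix, n_clusters=300, seed=0):
    # Cluster context embeddings (cosine similarity approximated by K-means on
    # unit-normalized vectors) and aggregate each word's SPPMI mass per cluster.
    # context_embeddings: (n_contexts, d); sppmi_matrix: (n_words, n_contexts).
    # Returns cluster centers (n_clusters, d) and histograms (n_words, n_clusters).
    unit = context_embeddings / np.linalg.norm(context_embeddings, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(unit)
    hist = np.zeros((sppmi_matrix.shape[0], n_clusters))
    for k in range(n_clusters):
        hist[:, k] = sppmi_matrix[:, km.labels_ == k].sum(axis=1)
    hist /= hist.sum(axis=1, keepdims=True)  # assumes every word has nonzero SPPMI mass
    return km.cluster_centers_, hist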
Represent each entity based on its histogram of contexts and then Wasserstein is all you need!
1,342
scitldr
In this paper, we propose two methods, namely Trace-norm regression (TNR) and Stable Trace-norm Analysis (StaTNA), to improve performances of recommender systems with side information. Our trace-norm regression approach extracts low-rank latent factors underlying the side information that drives user preference under different context. Furthermore, our novel recommender framework StaTNA not only captures latent low-rank common drivers for user preferences, but also considers idiosyncratic taste for individual users. We compare performances of TNR and StaTNA on the MovieLens datasets against state-of-the-art models, and demonstrate that StaTNA and TNR in general outperforms these methods. The boom of user activity on e-commerce and social networks has continuously fueled the development of recommender systems to most effectively provide suggestions for items that may potentially match user interest. In highlyrated Internet sites such as Amazon.com, YouTube, Netflix, Spotify, LinkedIn, Facebook, Tripadvisor, Last.fm, and IMDb, developing and deploying personalized recommender systems lie at the crux of the services they provide to users and subscribers . For example, Youtube, one of the worlds most popular video sites, has deployed a recommender system that updates regularly to deliver personalized sets of videos to users based on their previous or recent activity on site to help users find videos relevant to their interests, potentially keeping users entertained and engaged BID5.Among the vast advancements in deep learning and matrix completion techniques to build recommender systems (Ricci BID21, one of the most imperative aspect of research in such area is to identify latent (possibly low-rank) commonalities that drive specific types of user behaviour. For example, BID6 proposes a deep neural network based matrix factorization approach that uses explicit rating as well as implicit ratings to map user and items into common low-dimensional space. Yet, such variety of low-rank methodologies do not address the impact of idiosyncratic behaviour among buyers, which may potentially skew the overall learned commonalities across user groups. In this work, we propose two multi-task learning methods to improve performances of recommender systems using contextual side information. We first introduce an approach based on trace-norm regression (TNR) that enables us to extract low-rank latent dimensions underlying the side information that drive user preference according to variations in context, such as item features, user characteristics, time, season, location, etc. This is achieved by introducing a nuclear-norm regularization penalty term in the multi-task regression model, and we highlight that such latent dimensions can be thought of as homogeneous behaviour among particular types of user groups. Furthermore, we propose a novel recommender framework called Stable Trace-norm Analysis (StaTNA) that not only captures latent low-rank common drivers for user preference, but also considers idiosyncratic taste for individual users. This is achieved by, in addition to the low-rank penalty, adding a sparsity regularization term to exploit the sparse nature of heterogeneous behaviour. Finally, we test the performance of StaTNA on the MovieLens datasets against state-of-the-art models, and demonstrate that StaTNA and TNR in general outperforms these methods. We first introduce some notation that will be adopted throughout the rest of our work. 
We let Ω denote the set of all observed entries, and for any matrix Z, definē Z = P Ω (Z) asZ ij = Z ij if (i, j) ∈ Ω and 0 otherwise. We let Y ∈ R n×p, be the final output of the recommender system, X ∈ R n×d includes all side information, and L, S ∈ R d×p represent the common and idiosyncratic effects of side information on users. Both L and S can be considered as mappings from the side information to the recommender response. To be more specific, we take a movie recommender system for example: n can be the number of movies, p is the number of users, so each entry of Y is the predicted rating of each user for every movie, while X represents the features of each movie where each feature is d dimensional. When a new movie comes in, we apply L or/and S to the movie's feature to predict the rating of each existing user's, and recommend the movie to users with high predicted ratings. Before turning to TNR, we first consider a regularized lowrank solution for large-scale matrix completion problems called Soft-Impute BID14, which sheds light upon the key ideas of trace-norm regression. The SoftImpute problem is formulated as the following: DISPLAYFORM0 where · F denotes the Frobenius norm and · * denotes the nuclear norm. In this formulation, we minimize the reconstruction error subject to a bound on the nuclear norm, which serves as a convex relaxation of rank of a matrix and allows us to exploit the low-rank structure of the matrix L.Based on similar ideas, trace-norm regression extends this idea of incorporating regularization on the rank of a matrix in the context of multi-task learning, as it minimizes square loss while penalizing large ranks of the coefficient matrix: DISPLAYFORM1 Similar to our introduction of Soft-Impute and TNR, we first discuss a non contextual model that incorporates both the low-rank matrix L and sparse matrix S, namely the stable principal component pursuit (SPCP) BID24; BID23: DISPLAYFORM0 where · 1 denotes the sum of absolute values for all entries of a matrix. S 1 and L * models sparsity in S and the low-rank structure in L respectively. To further illustrate some intuition for the choice of such norms, we provide an example in the context of foreground- separation in video processing. L can be considered as the stationary , which is low-rank due to the strong correlation between frames; while S can represent foreground objects, which normally occupy only a fraction of the video and hence can be treated as sparse. BID15 Finally, in light of SPCP, we propose a novel framework called Stable Trace-norm Analysis (StaTNA) by adding contextual side information to consideration: DISPLAYFORM1 Note that StaTNA can be considered as a combination of trace norm regression and SPCP, and some theoretical aspects are discussed in BID0. The matrix L captures latent homogeneity among preferences of users, such as an Academy Award winning film would be preferable to many users. On the other hand, idiosyncratic tastes of users are embodied in S, such as some users particularly like horror movies that involve robots and monsters. Note that S can also be considered as a way to be robust to outliers in user behaviour or preference. In the case where X is the identity matrix, this problem is reduced to SPCP. Also, TNR can be considered as a special case of StaTNA by taking λ S = ∞, which explains why StaTNA will in general be more robust compared to TNR. 
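Because the display equations in this part did not survive extraction, the following LaTeX block restates the four objectives as we read them from the surrounding text (our reconstruction under the notation P_Ω, Y, X, L, S introduced above; the scaling of the data-fit term is an assumption and should be checked against the original):

\begin{aligned}
\text{Soft-Impute:}\quad & \min_{L}\ \tfrac{1}{2}\,\big\|P_\Omega(Y - L)\big\|_F^2 \;+\; \lambda\,\|L\|_* \\
\text{TNR:}\quad & \min_{W}\ \tfrac{1}{2}\,\big\|P_\Omega(Y - XW)\big\|_F^2 \;+\; \lambda\,\|W\|_* \\
\text{SPCP:}\quad & \min_{L,S}\ \tfrac{1}{2}\,\big\|P_\Omega(Y - L - S)\big\|_F^2 \;+\; \lambda_L\,\|L\|_* \;+\; \lambda_S\,\|S\|_1 \\
\text{StaTNA:}\quad & \min_{L,S}\ \tfrac{1}{2}\,\big\|P_\Omega(Y - X(L + S))\big\|_F^2 \;+\; \lambda_L\,\|L\|_* \;+\; \lambda_S\,\|S\|_1
\end{aligned}

These forms are consistent with the remarks above: setting X to the identity reduces StaTNA to SPCP, and letting λ_S → ∞ forces S = 0 and recovers TNR.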
In this subsection we only briefly discuss the methodologies used in this paper to solve TNR and StaTNA, due to space limitations. We first highlight the computational feasibility of both models, since methods such as proximal gradient descent BID16 or the (Fast) Iterative Shrinkage-Thresholding Algorithm (FISTA, BID2) can be used to solve these problems with provable (sub)linear convergence rates BID12. For StaTNA, we directly apply FISTA to estimate L and S, and the procedure is detailed in Algorithm 1 in Appendix A.1. Next, as mentioned above, TNR is a special case of StaTNA, so to solve TNR we simply take λ_S = ∞ in Algorithm 1. We also point out that Algorithm 1 may be computationally expensive when the matrix is large. In Appendix A.2 we propose several modifications to our method for solving TNR and StaTNA that improve its computational complexity. In our work, we consider the MovieLens 100K and MovieLens 1M datasets. A summary of both datasets is shown in TAB1. Note that the MovieLens 1M movies do not include all MovieLens 100K movies. In both datasets, each user has rated at least 20 movies, and ratings are whole numbers on a scale of 1-5. In addition, each dataset is associated with a side-information matrix whose rows are indexed by movies and which includes 1 feature column denoting the movie category (19 categories in total), along with 1128 columns denoting relevance scores of a movie to provided tags. We pre-process the side information to obtain two types of side-information matrices. For the first type, we apply one-hot encoding to the categorical feature and obtain 19 categorical features for each dataset. The final side-information matrix is the concatenation of this categorical representation and the relevance scores to given tags, which has dimensions n × 1147, where n is the number of movies in each dataset. For the second type, in addition to one-hot encoding, we apply GloVe word embeddings with K = 300 BID18 to these categories using an average-pooling trick, resulting in a 300-dimensional categorical representation vector for each movie. The final side-information matrix has dimensions n × 1428. For simplicity, we use the suffix "-G" to denote models trained on the second type of side information processed using GloVe word embeddings, while models without this suffix denote models trained on the first type of side information. In this work, we perform two experiments: 1. (Matrix completion with side information) We fill in missing values in the rating matrix using the pre-processed side-information matrix, which is the traditional matrix completion problem. For each dataset, we randomly select 20% of observed values in the rating matrix as the test set, and train TNR & StaTNA, and TNR-G & StaTNA-G, on the two types of side-information matrices respectively. We use state-of-the-art models such as SVD, Sparse FC, and GC-MC as our baselines. Here we point out that these baseline models do not incorporate side information, as opposed to our TNR and StaTNA models, which are trained with side information. Yet, our experimental results will demonstrate that our proposed models, by utilizing side information, significantly improve performance on this matrix completion task. 2. (Regression) We predict the ratings for new movies based on the new movie's side information, which is similar to the traditional regression problem. For each dataset, we train TNR, TNR-G, StaTNA, and StaTNA-G on a randomly selected 80% of all movies and apply the trained models to the remaining 20%.
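For illustration, a simplified (non-accelerated) proximal-gradient sketch of the solver is shown below: it alternates a gradient step on the smooth loss with the proximal mappings of the nuclear norm (for L) and the l1 norm (for S). The step size, iteration count, and function names are placeholders, and FISTA additionally uses a momentum term that is omitted here.

import numpy as np

def soft_threshold(A, t):
    # Proximal mapping of the l1 norm: elementwise shrinkage toward zero.
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def svd_shrink(A, t):
    # Proximal mapping of the nuclear norm: shrink the singular values.
    U, d, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(d - t, 0.0)) @ Vt

def solve_statna(Y, X, mask, lam_L, lam_S, step=1e-3, iters=500):
    d_dim, p = X.shape[1], Y.shape[1]
    L, S = np.zeros((d_dim, p)), np.zeros((d_dim, p))
    for _ in range(iters):
        grad = X.T @ (mask * (X @ (L + S) - Y))   # gradient of the smooth term w.r.t. both L and S
        L = svd_shrink(L - step * grad, step * lam_L)
        S = soft_threshold(S - step * grad, step * lam_S)
    return L, S

Setting lam_S to a very large value drives S to zero, recovering TNR-like behaviour, which mirrors the λ_S = ∞ reduction described above.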
Furthermore, both experiments involve two hyperparameters, namely λ L and λ S, which are tuned using 10-fold cross validation. We use Lasso, Elastic Nets, Multi-task Lasso and Multi-task Elastic Nets BID17 as our baseline models. Note that standard Lasso and standard Elastic Nets are trained for each user independently to predict the user's rating for a given movie. The matrix formulations of these baseline models are shown in Table 6 We notice that both TNR and StaTNA do not perform as well as state-of-the-art models in MovieLens 100K. One explanation is that our models require more training data compared to baseline models to fully capture latent structures of both L and S in order to generalize well on test data. Finally, we also point out that StaTNA converges faster, and in general performs better than TNR, as shown in Figure 1 within Appendix C.1.Experiment 2: Regression In the second experiment to predict user ratings for movies, as shown in TAB4. MAE and RMSE for test data in Experiment 2 (regression) for baseline models (Lasso and Elastic Nets (denoted as EN)), TNR and StaTNA using MovieLens 100K and MovieLens 1M. The prefix "MT" is the abbreviation for "Multi-task".MovieLens 100K, all StaTNA and TNR models significantly outperform baseline models, while TNR with GloVe embedding yields the best out-of-sample performance. For MovieLens 1M, StaTNA with GloVe embedding in the best out-of-sample performance compared to all other baseline models including TNR, with and without GloVe embedding. The strong performance of StaTNA and StaTNA-G across both experiments, and especially in MovieLens 1M, indicates that our StaTNA framework provides promising performance guarantees for solving both matrix completion and regression tasks. As mentioned in earlier sections, we are interested in analyzing particular underlying commonalities in user preferences. We achieve this by investigating the principal components of our estimate of the low-rank matrix L, each of which we consider as a common type of user preference. Since our estimated L is of rank 6, we conclude that there are 6 major common types of user preferences, whose component scores (i.e. explained variance percentages) are listed in Table 4, where we observe that the first principal component explains 88.94% of the variability in user ratings. Table 5. Top 12 features of highest absolute weights within the first two principal components (PC1 and PC2). Details of other principle components are shown in TAB9 in Appendinx C.2. Our methodology to solve TNR and StaTNA (i.e. Algorithm 1 in Appendix A.1) may be computationally expensive when the matrix is large since it requires calling a Singular Value Decomposition (SVD) oracle in each iteration of the algorithm. Hence we propose two alternative methods, a FW-T algorithm and a nonconvex reformulation of the problem, to avoid using an SVD oracle. These are detailed in Appendix A.2. Furthermore, our current studies use side information from only one side, namely movie information. Our StaTNA framework can be extended to incorporate side information for both movies and users: DISPLAYFORM0 where U and M denotes users and movies respectively. Moreover, our StaTNA framework is also compatible with neural networks by including nuclear norm and sparse penalties to the objective. We believe that similar formulations will provide us with better performance guarantees, but at the cost of model interpretability. In this section, we discuss the methodologies we use to solve TNR and StaTNA. 
As mentioned earlier, we use (Fast) Iterative Shrinkage-Thresholding Algorithm (FISTA, BID2) to solve these problems. Before we address the detailed applications of these algorithms in our context to solve TNR and StaTNA, we introduce the following optimization oracles. We define the proximal mapping of the 1 norm as DISPLAYFORM1, whose extension to matrices is obtained by applying the scalar operator to each element. Moreover, we define the proximal mapping of the nuclear norm BID4 BID13 DISPLAYFORM2 V, and Y = U DV is the SVD of matrix Y. Now, using these definitions, we detail the algorithm to solve StaTNA in Algorithm 1. Note that one can also initialize L 0 in both Algorithm 1 as DISPLAYFORM3, where † denotes the pseudo-inverse of a matrix. For StaTNA, we directly apply FISTA to estimate L and S, and the procedures are detailed in Algorithm 1. As aforementioned, TNR is a special case for StaTNA, so to solve TNR, we simply take λ S = ∞ in Algorithm 1, which forces all S k andŜ k to 0. DISPLAYFORM4 In Algorithm 1, we call an SVD oracle in each iteration in FISTA to find the proximal mapping of the nuclear norm, which is computationally expensive when the matrix is large, i.e. the number of movie features and users are large. Here we propose two methods to avoid using an SVD oracle. First, inspired by a scalable algorithm FW-T on SPCP BID10 BID15, we propose a similar FW-T algorithm to solve StaTNA by replacing the proximal mapping of the nuclear norm with a Frank-Wolfe update in each iteration. To be more specific, we consider the following reformulation of StaTNA: DISPLAYFORM0 for some U L and U S such that the optimal solution (L, S) is still feasible to the above problem, i.e. L * ≤ U L and S * ≤ U S. For simplicity, we write the above objective function as g(L, S, t L, t S). In each iteration of the FW-T algorithm, we call the following Frank-Wolfe oracle (Algorithm 2) for L and S respectively. We can see that for the Frank-Wolfe update for matrix L only requires to compute the leading singular pairs for a matrix, which can be achieved by computationally cheap power iteration BID11. In addition, we perform an exact line-search by easily solving a DISPLAYFORM1 The full algorithm for the FW-T is detailed in Algorithm 3.Algorithm 3 FW-T for StatNA DISPLAYFORM2 Second, we propose a nonconvex formulation of the problem suggested by BID1 BID22: DISPLAYFORM3 where U ∈ R d×r, V ∈ R p×r. This problem is nonconvex but smooth, and these two problems are equivalent in the sense that there is a ono-on-one correspondence between the global minima of these two problems BID3 BID7. Since this new formulation has multi-affine structure, according to new given by BID9, we can use ADMM to solve this, which is detailed in Algorithm 4.To apply ADMM on this nonconvex reformulation, we further reformulate the problem as the following DISPLAYFORM4 5: DISPLAYFORM5 8: DISPLAYFORM6 10: DISPLAYFORM7 DISPLAYFORM8 Frobineous norm Table 6. Summary for formulations of models. MTR is the abbreviation for multitask regression, while MC is the abbreviations for matrix completion. DISPLAYFORM9 DISPLAYFORM10 C. Additional Stuff C.1. Experiment Figures (Figures 1 and 2) C.2. Other Principal Components (TAB9 . Top 12 features of highest absolute weights within the third to sixth principal components (PC3, PC4, PC5 and PC6)
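As a sketch of how the principal-component analysis of the estimated L reported above could be carried out, the following computes component scores and the top-weighted side-information features for each component; the helper name and the use of squared singular values as explained-variance scores are our illustrative assumptions.

import numpy as np

def preference_components(L_hat, feature_names, top_k=12):
    # Left singular vectors of the estimated L live in the side-information feature space,
    # so each one can be read as a common "type" of user preference.
    U, d, Vt = np.linalg.svd(L_hat, full_matrices=False)
    rank = int(np.sum(d > 1e-8 * d[0]))
    scores = d[:rank] ** 2 / np.sum(d[:rank] ** 2)           # explained-variance style component scores
    top_features = [[feature_names[j] for j in np.argsort(-np.abs(U[:, i]))[:top_k]]
                    for i in range(rank)]
    return scores, top_features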
Methodologies for recommender systems with side information based on trace-norm regularization
1,343
scitldr
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games. Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text. These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents. Moreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora. One aspect that makes these games particularly challenging for learning agents is the combinatorially large action space. Existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions. In this work, we propose to use the exploration approach of Go-Explore for solving text-based games. More specifically, in an initial exploration phase, we first extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories. Our experiments show that this approach outperforms existing solutions in solving text-based games, and it is more sample efficient in terms of the number of interactions with the environment. Moreover, we show that the learned policy can generalize better than existing solutions to unseen games without using any restriction on the action space. Text-based games became popular in the mid 80s with the game series Zork ing in many different text-based games being produced and published . These games use a plain text description of the environment and the player has to interact with them by writing natural-language commands. Recently, there has been a growing interest in developing agents that can automatically solve text-based games by interacting with them. These settings challenge the ability of an artificial agent to understand natural language, common sense knowledge, and to develop the ability to interact with environments using language . Since the actions in these games are commands that are in natural language form, the major obstacle is the extremely large action space of the agent, which leads to a combinatorially large exploration problem. In fact, with a vocabulary of N words (e.g. 20K) and the possibility of producing sentences with at most m words (e.g. 7 words), the total number of actions is O(N m) (e.g. 20K 7 ≈ 1.28e 30). To avoid this large action space, several existing solutions focus on simpler text-based games with very small vocabularies where the action space is constrained to verb-object pairs (; ;). Moreover, many existing works rely on using predetermined sets of admissible actions (; ;). However, a more ideal, and still under explored, alternative would be an agent that can operate in the full, unconstrained action space of natural language that can systematically generalize to new text-based games with no or few interactions with the environment. To address this challenge, we propose to use the idea behind the recently proposed GoExplore algorithm. Specifically, we propose to first extract high reward trajectories of states and actions in the game using the exploration methodology proposed in Go-Explore and then train a policy using a Seq2Seq model that maps observations to actions, in an imitation learning fashion. 
To show the effectiveness of our proposed methodology, we first benchmark the exploration ability of Go-Explore on the family of text-based games called CoinCollector. Then we use the 4,440 games of "First TextWorld Problems" (Côté, 2018), which are generated using the machinery introduced by, to show the generalization ability of our proposed methodology. In the former experiment we show that Go-Explore finds winning trajectories faster than existing solutions, and in the latter, we show that training a Seq2Seq model on the trajectories found by Go-Explore in stronger generalization, as suggested by the stronger performance on unseen games, compared to existing competitive baselines . Reinforcement Learning Based Approaches for Text-Based Games Among reinforcement learning based efforts to solve text-based games two approaches are prominent. The first approach assumes an action as a sentence of a fixed number of words, and associates a separate Qfunction with each word position in this sentence. This method was demonstrated with two-word sentences consisting of a verb-object pair (e.g. take apple) (; ; ;). In the second approach, one Q-function that scores all possible actions (i.e. sentences) is learned and used to play the game (; ;). The first approach is quite limiting since a fixed number of words must be selected in advance and no temporal dependency is enforced between words (e.g. lack of language modelling). In the second approach, on the other hand, the number of possible actions can become exponentially large if the admissible actions (a predetermined low cardinality set of actions that the agent can take) are not provided to the agent. A possible solution to this issue has been proposed by , where a hierarchical pointer-generator is used to first produce the set of admissible actions given the observation, and subsequently one element of this set is chosen as the action for that observation. However, in our experiments we show that even in settings where the true set of admissible actions is provided by the environment, a Q-scorer does not generalize well in our setting (Section 5.2 Zero-Shot) and we would expect performance to degrade even further if the admissible actions were generated by a separate model. Less common are models that either learn to reduce a large set of actions into a smaller set of admissible actions by eliminating actions or by compressing them in a latent space . In most text-based games rewards are sparse, since the size of the action space makes the probability of observing a reward extremely low when taking only random actions. Sparse reward environments are particularly challenging for reinforcement learning as they require longer term planning. Many exploration based solutions have been proposed to address the challenges associated with reward sparsity. Among these exploration approaches are novelty search (; 2011; ; ;), intrinsic motivation (b; ;), and curiosity based rewards (; 1991a;). For text based games exploration methods have been studied by, where the authors showed the effectiveness of the episodic discovery bonus in environments with sparse rewards. This exploration method can only be applied in games with very small action and state spaces, since their counting methods rely on the state in its explicit raw form. 
Go-Explore differs from the exploration-based algorithms discussed above in that it explicitly keeps track of under-explored areas of the state space and in that it utilizes the determinism of the simulator in order to return to those states, allowing it to explore sparse-reward environments in a sample efficient way (see as well as section 4.1). For the experiments in this paper we mainly focus on the final performance of our policy, not how that policy is trained, thus making Go-Explore a suitable algorithm for our experiments. Go-Explore is composed of two phases. In phase 1 (also referred to as the "exploration" phase) the algorithm explores the state space through keeping track of previously visited states by maintaining an archive. During this phase, instead of resuming the exploration from scratch, the algorithm starts exploring from promising states in the archive to find high performing trajectories. In phase 2 (also referred to as the "robustification" phase, while in our variant we will call it "generalization") the algorithm trains a policy using the trajectories found in phase 1. Following this framework, which is also shown in Figure 3 (Appendix A.2), we define the Go-Explore phases for text-based games. Let us first define text-based games using the same notation as. A text-based game can be framed as a discrete-time Partially Observable Markov Decision Process (POMDP) defined by (S, T, A, Ω, O, R), where: S is the set of the environment states, T is the state transition function that defines the next state probability, i.e. T (s t+1 |a t ; s t)∀s t ∈ S, A is the set of actions, which in our case is all the possible sequences of tokens, Ω is the set of observations, i.e. text observed by the agent every time has it to take an action in the game (i.e. dialogue turn) which is controlled by the conditional observation probability O, i.e. O(o t |s t, a t−1), and, finally, R is the reward function i.e. r = R(s, a). Let us also define the observation o t ∈ Ω and the action a t ∈ A. Text-based games provide some information in plain text at each turn and, without loss of generality, we define an observation o t as the sequence of tokens {o Similarly, we define the tokens of an action a t as the sequence {a 0 t, · · ·, a m t}. Furthermore, we define the set of admissible actions A t ∈ A as A t = {a 0, · · ·, a z}, where each a i, which is a sequence of tokens, is grammatically correct and admissible with reference to the observation o t. In phase 1, Go-Explore builds an archive of cells, where a cell is defined as a set of observations that are mapped to the same, discrete representation by some mapping function f (x). Each cell is associated with meta-data including the trajectory towards that cell, the length of that trajectory, and the cumulative reward of that trajectory. New cells are added to the archive when they are encountered in the environment, and existing cells are updated with new meta-data when the trajectory towards that cells is higher scoring or equal scoring but shorter. At each iteration the algorithm selects a cell from this archive based on meta-data of the cell (e.g. the accumulated reward, etc.) and starts to randomly explore from the end of the trajectory associated with the selected cell. Phase 1 requires three components: the way that observations are embedded into cell representations, the cell selection, and the way actions are randomly selected when exploring from a selected cell. 
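Before detailing those components, the archive bookkeeping described above can be sketched as follows; the class and method names are illustrative, and the uniform cell selection shown here is a simplification of the meta-data-weighted selection.

import random

class Archive:
    # Phase-1 archive: maps a discretized cell key to the best trajectory reaching that cell.
    def __init__(self):
        self.cells = {}

    def update(self, cell_key, trajectory, reward):
        entry = self.cells.get(cell_key)
        better = (entry is None
                  or reward > entry["reward"]
                  or (reward == entry["reward"] and len(trajectory) < entry["length"]))
        if better:  # keep the higher-scoring, or equal-scoring but shorter, trajectory
            self.cells[cell_key] = {"trajectory": list(trajectory),
                                    "length": len(trajectory),
                                    "reward": reward}

    def select_cell(self):
        # Cell selection can be weighted by cell meta-data such as the cumulative reward;
        # uniform sampling is shown here for simplicity.
        return random.choice(list(self.cells))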
In our variant of the algorithm, f (x) is defined as follows: given an observation, we compute the word embedding for each token in this observation, sum these embeddings, and then concatenate this sum with the current cumulative reward to construct the cell representation. The ing vectors are subsequently compressed and discretized by binning them in order to map similar observations to the same cell. This way, the cell representation, which is the key of the archive, incorporates information about the current observation of the game. Adding the current cumulative reward to the cell representation is new to our Go-Explore variant as in the original algorithm only down-scaled image pixels were used. It turned out to be a very very effective to increase the speed at which high reward trajectories are discovered. In phase 1, we restrict the action space to the set of admissible actions A t that are provided by the game at every step of the game 1. This too is particularly important for the random search to find a high reward trajectory faster. Finally, we denote the trajectory found in phase 1 for game g as 1 Note that the final goal is to generalize to test environments where admissible actions are not available. The assumption that admissible actions are available at training time holds in cases where we build the training environment for an RL agent (e.g. a hand-crafted dialogue system), and a system trained in such an environment can be practically applied as long as the system does not rely on such information at test time. Thus, we assumed that these admissible commands are not available at test time. 2.2 PHASE 2: GENERALIZATION Phase 2 of Go-Explore uses the trajectories found in phase 1 and trains a policy based on those trajectories. The goal of this phase in the original Go-Explore algorithm is to turn the fragile policy of playing a trajectory of actions in sequence into a more robust, state-conditioned policy that can thus deal with environmental stochasticity. In our variant of the algorithm the purpose of the second phase is generalization: although in our environment there is no stochasticity, our goal is to learn a general policy that can be applied across different games and generalize to unseen games. In the original Go-Explore implementation, the authors used the backward Proximal Policy Optimization algorithm (PPO) to train this policy. In this work we opt for a simple but effective Seq2Seq imitation learning approach that does not use the reward directly in the loss. More specifically, given the trajectory T g 2, we train a Seq2Seq model to minimize the negative log-likelihood of the action a t given the observation o t. In other words, consider a word embedding matrix E ∈ R d×|V | where d is the embedding size and |V | is the cardinality of the vocabulary, which maps the input token to an embedded vector. Then, we define an encoder LSTM enc and a decoder LSTM dec. Every token o t from the trajectory T g is converted to its embedded representation using the embedding matrix E and the sequence of these embedding vectors is passed through LSTM enc: The last hidden state h enc |ot| is used as the initial hidden state of the decoder which generates the action a t token by token. Specifically, given the sequence of hidden states H ∈ R d×|ot| of the encoder, tokens a j t are generated as follows: where W ∈ R 2d×|V | is a matrix that maps the decoder hidden state, concatenated with the context vector, into a vocabulary size vector. 
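A hedged PyTorch sketch of this encoder-decoder policy is given below; the dot-product attention and the handling of the LSTM cell state are simplifying assumptions, and the names are placeholders rather than the exact architecture used in our experiments.

import torch
import torch.nn as nn

class Seq2SeqPolicy(nn.Module):
    # LSTM encoder over the observation tokens, LSTM decoder over the action tokens,
    # dot-product attention, and a projection of [decoder state; context] onto the vocabulary.
    def __init__(self, vocab_size, d=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.encoder = nn.LSTM(d, d, batch_first=True)
        self.decoder = nn.LSTM(d, d, batch_first=True)
        self.out = nn.Linear(2 * d, vocab_size)               # W in R^{2d x |V|}

    def forward(self, obs_tokens, act_tokens):
        H, state = self.encoder(self.embed(obs_tokens))        # H: B x |o_t| x d
        dec, _ = self.decoder(self.embed(act_tokens), state)   # decoder initialized with encoder state
        attn = torch.softmax(dec @ H.transpose(1, 2), dim=-1)  # B x |a_t| x |o_t|
        context = attn @ H                                      # B x |a_t| x d
        return self.out(torch.cat([dec, context], dim=-1))      # logits for each action token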
During training, the parameters of the model are trained by minimizing: which is the sum of the negative log likelihood of each token in a t (using teacher forcing ). However, at test time the model produces the sequence in an auto-regressive way using greedy search. A set of commonly used standard benchmarks ) for agents that play text-based games are simple games which require no more than two words in each step to solve the game, and have a very limited number of admissible actions per observation. While simplifying, this setting limits the agent's ability to fully express natural language and learn more complex ways to speak. In this paper, we embrace more challenging environments where multiple words are needed at each step to solve the games and the reward is particularly sparse. Hence, we have selected the following environments: • CoinCollector ) is a class of text-based games where the objective is to find and collect a coin from a specific location in a given set of connected rooms 3. The agent wins the game after it collects the coin, at which point (for the first and only time) a reward of +1 is received by the agent. The environment parses only five admissible commands (go north, go east, go south, go west, and take coin) made by two worlds; • CookingWorld 4 (Côté, 2018) in this challenge, there are 4,440 games with 222 different levels of difficulty, with 20 games per level of difficulty, each with different entities and maps. The goal of each game is to cook and eat food from a given recipe, which includes the task of collecting ingredients (e.g. tomato, potato, etc.), objects (e.g. knife), and processing them according to the recipe (e.g. cook potato, slice tomato, etc.). The parser of each game accepts 18 verbs and 51 entities with a predefined grammar, but the overall size of the vocabulary of the observations is 20,000. In Appendix A.1 we provide more details about the levels and the games' grammar. In our experiments, we try to address two major research questions. First, we want to benchmark the exploration power of phase 1 of Go-Explore in comparison to existing exploration approaches used in text-based games. For this purpose, we generate 10 CoinCollector games with the hardest setting used by, i.e. hard-level 30 (refer to the Appendix A.1 for more information) and use them as a benchmark. In fact, CoinCollector requires many actions (at least 30 on hard games) to find a reward, which makes it suitable for testing the exploration capabilities of different algorithms. Secondly, we want to verify the generalization ability of our model in creating complex strategies using natural language. CoinCollector has a very limited action space, and is mainly designed to benchmark models on their capability of dealing with sparse rewards. Therefore we use the more complex CookingWorld games to evaluate the generalization capabilities of our proposed approach. We design three different settings for CookingWorld: 1) Single: treat each game independently, which means we train and test one agent for each game to evaluate how robust different models are across different games.; 2) Joint: training and testing a single policy on all the 4,440 CookingWorld games at the same time to verify that models can learn to play multiple games at the same time; 3) Zero-Shot: split the games into training, validation, and test sets, and then train our policy on the training games and test it on the unseen test games. 
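Concretely, the phase-1 cell representation f(x) described earlier (sum of the room-description token embeddings concatenated with the cumulative reward, then binned) might look like the following sketch; the bin size and the handling of out-of-vocabulary tokens are illustrative assumptions.

import numpy as np

def cell_key(room_tokens, glove, cumulative_reward, dim=50, bin_size=0.5):
    # Sum the 50-d GloVe vectors of the room-description tokens (unknown words contribute zeros),
    # append the cumulative reward, and discretize so that similar observations map to the same cell.
    emb = np.zeros(dim)
    for tok in room_tokens:
        emb = emb + glove.get(tok, np.zeros(dim))
    features = np.concatenate([emb, [float(cumulative_reward)]])
    return tuple(np.floor(features / bin_size).astype(int))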
This setting is the hardest among all, since it requires generalization to unseen games. In both CoinCollector and CookingWorld games an observation o t provided by the environment consists of a room description D, inventory information I, quest Q, previous action P and feedback F provided in the previous turn. Table 1 shows an example for each of these components. In our experiments for phase 1 of Go-Explore we only use D as the observation. For the CoinCollector games, we compared Go-Explore with the episodic discovery bonus that was used by to improve two Q-learning-based baselines: DQN++ and DRQN++. We used the code provided by the authors and the same hyper-parameters 5. For the CookingWorld games, we implemented three different treatments based on two existing methods: • LSTM-DQN (; : An LSTM based state encoder with a separate Q-functions for each component (word) of a fixed pattern of Verb, Adjective1, Noun1, Adjective2, and Noun2. In this approach, given the observation o t, the tokens are first converted into embeddings, then an LSTM is used to extract a sequence of hidden states H dqn ∈ R d×|ot|. A mean-pool layer is applied to H dqn to produce a single vector h o t that represents the whole sequence. Next, a linear transformation W type ∈ R d×|Vtype| is used to generate each of the Q values, where |V type | |V | is the subset of the original vocabulary restricted to the word type of a particular game (e.g for Verb type: take, drop, etc.). Formally, we have: where, type ∈ {Verb, Obj, Noun, Obj2,Noun2}. Next, all the Q-functions are jointly trained using the DQN algorithm with -greedy exploration . At evaluation time, the argmax of each Q-function is concatenated to produce a t. Importantly, in V type a special token <s> is used to denote the absence of a word, so the model can produce actions with different lengths. Figure 4 in Appendix A.2 shows a depiction of this model. It is the same model as LSTM-DQN, except that the random actions for -greedy exploration are sampled from the set of admissible actions instead of creating them by sampling each word separately. • DRRN : In this approach a model learns how to score admissible actions instead of directly generating the action token by token. The policy uses an LSTM for encoding the observation and actions are represented as the sum of the embedding of the word tokens they contain. Then, the Q value is defined as the dot product between the embedded representations of the observation and the action. Following the aforementioned notation, h o t is generated as in the LSTM-DQN baseline. Next, we define its embedded representation as, where E is an embedding matrix as in Equation 1. Thus, the Q-function is defined as: At testing time the action with the highest Q value is chosen. Figure 6 in Appendix A.2 shows a depiction of this model. In all the games the maximum number of steps has been set to 50. As mentioned earlier, the cell representation used in the Go-Explore archive is computed by the sum of embeddings of the room description tokens concatenated with the current cumulative reward. The sum of embeddings is computed using 50 dimensional pre-trained GloVe vectors. In the CoinCollector baselines we use the same hyper-parameters as in the original paper. In CookingWorld all the baselines use pre-trained GloVe of dimension 100 for the single setting and 300 for the joint one. The LSTM hidden state has been set to 300 for all the models. 
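A minimal PyTorch sketch of the DRRN scoring function described above is shown below; names and dimensions are placeholders, and only the Q-value computation (not the DQN training loop) is included.

import torch
import torch.nn as nn

class DRRN(nn.Module):
    # Q(o_t, a) is the dot product between the encoded observation and the summed
    # embeddings of the action's tokens, as described above.
    def __init__(self, vocab_size, d=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.encoder = nn.LSTM(d, d, batch_first=True)

    def forward(self, obs_tokens, admissible_actions):
        H, _ = self.encoder(self.embed(obs_tokens))            # B x |o_t| x d
        h_o = H.mean(dim=1)                                    # mean-pooled observation representation
        q = []
        for act_tokens in admissible_actions:                  # each entry: B x |a_i| token ids
            a_emb = self.embed(act_tokens).sum(dim=1)          # sum of the action-token embeddings
            q.append((h_o * a_emb).sum(dim=-1))                # dot product, one Q value per batch item
        return torch.stack(q, dim=1)                           # B x num_admissible_actions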
In this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between GoExplore and algorithms that use a count-based reward in text-based games. Importantly, showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. Figure 1 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by, DRQN++ finds a trajectory with the maximum score faster than to DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively. In CookingWorld, we compared models in the three settings mentioned earlier, namely, single, joint, and zero-shot. In all experiments, we measured the sum of the final scores of all the games and their trajectory length (number of steps). Table 3 summarizes the in these three settings. Phase 1 of Go-Explore on single games achieves a total score of 19,530 (sum over all games), which is very close to the maximum possible points (i.e. 19,882), with 47,562 steps. A winning trajectory was found in 4,279 out of the total of 4,440 games. This confirms again that the exploration strategy of Go-Explore is effective in text-based games. Next, we evaluate the effectiveness and the generalization ability of the simple imitation learning policy trained using the extracted trajectories in phase 1 of Go-Explore in the three settings mentioned above. Single In this setting, each model is trained from scratch in each of the 4,440 games based on the trajectory found in phase 1 of Go-Explore (previous step). As shown in Table 3, the LSTM-DQN (; approach without the use of admissible actions performs poorly. One explanation for this could be that it is difficult for this model to explore both language and game strategy at the same time; it is hard for the model to find a reward signal before it has learned to model language, since almost none of its actions will be admissible, and those reward signals are what is necessary in order to learn the language model. As we see in Table 3, however, by using the admissible actions in the -greedy step the score achieved by the LSTM-DQN increases dramatically (+ADM row in Table 3). DRRN achieves a very high score, since it explicitly learns how to rank admissible actions (i.e. a much simpler task than generating text). Finally, our approach of using a Seq2Seq model trained on the single trajectory provided by phase 1 of Go-Explore achieves the highest score among all the methods, even though we do not use admissible actions in this phase. However, in this experiment the Seq2Seq model cannot perfectly replicate the provided trajectory and the total score that it achieves is in fact 9.4% lower compared to the total score achieved by phase 1 of Go-Explore. 
Figure 7 (in Appendix A.3) shows the score breakdown for each level and model, where we can see that the gap between our model and other methods increases as the games become harder in terms of skills needed. Joint In this setting, a single model is trained on all the games at the same time, to test whether one agent can learn to play multiple games. Overall, as expected, all the evaluated models achieved a lower performance compared to the single game setting. One reason for this could be that learning multiple games at the same time leads to a situation where the agent encounters similar observations in different games, and the correct action to take in different games may be different. Furthermore, it is important to note that the order in which games are presented greatly affects the performance of LSTM-DQN and DRRN. In our experiments, we tried both an easy-to-hard curriculum (i.e. sorting the games by increasing level of difficulty) and a shuffled curriculum. Shuffling the games at each epoch ed in far better performance, thus we only report the latter. In Figure 2 we show the score breakdown, and we can see that all the baselines quickly fail, even for easier games. Zero-Shot In this setting the 4,440 games are split into training, validation, and test games. The split is done randomly but in a way that different difficulty levels (recipes 1, 2 and 3), are represented with equal ratios in all the 3 splits, i.e. stratified by difficulty. As shown in Table 3, the zero-shot performance of the RL baselines is poor, which could be attributed to the same reasons why RL baselines under-perform in the Joint case. Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model, even though the DRRN model has access to the admissible actions at test time, while the Seq2Seq model (as well as the LSTM-DQN model) has to construct actions token-by-token from the entire vocabulary of 20,000 tokens. On the other hand, Go-Explore Seq2Seq shows promising by solving almost half of the unseen games. Figure 8 (in Appendix A.3) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games. Experimental show that our proposed Go-Explore exploration strategy is a viable methodology for extracting high-performing trajectories in text-based games. This method allows us to train supervised models that can outperform existing models in the experimental settings that we study. Finally, there are still several challenges and limitations that both our methodology and previous solutions do not fully address yet. For instance: State Representation The state representation is the main limitation of our proposed imitation learning model. In fact, by examining the observations provided in different games, we notice a large overlap in the descriptions (D) of the games. This overlap leads to a situation where the policy receives very similar observations, but is expected to imitate two different actions. This show especially in the joint setting of CookingWorld, where the 222 games are repeated 20 times with different entities and room maps. 
In this work, we opted for a simple Seq2Seq model for our policy, since our goal is to show the effectiveness of our proposed exploration methods. However, a more complex Hierarchical-Seq2Seq model or a better encoder representation based on knowledge graphs (a; b) would likely improve the of performance this approach. Language Based Exploration In Go-Explore, the given admissible actions are used during random exploration. However, in more complex games, e.g. Zork I and in general the Z-Machine games, these admissible actions are not provided. In such settings, the action space would explode in size, and thus Go-Explore, even with an appropriate cell representation, would have a hard time finding good trajectories. To address this issue one could leverage general language models to produce a set of grammatically correct actions. Alternatively one could iteratively learn a policy to sample actions, while exploring with Go-Explore. Both strategies are viable, and a comparison is left to future work. It is worth noting that a hand-tailored solution for the CookingWorld games has been proposed in the "First TextWorld Problems" competition. This solution 6 managed to obtain up to 91.9% of the maximum possible score across the 514 test games on an unpublished dataset. However, this solution relies on entity extraction and template filling, which we believe limits its potential for generalization. Therefore, this approach should be viewed as complementary rather than competitor to our approach as it could potentially be used as an alternative way of getting promising trajectories. In this paper we presented a novel methodology for solving text-based games which first extracts high-performing trajectories using phase 1 of Go-Explore and then trains a simple Seq2Seq model that maps observations to actions using the extracted trajectories. Our experiments show promising in three settings, with improved generalization and sample efficiency compared to existing methods. Finally, we discussed the limitations and possible improvements of our methodology, which leads to new research challenges in text-based games. CoinCollector In the hard setting (mode 2), each room on the path to the coin has two distractor rooms, and the level (e.g. 30) indicates the shortest path from the starting point to the coin room. CookingWorld The game's complexity is determined by the number of skills and the types of skills that an agent needs to master. The skills are: • recipe {1,2,3}: number of ingredients in the recipe • take {1,2,3}: number of ingredients to find (not already in the inventory) • open: whether containers/doors need to be opened • cook: whether ingredients need to be cooked • cut: whether ingredients need to be cut • drop: whether the inventory has limited capacity • go {1,6,9,12}: number of locations Thus the hardest game would be a recipe with 3 ingredients, which must all be picked up somewhere across 12 locations and then need to be cut and cooked, and to get access to some locations, several doors or objects need to be opened. The handicap of a limited capacity in the inventory makes the game more difficult by requiring the agent to drop an object and later on take it again if needed. 
The grammar used for the text-based games is the following: • go, look, examine, inventory, eat, open/close, take/drop, put/insert • cook X with Y −→ grilled X (when Y is the BBQ) • cook X with Y −→ roasted X (when Y is the oven) • cook X with Y −→ fried X (when Y is the stove) • slice X with Y −→ sliced X • chop X with Y −→ chopped X • dice X with Y −→ diced X • prepare meal • Where Y is something sharp (e.g. knife).
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
1,344
scitldr
The recent “Lottery Ticket Hypothesis” paper by Frankle & Carbin showed that a simple approach to creating sparse networks (keep the large weights) in models that are trainable from scratch, but only when starting from the same initial weights. The performance of these networks often exceeds the performance of the non-sparse base model, but for reasons that were not well understood. In this paper we study the three critical components of the Lottery Ticket (LT) algorithm, showing that each may be varied significantly without impacting the overall . Ablating these factors leads to new insights for why LT networks perform as well as they do. We show why setting weights to zero is important, how signs are all you need to make the re-initialized network train, and why masking behaves like training. Finally, we discover the existence of Supermasks, or masks that can be applied to an untrained, randomly initialized network to produce a model with performance far better than chance (86% on MNIST, 41% on CIFAR-10). Many neural networks are over-parameterized BID0 BID1, enabling compression of each layer BID1 BID14 BID4 or of the entire network BID9. Some compression approaches enable more efficient computation by pruning parameters, by factorizing matrices, or via other tricks BID4 BID5 BID8 BID10 BID11 BID12 BID13 BID14 BID15 BID16. A recent work by Frankle & Carbin BID2 presented a simple algorithm for finding sparse subnetworks within larger networks that can meet or exceed the performance of the original network. Their approach is as follows: after training a network, set all weights smaller than some threshold to zero BID2, rewind the rest of the weights to their initial configuration BID3, and then retrain the network from this starting configuration but with the zero weights frozen (not trained). See Section S1 for a more formal description of this algorithm. In this paper we perform ablation studies along the above three dimensions of variability, considering alternate mask criteria (Section 2), alternate mask-1 actions (Section 3), and alternate mask-0 actions (Section 4). These studies in aggregate reveal new insights for why lottery ticket networks work as they do. Along the way we also discover the existence of Supermasks-masks that produce above-chance performance when applied to untrained networks (Section 5). We begin our investigation with a study of different Mask Criteria, or functions that decide which weights to keep vs. prune. In this paper, we define the mask for each individual weight as a function of the weight's values both at initialization and after training: M (w i, w f). We can visualize this function as a set of decision boundaries in a 2D space as shown in FIG0. In BID2, the mask criterion simply keeps weights with large final value; we refer to this as the large_final mask, M (w i, w f) = |w f |.In addition to large_final, we also experimented with the inverse, small_final, versions that evaluate weights based on their initial magnitudes instead, large_init and small_init, versions that select for both, large_init_large_final and small_init_small_final, two mask criteria that evaluate how far weights moved, magnitude_increase and movement, and a control, random, that chooses masks randomly. These nine masks are depicted along with their associated equations in FIG1. 
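A sketch of how these criteria can be expressed as scores M(w_i, w_f) and turned into per-layer masks is given below; the exact formulas for magnitude_increase and movement are our reading of the criterion names, and the tie-breaking jitter is an illustrative detail.

import numpy as np

criteria = {
    "large_final":        lambda wi, wf: np.abs(wf),
    "small_final":        lambda wi, wf: -np.abs(wf),
    "large_init":         lambda wi, wf: np.abs(wi),
    "magnitude_increase": lambda wi, wf: np.abs(wf) - np.abs(wi),
    "movement":           lambda wi, wf: np.abs(wf - wi),
    "random":             lambda wi, wf: np.zeros_like(wf),
}

def make_mask(w_init, w_final, keep_frac, criterion="large_final", seed=0):
    # Score every weight, then keep the top keep_frac within the layer (ties broken randomly).
    rng = np.random.default_rng(seed)
    scores = criteria[criterion](w_init, w_final) + 1e-9 * rng.standard_normal(w_final.shape)
    threshold = np.quantile(scores, 1.0 - keep_frac)
    return (scores > threshold).astype(np.float32)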
In this section and throughout the remainder of the paper, we follow the experimental framework from BID2 and perform iterative pruning experiments on a fully-connected network (FC) trained on MNIST BID7 and on three convolutional networks (Conv2, Conv4, and Conv6) trained on CIFAR-10 BID6. For more achitecture and training details, see Section S2.Results of these pruning experiments are shown in FIG0. Note that the first six criteria out of the eight form three opposing pairs; in each case, we see when one member of the pair performs better than the random baseline, the opposing member performs worse than it. We see that the mask criterion large_final, as employed by the LT algorithm, is indeed a competitive mask criteria. However, the magnitude_increase criterion turns out to work just as well as the large_final criterion, and in some cases significantly better. In Section 4, we provide an explanation to why some mask criteria work well while others don't.3 Mask-1 actions: show me a signNow that we have explored various ways of choosing which weights to keep and prune, we will consider what values to set for the kept weights. In particular, we want to explore an interesting observation in BID2 which showed that the pruned, skeletal LT networks train well when you rewind to its original initialization, but degrades in performance when you randomly reinitialize the network. Why does reinitialization cause LT networks to train poorly? Which components of the original initialization are important? We evaluate a number of variants of reinitialization to investigate:• "Reinit" experiments: reinitialize kept weights based on the original initialization distribution • "Reshuffle" experiments: reinitialize while respecting the original distribution of remaining weights in that layer by reshuffling the kept weights' initial values • "Constant" experiments: reinitialize by setting mask-1 weight values to a positive or negative constant, with the constant set to be the standard deviation of each layer's original initialization. Thus every weight on a layer becomes one of three values: −α, 0, or α. We find that none of these three variants alone are able to train as well as the original LT network, shown as dashed lines in FIG1. However, all three variants work better when we ensure that the new values of the kept weights are of the same sign as their original initial values. These are shown as solid color lines in FIG1. Clearly, the common factor in all working variants including the original rewind action is the sign. As long as you keep the sign, reinitialization is not a deal breaker; in fact, even setting all kept weights to a constant value consistently performs well! What should we do with weights that are pruned? Typical network pruning procedures perform two actions on pruned weights: set them to zero, and freeze them in subsequent training. However, it is unclear which of these two components leads to the increased performance in LT networks. To separate the two factors, we run a simple experiment: we reproduce the LT iterative pruning experiments in which network weights are masked out in alternating train/mask/rewind cycles, but try an additional treatment: freeze masked weights at their initial values instead of at zero. If pruned weights are truly unimportant, we would expect that setting them to any other sensible values, such as their original initializations, should lead to a similarly performing network. Figure 3 shows the for this experiment. 
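For concreteness, the sign-preserving reinitialization variants discussed above can be sketched as follows; the function names are ours and per-layer application is assumed.

import numpy as np

def constant_init_sign(w_init, mask):
    # "Constant, init sign": every kept weight becomes +/- alpha, with alpha the standard
    # deviation of the layer's original initialization and the sign taken from w_init.
    alpha = w_init.std()
    return mask * np.sign(w_init) * alpha

def reshuffle_init_sign(w_init, mask, seed=0):
    # "Reshuffle, init sign": shuffle the kept weights' initial magnitudes within the layer,
    # then restore each weight's original sign.
    rng = np.random.default_rng(seed)
    magnitudes = np.abs(w_init[mask == 1])
    rng.shuffle(magnitudes)
    out = np.zeros_like(w_init)
    out[mask == 1] = magnitudes * np.sign(w_init[mask == 1])
    return out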
We find that networks perform significantly better when weights are frozen specifically at zero than at random initial values. For these networks masked via the LT large_final criterion, zero would seem to be a particularly good value to set weights to when they had small final values. So why does zero work better than initial values? One hypothesis is that the mask criterion we use tends to mask to zero those weights that were headed toward zero anyway. To test out this hypothesis, we run another experiment interpolated between the previous two: for any weight to be frozen, we freeze it to zero if it moved toward zero over the course of training, and we freeze it at its random initial value if it moved away from zero. Results are shown in Figure 3. By setting only the selected subset of pruned weights to zero, we fully recover the performance of the original LT networks. This supports our hypothesis that the benefit derived from freezing values to zero comes from the fact that those values were moving toward zero anyway. In fact, if we apply this treatment to all weights, including the kept weights, we can outperform even the original LT networks. The hypothesis above suggests that for certain mask criteria, like large_final, that masking is training: the masking operation tends to move weights in the direction they would have moved during training. If so, just how powerful is this training operation? To answer this question, we can start all the way from the beginning-not training the network at all, but simply applying a mask to the randomly initialized network. It turns out that with a well-chosen mask, an untrained network can already attain a test accuracy far better than chance. Although it is not entirely implausible to have better-than-chance performance since the masks are derived from the training process, the large improvement in performance is still surprising because the only transmission of information from the training back to the initial network is via a zero-one mask based on a simple criterion. We call these masks that can produce better-than-chance accuracy without training of the underlying weights "Supermasks". Figure 3: Performance of various treatments of pruned weights for Conv4 on CIFAR-10. Horizontal black line represents the performance of the original, unpruned network. Solid blue line represents networks trained using the LT algorithm, which freeze pruned weights at zero. Dotted blue line represents networks where pruned weights are frozen to their initial values. Dashed grey line represents networks trained using the new proposed scheme for pruned weights: freeze pruned weights at zero if they decreased in magnitude by the end of training, otherwise freeze them at their initialization values. Dotted grey line represents networks trained with the new proposed scheme apply to all weights by initializing kept weights to zero if they decreased in magnitude. Performance on other models are shown in FIG4.We turn our attention to evaluating how various mask criteria perform as Supermasks. In addition to evaluating the mask criteria from Section 2, we define a new large_final_same_sign mask criterion based on the demonstration in Section 3 of the importance of signs and of keeping large weights. The large_final_same_sign mask criterion selects for weights with large final magnitudes that also maintained the same sign by the end of training. This criterion is depicted in Figure S7. Also included as a control is the large_final_diff_sign. 
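Two of the ingredients introduced in this section, the directional freezing of pruned weights and the sign-aware Supermask criteria, can be sketched as follows; the names are illustrative.

import numpy as np

def freeze_pruned(w_init, w_final, mask):
    # Pruned weights are frozen at zero when training moved them toward zero,
    # and at their original initial value otherwise; kept weights rewind to w_init.
    toward_zero = np.abs(w_final) < np.abs(w_init)
    frozen = np.where(toward_zero, 0.0, w_init)
    return np.where(mask == 1, w_init, frozen)

def large_final_same_sign(w_init, w_final):
    # Supermask criterion: large final magnitude and an unchanged sign.
    return np.abs(w_final) * (np.sign(w_init) == np.sign(w_final))

def large_final_diff_sign(w_init, w_final):
    # Control criterion: large final magnitude but a flipped sign.
    return np.abs(w_final) * (np.sign(w_init) != np.sign(w_final))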
Performances of Supermasks produced by all 10 criteria are included in FIG2, compared with two baselines: networks untrained and unmasked (untrained_baseline) and networks fully trained (trained_baseline). By using this simple mask criterion of large_final_same_sign, we can create networks that obtain a remarkable 80% test accuracy on MNIST and 24% on CIFAR-10 without training. Another curious observation is that if we apply the mask to a signed constant (as described in Section 3) rather than the actual initial weights, we can produce even higher test accuracy of up to 86% on MNIST and 41% on CIFAR-10! Detailed across network architectures, pruning percentages, and these two treaments, are shown in FIG6. We find it fascinating that these Supermasks exist and can be found via such simple criteria. As an aside, they also present a method for network compression, since we only need to save a binary mask and a single random seed to reconstruct the full weights of the network. for: Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask S1 Lottery Ticket Pruning Algorithm We describe the lottery ticket algorithm in more detail: 0. Initialize a mask m to all ones. Randomly initialize the parameters w of a network f (x; w m) 1. Train the parameters w of the network f (x; w m) to completion. Denote the initial weights before training w i and the final weights after training w f.2. Mask Criterion. Use the mask criterion M (w i, w f) to produce a masking score for each currently unmasked weight. Rank the weights in each layer by their scores, set the mask value for the top p% to 1, the bottom (100 − p)% to 0, breaking ties randomly. Here p may vary by layer, and we follow the ratios chosen in BID2, summarized in Table S1. In BID2 the mask selected weights with large final value corresponding to M (w i, w f) = |w f |. In Section 2 we consider other mask criteria.3. Mask-1 Action. Take some action with the weights with mask value 1. In BID2 these weights were reset to their initial values and marked for training in the next round. We consider this and other mask-1 actions in Section 3.4. Mask-0 Action. Take some action with the weights with mask value 0. In BID2 these weights were pruned: set to 0 and frozen during any subsequent training. We consider this and other mask-0 actions in Section 4. Table S1 contains the architectures used in this study, together with relevant training hyperparameters, based off of experiments in BID2. Table S1: The architectures used in this paper. Table reproduced and modified from BID2. Conv networks use 3x3 convolutional layers with max pooling followed by fully connected layers. FC layer sizes are from BID7. Figure S2: Mask criteria studied in this section, starting with large_final that was used in BID2. Names we use to refer to the various methods are given along with the formula that projects each (w i, w f) pair to a score. Weights with the largest scores (colored regions) are kept, and weights with the smallest scores (gray regions) are pruned. The x axis in each small figure is w i and the y axis is w f. In two methods, α is adjusted as needed to align percentiles between w i and w f. When masks are created, ties are broken randomly, so the random method just assigns a score of 0 to every weight. FIG0 shows the representation of large_final mask criterion on a 2D plane. FIG1 shows the formulation and visual depiction of all the mask criteria considered in Section 2. 
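The pruning loop formed by steps 0-4 above can be sketched for a single weight tensor as follows; the train function, pruning fraction, and number of rounds are placeholders, and in practice each layer is pruned at its own rate.

import numpy as np

def iterative_lottery_ticket(w_init, train, score_fn, prune_frac=0.2, rounds=5):
    # train(weights, mask) is assumed to train to completion with mask-0 weights frozen
    # at zero and to return the final trained weights.
    mask = np.ones_like(w_init)
    for _ in range(rounds):
        w_final = train(w_init * mask, mask)                  # step 1: train from the rewound weights
        scores = score_fn(w_init, w_final)                    # step 2: mask criterion, e.g. |w_final|
        scores[mask == 0] = -np.inf                           # only currently unmasked weights are ranked
        keep = int(round(mask.sum() * (1.0 - prune_frac)))
        top = np.argsort(scores, axis=None)[-keep:]
        mask = np.zeros_like(mask)
        mask.flat[top] = 1.0                                  # steps 3-4: rewind kept weights, prune the rest
    return mask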
Figure S3 shows the convergence speed and performance of all mask criteria for FC on MNIST and Conv2, 4, 6 on CIFAR-10. FIG2 shows the convergence speed and performance of various reinitialization methods for FC on MNIST and Conv2, 4, 6 on CIFAR-10. FIG4 shows the performance of various treatments of pruned weights for FC on MNIST and Conv2, 4, 6 on CIFAR-10. FIG5 illustrates why the large_final criterion biases weights that were moving toward zero during training toward zero in the mask, effectively pushing them further in the direction they were headed. It also illustrates why the large_final criterion creates Supermasks. Figure S7 depicts the effect of Supermasks, as well as the two additional mask criteria considered only as Supermasks. FIG6 shows the performance of various mask criteria on initial test accuracy for FC on MNIST and Conv2, 4, 6 on CIFAR-10. Now that we know Supermasks exist, and that those derived from simple heuristics work remarkably well, we might wonder how far we can push the performance of Supermasks for a given network. One could search in the space of all 2^n possible masks, where n is the number of parameters in the network. We can also try learning masks with regular optimizers. To do that, we create a trainable mask variable for each layer while freezing all original parameters for that layer at their random initialization values. For an original weight tensor w and a mask tensor m of the same shape, the effective weight is w' = w_i ⊙ g(m), where w_i denotes the initial values the weights are frozen at, ⊙ is element-wise multiplication, and g is a point-wise function that transforms a matrix of continuous values into binary values. One example of g is ⌊S(m)⌉, where S is the sigmoid function and ⌊·⌉ denotes rounding to the nearest integer. Bias terms are added as usual to the product of w' with the inputs, as per the usual fully connected or convolutional kernels. We train the masks with g(m) = Bern(S(m)), where Bern(p) is the Bernoulli sampler with probability p. It works slightly better than ⌊S(m)⌉. The Bernoulli sampling adds some stochasticity that helps with training, mitigates the bias of all mask values starting at the same value, and in effect uses the expected value of S(m), which is especially useful when values are close to 0.5. By training the m matrix with SGD, we obtained up to 95.3% test accuracy on MNIST and 65.4% on CIFAR-10. Results are shown in FIG6, along with all the heuristic-based, unlearned Supermasks. Note that there is no straightforward way to control for the pruning percentage. What we do is initialize m with constants of different magnitudes, whose value nudges the network toward pruning more or less. With this tactic we are able to produce masks with amounts of pruning (percentages of zeros) ranging from 7% to 89%. Further details about the training can be seen in Section S6. TAB4 summarizes the best test accuracy obtained through the different treatments. The results show a striking improvement of learned Supermasks over heuristic-based ones, and learning Supermasks results in performance not far from that of training the full network. This suggests that a network upon initialization already contains powerful subnetworks that work well. Additionally, learning the Supermask allows identifying a possibly optimal pruning rate for each layer, since each layer is free to learn the distribution of 0s in m on its own. For instance, in BID2 the last layer of each network is designed to be pruned approximately half as much as the other layers; in our setting this ratio is adjusted automatically.
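A sketch of such a trainable-mask layer is given below; the straight-through gradient estimator is our assumption about a detail not spelled out above, while the frozen weights and bias follow the training setup described below.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SupermaskLinear(nn.Module):
    # Frozen random weights w_i; only the per-weight scores m are trained.
    # Effective weight: w_i elementwise-multiplied with Bernoulli(sigmoid(m)).
    def __init__(self, in_features, out_features, m_init=0.0):
        super().__init__()
        w = torch.randn(out_features, in_features) / in_features ** 0.5
        self.register_buffer("w_init", w)                        # frozen at its random initialization
        self.register_buffer("bias", torch.zeros(out_features))  # bias frozen as well (here simply zero)
        self.m = nn.Parameter(torch.full_like(w, m_init))        # m_init nudges the layer toward pruning more or less

    def forward(self, x):
        p = torch.sigmoid(self.m)
        sample = torch.bernoulli(p)
        g_m = sample.detach() + p - p.detach()   # straight-through: forward uses the 0/1 sample, backward uses sigmoid(m)
        return F.linear(x, self.w_init * g_m, self.bias)

The m_init constant plays the role described above: larger values keep more weights active, smaller values prune more aggressively.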
We train the networks with mask m for each layer (and all regular kernels and biases frozen) with SGD and 0.9 momentum. The {FC, Conv2, Conv4, Conv6} networks used learning rates of {100, 100, 50, 20} respectively and trained for {2000, 2000, 1000, 800} iterations. These hyperparameters may seem absurd, but a network of masks is quite different and cannot train well with typical learning rates. Conv4 and Conv6 showed significant overfitting, so we used early stopping since we were unable to use standard regularization techniques. For evaluation, we also use Bernoulli sampling, but average the accuracies over 10 independent samples. For adjusting the amount pruned, we initialized m in every layer to the same constant, which ranged from -5 to 5. In the future it may be worth trying different initializations of m for each layer for more granular control over per-layer pruning rates. A different method to try would be to add an L1 loss nudging layers toward certain values, which may alleviate the cold-start problem of some networks not learning anything because mask values start too low (effectively having the entire network start at zero). Figure S3: Performance of different mask criteria for four networks at various pruning rates. We show the early stopping iteration on the left and test accuracy on the right. Each line is a different mask criterion, with bands around magnitude_increase, large_final, movement, and random depicting the min and max over 5 runs. Stars represent points that are significantly above all other lines at a p = 0.05 level. large_final and magnitude_increase show the best convergence speed and accuracy, with magnitude_increase having slightly higher accuracy in Conv2 and Conv4. As expected, criteria using small weight values consistently perform worse than random. FIG2: The effects of various 1-actions for the four networks and various pruning rates. Dotted lines represent the three described methods, and solid lines are those three except with each weight having the same sign as its original initialization. Shaded bands around notable runs depict the min and max over 5 runs. Stars represent points that are significantly above all other lines at a p = 0.05 level. We also include the original rewinding method and random reinitialization as baselines. "Reshuffle, init sign" and "constant, init sign" perform similarly to the "rewind" baseline. Figure S7: (left) Untrained networks perform at chance (10% accuracy, for example, on the MNIST dataset as depicted) if they are randomly initialized, or randomly initialized and randomly masked. However, applying the large_final mask improves the network accuracy beyond the chance level. (right) The large_final_same_sign mask criterion, which tends to produce the best Supermasks. In contrast to the large_final mask in FIG0, this criterion masks out the quadrants where the signs of w_i and w_f differ. We include large_final_diff_sign as a control. No training is performed in any network; weights are frozen at either their initialization or a constant, and various masks are applied. Within heuristic-based Supermasks (excluding learned_mask), the large_final_same_sign mask creates the highest-performing Supermask by a wide margin. Note that aside from the five independent runs performed to generate uncertainty bands for this plot, every data point on the plot is the same underlying network, just with different masks.
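A self-contained toy sketch of the evaluation protocol mentioned above, where accuracy is averaged over 10 independent Bernoulli samples of the mask; the model, data, and mask logits here are placeholders, not the paper's networks.

```python
# Sketch only: average accuracy over 10 Bernoulli mask samples for a toy linear model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 20))                   # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # toy binary labels
w_init = rng.standard_normal(20)                     # frozen random weights
m = rng.standard_normal(20)                          # trained mask logits (placeholder)

def accuracy_once(rng):
    mask = rng.binomial(1, 1.0 / (1.0 + np.exp(-m))) # sample Bern(S(m))
    logits = X @ (w_init * mask)                     # effective weights w_init ⊙ mask
    return np.mean((logits > 0).astype(int) == y)

acc = np.mean([accuracy_once(rng) for _ in range(10)])  # average over 10 samples
```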
In neural network pruning, zeroing pruned weights is important, sign of initialization is key, and masking can be thought of as training.
1,345
scitldr
Fine-tuning with pre-trained models has achieved exceptional results for many language tasks. In this study, we focused on one such self-attention network model, namely BERT, which stacks self-attention layers and has performed well across diverse language-understanding benchmarks. However, in many downstream tasks, information between layers is ignored by BERT during fine-tuning. In addition, although self-attention networks are well known for their ability to capture global dependencies, room for improvement remains in terms of emphasizing the importance of local contexts. In light of these advantages and disadvantages, this paper proposes SesameBERT, a generalized fine-tuning method that enables the extraction of global information among all layers through Squeeze and Excitation and enriches local information by capturing neighboring contexts via Gaussian blurring. Furthermore, we demonstrated the effectiveness of our approach on the HANS dataset, which is used to determine whether models have adopted shallow heuristics instead of learning underlying generalizations. The experiments revealed that SesameBERT outperformed BERT with respect to the GLUE benchmark and the HANS evaluation set. In recent years, unsupervised pretrained models have dominated the field of natural language processing (NLP). The construction of a framework for such a model involves two steps: pretraining and fine-tuning. During pretraining, an encoder neural network model is trained using large-scale unlabeled data to learn word embeddings; parameters are then fine-tuned with labeled data related to downstream tasks. Traditionally, word embeddings are vector representations learned from large quantities of unstructured textual data such as those from Wikipedia corpora. Each word is represented by an independent vector, even though many words are morphologically similar. To solve this problem, techniques for contextualized word representation have been developed; some have proven to be more effective than conventional word-embedding techniques, which extract only local semantic information of individual words. By contrast, pretrained contextual representations learn sentence-level information from sentence encoders and can generate multiple word embeddings for a word. Pretraining methods related to contextualized word representation, such as BERT, OpenAI GPT, and ELMo, have attracted considerable attention in the field of NLP and have achieved high accuracy in GLUE tasks such as single-sentence, similarity and paraphrasing, and inference tasks. Among the aforementioned pretraining methods, BERT, a state-of-the-art network, is the leading method that applies the architecture of the Transformer encoder, which outperforms other models with respect to the GLUE benchmark. BERT's performance suggests that self-attention is highly effective in extracting the latent meanings of sentence embeddings. This study aimed to improve contextualized word embeddings, which constitute the output of encoder layers to be fed into a classifier. We used the original method of the pretraining stage in the BERT model. During the fine-tuning process, we introduced a new architecture known as Squeeze and Excitation alongside Gaussian blurring with symmetrically SAME padding ("SESAME" hereafter). First, although the developers of the BERT model initially presented several options for its use, it was unclear whether the selective layer approaches involved the information contained in all layers.
In a previous study, by investigating relationships between layers, we observed that the Squeeze and Excitation method is key for focusing on information between layer weights. This method enables the network to perform feature recalibration and improves the quality of representations by selectively emphasizing informative features and suppressing redundant ones. Second, the self-attention mechanism enables a word to analyze other words in an input sequence; this process can lead to more accurate encoding. The main benefit of the self-attention mechanism method is its high ability to capture global dependencies. Therefore, this paper proposes the strategy, namely Gaussian blurring, to focus on local contexts. We created a Gaussian matrix and performed convolution alongside a fixed window size for sentence embedding. Convolution helps a word to focus on not only its own importance but also its relationships with neighboring words. Through such focus, each word in a sentence can simultaneously maintain global and local dependencies. We conducted experiments with our proposed method to determine whether the trained model could outperform the BERT model. We observed that SesameBERT yielded marked improvement across most GLUE tasks. In addition, we adopted a new evaluation set called HANS, which was designed to diagnose the use of fallible structural heuristics, namely the lexical overlap heuristic, subsequent heuristic, and constituent heuristic. Models that apply these heuristics are guaranteed to fail in the HANS dataset. For example, although BERT scores highly in the given test set, it performs poorly in the HANS dataset; BERT may label an example correctly not based on reasoning regarding the meanings of sentences but rather by assuming that the premise entails any hypothesis whose words all appear in the premise . By contrast, SesameBERT performs well in the HANS dataset; this implies that this model does not merely rely on heuristics. In summary, our final model proved to be competitive on multiple downstream tasks. Most related studies have used pretrained word vectors as the primary components of NLP architectures. This is problematic because word vectors capture semantics only from a word's surrounding text. Therefore, a vector has the same embedding for the same word in different contexts, even though the word's meaning may be different. Pretrained contextualized word representations overcome the shortcomings of word vectors by capturing the meanings of words with respect to context. ELMo can extract contextsensitive representations from a language model by using hidden states in stacked LSTMs. Generative pretraining uses the "Transformer encoder" rather than LSTMs to acquire textual representations for NLP downstream tasks; however, one limitation of this model is that it is trained to predict future left-to-right contexts of a unidirectional nature. BERT involves a masked language modeling task and achieves high performance on multiple natural language-understanding tasks. In BERT architecture, however, because the output data of different layers encode a wide variety of information, the most appropriate pooling strategy depends on the case. Therefore, layer selection is a challenge in learning how to apply the aforementioned models. The Squeeze and Excitation method was introduced by , who aimed to enhance the quality of representations produced by a network. Convolutional neural networks traditionally use convolutional filters to extract informative features from images. 
Such extraction is achieved by fusing the spatial and channel-wise information of the image in question. However, the channels of such networks' convolutional features have no interdependencies with one another. The network weighs each of its channels equally during the creation of output feature maps. Through Squeeze and Excitation, a network can take advantage of feature recalibration and use global information to emphasize informative features and suppress less important ones. The self-attention network relies on an attention mechanism to capture global dependencies without considering their distances, by calculating all the positions in an input sequence. Our Gaussian-blurring method focuses on learning local contexts while maintaining a high ability to capture long-range dependencies. Localness modeling has been considered as a learnable form of Gaussian bias in which a central position and dynamic window are predicted alongside intermediate representations in a neural network. However, instead of using Gaussian bias to mask the logit similarity of a word, we applied Gaussian bias in the layer after the embedding layer to demonstrate that performing element-wise operations in this layer can improve the model performance. A recent study investigated whether neural network architectures are prone to adopting shallow heuristics to achieve success on training examples rather than learning the underlying generalizations that need to be captured. For example, in computer vision, neural networks trained to recognize objects are misled by contextual heuristics in cases of monkey recognition. Similarly, in the field of natural language inference (NLI), a model may predict a label that contradicts the input because the word "not", which often appears in examples of contradiction in standard NLI training sets, is present. In the present study, we aimed to make SesameBERT robust with respect to all training sets. Consequently, our experiments used the HANS dataset to diagnose some fallible structural heuristics presented in this paper. We focused on BERT, which is the encoder architecture of a multilayer Transformer, featuring some improvements. The encoder consists of L encoder layers, each containing two sublayers, namely a multihead self-attention layer and a feed-forward network. The multihead mechanism runs through a scaled dot-product attention function, which can be formulated as querying a dictionary entry with key-value pairs. The self-attention input consists of a query Q ∈ R^{l×d}, a key K ∈ R^{l×d}, and a value V ∈ R^{l×d}, where l is the length of the input sentence and d is the dimension of embedding for query, key, and value. For subsequent layers, Q, K, and V come from the output of the previous layer. The scaled dot-product attention is defined as softmax(QK^T/√d)V: the output is the multiplication of the attention weights A and the values V, where A = softmax(QK^T/√d). The attention weights A_{i,j} help us understand the importance of the i-th key-value pair with respect to the j-th query in generating the output. During fine-tuning, we used the output encoder layers from the pretrained BERT model to create contextualized word embeddings and feed these embeddings into the model. Although several methods have been developed for extracting contextualized embeddings from various layers, we believed that these methods had substantial room for improvement. Therefore, we used Squeeze and Excitation to solve the aforementioned problem.
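As a rough illustration of how such layer-wise Squeeze and Excitation can be applied to the stacked encoder outputs, here is a minimal NumPy sketch; the precise formulation follows in the next paragraph. The ReLU non-linearity, the random gate weights, and the plain averaging at the end are assumptions added for illustration only.

```python
# Sketch only: squeeze/excite/rescale over the "layer" axis of stacked encoder outputs.
import numpy as np

def squeeze_excite_layers(U, r=4, rng=np.random.default_rng(0)):
    l, d, n = U.shape
    z = U.mean(axis=(0, 1))                       # squeeze: one descriptor per layer, z in R^n
    W1 = rng.standard_normal((n // r, n)) * 0.1   # bottleneck of size n/r (random, illustrative)
    W2 = rng.standard_normal((n, n // r)) * 0.1
    s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))  # excitation: sigmoid gate s in R^n
    u = U * s                                     # rescale layer k by s_k (broadcast over l, d)
    return u, u.mean(axis=-1)                     # rescaled maps and an averaged layer

U = np.random.default_rng(1).standard_normal((128, 768, 12))  # e.g. l=128, d=768, n=12 layers
u, u_avg = squeeze_excite_layers(U)
```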
In this study, we proposed the application of Squeeze and Excitation; its application to the output of the encoder layers was straightforward once we realized that the number of channels was equivalent to the number of layers. Therefore, we use the terms channels and layers interchangeably. First, we defined U_{:,:,k} as the output of the k-th encoder layer, for all 1 ≤ k ≤ n. We wanted to acquire global information from between the layers before feeding the input into the classifier; therefore, we concatenated all the outputs from each encoder layer to form the feature maps U ∈ R^{l×d×n}. In the squeeze step, by using global average pooling on the k-th layer, we were able to squeeze the global spatial information into a layer descriptor. In other words, we set the k-th layer's output of the squeeze function as Z_{:,:,k}. Figure 1: We extracted the output from each layer of the encoders and concatenated all the layers to form a three-dimensional tensor U. We then performed Squeeze f_sq(U) and Excitation f_ex(f_sq(U)) to obtain the weight of each output layer. Finally, we fed the weighted average of all layers into the classifier. In this work we employed n = 12 attention layers. In the excitation step, we aimed to fully capture layer-wise dependencies. This method uses the layer-wise output of the squeeze operation f_sq to modulate interdependencies of all layers. Excitation is a gating mechanism with a sigmoid activation function that contains two fully connected layers. Let W_1 and W_2 be the weights of the first and second fully connected layers, respectively, and let r be the bottleneck in the layer excitation that encodes the layer-wise dependencies; therefore, W_1 ∈ R^{n×(n/r)} and W_2 ∈ R^{(n/r)×n}. The excitation function f_ex passes z, the vector squeezed from tensor Z, through the two fully connected layers and the sigmoid gate to produce the layer weights s. Finally, we rescaled the output Z_{:,:,k} by multiplying it by s_k; the rescaled output is denoted as u_k, that is, the scaling function is f_scale(Z_{:,:,k}, s_k) = s_k · Z_{:,:,k} = u_k. We concatenated all rescaled outputs from all encoder layers to form our rescaled feature maps u. The architecture is shown in Figure 1. We then extracted layers from the rescaled feature maps, or calculated a weighted average layer u_avg. Given an input sequence X = {x_1, x_2, ..., x_l} ∈ R^{l×d}, the model transformed it into queries Q, keys K, and values V, where Q, K, V ∈ R^{l×d}. Multihead attention enables the model to jointly attend to information from different representation subspaces at different positions. Thus, the three types of representations are split into h subspaces of size d/h. To capture the local dependency related to each word, we first used a predefined fixed window size k to create a Gaussian blur kernel g ∈ R^k, with entries proportional to exp(−i²/(2σ²)) for offsets i centered on the current position, where σ refers to the standard deviation. Several Gaussian-blurring strategies are feasible for applying convolutional operations to attention outputs. Figure 2: Diagram of a one-dimensional Gaussian blur kernel, which is convolved through the input dimension. This approach enables the central word to acquire information concerning neighboring words with weights proportional to the Gaussian distribution. The first strategy focuses on each attention output O^h. We restrict Ô^h_{i,j} to a local scope with a fixed size k centered at position i and dimension j, where 1 ≤ j ≤ d and k can be any odd number between 1 and l. We then enhance the localness of Ô^h_{i,j} through a parameter-free 1D convolution operation with g.
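A minimal NumPy sketch of this 1-D Gaussian blurring: a fixed kernel g of window size k is convolved along the sentence dimension of an attention output, independently for each dimension j. The kernel normalisation and the "same"-length padding choice are assumptions; the text gives only the Gaussian form with standard deviation σ.

```python
# Sketch only: 1-D Gaussian blur along the sequence dimension of an (l x d) tensor.
import numpy as np

def gaussian_kernel(k, sigma):
    offsets = np.arange(k) - k // 2
    g = np.exp(-offsets**2 / (2.0 * sigma**2))
    return g / g.sum()                                   # assumed normalisation

def gaussian_blur(O, k=3, sigma=0.5):
    g = gaussian_kernel(k, sigma)
    # same-length convolution along positions, applied per dimension j
    return np.stack([np.convolve(O[:, j], g, mode="same") for j in range(O.shape[1])], axis=1)

O = np.random.default_rng(0).standard_normal((10, 4))    # l=10 positions, d=4 dims
O_blurred = gaussian_blur(O, k=3, sigma=0.5)             # each word mixes with its neighbours
```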
The final attention output is O^h = Ô^h * g, the dot product between the Gaussian kernel and the corresponding input array elements at every position of Ô^h, where * denotes the convolution operation, as illustrated in Figure 2. Another option focuses on the values V. We applied the aforementioned method again but restricted V^h to a local scope; the final attention output O^h is then computed with the blurred values in place of V^h. The difference between this method and blurring the attention outputs is that blurring the attention outputs places greater emphasis on the interaction of cross-query vectors, whereas the present method focuses on cross-value vectors. Finally, the outputs of the h attention heads are concatenated to form the final output representation O ∈ R^{l×d}. The multihead mechanism enables each head to capture distinct linguistic input properties. Furthermore, because our model is based on BERT, which builds an encoder framework with a stack of 12 layers, we were able to apply locality modeling to all layers through Squeeze and Excitation. Therefore, we expected that the global information and local properties captured by all layers could be exploited. We evaluated the proposed SesameBERT model by conducting multiple classification tasks. For comparison with the results of a previous study on BERT, we reimplemented the BERT model in TensorFlow in our experiments. In addition, we set most of the parameters to be identical to those in the original BERT model, namely, batch size: 16, 32; learning rate: 5e-5, 3e-5, 2e-5; and number of epochs: 3, 4. All of the results in this paper can be replicated in no more than 12 hours by a graphics processing unit with the nine GLUE datasets. We trained all of the models in the same computation environment with an NVIDIA Tesla V100 graphics processing unit. The GLUE benchmark is a collection of nine natural language-understanding tasks, including question answering, sentiment analysis, identification of textual similarities, and recognition of textual entailment. The GLUE datasets were employed because they are sets of tools used to evaluate the performance of models for a diverse set of existing NLU tasks. The datasets and metrics used for the experiments in this study are detailed in Appendix A. We used a new evaluation set, namely the HANS dataset, to diagnose fallible structural heuristics presented in a previous study based on syntactic properties. More specifically, models might apply accurate labels not based on reasoning regarding the meanings of words but rather by assuming that the premise entails any hypothesis whose words all appear in the premise. Furthermore, an instance that contradicts the lexical overlap heuristic in MNLI is likely too rare to prevent a model from learning heuristics. Models may learn to assume that a label is contradictory whenever a negation word is contained in the premise but not the hypothesis. Therefore, whether a model scored well on a given test set because it relied on heuristics can be observed. For example, BERT performed well on MNLI tasks but poorly on the HANS dataset; this finding suggested that the BERT model employs the aforementioned heuristics. The main difference between the MNLI and HANS datasets is their numbers of labels. The MNLI dataset has three labels, namely Entailment, Neutral, and Contradiction. In the HANS dataset, instances labeled as Contradiction or Neutral are translated into non-entailment.
Therefore, this dataset has only two labels: Entailment and Non-entailment. The HANS dataset targets three heuristics, namely Lexical overlap, Subsequence, and Constituent, with more details in Appendix B. This dataset not only serves as a tool for measuring progress in this field but also enables the visualization of interpretable shortcomings in models trained using MNLI. This subsection presents the experimental results of the baseline model and the models trained using our proposed method. We performed Gaussian blurring on attention outputs in the experiment. In addition, we employed a batch size of 32, a learning rate of 3e-5, and 3 epochs over the data for all GLUE tasks. We fine-tuned the SesameBERT model on the 9 downstream tasks in the datasets. For each task, we performed fine-tuning with Gaussian blur kernel sigmas 1e-2, 1e-1, 3e-1, and 5e-1 and selected the one with the most favorable performance on the dev set. Because the GLUE datasets do not distribute labels for test sets, we uploaded our predictions to the GLUE server for evaluation. The results are presented in Table 1; the GLUE benchmark is provided for reference. In most tasks, our proposed method outperformed the original BERT-Base model. For example, in the RTE and AX datasets, SesameBERT yielded improvements of 1.2% and 1.6%, respectively. We conducted experiments on the GLUE datasets to test the effects of Gaussian blurring alongside BERT on the value layer and context layer. Table 2 shows the degrees of accuracy on the dev set. The performance of Gaussian blurring with respect to self-attention layers varied among cases. Table 1: Test results in relation to the GLUE benchmark. The metrics for these tasks, shown in Appendix A, were calculated using a GLUE score. We compared our SesameBERT model with the original BERT-Base model, ELMo, and OpenAI GPT. A previous study demonstrated that different layers vary in terms of their abilities to distinguish and capture neighboring positions and global dependency between words. We evaluated the weights learned from all layers. These weights indicated that a heavier weight represents greater importance. The results are shown in Appendix C. Because the lower layers represent word embeddings that are deficient in terms of context, the self-attention model in the lower layers may need to encode representations with global context and may struggle to learn localness. Table 3 shows the degree of accuracy predicted by each extracted attention output layer method. The results indicated that the lower layers had lower accuracy. We performed three ablation studies. First, we examined the performance of our method without blurring; we observed that Squeeze and Excitation helped the higher layers. This trend suggested that higher layers benefit more than do lower layers from Squeeze and Excitation. Second, we analyzed the effect of Gaussian blurring on the context layer. The results revealed that the method with blurring achieved higher accuracy in lower layers. We assumed that capturing short-range dependencies among neighboring words in lower layers is an effective strategy. Even if self-attention models capture long-range dependencies beyond phrase boundaries in higher layers, modeling localness remains helpful. Finally, we observed the direct effects of SesameBERT. Although our proposed architecture performed poorly in lower layers, it outperformed the other methods in higher layers.
This finding indicated that in higher layers, using Squeeze and Excitation alongside Gaussian blurring helps self-attention models to capture global information in all layers. We trained both BERT and SesameBERT on the MNLI-m dataset to evaluate their classification accuracy. Similar to the results of another study, BERT achieved 84.6% accuracy, which is higher than that of SesameBERT, as shown in Table 1. In the HANS dataset, we explored the effects of the two models on each type of heuristic. The results are presented in Figure 3; we first examined heuristics for which the label was Entailment. We can see that both models performed well; they assigned the correct labels almost 100% of the time, as we had expected them to do after adopting the heuristics targeted by HANS. Next, we evaluated the heuristics labeled as Non-entailment. BERT performed poorly for all three cases, meaning that BERT assigned correct labels based on heuristics instead of applying the correct rules of inference. By contrast, our proposed method performed almost three times as well as BERT in the case of "Lexical overlap". BERT performed poorly in all three cases shown in the right panel of Figure 3; this indicated that the model had adopted shallow heuristics rather than learning the latent information that it intended to capture. This paper argues that capturing local contexts for self-attention networks with Gaussian blurring can prevent models from easily adopting heuristics. Although our models performed poorly in cases of "Subsequence" and "Constituent", both of these heuristics may be hierarchical cases of the lexical overlap heuristic, meaning that the performance on this hierarchy would not necessarily match the performance of our models. This paper proposes a fine-tuning approach named SesameBERT, based on the pretrained BERT model, to improve the performance of self-attention networks. Specifically, we aimed to find high-quality attention output layers and then extract information from all layers through Squeeze and Excitation. Additionally, we adopted Gaussian blurring to help capture local contexts. Experiments using the GLUE datasets revealed that SesameBERT outperformed the BERT baseline model. The results also revealed the weight distributions of different layers and the effects of applying different Gaussian-blurring approaches when training the model. Finally, we used the HANS dataset to determine whether our models were learning what we wanted them to learn rather than using shallow heuristics. We highlighted performance on the lexical overlap heuristic as an advantage over the BERT model. SesameBERT could be further applied to prevent models from easily adopting shallow heuristics. A DESCRIPTIONS OF GLUE DATASETS
We proposed SesameBERT, a generalized fine-tuning method that enables the extraction of global information among all layers through Squeeze and Excitation and enriches local information by capturing neighboring contexts via Gaussian blurring.
1,346
scitldr
A growing number of learning methods are actually differentiable games whose players optimise multiple, interdependent objectives in parallel – from GANs and intrinsic curiosity to multi-agent RL. Opponent shaping is a powerful approach to improve learning dynamics in these games, accounting for player influence on others’ updates. Learning with Opponent-Learning Awareness (LOLA) is a recent algorithm that exploits this response and leads to cooperation in settings like the Iterated Prisoner’s Dilemma. Although experimentally successful, we show that LOLA agents can exhibit ‘arrogant’ behaviour directly at odds with convergence. In fact, remarkably few algorithms have theoretical guarantees applying across all (n-player, non-convex) games. In this paper we present Stable Opponent Shaping (SOS), a new method that interpolates between LOLA and a stable variant named LookAhead. We prove that LookAhead converges locally to equilibria and avoids strict saddles in all differentiable games. SOS inherits these essential guarantees, while also shaping the learning of opponents and consistently either matching or outperforming LOLA experimentally. Problem Setting. While machine learning has traditionally focused on optimising single objectives, generative adversarial nets (GANs) BID9 have showcased the potential of architectures dealing with multiple interacting goals. They have since then proliferated substantially, including intrinsic curiosity BID19, imaginative agents BID20, synthetic gradients, hierarchical reinforcement learning (RL) BID23 BID22 and multi-agent RL in general BID2.These can effectively be viewed as differentiable games played by cooperating and competing agents -which may simply be different internal components of a single system, like the generator and discriminator in GANs. The difficulty is that each loss depends on all parameters, including those of other agents. While gradient descent on single functions has been widely successful, converging to local minima under rather mild conditions BID13, its simultaneous generalisation can fail even in simple two-player, two-parameter zero-sum games. No algorithm has yet been shown to converge, even locally, in all differentiable games. Related Work. Convergence has widely been studied in convex n-player games, see especially BID21; BID5. However, the recent success of non-convex games exemplified by GANs calls for a better understanding of this general class where comparatively little is known. BID14 recently prove local convergence of no-regreat learning to variationally stable equilibria, though under a number of regularity assumptions. Conversely, a number of algorithms have been successful in the non-convex setting for restricted classes of games. These include policy prediction in two-player two-action bimatrix games BID24; WoLF in two-player two-action games BID1; AWESOME in repeated games BID3; Optimistic Mirror Descent in two-player bilinear zero-sum games BID4 and Consensus Optimisation (CO) in two-player zerosum games BID15 ). An important body of work including BID10; BID16 has also appeared for the specific case of GANs. Working towards bridging this gap, some of the authors recently proposed Symplectic Gradient Adjustment (SGA), see BID0. This algorithm is provably'attracted' to stable fixed points while'repelled' from unstable ones in all differentiable games (n-player, non-convex). Nonetheless, these are weaker than strict convergence guarantees. 
Moreover, SGA agents may act against their own self-interest by prioritising stability over individual loss. SGA was also discovered independently by BID8, drawing on variational inequalities. In a different direction, Learning with Opponent-Learning Awareness (LOLA) modifies the learning objective by predicting and differentiating through opponent learning steps. This is intuitively appealing and experimentally successful, encouraging cooperation in settings like the Iterated Prisoner's Dilemma (IPD) where more stable algorithms like SGA defect. However, LOLA has no guarantees of converging or even preserving fixed points of the game. Contribution. We begin by constructing the first explicit tandem game where LOLA agents adopt 'arrogant' behaviour and converge to non-fixed points. We pinpoint the cause of failure and show that a natural variant named LookAhead (LA), discovered before LOLA by BID24, successfully preserves fixed points. We then prove that LookAhead locally converges and avoids strict saddles in all differentiable games, filling a theoretical gap in multi-agent learning. This is enabled through a unified approach based on fixed-point iterations and dynamical systems. These techniques apply equally well to algorithms like CO and SGA, though this is not our present focus. While LookAhead is theoretically robust, the shaping component endowing LOLA with a capacity to exploit opponent dynamics is lost. We solve this dilemma with an algorithm named Stable Opponent Shaping (SOS), trading between stability and exploitation by interpolating between LookAhead and LOLA. Using an intuitive and theoretically grounded criterion for this interpolation parameter, SOS inherits both strong convergence guarantees from LA and opponent shaping from LOLA. On the experimental side, we show that SOS plays tit-for-tat in the IPD on par with LOLA, while all other methods mostly defect. We display the practical consequences of our theoretical guarantees in the tandem game, where SOS always outperforms LOLA. Finally we implement a more involved GAN setup, testing for mode collapse and mode hopping when learning Gaussian mixture distributions. SOS successfully spreads mass across all Gaussians, at least matching dedicated algorithms like CO, while LA is significantly slower and simultaneous gradient descent fails entirely. We frame the problem of multi-agent learning as a game. Adapted from BID0, the following definition insists only on differentiability for gradient-based methods to apply. This concept is strictly more general than stochastic games, whose parameters are usually restricted to action-state transition probabilities or functional approximations thereof. Definition 1. A differentiable game is a set of n players with parameters θ = (θ_1, ..., θ_n) ∈ R^d and twice continuously differentiable losses L_i: R^d → R for each player i. Crucially, note that each loss is a function of all parameters. From the viewpoint of player i, parameters can be written as θ = (θ_i, θ_{-i}) where θ_{-i} contains all other players' parameters. We do not make the common assumption that each L_i is convex as a function of θ_i alone, for any fixed opponent parameters θ_{-i}, nor do we restrict θ to the probability simplex, though this restriction can be recovered via projection or sigmoid functions σ: R → (0, 1). If n = 1, the 'game' is simply to minimise a given loss function. In this case one can reach local minima by (possibly stochastic) gradient descent (GD). For arbitrary n, the standard solution concept is that of Nash equilibria.
Definition 2. A point θ̄ ∈ R^d is a (local) Nash equilibrium if for each i, there are neighbourhoods U_i of θ̄_i such that L_i(θ_i, θ̄_{-i}) ≥ L_i(θ̄) for all θ_i ∈ U_i. In other words, each player's strategy is a local best response to current opponent strategies. We write ∇_i L_k for the gradient of L_k with respect to θ_i. Define the simultaneous gradient of the game as the concatenation of each player's gradient, ξ(θ) = (∇_1 L_1, ..., ∇_n L_n) ∈ R^d. The i-th component of ξ is the direction of greatest increase in L_i with respect to θ_i. If each agent minimises their loss independently from others, they perform GD on their component ∇_i L_i with learning rate α_i. Hence, the parameter update for all agents is given by θ ← θ − α ⊙ ξ, where α = (α_1, ..., α_n) and ⊙ is element-wise multiplication. This is also called naive learning (NL), reducing to θ ← θ − αξ if agents have the same learning rate. This is assumed for notational simplicity, though irrelevant to our results. The following example shows that NL can fail to converge. Example 1. Consider L_{1/2} = ±xy (that is, L_1 = xy and L_2 = −xy), where the players control the x and y parameters respectively. The origin is a (global and unique) Nash equilibrium. The simultaneous gradient is ξ = (y, −x) and cycles around the origin. Explicitly, a gradient step from (x, y) yields (x, y) ← (x, y) − α(y, −x) = (x − αy, y + αx), which has squared distance from the origin (1 + α²)(x² + y²) > x² + y² for any α > 0 and (x, y) ≠ 0. It follows that agents diverge away from the origin for any α > 0. The cause of failure is that ξ is not the gradient of a single function, implying that each agent's loss is inherently dependent on the others. This results in a contradiction between the non-stationarity of each agent and the optimisation of each loss independently from the others. Failure of convergence in this simple two-player zero-sum game shows that gradient descent does not generalise well to differentiable games. We consider an alternative solution concept to Nash equilibria before introducing LOLA. Consider the game given by L_1 = L_2 = xy, where the players control the x and y parameters respectively. The losses can be made arbitrarily negative along the anti-diagonal direction (x, y) = (t, −t) as t grows; however, the origin is a global Nash equilibrium, while also a saddle point of xy. It is highly undesirable to converge to the origin in this game, since infinitely better losses can be reached in the anti-diagonal direction. In this light, Nash equilibria cannot be the right solution concept to aim for in multi-agent learning. To define stable fixed points, first introduce the 'Hessian' of the game as the block matrix H(θ) whose (i, j) block is ∇_{θ_j}∇_{θ_i} L_i. This can equivalently be viewed as the Jacobian of the vector field ξ. Importantly, note that H is not symmetric in general unless n = 1, in which case we recover the usual Hessian H = ∇²L. Definition 3. A point θ̄ is a fixed point if ξ(θ̄) = 0. It is stable if H(θ̄) ⪰ 0, unstable if H(θ̄) ≺ 0, and a strict saddle if H(θ̄) has an eigenvalue with negative real part. The name 'fixed point' is coherent with GD, since ξ(θ̄) = 0 implies a fixed update θ̄ ← θ̄ − αξ(θ̄) = θ̄. Though Nash equilibria were shown to be inadequate above, it is not obvious that stable fixed points (SFPs) are a better solution concept. In Appendix A we provide intuition for why SFPs are both closer to local minima in the context of multi-loss optimisation and more tractable for convergence proofs. Moreover, this definition is an improved variant on that in BID0, assuming positive semi-definiteness only at θ̄ instead of holding in a neighbourhood. This makes the class of SFPs as large as possible, while sufficient for all our theoretical results.
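Returning to Example 1 and Definition 3 above, here is a minimal NumPy sketch (my own illustration, not from the paper) that reproduces the outward spiral of naive learning and classifies the two 2-player games discussed via the game Hessian.

```python
# Sketch only: naive learning on L1 = xy, L2 = -xy diverges (squared distance grows
# by a factor (1 + alpha^2) per step); the Hessians classify the origin in both games.
import numpy as np

alpha = 0.1
theta = np.array([1.0, 1.0])                     # (x, y), away from the equilibrium at 0
for t in range(5):
    x, y = theta
    xi = np.array([y, -x])                       # simultaneous gradient (d/dx L1, d/dy L2)
    theta = theta - alpha * xi
    print(t, theta, np.linalg.norm(theta) ** 2)  # squared distance to the origin grows

def classify(H):
    S = (H + H.T) / 2.0                          # symmetric part
    if np.all(np.linalg.eigvalsh(S) >= -1e-12):
        return "stable fixed point"
    if np.any(np.linalg.eigvals(H).real < 0):
        return "strict saddle"
    return "other fixed point"

H_cyclic = np.array([[0.0, 1.0], [-1.0, 0.0]])   # L1 = xy = -L2: origin is an SFP
H_coop   = np.array([[0.0, 1.0], [ 1.0, 0.0]])   # L1 = L2 = xy: origin is a strict saddle
print(classify(H_cyclic), classify(H_coop))
```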
Assuming invertibility of H(θ̄) at SFPs is crucial to all convergence results in this paper. The same assumption is present in related work including BID15, and cannot be avoided. Even for single losses, a fixed point with singular Hessian can be a local minimum, maximum, or saddle point. Invertibility is thus necessary to ensure that SFPs really are 'local minima'. This is omitted from now on. Finally note that unstable fixed points are a subset of strict saddles, making Theorem 6 both stronger and more general than the result for SGA by BID0. Accounting for non-stationarity, Learning with Opponent-Learning Awareness (LOLA) modifies the learning objective by predicting and differentiating through opponent learning steps. For simplicity, if n = 2 then agent 1 optimises L_1(θ_1, θ_2 + ∆θ_2) with respect to θ_1, where ∆θ_2 is the predicted learning step for agent 2. The authors assume that opponents are naive learners, namely ∆θ_2 = −α_2 ∇_2 L_2. After first-order Taylor expansion, the loss is approximately given by L_1 + ∇_2 L_1 · ∆θ_2. By minimising this quantity, agent 1 learns parameters that align the opponent learning step ∆θ_2 with the direction of greatest decrease in L_1, exploiting opponent dynamics to further reduce one's losses. Differentiating with respect to θ_1, the adjustment is ∇_1 L_1 + (∇_{12} L_1) ∆θ_2 + (∇_1 ∆θ_2)^T ∇_2 L_1. By explicitly differentiating through ∆θ_2 in the rightmost term, LOLA agents actively shape opponent learning. This has proven effective in reaching cooperative equilibria in multi-agent learning, finding success in a number of games including tit-for-tat in the IPD. The middle term above was originally dropped by the authors because "LOLA focuses on this shaping of the learning direction of the opponent". We choose not to eliminate this term, as also inherent in LOLA-DiCE. Preserving both terms will in fact be key to developing stable opponent shaping. First we formulate n-player LOLA in vectorial form. Let H_d and H_o be the matrices of diagonal and anti-diagonal blocks of H, so that H = H_d + H_o; the LOLA adjustment can then be written as (I − αH_o)ξ − αχ, where χ collects the shaping terms (see Appendix B). While experimentally successful, LOLA fails to preserve fixed points θ̄ of the game, since the shaping term χ(θ̄) does not vanish in general. Even if θ̄ is a Nash equilibrium, the update θ̄ ← θ̄ − α LOLA(θ̄) ≠ θ̄ can push agents away despite parameters being optimal. This may worsen the losses for all agents, as in the game below. Example 2 (Tandem). Imagine a tandem controlled by agents facing opposite directions, who feed x and y force into their pedals respectively. Negative numbers correspond to pedalling backwards. Moving coherently requires x ≈ −y, embodied by a quadratic loss (x + y)². However it is easier for agents to pedal forwards, translated by linear losses −2x and −2y. The game is thus given by L_1(x, y) = (x + y)² − 2x and L_2(x, y) = (x + y)² − 2y. These sub-goals are incompatible, so agents cannot simply accelerate forwards. The SFPs are given by {x + y = 1}. Computing χ(x, 1 − x) ≠ 0, none of these are preserved by LOLA. Instead, we show in Appendix C that LOLA can only converge to sub-optimal scenarios with worse losses for both agents, for any α. Intuitively, the root of failure is that LOLA agents try to shape opponent learning and enforce compliance by accelerating forwards, assuming a dynamic response from their opponent. The other agent does the same, so they become 'arrogant' and suffer by pushing strongly in opposite directions. The shaping term χ prevents LOLA from preserving fixed points. Consider removing this component entirely, giving (I − αH_o)ξ.
This variant preserves fixed points, but what does it mean from the perspective of each agent? Note that LOLA optimises L_1(θ_1, θ_2 + ∆θ_2) with respect to θ_1, while ∆θ_2 is a function of θ_1. In other words, we assume that our opponent's learning step depends on our current optimisation with respect to θ_1. This is inaccurate, since opponents cannot see our updated parameters until the next step. Instead, assume we optimise L_1(θ_1, θ̂_2 + ∆θ̂_2) with respect to θ_1, where θ̂ = (θ̂_1, θ̂_2) are the current parameters and ∆θ̂_2 is the opponent step evaluated at θ̂. After Taylor expansion, the gradient with respect to θ_1 is given by ∇_1 L_1 + (∇_{12} L_1) ∆θ̂_2, since ∆θ̂_2 does not depend on θ_1. In vectorial form, we recover the variant (I − αH_o)ξ, since the shaping term corresponds precisely to differentiating through ∆θ_2. We name this LookAhead, which was discovered before LOLA by BID24 though not explicitly named. Using the stop-gradient operator ⊥, this can be reformulated as optimising L_1(θ_1, θ_2 + ⊥∆θ_2), where ⊥ prevents gradient from flowing through ∆θ_2 upon differentiation. The main result of BID24 is that LookAhead converges to Nash equilibria in the small class of two-player, two-action bimatrix games. We will prove local convergence to SFP and non-convergence to strict saddles in all differentiable games. On the other hand, by discarding the problematic shaping term, we also eliminated LOLA's capacity to exploit opponent dynamics and encourage cooperation. This will be witnessed in the IPD, where LookAhead agents mostly defect. We propose Stable Opponent Shaping (SOS), an algorithm preserving both advantages at once. Define the partial stop-gradient operator ⊥_p := p⊥ + (1 − p)I, where I is the identity and p stands for partial. A p-LOLA agent optimises the modified objective L_1(θ_1, θ_2 + ⊥_p ∆θ_2), collapsing to LookAhead at p = 0 and LOLA at p = 1. The resulting gradient is given by ξ_p = (I − αH_o)ξ − pαχ. We obtain an algorithm trading between shaping and stability as a function of p. Note however that preservation of fixed points only holds if p is infinitesimal, in which case p-LOLA is almost identical to LookAhead, losing the very purpose of interpolation. Instead we propose a two-part criterion for p at each learning step, through which all guarantees descend. First choose p such that ξ_p points in the same direction as the LookAhead gradient ξ_0 = (I − αH_o)ξ. This will not be enough to prove convergence itself, but prevents arrogant behaviour by ensuring convergence only to fixed points. Formally, the first criterion is given by ⟨ξ_p, ξ_0⟩ ≥ 0. If ⟨−αχ, ξ_0⟩ ≥ 0 then ⟨ξ_p, ξ_0⟩ ≥ 0 automatically, so we choose p = 1 for maximal shaping. Otherwise choose p_1 = −a‖ξ_0‖² / ⟨−αχ, ξ_0⟩ with any hyperparameter 0 < a < 1. This guarantees a positive inner product ⟨ξ_p, ξ_0⟩ ≥ (1 − a)‖ξ_0‖² > 0. We complement this with a second criterion ensuring local convergence. The idea is to scale p by a function of ξ if ξ is small enough, which certainly holds in neighbourhoods of fixed points. Let 0 < b < 1 be a hyperparameter and take p_2 = ‖ξ‖² if ‖ξ‖ < b, otherwise p_2 = 1. Choosing p_1 and p_2 according to these criteria, the two-part criterion is p = min{p_1, p_2}. SOS is obtained by combining p-LOLA with this criterion, as summarised in Algorithm 1: initialise θ randomly and fix hyperparameters a, b ∈ (0, 1); while not done, compute ξ_0, χ, p_1 and p_2 at the current parameters, set p = min{p_1, p_2}, and update θ ← θ − αξ_p. Crucially, all theoretical results in the next section are independent of the choice of hyperparameters a and b. Our central theoretical contribution is that LookAhead and SOS converge locally to SFP and avoid strict saddles in all differentiable games.
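A minimal NumPy sketch of the two-part criterion above, given the simultaneous gradient ξ, the anti-diagonal blocks H_o, and the shaping term χ at the current parameters. The exact form p_1 = −a‖ξ_0‖² / ⟨−αχ, ξ_0⟩ is a reconstruction of a garbled formula in the source, chosen so that ⟨ξ_p, ξ_0⟩ ≥ (1 − a)‖ξ_0‖² > 0 as stated in the text; the numerical values are hypothetical.

```python
# Sketch only: compute p via the two-part criterion and the SOS gradient xi_p.
import numpy as np

def sos_adjustment(xi, H_o, chi, alpha=0.1, a=0.5, b=0.1):
    xi_0 = xi - alpha * (H_o @ xi)              # LookAhead gradient (I - alpha*H_o) xi
    dot = np.dot(-alpha * chi, xi_0)
    p1 = 1.0 if dot >= 0 else -a * np.dot(xi_0, xi_0) / dot   # first criterion (reconstructed)
    p2 = np.dot(xi, xi) if np.linalg.norm(xi) < b else 1.0    # second criterion
    p = min(p1, p2)
    return xi_0 - p * alpha * chi, p            # xi_p = xi_0 - p*alpha*chi

# Hypothetical inputs, for illustration only.
xi  = np.array([0.05, -0.02])
H_o = np.array([[0.0, 2.0], [2.0, 0.0]])
chi = np.array([0.4, 0.4])
xi_p, p = sos_adjustment(xi, H_o, chi)
```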
Since the learning gradients involve second-order Hessian terms, our results assume thrice continuously differentiable losses (omitted hereafter). Losses which are C^2 but not C^3 are very degenerate, so this is a mild assumption. Statements made about SOS crucially hold for any hyperparameters a, b ∈ (0, 1). See Appendices D and E for detailed proofs. Convergence is proved using Ostrowski's Theorem. This reduces convergence of a gradient adjustment g to positive stability (eigenvalues with positive real part) of ∇g at stable fixed points. Theorem 2. Let H ⪰ 0 be invertible with symmetric diagonal blocks. Then there exists ε > 0 such that (I − αH_o)H is positive stable for all 0 < α < ε. This type of result would usually be proved either by analytical means showing positive definiteness and hence positive stability, or by direct eigenvalue analysis. We show in Appendix D that (I − αH_o)H is not necessarily positive definite, while there is no necessary relationship between eigenpairs of H and H_o. This makes our theorem all the more interesting and non-trivial. We use a similarity transformation trick to circumvent the dual obstacle, allowing for analysis of positive definiteness with respect to a new inner product. We obtain positive stability by invariance under change of basis. Corollary 3. LookAhead converges locally to stable fixed points for α > 0 sufficiently small. Using the second criterion for p, we prove local convergence of SOS in all differentiable games despite the presence of a shaping term (unlike LOLA). Theorem 4. SOS converges locally to stable fixed points for α > 0 sufficiently small. Using the first criterion for p, we prove that SOS only converges to fixed points (unlike LOLA). Proposition 5. If SOS converges to θ̄ and α > 0 is small then θ̄ is a fixed point of the game. Now assume that θ is initialised randomly (or with arbitrarily small noise), as is standard in ML. Let F(θ) = θ − αξ_p(θ) be the SOS iteration. Using both the second criterion and the Stable Manifold Theorem from dynamical systems, we can prove that every strict saddle θ̄ has a neighbourhood U such that {θ ∈ U | F^n(θ) → θ̄ as n → ∞} has measure zero for α > 0 sufficiently small. Since θ is initialised randomly, we obtain the following result. Theorem 6. SOS locally avoids strict saddles almost surely, for α > 0 sufficiently small. This also holds for LookAhead, and could be strengthened to global initialisations provided a strong boundedness assumption on H. This is trickier for SOS since p(θ) is not globally continuous. Altogether, our results for LookAhead and the correct criterion for p-LOLA lead to some of the strongest theoretical guarantees in multi-agent learning. Furthermore, SOS retains all of LOLA's opponent shaping capacity while LookAhead does not, as shown experimentally in the next section. We evaluate the performance of SOS in three differentiable games. We first showcase opponent shaping and superiority over LA/CO/SGA/NL in the Iterated Prisoner's Dilemma (IPD). This leaves SOS and LOLA, which have differed only in theory up to now. We bridge this gap by showing that SOS always outperforms LOLA in the tandem game, avoiding arrogant behaviour by decaying p while LOLA overshoots. Finally we test SOS on a more involved GAN learning task, with results similar to dedicated methods like Consensus Optimisation. IPD: This game is an infinite sequence of the well-known Prisoner's Dilemma, where the payoff is discounted by a factor γ ∈ (0, 1) at each iteration. Agents are endowed with a memory of actions at the previous state.
Hence there are 5 parameters for each agent i: the probability P i (C | state) of cooperating at start state s 0 = ∅ or state s t = (a 1 t−1, a 2 t−1) for t > 0. One Nash equilibrium is to always defect (DD), with a normalised loss of 2. A better equilibrium with loss 1 is named tit-for-tat (TFT), where each player begins by cooperating and then mimicks the opponent's previous action. We run 300 training episodes for SOS, LA, CO, SGA and NL. The parameters are initialised following a normal distribution around 1/2 probability of cooperation, with unit variance. We fix α = 1 and γ = 0.96, following. We choose a = 0.5 and b = 0.1 for SOS. The first is a robust and arbitrary middle ground, while the latter is intentionally small to avoid poor SFP. Tandem: Though local convergence is guaranteed for SOS, it is possible that SOS diverges from poor initialisations. This turns out to be impossible in the tandem game since the Hessian is globally positive semi-definite. We show this explicitly by running 300 training episodes for SOS and LOLA. Parameters are initialised following a normal distribution around the origin. We found performance to be robust to hyperparameters a, b. Here we fix a = b = 0.5 and α = 0.1. We reproduce a setup from BID0. The game is to learn a Gaussian mixture distribution using GANs. Data is sampled from a highly multimodal distribution designed to probe the tendency to collapse onto a subset of modes during training -see ground truth in Appendix F. The generator and discriminator networks each have 6 ReLU layers of 384 neurons, with 2 and 1 output neurons respectively. Learning rates are chosen by grid search at iteration 8k, with a = 0.5 and b = 0.1 for SOS, following the same reasoning as the IPD. IPD: Results are given in FIG1. Parameters in part (A) are the end-run probabilities of cooperating for each memory state, encoded in different colours. Only 50 runs are shown for visibility. Losses at each step are displayed in part (B), averaged across 300 episodes with shaded deviations. SOS and LOLA mostly succeed in playing tit-for-tat, displayed by the accumulation of points in the correct corners of (A) plots. For instance, CC and CD points are mostly in the top right and left corners so agent 2 responds to cooperation with cooperation. Agents also cooperate at the start state, represented by ∅ points all hidden in the top right corner. Tit-for-tat strategy is further indicated by the losses close to 1 in part (B). On the other hand, most points for LA/CO/SGA/NL are accumulated at the bottom left, so agents mostly defect. This in poor losses, demonstrating the limited effectiveness of recent proposals like SGA and CO. Finally note that trained parameters and losses for SOS are almost identical to those for LOLA, displaying equal capacity in opponent shaping while also inheriting convergence guarantees and outperforming LOLA in the next experiment. Tandem: Results are given in Figure 3. SOS always succeeds in decreasing p to reach the correct equilibria, with losses averaging at 0. LOLA fails to preserve fixed points, overshooting with losses averaging at 4/9. The criterion for SOS is shown in action in part (B), decaying p to avoid overshooting. This illustrates that purely theoretical guarantees descend into practical outperfor- mance. Note that SOS even gets away from the LOLA fixed points if initialised there (not shown), converging to improved losses using the alignment criterion with LookAhead. 
The generator distribution and KL divergence are given at {2k, 4k, 6k, 8k} iterations for NL, CO and SOS in Figure 4. Results for SGA, LOLA and LA are in Appendix F. SOS achieves convincing by spreading mass across all Gaussians, as do CO/SGA/LOLA. LookAhead is significantly slower, while NL fails through mode collapse and hopping. Only visual inspection was used for comparison by BID0, while KL divergence gives stronger numerical evidence here. SOS and CO are slightly superior to others with reference to this metric. However CO is aimed specifically toward two-player zero-sum GAN optimisation, while SOS is widely applicable with strong theoretical guarantees in all differentiable games. Theoretical in machine learning have significantly helped understand the causes of success and failure in applications, from optimisation to architecture. While gradient descent on single losses has been studied extensively, algorithms dealing with interacting goals are proliferating, with little grasp of the underlying dynamics. The analysis behind CO and SGA has been helpful in this respect, though lacking either in generality or convergence guarantees. The first contribution of this paper is to provide a unified framework and fill this theoretical gap with robust convergence for LookAhead in all differentiable games. Capturing stable fixed points as the correct solution concept was essential for these techniques to apply. Furthermore, we showed that opponent shaping is both a powerful approach leading to experimental success and cooperative behaviour -while at the same time preventing LOLA from preserving fixed points in general. This conundrum is solved through a robust interpolation between LookAhead and LOLA, giving birth to SOS through a robust criterion. This was partially enabled by choosing to preserve the'middle' term in LOLA, and using it to inherit stability from LookAhead. This in convergence guarantees stronger than all previous algorithms, but also in practical superiority over LOLA in the tandem game. Moreover, SOS fully preserves opponent shaping and outperforms SGA, CO, LA and NL in the IPD by encouraging tit-for-tat policy instead of defecting. Finally, SOS convincingly learns Gaussian mixtures on par with the dedicated CO algorithm. In the main text we showed that Nash equilibria are inadequate in multi-agent learning, exemplified by the simple game given by L 1 = L 2 = xy, where the origin is a global Nash equilibrium but a saddle point of the losses. It is not however obvious that SFP are a better solution concept. We begin by pointing out that for single losses, invertibility and symmetry of the Hessian imply positive definiteness at SFP. These are exactly local minima of L detected by the second partial derivative test, namely those points provably attainable by gradient descent. To emphasise this, note that gradient descent does not converge locally to all local minima. This can be seen by considering the example L(x, y) = y 2 and the local (global) minimum. There is no neighbourhood for which gradient descent converges to, since initialising at (x 0, y 0) will always converge to (x 0, 0) for appropriate learning rates, with x 0 = 0 almost surely. This occurs precisely because the Hessian is singular at. Though a degenerate example, this suggests an important difference to make between the ideal solution concept (local minima) and that for which local convergence claims are possible to attain (local minima with invertible H 0). 
Accordingly, the definition of SFP is the immediate generalisation of 'fixed points with positive semi-definite Hessian', or in other words, 'second-order-tractable local minima'. It is important to impose only positive semi-definiteness to keep the class as large as possible, despite strict positive definiteness holding for single losses due to symmetry. Imposing strict positivity would for instance exclude the origin in the cyclic game L_1 = xy = −L_2, a point certainly worthy of convergence. Note also that imposing a weaker condition than H ⪰ 0 would be incorrect. Invertibility aside, local convergence of gradient descent on single functions cannot be guaranteed if H is not positive semi-definite, since such points are strict saddles. These are almost always avoided by gradient descent, as proven by BID12. It is thus necessary to impose H ⪰ 0 as a minimal requirement in optimisation methods attempting to generalise gradient descent. Remark A.1. A matrix H is positive semi-definite iff the same holds for its symmetric part S = (H + H^T)/2, so SFPs could equivalently be defined as S(θ̄) ⪰ 0. This is the original formulation given by part of the authors in BID0, who also imposed the extra requirement S(θ) ⪰ 0 in a neighbourhood of θ̄. After discussion we decided to drop this assumption, pointing out that it is 1) more restrictive, 2) superficial to all theoretical results and 3) weakens the analogy with tractable local minima. The only thing gained by imposing semi-positivity in a neighbourhood is that SFPs become a subset of Nash equilibria. Regarding unstable fixed points and strict saddles, note that H(θ̄) ≺ 0 implies H(θ) ≺ 0 in a neighbourhood, hence being equivalent to the definition in BID0. It follows also that unstable points are a subset of strict saddles: if H(θ̄) ≺ 0 then all eigenvalues have negative real part, since any eigenpair (v, λ) satisfies Re(λ)‖v‖² = Re(v*Hv) < 0. We introduced strict saddles in this paper as a generalisation of unstable FP, which are more difficult to handle but nonetheless tractable using dynamical systems. The name is chosen by analogy to the definition in BID12 for single losses. Proposition B.1. The LOLA gradient adjustment is (I − αH_o)ξ − αχ, under the usual assumption of equal learning rates. Proof. Recall the modified objective L_1(θ_1, θ_2 + ∆θ_2) for agent 1, and so on for each agent. First-order Taylor expansion yields approximately L_1 + ∇_2 L_1 · ∆θ_2, and similarly for each agent. Differentiating with respect to θ_i, the adjustment for player i is ∇_i L_i + (∇_{i,−i} L_i) ∆θ_{−i} + (∇_i ∆θ_{−i})^T ∇_{−i} L_i; stacking these over all players and writing the shaping contributions as −αχ gives the vectorial form above. C TANDEM GAME. We provide a more detailed exposition of the tandem game in this section, including computation of fixed points for NL/LOLA and the corresponding losses. Recall that the game is given by L_1(x, y) = (x + y)² − 2x and L_2(x, y) = (x + y)² − 2y. Intuitively, agents want to have x ≈ −y since (x + y)² is the leading loss, but would also prefer to have positive x and y. These are incompatible, so the agents must not be 'arrogant' and instead make concessions. The fixed points are given by ξ = 2(x + y − 1)·(1, 1) = 0, namely any pair (x, 1 − x). The corresponding losses are L_1 = 1 − 2x = −L_2, summing to 0 for any x. The Hessian has every entry equal to 2 and is therefore positive semi-definite everywhere, so all fixed points are SFP. LOLA fails to preserve these, since the shaping term χ(x, 1 − x) is non-zero for any SFP (x, 1 − x). Instead, LOLA can only converge to points where the LOLA adjustment vanishes for both agents. Solving this explicitly for s = x + y, the fixed points for LOLA are the pairs (x, y) such that x + y = (1 − 2α)/(1 − 4α). This leads to worse losses L_1 = s² − 2x with s = (1 − 2α)/(1 − 4α) for agent 1, and similarly for agent 2. In particular, losses always sum to 2s(s − 1) = 4α(1 − 2α)/(1 − 4α)², which is greater than 0.
This becomes negligible as the learning rate becomes smaller, but is always positive nonetheless Taking α arbitrarily small is not a viable solution since convergence will in turn be arbitrarily slow. LOLA is thus not a strong algorithm candidate for all differentiable games. We use Ostrowski's theorem as a unified framework for proving local convergence of gradient-based methods. This is a standard on fixed-point iterations, adapted from (, 10.1.3). We also invoke and prove a topological of our own, Lemma D.9, at the end of this section. This is useful in deducing local convergence, though not central to intuition. DISPLAYFORM0, and assumex ∈ Ω is a fixed point. If all eigenvalues of ∇F (x) are strictly in the unit circle of C, then there is an open neighbourhood U ofx such that for all x 0 ∈ U, the sequence F (k) (x 0) converges tox. Moreover, the rate of convergence is at least linear in k. Recall the simultaneous gradient ξ and the Hessian H defined for differentiable games. Let X be any matrix with continuously differentiable entries. Corollary D.3. Assumex is a fixed point of a differentiable game such that XH(x) is positive stable. Then the iterative procedure DISPLAYFORM1 converges locally tox for α > 0 sufficiently small. Proof. By definition of fixed points, ξ(x) = 0 and so DISPLAYFORM2 is positive stable by assumption, namely has eigenvalues a k + ib k with a k > 0. It follows that DISPLAYFORM3 has eigenvalues 1 − αa k − iαb k, which are in the unit circle for small α. More precisely, DISPLAYFORM4 which is always possible for a k > 0. Hence ∇F (x) has eigenvalues in the unit circle for 0 DISPLAYFORM5, and we are done by Ostrowski's Theorem sincex is a fixed point of F.We apply this corollary to LookAhead, which is given by DISPLAYFORM6 where X = (I − αH o). It is thus sufficient to prove the following . Theorem D.4. Let H 0 invertible with symmetric diagonal blocks. Then there exists > 0 such that (I − αH o)H is positive stable for all 0 < α <.Remark D.5. Note that (I − αH o)H may fail to be positive definite, though true in the case of 2 × 2 matrices. This no longer holds in higher dimensions, exemplified by the Hessian DISPLAYFORM7 By direct computation (symbolic in α), one can show that G = (I − αH o)H always has positive eigenvalues for small α > 0, whereas its symmetric part S always has a negative eigenvalue with magnitude in the order of α. This implies that S and in turn G is not positive definite. As such, an analytical proof of the theorem involving bounds on the corresponding bilinear form will fail. This makes the all the more interesting, but more involved. Central to the proof is a similarity transformation proving positive definiteness with respect to a different inner product, a novel technique we have not found in the multi-agent learning literature. Proof. We cannot study the eigenvalues of G directly, since there is no necessary relationship between eigenpairs of H and H o. In the aim of using analytical tools, the trick is to find a positive definite matrix which is similar to G, thus sharing the same positive eigenvalues. First define DISPLAYFORM8 where H d is the sub-matrix of diagonal blocks,and rewrite DISPLAYFORM9 Note that H d is block diagonal with symmetric blocks ∇ ii L i 0, so (I + αH d) is symmetric and positive definite for all α ≥ 0. In particular its principal square root DISPLAYFORM10 is unique and invertible. Now note that DISPLAYFORM11 which is positive semi-definite since DISPLAYFORM12 for all non-zero u. 
In particular M provides a similarity transformation which eliminates H d from G 1 while simultaneously delivering positive semi-definiteness. We can now prove that First note that a Taylor expansion of M in α yields DISPLAYFORM13 DISPLAYFORM14 and DISPLAYFORM15 There are two cases to distinguish. If u Hu > 0 then DISPLAYFORM0 for α sufficiently small. Otherwise, u Hu = 0 and consider decomposing H into symmetric and antisymmetric parts S = (H + H)/2 and A = (H − H)/2, so that H = S + A. By antisymmetry of A we have u Au = 0 and hence u Hu = 0 = u Su. Now H 0 implies S 0, so by Cholesky decomposition of S there exists a matrix T such that S = T T. In particular 0 = u Su = T u 2 implies T u = 0, and in turn Su = 0. Since H is invertible and u = 0, we have 0 = Hu = Au and so Au 2 > 0. It follows in particular that DISPLAYFORM1 Using positive semi-definiteness of DISPLAYFORM2 for α > 0 small enough. We conclude that for any u ∈ S m there is u > 0 such that DISPLAYFORM3 compact. By Lemma D.9, this can be extended uniformly with some > 0 such that DISPLAYFORM4 for all u ∈ S m and 0 < α <. It follows that M −1 GM is positive definite for all 0 < α < and thus G is positive stable for α in the same range, by similarity. Corollary D.6. LookAhead converges locally to stable fixed points for α > 0 sufficiently small. 0 invertible by definition, with diagonal blocks ∇ ii L i symmetric by twice continuous differentiability. We are done by the above and Corollary D.3.We now prove that local convergence descend to SOS. The following lemma establishes the crucial claim that our criterion for p is C 1 in neighbourhoods of fixed points. This is necessary to invoke analytical arguments including Ostrowski's Theorem, and would be untrue globally. Lemma D.7. Ifθ is a fixed point and α is sufficiently small then p = ξ 2 in a neighbourhood ofθ. Proof. First note that ξ(θ) = 0, so there is a (bounded) neighbourhood V ofθ such that ξ(θ) < b for all θ ∈ V, for any choice of hyperparameter b ∈. In particular p 2 (θ) = ξ(θ) 2 by definition of the second criterion. We want to show that p(θ) = p 2 (θ) nearθ, or equivalently DISPLAYFORM0 in some neighbourhood U ⊆ V ofθ, for any choice of hyperparameter a ∈. Now by boundedness of V and continuity of χ, there exists c > 0 such that −α χ (θ) = α 2 χ (θ) < c for all θ ∈ V and bounded α. It follows by Cauchy-Schwartz that DISPLAYFORM1 in V, for some d > 0 and α sufficiently small, by boundedness of V and continuity of H o. Finally there is a sub-neighbourhood U ⊂ V such that ξ(θ) < ad/c for all θ ∈ U, so that ad ξ /c > ξ(θ) 2 and hence DISPLAYFORM2 2 for all θ ∈ U, as required. Theorem D.8. SOS converges locally to stable fixed points for α > 0 sufficiently small. Proof. Though the criterion for p is dual, we will only use the second part. More precisely, DISPLAYFORM3 The aim is to show that ifθ is an SFP then ∇ξ p (θ) is positive stable for small α, using Ostrowski to conclude as usual. The first problem we face is that ∇ξ p does not exist everywhere, since p(θ) is not a continuous function. However we know by Lemma D.7 that p = ξ 2 in a neighbourhood U ofθ, so ξ p is continuously differentiable in U. Moreover, p(θ) = ξ(θ) 2 = 0 with gradient DISPLAYFORM4 by definition of fixed points. It follows that DISPLAYFORM5 which is identical to LookAhead. This is positive stable for all 0 < α <, andθ is a fixed point of the iteration since DISPLAYFORM6 We conclude by Corollary D.3 that SOS converges locally to SFP for any a, b ∈ and α sufficiently small. Lemma D.9. 
Let g: R + × Y → Z continuous with Y compact and Z ⊆ R. Assume that for any u ∈ Y there is u > 0 such that g(α, u) > 0 for all 0 < α < u. Then there exists > 0 such that g(α, u) > 0 for all 0 < α < and u ∈ Y.Proof. For any u ∈ Y there is u > 0 such that DISPLAYFORM7 We would like to extend this uniformly in u, namely prove that DISPLAYFORM8 is open by continuity of g, so each (0, u) × {u} has a neighbourhood X u contained in g −1 (0, ∞). Open sets in a product topology are unions of open products, so DISPLAYFORM9 In particular (0, u) ⊆ x U x and at least one V x contains u, so we can take the open neighbourhood to be DISPLAYFORM10 for some neighbourhood V u of u. In particular Y ⊆ u∈Y V u, and by compactness there is a finite DISPLAYFORM11 Lemma E.1. Let a k and b k be sequences of real numbers, and define DISPLAYFORM0 for all k ≥ M, k ≥ N. Expanding the absolute value, this implies DISPLAYFORM1 which implies the contradiction L < L + δ.Proposition E.2. If SOS converges toθ and α > 0 is small thenθ is a fixed point of the game. Proof. The iterative procedure is given by DISPLAYFORM2 If θ k →θ as k → ∞ then taking limits on both sides of the iteration yields DISPLAYFORM3 and so lim k ξ p (θ k) = 0, omitting k → ∞ for convenience. It follows by continuity that DISPLAYFORM4 noting that p(θ) is not a globally continuous function. Assume for contradiction that ξ 0 (θ) = 0. There are two cases to distinguish for clarity.(i) First assume −α χ, ξ 0 (θ) ≥ 0. Note that lim k p(θ k) ≥ 0 since p(θ) ≥ 0 for all θ, and so DISPLAYFORM5 In both cases a contradiction is obtained, hence ξ 0 (θ) = 0 = (I − αH o)ξ(θ). Now note that (I − αH o)(θ) is singular iff H o (θ) has an eigenvalue 1/α, which is impossible for α sufficiently small. Hence (I − αH o)ξ(θ) = 0 implies ξ(θ) = 0, as required. Now assume that θ is initialised randomly (or with arbitrarily small noise around a point), as is standard in ML. We prove that SOS locally avoids strict saddles using the Stable Manifold Theorem, inspired from BID13. Theorem E.3 (Stable Manifold Theorem). Letx be a fixed point for the C 1 local diffeomorphism F: U → R d, where U is a neighbourhood ofx in R d. Let E s ⊕ E u be the generalised eigenspaces of ∇F (x) corresponding to eigenvalues with |λ| ≤ 1 and |λ| > 1 respectively. Then there exists a local stable center manifold W with tangent space E s atx and a neighbourhood B ofx such that DISPLAYFORM6 In particular, if ∇F (x) has at least one eigenvalue |λ| > 1 then E u has dimension at least 1. Since W has tangent space E s atx, with codimension at least one, it follows that W has measure zero in R d. This is central in proving that the set of initial points in a neighbourhood which converge through SOS to a given strict saddleθ has measure zero. Theorem E.4. SOS locally avoids strict saddles almost surely, for α > 0 sufficiently small. Proof. Letθ a strict saddle and recall that SOS is given by DISPLAYFORM7 Recall by Lemma D.7 that p(θ) = ξ(θ) 2 for all θ in a neighbourhood U ofθ. Restricting F to U, all terms involved are continuously differentiable and DISPLAYFORM8 by assumption that ξ(θ) = 0. Since all terms except I are of order at least α, ∇F (θ) is invertible for all α sufficiently small. By the inverse function theorem, there exists a neighbourhood V ofθ such that F is has a continuously differentiable inverse on V. Hence F restricted to U ∩ V is a C 1 diffeomorphism with fixed pointθ. By definition of strict saddles, H(θ) has a negative eigenvalue. 
It follows by continuity that (I − αH o)H(θ) also has a negative eigenvalue a + ib with a < 0 for α sufficiently small. Finally, DISPLAYFORM9 has an eigenvalue λ = 1 − αa − iαb with |λ| = 1 − 2αa + α 2 (a 2 + b 2) ≥ 1 − 2αa > 1.It follows that E s has codimension at least one, implying in turn that the local stable set W has measure zero. We can now prove that DISPLAYFORM10 has measure zero, or in other words, that local convergence toθ occurs with zero probability. Let B the neighbourhood guaranteed by the Stable Manifold Theorem, and take any θ ∈ Z. By definition of convergence there exists N ∈ N such that F N +n (θ) ∈ B for all n ≥ 0, so that DISPLAYFORM11 by the Stable Manifold Theorem. This implies that θ ∈ F −N (W), and finally θ ∈ ∪ n∈N F −n (W). Since θ was arbitrary, we obtain the inclusion Z ⊆ ∪ n∈N F −n (W). Now F −1 is C 1, hence locally Lipschitz and thus preserves sets of measure zero, so that F −n (W) has measure zero for each n. Countable unions of measure zero sets are still measure zero, so we conclude that Z also has measure zero. In other words, SOS converges toθ with zero probability upon random initialisation of θ in U. In the Gaussian mixture experiment, data is sampled from a highly multimodal distribution designed to probe the tendency to collapse onto a subset of modes during training, given in FIG5.The generator distribution and KL divergence are given at {2k, 4k, 6k, 8k} iterations for LA, LOLA and SGA in Figure 6. LOLA and SGA successfully spread mass across all Gaussians. LookAhead displays mode collapse and hopping in early stages, but begins to incorporate further mixtures near 8k iterations. We ran further iterations and discovered that LookAhead eventually spreads mass across all mixtures, though very slowly. Comparing with for NL/CO/SOS in the main text, we see that CO/SOS/LOLA/SGA are equally successful in qualitative terms. Note that SOS/CO are slightly superior with respect to KL divergence after 6-8k iterations, though LOLA is initially faster. This may be due only to random sampling. We also noticed experimentally that LOLA often moves away from the correct distribution after 8-10k iterations (not shown), while SOS stays stable in the long run. This may occur thanks to the two-part criterion encouraging convergence, while LOLA continually attempts to exploit opponent learning. Finally we plot ξ at all iterations up to 12k for SOS, LA and NL in Figure 7 (other algorithms are omitted for visibility). This gives further evidence of SOS converging quite rapidly to the correct distribution, while NL perpetually suffers from mode hopping and LA lags behind significantly.
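Since the comparisons above rely on a KL divergence computed from generator samples, here is a rough sketch of how such an estimate can be obtained; the paper's exact estimator and mixture layout (its Figure 5) are not specified here, so the 4x4 grid of Gaussians and the histogram-based estimator below are stand-in assumptions.

```python
import numpy as np

def histogram_kl(p_samples, q_samples, bins=50, box=((-5, 5), (-5, 5)), eps=1e-8):
    """Crude KL(P || Q) estimate between two 2-D sample sets on a shared histogram grid."""
    p_hist, xe, ye = np.histogram2d(p_samples[:, 0], p_samples[:, 1], bins=bins, range=box)
    q_hist, _, _ = np.histogram2d(q_samples[:, 0], q_samples[:, 1], bins=[xe, ye])
    p = p_hist / p_hist.sum() + eps
    q = q_hist / q_hist.sum() + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
# Stand-in target: a 4x4 grid of Gaussians with standard deviation 0.1.
means = np.array([[i, j] for i in (-3.0, -1.0, 1.0, 3.0) for j in (-3.0, -1.0, 1.0, 3.0)])
real = means[rng.integers(0, len(means), 20000)] + 0.1 * rng.normal(size=(20000, 2))
fake = rng.normal(size=(20000, 2))  # placeholder for generator samples at some iteration
print(histogram_kl(real, fake))     # lower is better; recompute every 2k iterations
```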
Opponent shaping is a powerful approach to multi-agent learning but can prevent convergence; our SOS algorithm fixes this with strong guarantees in all differentiable games.
1,347
scitldr
The rate at which medical questions are asked online significantly exceeds the capacity of qualified people to answer them, leaving many questions unanswered or inadequately answered. Many of these questions are not unique, and reliable identification of similar questions would enable more efficient and effective question answering schema. While many research efforts have focused on the problem of general question similarity, these approaches do not generalize well to the medical domain, where medical expertise is often required to determine semantic similarity. In this paper, we show how a semi-supervised approach of pre-training a neural network on medical question-answer pairs is a particularly useful intermediate task for the ultimate goal of determining medical question similarity. While other pre-training tasks yield an accuracy below 78.7% on this task, our model achieves an accuracy of 82.6% with the same number of training examples, an accuracy of 80.0% with a much smaller training set, and an accuracy of 84.5% when the full corpus of medical question-answer data is used. With the ubiquity of the Internet and the emergence of medical question-answering websites such as ADAM (www.adam.com), WebMD (www.webmd.com), and HealthTap (www.healthtap. com), people are increasingly searching online for answers to their medical questions. However, the number of people asking medical questions online far exceeds the number of qualified experts -i.e doctors -answering them. One way to address this imbalance is to build a system that can automatically match unanswered questions with semantically similar answered questions, or mark them as priority if no similar answered questions exist. This approach uses doctor time more efficiently, reducing the number of unanswered questions and lowering the cost of providing online care. Many of the individuals seeking medical advice online are otherwise reluctant to seek medical help due to cost, convenience, or embarrassment. For these patients, an accurate online system is critical because it may be the only medical advice they receive. Of course, some medical problems require in-person care, and an online system must indicate that. Other patients use the internet in addition to in-person care either to determine when an appointment is needed or to follow up after visits when they have lingering questions. For this second group, if the answers they see online do not match those given to them by their doctors, they are less likely to follow the advice of their doctors , which can have serious consequences. Coming up with an accurate algorithm for finding similar medical questions, however, is difficult. Simple heuristics such as word-overlap are ineffective because Can a menstrual blood clot travel to your heart or lungs like other blood clots can? and Can clots from my period cause a stroke or embolism? are similar questions with low overlap, but Is candida retested after treatment and Is Chlamydia retested after treatment? are critically different and only one word apart. Machine learning is a good candidate for such complex tasks, but requires labeled training data. As no widely available data for this particular task exists, we generate and release our own dataset of medical question pairs such as the ones shown in Table 1. Given the recent success of pre-trained bi-directional transformer networks for natural language processing (NLP) outside the medical field (; ; Radford et al.; ;), most research efforts in medical NLP have tried to apply general. 
However, these models are not trained on medical information, and make errors that reflect this. In this work, we augment the features in these general language models using the depth of information that is stored within a medical question-answer pair to embed medical knowledge into the model. Our models pre-trained on this task outperform models pre-trained on out-of-domain question similarity with high statistical significance, and the show promise of generalizing to other domains as well. The task of question-answer matching was specifically chosen because it is closely related to that of question similarity; one component of whether or not two questions are semantically similar is whether or not the answer to one also answers the other. We show that the performance gains achieved by this particular task are not realized by other in-domain tasks, such as medical questioncategorization and medical answer completion. The main contributions of this paper are: • We release a dataset of medical question pairs generated and labeled by doctors that is based upon real, patient-asked questions • We prove that, particularly for medical NLP, domain matters: pre-training on a different task in the same domain outperforms pre-training on the same task in a different domain • We show that the task of question-answer matching embeds relevant medical information for question similarity that is not captured by other in-domain tasks 2 RELATED WORK 2.1 PRE-TRAINED NETWORKS FOR GENERAL LANGUAGE UNDERSTANDING NLP has undergone a transfer learning revolution in the past year, with several large pre-trained models earning state-of-the-art scores across many linguistic tasks. Two such models that we use in our own experiments are BERT and XLNet . These models have been trained on semi-supervised tasks such as predicting a word that has been masked out from a random position in a sentence, and predicting whether or not one sentence is likely to follow another. The corpus used to train BERT was exceptionally large (3.3 billion words), but all of the data came from BooksCorpus and Wikipedia. recently found that BERT generalizes better to other datasets drawn from Wikipedia than to tasks using other web snippets. This is consistent with our finding that pre-training domain makes a big difference. To address the need for pre-trained models in particular domains, some researchers have recently re-trained BERT on different text corpora such as scientific papers , doctor's medical notes and biomedical journal articles . However, retraining BERT on the masked-language and next-sentence prediction tasks for every new domain is unwieldy and time-consuming. We investigate whether the benefits of re-training on a new domain can also be realized by fine-tuning BERT on other in-domain tasks. see a boost with other tasks across less dramatic domain changes, where a different text corpus is used for the final task but not an entirely different technical vocabulary or domain. Previous work on question similarity has investigated the importance of in-domain word embeddings, but the methods of determining question similarity have varied widely. use a CNN that minimizes the mean squared error between two questions' vector representations. use an encoder and a question-pairs metric of cosine similarity. look at relative'best' match out of a set instead of absolute similarity, as we are interested in. 
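As a concrete illustration of why surface overlap is a poor proxy here (and of the cosine-similarity style baselines cited above), the following toy sketch, our own example with scikit-learn rather than any cited system, scores the two introductory question pairs with TF-IDF cosine similarity: the critically different pair comes out far more "similar" than the genuinely equivalent one.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

similar_pair = (
    "Can a menstrual blood clot travel to your heart or lungs like other blood clots can?",
    "Can clots from my period cause a stroke or embolism?",
)
different_pair = (
    "Is candida retested after treatment?",
    "Is Chlamydia retested after treatment?",
)

vec = TfidfVectorizer().fit([*similar_pair, *different_pair])

def overlap_score(pair):
    x = vec.transform(pair)
    return float(cosine_similarity(x[0], x[1])[0, 0])

print("same intent, low overlap      :", round(overlap_score(similar_pair), 3))
print("different intent, high overlap:", round(overlap_score(different_pair), 3))
```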
Abacha & Demner-Fushman (2016; very clearly describe the utility of medical question similarity as we have framed it; rather than training a model to answer every conceivable medical question correctly, we can train a model to determine if any existing questions in an FAQ mean the same thing as a new question and, if so, use the existing answer for the new question. If there were a large corpus of labeled similar medical questions, training on that would likely produce the best . However, labeled training data is still one of the largest barriers to supervised learning, particularly in the medical field where it is expensive to get doctor time for hand-labeling data. Previous work has tried to overcome this using augmentation rules to generate similar question pairs automatically , but this leads to an overly simplistic dataset in which negative question-pairs contain no overlapping keywords and positive question-pairs follow similar lexical structures. Another technique for generating training data is weak supervision , but due to the nuances of determining medical similarity, generating labeling functions for this task is difficult. Another way to address a dearth of training data is to use transfer learning from a different but related task. This is the path we choose. Several large datasets exist that are relevant to our final task of medical question similarity. Quora Question Pairs (QQP) is a labeled corpus of 363,871 question pairs from Quora, an online question-answer forum . These question pairs cover a broad range of topics, most of which are not related to medicine. However, it is a well-known dataset containing labeled pairs of similar and dissimilar questions. HealthTap is a medical question-answering website in which patients can have their questions answered by doctors. We use a publicly available crawl (durakkerem, 2018) with 1.6 million medical questions. Each question has corresponding long and short answers, doctor meta-data, category labels, and lists of related topics. We reduce this dataset to match the size of QQP for direct performance comparisons, but also run one experiment leveraging the full corpus. WebMD is an online publisher of medical information including articles, videos, and frequently asked questions (FAQ). For a second medical question-answer dataset, we use a publicly available crawl over the FAQ of WebMD with 46,872 question-answer pairs. We decrease the size of QQP and HealthTap to match this number before making direct performance comparisons. Most of our pre-training tasks come from restructuring the HealthTap and WebMD data. Question Answer Pairs (QA) In order to correctly determine whether or not two questions are semantically similar, as is our ultimate goal, a network must be able to interpret the nuances of each question. Another task that requires such nuanced understanding is that of pairing questions with their correct answers. We isolate each true question-answer pair from the medical questionanswering websites and label these as positive examples. We then take each question and pair it with a random answer from the same main category or tag and label these as negative examples. Finally, we train a classifier to label question-answer pairs as either positive or negative. Answer Completion (AA) One task that has been known to generalize well is that of next-sentence prediction, which is one of two tasks used to train the BERT model. 
To mimic this task, we take each answer from HealthTap and split it into two parts: the first two sentences (start), and the remaining sentences (end). We then take each answer start and end that came from the same original question and label these pairs as positives. We also pair each answer start with a different end from the same main category and label these as negatives. This is therefore a binary classification task in which the model tries to predict whether an answer start is completed by the given answer end. Question Categorization (QC) We take the questions from HealthTap, pair them up with their main-category labels and call these positive examples. We then pair each question with a random other category and call this a negative example. There are 227 main categories represented, such as abdominal pain, acid reflux, acne, adhd, alcohol, etc. The model is trained to classify category matches and mismatches, rather than predict to which of the classes each example belongs. A small number of questions from HealthTap are used in the question pairs dataset and thus withheld from the above tasks to reduce bias. There is no existing dataset that we know of for medical question similarity. Therefore, one contribution of this paper is that we have generated such a dataset and are releasing it. Although our task is related to that of recognizing question entailment (RQE), for which there is a small dataset available , it is different in two key ways. First, our metric of similarity is symmetric, but in question entailment, it is possible to have asymmetric entailment, if one question is more specific than the other. Second, our questions are all patient-asked, which means they use less technical language, include more misspellings, and span a different range of topics than doctor-asked questions. Because of these differences we decide to generate a dataset that is specific to our needs. We have doctors hand-generate 3,000 medical question pairs. We explicitly choose doctors for this task because determining whether or not two medical questions are the same requires medical training that crowd-sourced workers rarely have. We present doctors with a list of patient-asked questions from HealthTap, and for each provided question, ask them to: 1. Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response (ex.'I'm a 22-y-o female' could become ' My 26 year old daughter'). 2. Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words. The first instruction generates a positive question pair (match) and the second generates a negative question pair (mismatch). With the above instructions, we intentionally frame the task such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial. What specific exercises would help bursitis of the suprapatellar? Hey doc! My doctor diagnosed me with suprapatellar bursitis. Are there any exercises that I can do at home? Can I take any medication for pain due to suprapatellar bursitis? Unable to exercise.:(We anticipate that each doctor interprets these instructions slightly differently, so no doctor providing data in the train set generates any data in the test set. This should reduce bias. 
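Returning to the pre-training tasks described above, the following sketch shows how the question-answer matching (QA) examples could be assembled from a crawl; the record fields 'question', 'answer' and 'category' are hypothetical placeholders rather than the released data schema, and the analogous AA and QC tasks follow the same positive/negative pattern.

```python
import random

def build_qa_matching(records, seed=0):
    """Question-answer matching (QA) pre-training pairs:
    (question, true answer) -> 1 and (question, random answer from the same
    main category) -> 0. `records` is a list of dicts with the hypothetical
    keys 'question', 'answer' and 'category'."""
    rng = random.Random(seed)
    by_category = {}
    for r in records:
        by_category.setdefault(r["category"], []).append(r)

    examples = []
    for r in records:
        examples.append((r["question"], r["answer"], 1))  # positive
        candidates = [o for o in by_category[r["category"]] if o is not r]
        if candidates:
            examples.append((r["question"], rng.choice(candidates)["answer"], 0))  # negative
    rng.shuffle(examples)
    return examples
```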
To obtain an oracle score, we have doctors hand-label question pairs that a different doctor generated. The accuracy of the second doctor with respect to the labels intended by the first is used as an oracle and is 87.6% in our test set of 836 question pairs. See Table 2 for example questions from our curated data. Our ultimate goal is to be able to determine whether two medical questions mean the same thing. Our hypothesis is that by taking an existing language model with complex word embeddings and training it on a large corpus for a similar medical task, we can embed medical knowledge into an otherwise generic language model. Our approach uses transfer learning from a bi-directional transformer network to get the most out of our small medical question pairs dataset. We start with the architecture and weights from BERT and perform a double finetune; we first finetune on an intermediate task and then we finetune on the final task of medical question pairs. We do this for four different intermediate tasks: quora question similarity (QQP), medical question answering (QA), medical answer completion (AA), and medical question classification (QC) (Figure 1). For a baseline we skip the intermediate finetune and directly train BERT on our small medical question-pairs dataset. For each intermediate task, we train the network for 5 epochs with 364 thousand training examples to ensure that differences in performance are not due to different dataset sizes. We then finetune each of these intermediate-task-models on a small number of labeled, medicalquestion pairs until convergence. A maximum sentence length of 200 tokens, learning rate of 2e-5, and batch size of 16 is used for all models. Each model is trained on two parallel NVIDIA Tesla V100 GPUs. All experiments are done with 5 different random train/validation splits to generate error bars representing one standard deviation in accuracy. We use accuracy of each model as our quantitative metric for comparison and a paired t-test to measure statistical significance. To compare against previous state-of-the-art (SOTA) models in the medical field, we also finetune the BioBERT , SciBERT , and ClinicalBERT models on our final task, three BERT models that have been finetuned once already on the original BERT tasks but with different text corpora. We also perform an ablation over pre-trained model architecture and reproduce our starting with the XLNet model instead of BERT. To get a better qualitative understanding of performance, we perform error analysis. We define a consistent error as one that is made by at least 4 of the 5 models trained on different train/validation splits. Similarly, we consider a model as getting an example consistently correct if it does so on at least 4 of the 5 models trained on different train/validation splits. By investigating the question pairs that a model-type gets consistently wrong, we can form hypotheses about why the model may have failed on that specific example. Then, by making small changes to the input until the models label those examples correctly, we can validate or disprove these hypotheses. Here we investigate whether domain of the training corpus matters more than task-similarity when choosing an intermediate training step for the medical question similarity task. Accuracy on the final task (medical question similarity) is our quantitative proxy for performance. 
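A minimal sketch of the double finetune described above, using the Hugging Face transformers API with the stated hyperparameters (maximum length 200, learning rate 2e-5, batch size 16, 5 intermediate epochs); qa_pairs/qa_labels and mqp_pairs/mqp_labels are placeholders for the intermediate-task and final-task data, and details such as device placement, shuffling and convergence checks are omitted.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def finetune(model, pairs, labels, tokenizer, epochs, lr=2e-5, batch_size=16, max_len=200):
    """One fine-tuning stage on a binary (text_a, text_b) classification task."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for i in range(0, len(pairs), batch_size):
            a, b = zip(*pairs[i:i + batch_size])
            enc = tokenizer(list(a), list(b), truncation=True, padding=True,
                            max_length=max_len, return_tensors="pt")
            loss = model(**enc, labels=torch.tensor(labels[i:i + batch_size])).loss
            loss.backward()
            opt.step()
            opt.zero_grad()
    return model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Stage 1: intermediate task (e.g. the QA-matching pairs built above), 5 epochs.
model = finetune(model, qa_pairs, qa_labels, tokenizer, epochs=5)
# Stage 2: the small medical question-pairs set, trained until convergence (fixed epochs as a stand-in).
model = finetune(model, mqp_pairs, mqp_labels, tokenizer, epochs=20)
```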
We finetune BERT on the intermediate tasks of Quora question pairs (QQP) and HealthTap question answer pairs (QA) before finetuning on the final task to compare performance. We find that the QA model performs better than the QQP model by 2.4% to 4.5%, depending on size of the final training set (Figure 2). Conducting a paired t-test over the 5 data splits used for each experiment, the p-value is always less than 0.0006, so this difference is very statistically significant. We thus see with high confidence that models trained on a related in-domain task (medical question-answering) outperform models trained on the same question-similarity task but an out-of-domain corpus (quora question pairs). Furthermore, when the full corpus of questionanswer pairs from HealthTap is used, the performance climbs all the way to 84.5% ±0.7%. Results hold across models The same trends hold when the BERT base model is replaced with XLNet, with a p-value of 0.0001 (Table 3). Results hold across datasets We repeat our experiments with a question-answer dataset from WebMD and restrict the HealthTap and QQP dataset sizes for fair comparison. We find that the QA model again outperforms the QQP model by a statistically significant margin (p-value 0.049) and that the WebMD model even outperforms the HealthTap model with the same amount of data (Table 3). Our findings therefore hold across multiple in-domain datasets. We investigate further the extent to which task matters for an in-domain corpus in two different ways. We start by using the same HealthTap data and forming different tasks from the questions therein, and then we compare our models against intermediate models trained by other researchers. To test the extent to which any in-domain task would boost the performance of an out-of-domain model, we design two additional tasks using the HealthTap data: answer completion (AA) and question categorization (QC). As before, we use accuracy on the final question-similarity task as our proxy for performance and keep the test set constant across all models. We follow the same protocol as above, finetuning BERT on the intermediate task before finetuning further on the final task. We find that both of these tasks actually perform worse than the baseline BERT model, making the word embeddings less useful for understanding the subtler differences between two questions (Figure 2). We conclude that, while domain does matter a lot, many tasks are not well-suited to encoding the proper domain information from the in-domain corpus. Comparison to Medical SOTA Models To benchmark ourselves against existing medical models, we compare our finetuned models to BioBERT, SciBERT, and ClinicalBERT. Each of these models has finetuned the original BERT weights on a medically relevant corpus using the original BERT tasks. We take each of these off-the-shelf models and finetune them on our final task dataset as we do with our own intermediate-task models. Only BioBERT outperforms the original BERT model, and the differences in performance are not statistically significant. We hypothesize that this is because technical literature and doctor notes each have their own vocabularies that, while more medical in nature that Wikipedia articles, are still quite distinct from those of medical question-answer forums. From looking at the question pairs that our models get wrong, we can form hypotheses about why each example is mislabeled. 
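A small helper like the following (an assumed bookkeeping sketch, not released code) identifies the consistent errors defined above, i.e. examples mislabelled by at least 4 of the 5 models trained on different train/validation splits, which are the candidates for the probes that follow.

```python
import numpy as np

def consistent_errors(predictions, labels, threshold=4):
    """predictions: (n_models, n_examples) 0/1 outputs from the 5 split models;
    labels: (n_examples,). Returns indices mislabelled by at least `threshold` models."""
    wrong = (np.asarray(predictions) != np.asarray(labels)).sum(axis=0)
    return np.where(wrong >= threshold)[0]

preds = np.array([[1, 0, 1, 1],
                  [1, 0, 0, 1],
                  [1, 0, 1, 1],
                  [1, 1, 1, 1],
                  [1, 0, 1, 0]])
labels = np.array([0, 0, 1, 1])
print(consistent_errors(preds, labels))  # -> [0]: the pair every split model gets wrong
```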
We can then augment each question pair to add or remove one challenging aspect at a time and observe whether or not those changes in a different label. With this method, we can prove or disprove our hypotheses. The augmented questions are not added to our test set and do not contribute to our quantitative performance metrics; they are only created for the sake of probing and understanding the network. Consider the example in Table 4. In order to label this example correctly as it is written in row 1, the model has to understand the syntax of the question and know that 4'8" in this context represents poor growth. Changing the second question to what is written in row 2 prompts the QA model to label it correctly, indicating that one thing the QA model was misunderstanding was the question's word order. Additionally, changing the phrase I am 4'8" with I have not grown as shown in row 3 is enough to help the out-of-domain models label it correctly. So, while numerical reasoning was the difficult part of that question pair for the other models, the question answer model was actually able to identify 4'8" as a short height. This supports the claim that pre-training on the medical task of Misspellings, capitalization We find that differences in spelling and capitalization do not cause a significant number of errors in any model, although they are present in many questions. To understand the broader applicability of our findings, we apply our approach to a non-medical domain: the AskUbuntu question-answer pairs from. As before, we avoid making the pre-training task artificially easy by creating negatives from related questions. This time, since there are no category labels, we index all of the data with Elasticsearch 1. For the question similarity task, the authors have released a candidate set of pairs that were human labeled as similar or dissimilar. Without any pre-training (baseline), we observe an accuracy of 65.3% ± 1.2% on the question similarity task. Pre-training on QQP leads to a significant reduction in accuracy to 62.3% ± 2.1% indicating that an out-of-domain pretraining task can actually hurt performance. When the QA task is used for intermediate pre-training, the improve to 66.6% ± 0.9%. While this improvement may not be statistically significant, it is consistent with the main premise of our work that related tasks in the same domain can help performance. We believe that the low accuracy on this task, as well as the small inter-model performance gains, may be due to the exceptionally long question lengths, some of which are truncated by the models during tokenization. In the future, we would explore ways to reduce the length of these questions before feeding them into the model. In this work, we release a medical question-pairs dataset and show that the semi-supervised approach of pre-training on in-domain question-answer matching (QA) is particularly useful for the difficult task of duplicate question recognition. Although the QA model outperforms the out-of-domain same-task QQP model, there are a few examples where the QQP model seems to have learned information that is missing from the QA model (see Appendix A). In the future, we can further explore whether these two models learned independently useful information from their pre-training tasks. If they did, then we hope to be able to combine these features into one model with multitask learning. 
An additional benefit of the error analysis is that we have a better understanding of the types of mistakes that even our best model is making. It is therefore now easier to use weak supervision and augmentation rules to supplement our datasets to increase the number of training examples in those difficult regions of the data. With both of these changes, we expect to be able to bump up accuracy on this task by several more percentage points.
We show that question-answer matching is a particularly good pre-training task for question-similarity and release a dataset for medical question similarity
1,348
scitldr
We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks sharing common properties is helpful to generalize the knowledge of them, resulting in a more effective feature extraction compared to learning a single task. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions for which it is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks, showing significant improvements over the single-task counterparts in terms of sample efficiency and performance. Multi-Task Learning (MTL) ambitiously aims to learn multiple tasks jointly instead of learning them separately, leveraging the assumption that the considered tasks have common properties which can be exploited by Machine Learning (ML) models to generalize the learning of each of them. For instance, the features extracted in the hidden layers of a neural network trained on multiple tasks have the advantage of being a general representation of structures common to each other. This translates into an effective way of learning multiple tasks at the same time, but it can also improve the learning of each individual task compared to learning them separately . Furthermore, the learned representation can be used to perform Transfer Learning (TL), i.e. using it as preliminary knowledge to learn a new similar task, resulting in more effective and faster learning than learning the new task from scratch . The same benefits of extraction and exploitation of common features among the tasks achieved in MTL can be obtained in Multi-Task Reinforcement Learning (MTRL) when training a single agent on multiple Reinforcement Learning (RL) problems with common structures . In particular, in MTRL an agent can be trained on multiple tasks in the same domain, e.g. riding a bicycle or cycling while going towards a goal, or on different but similar domains, e.g. balancing a pendulum or balancing a double pendulum. Considering recent advances in Deep Reinforcement Learning (DRL) and the resulting increase in the complexity of experimental benchmarks, the use of Deep Learning (DL) models, e.g. deep neural networks, has become a popular and effective way to extract common features among tasks in MTRL algorithms (; ;). However, despite the high representational capacity of DL models, the extraction of good features remains challenging. For instance, the performance of the learning process can degrade when unrelated tasks are used together ; another detrimental issue may occur when the training of a single model is not balanced properly among multiple tasks . Recent developments in MTRL achieve significant results in feature extraction by means of algorithms specifically developed to address these issues. While some of these works rely on a single deep neural network to model the multi-task agent (; ; ;), others use multiple deep neural networks, e.g. one for each task and another for the multi-task agent (; ; ;).
Intuitively, achieving good results in MTRL with a single deep neural network is more desirable than using many of them, since the training time is likely much lower and the whole architecture is easier to implement. In this paper we study the benefits of shared representations among tasks. We theoretically motivate the intuitive effectiveness of our method, deriving theoretical guarantees that exploit the theoretical framework provided by , in which the authors present upper bounds on the quality of learning in MTL when extracting features for multiple tasks in a single shared representation. The significance of this result is that the cost of learning the shared representation decreases by a factor O(1 / √T), where T is the number of tasks, for many function-approximator hypothesis classes. The main contribution of this work is twofold. 1. We derive the finite-time bounds of Approximate Value-Iteration (AVI) and Approximate Policy-Iteration (API) in the MTRL setting, and we extend the approximation error bounds to the case of multiple tasks with different dimensionalities. Then, we show how to combine these results, obtaining, to the best of our knowledge, the first proposed extension of the finite-time bounds of AVI/API to MTRL. Despite being an extension of previous works, we derive these results to justify our approach, showing how the error propagation in AVI/API can theoretically benefit from learning multiple tasks jointly. 2. We leverage these results, proposing a neural network architecture, for which these bounds hold with minor assumptions, that allows us to learn multiple tasks with a single regressor extracting a common representation. We show empirical evidence of the consequence of our bounds by means of a variant of Fitted Q-Iteration (FQI) , based on our shared network and for which our bounds apply, that we call Multi Fitted Q-Iteration (MFQI). Then, we perform an empirical evaluation in challenging RL problems, proposing multi-task variants of the Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) algorithms. These algorithms are practical implementations of the more general AVI/API framework, designed to solve complex problems. In this case, the bounds apply to these algorithms only under some assumptions, e.g. a stationary sampling distribution. The outcome of the empirical analysis joins the theoretical results, showing significant performance improvements compared to the single-task version of the algorithms in various RL problems, including several MuJoCo domains. Let B(X) be the space of bounded measurable functions w.r.t. the σ-algebra σ_X, and similarly B(X, L) be the same bounded by L < ∞. A Markov Decision Process (MDP) is defined as a 5-tuple M = ⟨S, A, P, R, γ⟩, where S is the state space, A is the action space, P: S × A → S is the transition distribution, where P(s'|s, a) is the probability of reaching state s' when performing action a in state s, R: S × A × S → R is the reward function, and γ is the discount factor. A deterministic policy π maps, for each state, the action to perform: π: S → A. Given a policy π, the value of an action a in a state s represents the expected discounted cumulative reward obtained by performing a in s and following π thereafter: Q^π(s, a) = E[Σ_{i≥0} γ^i r_{i+1} | s_0 = s, a_0 = a, π], where r_{i+1} is the reward obtained after the i-th transition. The expected discounted cumulative reward is maximized by following the optimal policy π*, which is the one that determines the optimal action values, i.e., the ones that satisfy the Bellman optimality equation: Q*(s, a) = E_{s'∼P(·|s,a)}[R(s, a, s') + γ max_{a'} Q*(s', a')]. We consider a MTRL problem as a set of MDPs M^(t) = ⟨S^(t), A^(t), P^(t), R^(t), γ^(t)⟩, where t ∈ {1, . . ., T} and T is the number of MDPs.
For each MDP M (t), a deterministic policy π t: In this setting, the goal is to maximize the sum of the expected cumulative discounted reward of each task. In our theoretical analysis of the MTRL problem, the complexity of representation plays a central role. As done in , we consider the Gaussian complexity, a variant of the well-known Rademacher complexity, to measure the complexity of the representation. Given a setX ∈ X T n of n input samples for each task t ∈ {1, . . ., T}, and a class H composed of k ∈ {1, . . ., K} functions, the Gaussian complexity of a random set H(X) = {(h k (X ti)): h ∈ H} ⊆ R KT n is defined as follows: where γ tki are independent standard normal variables. We also need to define the following quantity, taken from: let γ be a vector of m random standard normal variables, and f ∈ F: Equation 2 can be viewed as a Gaussian average of Lipschitz quotients, and appears in the bounds provided in this work. Finally, we define L(F) as the upper bound of the Lipschitz constant of all the functions f in the function class F. The following theoretical study starts from the derivation of theoretical guarantees for MTRL in the AVI framework, extending the of in the MTRL scenario. Then, to bound the approximation error term in the AVI bound, we extend the described in to MTRL. As we discuss, the ing bounds described in this section clearly show the benefit of sharing representation in MTRL. To the best of our knowledge, this is the first general for MTRL; previous works have focused on finite MDPs or linear models . problem: where we use f = (f 1, . . ., f T), w = (w 1, . . ., w T), and define the minimizers of Equation asŵ, h, andf. We assume that the loss function: R × R → is 1-Lipschitz in the first argument for every value of the second argument. While this assumption may seem restrictive, the obtained can be easily scaled to the general case. To use the principal of this section, for a generic loss function, it is possible to use (·) = (·) / max, where max is the maximum value of. The expected loss over the tasks, given w, h and f is the task-averaged risk: The minimum task-averaged risk, given the set of tasks µ and the hypothesis classes W, H and F is ε * avg, and the corresponding minimizers are w *, h * and f *. We start by considering the bound for the AVI framework which applies for the single-task scenario. Theorem 1. (Theorem 3.4 of) Let K be a positive integer, and ν, we have: where with E(ε 0, . . ., ε K−1 ; r) = K−1 k=0 α 2r k ε k, the two coefficients c VI1,ρ,ν, c VI2,ρ,ν, the distributions ρ and ν, and the series α k are defined as in. In the multi-task scenario, let the average approximation error across tasks be: where Q t,k+1 =f t,k •ĥ k •ŵ t,k, and T * t is the optimal Bellman operator of task t. In the following, we extend the AVI bound of Theorem 1 to the multi-task scenario, by computing the average loss across tasks and pushing inside the average using Jensen's inequality. Theorem 2. Let K be a positive integer, and we have: Remarks Theorem 2 retains most of the properties of Theorem 3.4 of , except that the regression error in the bound is now task-averaged. Interestingly, the second term of the sum in Equation depends on the average maximum reward for each task. In order to obtain this we use an overly pessimistic bound on γ and the concentrability coefficients, however this approximation is not too loose if the MDPs are sufficiently similar. 
We bound the task-averaged approximation error ε avg at each AVI iteration k involved in following a derivation similar to the one proposed by , obtaining: Theorem 3. Let µ, W, H and F be defined as above and assume 0 ∈ H and f = 0, ∀f ∈ F. Then for δ > 0 with probability at least 1 − δ in the draw ofZ ∼ T t=1 µ n t we have that Remarks The assumptions 0 ∈ H and f = 0 for all f ∈ F are not essential for the proof and are only needed to simplify the . For reasonable function classes, the Gaussian complexity If sup w w(X) and sup h,w h(w(X)) can be uniformly bounded, then they are O(√ nT). For some function classes, the Gaussian average of Lipschitz quotients O(·) can be bounded independently from the number of samples. Given these assumptions, the first and the fourth term of the right hand side of Equation, which represent respectively the cost of learning the meta-state space w and the task-specific f mappings, are both O(1 / √ n). The second term represents the cost of learning the multi-task representation h and is O(1 / √ nT), thus vanishing in the multi-task limit T → ∞. The third term can be removed if ∀h ∈ H, ∃p 0 ∈ P: h(p) = 0; even when this assumption does not hold, this term can be ignored for many classes of interest, e.g. neural networks, as it can be arbitrarily small. The last term to be bounded in is the minimum average approximation error ε * avg at each AVI iteration k. Recalling that the task-averaged approximation error is defined as in, applying Theorem 5.3 by we obtain: with C AE defined as in. Final remarks The bound for MTRL is derived by composing the in Theorems 2 and 3, and Lemma 4. The above highlight the advantage of learning a shared representation. The bound in Theorem 2 shows that a small approximation error is critical to improve the convergence towards the optimal action-value function, and the bound in Theorem 3 shows that the cost of learning the shared representation at each AVI iteration is mitigated by using multiple tasks. This is particularly beneficial when the feature representation is complex, e.g. deep neural networks. As stated in the remarks of Equation, the benefit of MTRL is evinced by the second component of the bound, i.e. the cost of learning h, which vanishes with the increase of the number of tasks. Obviously, adding more tasks require the shared representation to be large enough to include all of them, undesirably causing the term sup h,w h(w(X)) in the fourth component of the bound to increase. This introduces a tradeoff between the number of features and number of tasks; however, for Figure 1: (a) The architecture of the neural network we propose to learn T tasks simultaneously. The w t block maps each input x t from task µ t to a shared set of layers h which extracts a common representation of the tasks. Eventually, the shared representation is specialized in block f t and the output y t of the network is computed. Note that each block can be composed of arbitrarily many layers. a reasonable number of tasks the number of features used in the single-task case is enough to handle them, as we show in some experiments in Section 5. Notably, since the AVI/API framework provided by provides an easy way to include the approximation error of a generic function approximator, it is easy to show the benefit in MTRL of the bound in Equation. Despite being just multi-task extensions of previous works, our are the first one to theoretically show the benefit of sharing representation in MTRL. 
Moreover, they serve as a significant theoretical motivation, besides to the intuitive ones, of the practical algorithms that we describe in the following sections. We want to empirically evaluate the benefit of our theoretical study in the problem of jointly learning T different tasks µ t, introducing a neural network architecture for which our bounds hold. Following our theoretical framework, the network we propose extracts representations w t from inputs x t for each task µ t, mapping them to common features in a set of shared layers h, specializing the learning of each task in respective separated layers f t, and finally computing the output (Figure 1(a) ). The idea behind this architecture is not new in the literature. For instance, similar ideas have already been used in DQN variants to improve exploration on the same task via bootstrapping and to perform MTRL . The intuitive and desirable property of this architecture is the exploitation of the regularization effect introduced by the shared representation of the jointly learned tasks. Indeed, unlike learning a single task that may end up in overfitting, forcing the model to compute a shared representation of the tasks helps the regression process to extract more general features, with a consequent reduction in the variance of the learned function. This intuitive justification for our approach, joins the theoretical benefit proven in Section 3. Note that our architecture can be used in any MTRL problem involving a regression process; indeed, it can be easily used in value-based methods as a Q-function regressor, or in policy search as a policy regressor. In both cases, the targets are learned for each task µ t in its respective output block f t. Remarkably, as we show in the experimental Section 5, it is straightforward to extend RL algorithms to their multi-task variants only through the use of the proposed network architecture, without major changes to the algorithms themselves. To empirically evince the effect described by our bounds, we propose an extension of FQI , that we call MFQI, for which our AVI bounds apply. Then, to empirically evaluate our approach in challenging RL problems, we introduce multi-task variants of two well-known DRL algorithms: DQN and DDPG , which we call Multi Deep Q-Network (MDQN) and Multi Deep Deterministic Policy Gradient (MDDPG) respectively. Note that for these methodologies, our AVI and API bounds hold only with Figure 2: Discounted cumulative reward averaged over 100 experiments of DQN and MDQN for each task and for transfer learning in the Acrobot problem. An epoch consists of 1, 000 steps, after which the greedy policy is evaluated for 2, 000 steps. The 95% confidence intervals are shown. the simplifying assumption that the samples are i.i.d.; nevertheless they are useful to show the benefit of our method also in complex scenarios, e.g. MuJoCo . We remark that in these experiments we are only interested in showing the benefit of learning multiple tasks with a shared representation w.r.t. learning a single task; therefore, we only compare our methods with the single task counterparts, ignoring other works on MTRL in literature. Experiments have been developed using the MushroomRL library , and run on an NVIDIA R DGX Station TM and Intel R AI DevCloud. Refer to Appendix B for all the details and our motivations about the experimental settings. 
As a first empirical evaluation, we consider FQI, as an example of an AVI algorithm, to show the effect described by our theoretical AVI bounds in experiments. We consider the Car-On-Hill problem as described in , and select four different tasks from it changing the mass of the car and the value of the actions (details in Appendix B). Then, we run separate instances of FQI with a single task network for each task respectively, and one of MFQI considering all the tasks simultaneously. Figure 1(b) shows the L 1 -norm of the difference between Q * and Q π K averaged over all the tasks. It is clear how MFQI is able to get much closer to the optimal Q-function, thus giving an empirical evidence of the AVI bounds in Theorem 2. For completeness, we also show the advantage of MFQI w.r.t. FQI in performance. Then, in Figure 1 (c) we provide an empirical evidence of the benefit of increasing the number of tasks in MFQI in terms of both quality and stability. As in , our MDQN uses separate replay memories for each task and the batch used in each training step is built picking the same number of samples from each replay memory. Furthermore, a step of the algorithm consists of exactly one step in each task. These are the only minor changes to the vanilla DQN algorithm we introduce, while all other aspects, such as the use of the target network, are not modified. Thus, the time complexity of MDQN is considerably lower than vanilla DQN thanks to the learning of T tasks with a single model, but at the cost of a higher memory complexity for the collection of samples for each task. We consider five problems with similar state spaces, sparse rewards and discrete actions: Cart-Pole, Acrobot, Mountain-Car, Car-On-Hill, and Inverted-Pendulum. The implementation of the first three problems is the one provided by the OpenAI Gym library , while Car-On-Hill is described in and Inverted-Pendulum in. Figure 2(a) shows the performance of MDQN w.r.t. to vanilla DQN that uses a single-task network structured as the multi-task one in the case with T = 1. The first three plots from the left show good performance of MDQN, which is both higher and more stable than DQN. In Car-On-Hill, MDQN is slightly slower than DQN to reach the best performance, but eventually manages to be more stable. Finally, the Inverted-Pendulum experiment is clearly too easy to solve for both approaches, but it is still useful for the shared feature extraction in MDQN. The described provide important hints about the better quality of the features extracted by MDQN w.r.t. DQN. To further demonstrate this, we evaluate the performance of DQN on Acrobot, arguably the hardest of the five problems, using a single-task network with the shared parameters in h initialized with the weights of a multi-task network trained with MDQN on the other four problems. Arbitrarily, the pre-trained weights can be adjusted during the learning of the new task or can be kept fixed and only the remaining randomly initialized parameters in w and f are trained. From Figure 2(b), the advantages of initializing the weights are clear. In particular, we compare the performance of DQN without initialization w.r.t. DQN with initialization in three settings: in Unfreeze-0 the initialized weights are adjusted, in NoUnfreeze they are kept fixed, and in Unfreeze-10 they are kept fixed until epoch 10 after which they start to be optimized. Interestingly, keeping the shared weights fixed shows a significant performance improvement in the earliest epochs, but ceases to improve soon. 
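A sketch of the replay bookkeeping described above for MDQN, assuming one buffer per task and batches that draw the same number of transitions from every buffer (the buffer capacity and the transition tuple layout are our own choices); the task index is kept with each transition so that the forward pass can route it through the corresponding w_t and f_t blocks.

```python
import random
from collections import deque

class MultiTaskReplay:
    """One replay memory per task; each batch draws an equal share from every memory."""
    def __init__(self, n_tasks, capacity=50000):
        self.buffers = [deque(maxlen=capacity) for _ in range(n_tasks)]

    def add(self, task_id, transition):
        self.buffers[task_id].append(transition)  # transition = (s, a, r, s_next, done)

    def sample(self, batch_size):
        per_task = batch_size // len(self.buffers)
        batch = []
        for task_id, buf in enumerate(self.buffers):
            for transition in random.sample(list(buf), min(per_task, len(buf))):
                batch.append((task_id, *transition))  # task id routes the w_t / f_t blocks
        random.shuffle(batch)
        return batch
```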
On the other hand, the adjustment of weights from the earliest epochs shows improvements only compared to the uninitialized network in the intermediate stages of learning. The best are achieved by starting to adjust the shared weights after epoch 10, which is approximately the point at which the improvement given by the fixed initialization starts to lessen. In order to show how the flexibility of our approach easily allows to perform MTRL in policy search algorithms, we propose MDDPG as a multi-task variant of DDPG. As an actor-critic method, DDPG requires an actor network and a critic network. Intuitively, to obtain MDDPG both the actor and critic networks should be built following our proposed structure. We perform separate experiments on two sets of problems with similar continuous state and action spaces: the first set includes Inverted-Pendulum, Inverted-Double-Pendulum, and Inverted-Pendulum-Swingup as implemented in the pybullet library, whereas the second set includes Hopper-Stand, Walker-Walk, and Half-Cheetah-Run as implemented in the. Figure 3(a) shows a relevant improvement of MDDPG w.r.t. DDPG in the pendulum tasks. Indeed, while in Inverted-Pendulum, which is the easiest problem among the three, the performance of MDDPG is only slightly better than DDPG, the difference in the other two problems is significant. The advantage of MDDPG is confirmed in Figure 3 (c) where it performs better than DDPG in Hopper and equally good in the other two tasks. Again, we perform a TL evaluation of DDPG in the problems where it suffers the most, by initializing the shared weights of a single-task network with the ones of a multi-task network trained with MDDPG on the other problems. Figures 3(b) and 3(d) show evident advantages of pre-training the shared weights and a significant difference between keeping them fixed or not. Our work is inspired from both theoretical and empirical studies in MTL and MTRL literature. In particular, the theoretical analysis we provide follows previous about the theoretical properties of multi-task algorithms. For instance, and prove the theoretical advantages of MTL based on linear approximation. More in detail, derives bounds on MTL when a linear approximator is used to extract a shared representation among tasks. , which we considered in this work, describes similar that extend to the use of non-linear approximators. Similar studies have been conducted in the context of MTRL. Among the others, and give theoretical proofs of the advantage of learning from multiple MDPs and introduces new algorithms to empirically support their claims, as done in this work. Generally, contributions in MTRL assume that properties of different tasks, e.g. dynamics and reward function, are generated from a common generative model. About this, interesting analyses consider Bayesian approaches; for instance assumes that the tasks are generated from a hierarchical Bayesian model, and likewise considers the case when the value functions are generated from a common prior distribution. Similar considerations, which however does not use a Bayesian approach, are implicitly made in , , and also in this work. In recent years, the advantages of MTRL have been empirically evinced also in DRL, especially exploiting the powerful representational capacity of deep neural networks. For instance, and propose to derive a multi-task policy from the policies learned by DQN experts trained separately on different tasks. 
compares to a therein introduced variant of DQN, which is very similar to our MDQN and the one in , showing how their method overcomes it in the Atari benchmark. Further developments, extend the analysis to policy search , and to multi-goal RL . addresses the problem of balancing the learning of multiple tasks with a single deep neural network proposing a method that uniformly adapts the impact of each task on the training updates of the agent. We have theoretically proved the advantage in RL of using a shared representation to learn multiple tasks w.r.t. learning a single task. We have derived our extending the AVI/API bounds to MTRL, leveraging the upper bounds on the approximation error in MTL provided in. The of this analysis show that the error propagation during the AVI/API iterations is reduced according to the number of tasks. Then, we proposed a practical way of exploiting this theoretical benefit which consists in an effective way of extracting shared representations of multiple tasks by means of deep neural networks. To empirically show the advantages of our method, we carried out experiments on challenging RL problems with the introduction of multi-task extensions of FQI, DQN, and DDPG based on the neural network structure we proposed. As desired, the favorable empirical confirm the theoretical benefit we described. A PROOFS Proof of Theorem 2. We compute the average expected loss across tasks: with γ = max VI,ρ,ν (K; t, r), and R max,avg = 1 /T T t=1 R max,t. Considering the term 1 /T Using Jensen's inequality: So, now we can write as avg (ε avg,0, . . ., ε avg,K−1 ; r) with ε avg,k = 1 /T T t=1 ε t,k and E avg (ε avg,0, . . ., ε avg,K−1 ; r) = Proof of Lemma 4. Let us start from the definition of optimal task-averaged risk: where Q * t,k, with t ∈ [1, T], are the minimizers of ε avg,k. Consider the taskť such thatť we can write the following inequality: By the application of Theorem 5.3 by to the right hand side, and defining Qť k,i ν, we obtain: Squaring both sides yields the : We start by considering the bound for the API framework: ν, we have: where with E(ε 0, . . ., ε K−1 ; r) = K−1 k=0 α 2r k ε k, the three coefficients c PI1,ρ,ν, c PI2,ρ,ν, c PI3,ρ,ν, the distributions ρ and ν, and the series α k are defined as in. From Theorem 5, by computing the average loss across tasks and pushing inside the average using Jensen's inequality, we derive the API bounds averaged on multiple tasks. ν, we have: avg (ε avg,0, . . ., ε avg,K−1 ; r) with Proof of Theorem 6. The proof is very similar to the one for AVI. We compute the average expected loss across tasks: with ε avg,k = 1 /T T t=1 ε t,k and E avg (ε avg,0, . . ., ε avg,K−1 ; r) = K−1 k=0 α 2r k ε avg,k. Proof of Theorem 3. Let w * 1,..., w * T, h * and f * 1,..., f * T be the minimizers of ε * avg, then: We proceed to bound the three components individually: • C can be bounded using Hoeffding's inequality, with probability 1 − δ /2 by ln(2 /δ) /(2nT), as it contains only nT random variables bounded in the interval; Firstly, to bound L(F), let y, y ∈ R KT n, where y = (y ti) with y ti ∈ R K and y = (y ti) with y ti ∈ R K. We can write the following: whence L(F) ≤ L(F). Then, we bound: Then, since it is possible to bound the Euclidean diameter using the norm of the supremum value in the set, we bound D(S) ≤ 2 sup h,w h(w(X)) and D(W (X)) ≤ 2 sup w∈W T w(X). 
E sup g∈F γ, g(y) − g(y) = E sup f ∈F T ti γ ti (f t (y ti) − f t (y ti)) whence O(F) ≤ √ To minimize the last term, it is possible to choose y 0 = 0, as f = 0, ∀f ∈ F, ing in min y∈Y G(F(y)) = G(F) = 0. Then, substituting in, and recalling that G(S) ≤ G(S):
A study on the benefit of sharing representation in Multi-Task Reinforcement Learning.
1,349
scitldr
We present a 3D capsule architecture for processing of point clouds that is equivariant with respect to the SO rotation group, translation and permutation of the unordered input sets. The network operates on a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end equivariance through a novel 3D quaternion group capsule layer, including an equivariant dynamic routing procedure. The capsule layer enables us to disentangle geometry from pose, paving the way for more informative descriptions and a structured latent space. In the process, we theoretically connect the process of dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares (IRLS) problems with provable convergence properties, enabling robust pose estimation between capsule layers. Due to the sparse equivariant quaternion capsules, our architecture allows joint object classification and orientation estimation, which we validate empirically on common benchmark datasets. It is now well understood that in order to learn a compact representation of the input data, one needs to respect the symmetries in the problem domain (; a). Arguably, one of the primary reasons of the success of 2D convolutional neural networks (CNN) is the translation-invariance of the 2D convolution acting on the image grid (; . Recent trends aim to transfer this success into the 3D domain in order to support many applications such as shape retrieval, shape manipulation, pose estimation, 3D object modeling and detection. There, the data is naturally represented as sets of 3D points or a point cloud (a; b). Unfortunately, extension of CNN architectures to 3D point clouds is non-trivial due to two reasons: 1) point clouds are irregular and unstructured, 2) the group of transformations that we are interested in is more complex as 3D data is often observed under arbitrary non-commutative SO rotations. As a , achieving appropriate embeddings requires 3D networks that work on points to be equivariant to these transformations, while also being invariant to the permutations of the point set. In order to fill this important gap, we propose the quaternion equivariant point capsule network or QE-Network that is suited to process point clouds and is equivariant to SO rotations compactly parameterized by quaternions (Fig. 2), in addition to preserved translation and permutation equivariance. Inspired by the local group equivariance ), we efficiently cover SO by restricting ourselves to the sparse set of local reference frames (LRF) that collectively characterize the object orientation. The proposed capsule layers deduces equivariant latent representations by robustly combining those local LRFs using the proposed Weiszfeld dynamic routing. Hence, our latent features specify to local orientations disentangling the pose from object existence. Such explicit storage is unique to our work and allows us to perform rotation estimation jointly with object classification. Our final architecture is a hierarchy of QE-networks, where we use classification error as the only training cue and adapt a Siamese version when the relative rotation is to be regressed. We neither explicitly supervise the network with pose annotations nor train by augmenting rotations. Overall, our contributions are: 1. We propose a novel, fully SO-equivariant capsule architecture that is tailored for simultaneous classification and pose estimation of 3D point clouds. 
This network produces invariant latent representations while explicitly decoupling the orientation into capsules, thus attaining equivariance. (Figure 1: the LRFs randomly sampled from (a) are the inputs to the first layer of our network. Subsequently, we obtain a multi-channel LRF that is a set of reference frames per pooling center (d). Holistically, our network aggregates the LRFs to arrive at rotation equivariant capsules.) Note that equivariance has not been previously achieved regarding the quaternion parameterization of the 3D special orthogonal group. 2. By utilizing LRFs on points, we reduce the space of orientations that we consider and hence can work sparsely on a subset of the group elements. 3. We theoretically prove the equivariance properties of our 3D network regarding the quaternion group. Moreover, to the best of our knowledge, we for the first time establish a connection between the dynamic routing of and Generalized Weiszfeld iterations. By that, we theoretically argue for the convergence of the employed dynamic routing. 4. We experimentally demonstrate the capabilities of our network on classification and orientation estimation of 3D shapes. In this paper we will speak of the equivariance of point clouds under the actions of quaternions. We now provide the necessary background required for the grasp of this content. Definition 1 (Equivariant Map). For a G-space acting on X, the map Φ: G × X → X is said to be equivariant if its domain and co-domain are acted on by the same symmetry group (; a): where g 1 ∈ G and g 2 ∈ G. Equivalently Φ(T(g 1) x) = T(g 2) Φ(x), where T(·) is a linear representation of the group G. Note that T(·) does not have to commute. It suffices for T(·) to be a homomorphism: In this paper we use a stricter form of equivariance and consider g 2 = g 1. Definition 2 (Equivariant Network). An architecture or network is said to be equivariant if all of its layers are equivariant maps. Due to the transitivity of the equivariance, stacking up equivariant layers will result in globally equivariant networks, e.g., rotating the input will produce output vectors which are transformed by the same rotation. The choice of 4-vector quaternions has multiple motivations: 1. All 3-vector formulations suffer from infinitely many singularities as the angle goes to 0, whereas quaternions avoid those. 2. 3-vectors also suffer from infinitely many redundancies (the norm can grow indefinitely). Quaternions have a single redundancy: q = −q, a condition that is in practice easy to enforce. 3. Computing the actual 'manifold mean' on the Lie algebra requires iterative techniques with subsequent updates on the tangent space. Such iterations are computationally harmful for a differentiable GPU implementation. Definition 3 (Quaternion). A quaternion q is an element of Hamilton algebra H 1, extending the complex numbers with three imaginary units i, j, k, in the form q = q 1 1 + q 2 i + q 3 j + q 4 k, where q 1 ∈ R denotes the scalar part and v = (q 2, q 3, q 4)^T ∈ R 3 the vector part. (Algorithm 1: Quaternion Equivariant Dynamic Routing.) The conjugate q̄ of the quaternion q is given by q̄ := q 1 − q 2 i − q 3 j − q 4 k. A unit quaternion q ∈ H 1, with q · q̄ = 1 and q −1 = q̄, gives a compact and numerically stable parametrization to represent orientations of objects on the unit sphere S 3, avoiding gimbal lock and singularities. Identifying antipodal points q and −q with the same element, the unit quaternions form a double covering group of SO(3).
H 1 is closed under the non-commutative multiplication or the Hamilton product: Definition 4 (Linear Representation of H 1). We follow and use the To be concise we will use capital letters to refer to the matrix representation of quaternions e.g. Q ≡ T (q), G ≡ T (g). Note that T (·), the injective homomorphism to the orthonormal matrix ring, by construction satisfies the condition in Dfn. 1 : det(Q) = 1, Q = Q −1, Q = Q i,: = Q:,i = 1 and Q − q 1 I is skew symmetric: Q + Q = 2q 1 I. It is easy to verify these properties. T linearizes the Hamilton product or the group composition: g • q T (g)q Gq. Definition 5 (Point Cloud). We define a 3D surface to be a differentiable 2-manifold embedded in the ambient 3D Euclidean space: M 2 ∈ R 3 and a point cloud to be a discrete subset sampled on Definition 6 (Local Geometry). For a smooth point cloud {x i} ∈ M 2 ⊂ R N ×3, a local reference frame (LRF) is defined as an ordered basis of the tangent space at x, T x M, consisting of orthonormal vectors: Usually the first component is defined to be the surface normal ∂ 1 n ∈ S 2: n = 1 and the second one is picked according to a modality dependent heuristic. Note that recent trends such as acknowledge the ambiguity and either employ a gauge (tangent frame) equivariant design or propagate the determination of a certain direction until the last layer . Here, we will assume that ∂ 2 can be uniquely and igure 2: Our quaternion equivariant (QE) network for processing local patches: Our input is a 3D point set X on which we query local neighborhoods {x i} with precomputed LRFs {q i}. Essentially, we learn the parameters of a fully connected network that continuously maps the canonicalized local point set to transformations t i, which are used to compute hypotheses (votes) from input poses. By a special dynamic routing procedure that uses the activations determined in a previous layer, we arrive at latent capsules that are composed of a set of orientationsq i and new activationsα i. Thanks to the decoupling of local reference frames,α i is invariant and orientationsq i are equivariant to input rotations. All the operations and hence the entire QE-network are equivariant achieving a guaranteed disentanglement of the rotation parameters. Hat symbol (q) refers to'estimated'. repeatably computed, a reasonable assumption for the point sets we consider . For the cases where this does not hold, we will rely on the network's robustness. We will explain our method of choice in Sec. 4 and visualize LRFs of an airplane object in Fig. 1. Disentangling orientation from representations requires guaranteed equivariances and invariances. Yet, the original capsule networks of cannot achieve equivariance to general groups. To this end, proposed to use a manifold-mean and a special aggregation that makes sure that the trainable transformations get pose-aligned points as input. We will extend this idea to the non-abelian SO and design capsule networks sparsely operating on a set of LRFs computed on local neighborhoods of points, parameterized by quaternions. In the following, we first explain our novel capusle layers, the main building block of our architecture. We then show how to stack those layers via a simple aggregation ing in an SO-equivariant 3D capsule network that yields invariant representations (or activations) as well as equivariant rotations (latent capsules). 
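Since the Hamilton product and its 4x4 linear representation T(q) are used repeatedly below (for instance to form the votes v_{i,j} = q_i ∘ t_{i,j}), here is a small NumPy sketch of these operations. It follows the standard (w, x, y, z) convention and is not taken from the authors' implementation.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product p * q for quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def conjugate(q):
    """Conjugate (inverse for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def T(q):
    """Orthonormal 4x4 left-multiplication matrix: T(g) @ q == hamilton(g, q)."""
    w, x, y, z = q
    return np.array([
        [w, -x, -y, -z],
        [x,  w, -z,  y],
        [y,  z,  w, -x],
        [z, -y,  x,  w],
    ])
```

A quick numerical check confirms the two facts the equivariance arguments rely on: T(g) is orthonormal for unit g, and T(g) @ q equals hamilton(g, q).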
To construct equivariant layers on the group of rotations, we are required to define a left-equivariant averaging operator A that is invariant under permutations of the group elements, as well as a distance metric δ that remains unchanged under the action of the group. For these, we make the following choices: Definition 7 (Geodesic Distance). The Riemannian (geodesic) distance in the manifold of rotations lead to the following geodesic distance δ(·) ≡ d quat (·): Definition 8 (Quaternion Mean µ(·)). For a set of Q rotations S = {q i} and associated weights w = {w i}, the weighted mean operator A(S, w): H 1 n × R n → H 1 n is defined through the following maximization procedure : where M ∈ R 4×4 is defined as: The average quaternion is the eigenvector of M corresponding to the maximum eigenvalue. This operation lends itself to both analytic and automatic differentiation . Theorem 1. Quaternions, the employed mean A(S, w) and geodesic distance δ(·) enjoy the following properties: Intermediate Capsules Rotation Estimation Figure 3: Our entire capsule architecture. We hierarchically send all the local patches to our Qnetwork as shown in Fig. 2. At each level the points are pooled in order to increase the receptive field, gradually reducing the LRFs into a single capsule per class. We use classification and pose estimation (in the siamese case) as supervision cues to train the point-to-transform maps. A 3. The transformations g ∈ H 1 preserve the geodesic distance δ(·) given in Dfn. 7. Proof. The proofs are given in the supplementary material. We also note that the above mean is closed form, differentiable and can be implemented batchwise. We are now ready to construct the group dynamic routing (DR) by agreement that is equivariant thanks to Thm. 1. The core idea is to route from or assign the primary capsules that constitute the input LRF set, to the latent capsules by an iterative clustering which respects the group structure. At each step, we assign the weighted group mean to each output capsule. The weights w ← σ(x, y) are inversely propotional to the distance between the vote quaternion and the new quaternion (cluster center). See Alg. 1 for details. In the following, we analyze our variant of routing as an interesting case of the affine, Riemannian Weiszfeld algorithm (; 2014). q−2 the equivariant routing procedure given in Alg. 1 is a variant of the affine subspace Wieszfeld algorithm (; 2014) that is a robust algorithm for computing the L q geometric median. Proof Sketch. The proof follows from the definition of Weiszfeld iteration and the mean and distance operators defined in Sec. 3.1. We first show that computing the weighted mean is equivalent to solving the normal equations in the iteratively reweighted least squares (IRLS) scheme . Then, the inner-most loop correspond to the IRLS or Weiszfeld iterations. We provide the detailed proof in supplementary material. Note that, in practice one is quite free to choose the weighting function σ(·) as long as it is inversely proportional to the geodesic distance and concave . We leave the analyses of the variants of these algorithms as a future work. The original dynamic routing can also be formulated as a clustering procedure with a KL divergence regularization. This holistic view paves the way to better routing algorithms. Our perspective is akin yet more geometric due to the group structure of the parameter space. 
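The weighted quaternion mean of Definition 8 and the agreement weighting are the computational core of Algorithm 1. The sketch below implements the mean via the eigendecomposition of M = sum_i w_i q_i q_i^T; the distance 1 - |<q1, q2>| and the sigmoid-style weight are assumptions standing in for the paper's exact δ(·) and σ(·), and the output activation is computed here simply as the mean agreement.

```python
import numpy as np

def quaternion_mean(quats, weights):
    """Weighted quaternion average (Dfn. 8): eigenvector of
    M = sum_i w_i q_i q_i^T associated with the largest eigenvalue."""
    M = sum(w * np.outer(q, q) for q, w in zip(quats, weights))
    eigvals, eigvecs = np.linalg.eigh(M)
    mean = eigvecs[:, -1]                      # top eigenvector
    return mean if mean[0] >= 0 else -mean     # resolve the q / -q ambiguity

def geodesic_distance(q1, q2):
    """A distance unchanged by a common rotation of both arguments (assumed form)."""
    return 1.0 - abs(float(np.dot(q1, q2)))

def agreement_weight(vote, center, activation):
    """One admissible sigma(.): decreasing in the distance, scaled by the activation."""
    return activation / (1.0 + np.exp(geodesic_distance(vote, center)))

def dynamic_route(votes, activations, n_iters=3):
    """Weiszfeld-style routing: alternate weighted averaging and re-weighting."""
    weights = np.array(activations, dtype=float)
    for _ in range(n_iters):
        center = quaternion_mean(votes, weights)
        weights = np.array([agreement_weight(v, center, a)
                            for v, a in zip(votes, activations)])
    return center, weights.mean()   # output capsule pose and its activation
```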
Thanks to the connection to Weiszfeld algorithm, the convergence behavior of our dynamic routing can be directly analyzed within the theoretical framework presented by Aftab et al. (2014; . Theorem 2. Under mild assumptions provided in the appendix, the sequence of the DR-iterates generated by the inner-most loop almost surely converges to a critical point. Proof Sketch. Proof, given in the appendix, is a direct consequence of Lemma 1 and directly exploits the connection to the Weiszfeld algorithm. The essential ingredient of our architecture, QE-Network, is shown in Fig. 2 . We also provide a corresponding pseudocode in Alg. 3 of suppl. material. The input of the QE-Network are a local patch of points with coordinates x i ⊂ R K×3, rotations parametrized as quaternions and activations . q i also represents input primary capsules and local reference frames. N c is the number of input capsule channels per point and it is equal to the number of output capsules (M) from the last layer. In the initial layer, q i represents the pre-computed LRFs and N c is equal to 1. Given points x i and rotations q i, we compute the quaternion average µ i in channel-wise as the initial pose candidates:. These candidates are used to bring the receptive field in multiple canonical orientations by rotating the points: Since the points in the local receptive field lie in continuous R 3, training a discrete set of pose transformations t i,j based on local coordinates is not possible. Instead, we employ a point-to-transform network t(·): c ×4 that maps the point in multiple canonical orientations to transformations. The network is shared over all points to compute the transformations, which are used to calculate the votes for dynamic routing as v i,j = q i • t i,j. The network t(·) consists of fully-connected layers that regresses the transformations, similar to common operators for continuous convolutions (Schütt et al., 2017; . It is the continuous alternative to directly optimizing transformations lying in a grid kernel, as it is done in the original dynamic routing by . Note that t(·) predicts quaternions by unitnormalizing the regressed output:. Although Riemannian layers of Bécigneul & or spherical predictions of can improve the performance, the simple strategy works reasonably for our case. After computing the votes, we utilize the input activation α i to iteratively refine the output capsules (weighted average of votes)q i and activationsα i by routing by agreement as shown in Alg. 1. In order to gradually increase the receptive field, we stack QE-networks creating a deep hierarchy, pooling the points and the LRFs before each layer. Note that we are allowed to do so thanks to the properties of equivariance. In particular, we input N = 64 patches to our architecture that is composed of two QE-networks. We call the centers of these patches pooling centers. In the first layer, each of those centers is linked to their immediate vicinity leading to K = 9-star local connectivity from which we compute the 64 × 64 × 4 intermediary capsules. The input LRFs of the first layer are sampled from pre-calculated LRF-set and the input activation is set to 1. The LRFs in the second layer l = 2 are the output capsules of the first layer, l = 1 and are routed to the output capsules that are as many as the number of classes C, M 2 = C. The activation of the second layer is updated by the output of the first layer as well. This construction is shown in Fig. 3. 
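A sketch of the vote computation inside one QE layer: a small shared MLP t(·) maps each canonicalized point to M unit quaternions, and the votes are v_{i,j} = q_i ∘ t_{i,j}. The patch is assumed to have already been rotated into a canonical orientation using the channel-wise mean µ_i; the hidden width and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

def qmul(p, q):
    """Batched Hamilton product for (..., 4) tensors in (w, x, y, z) order."""
    pw, px, py, pz = p.unbind(-1)
    qw, qx, qy, qz = q.unbind(-1)
    return torch.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], dim=-1)

class PointToTransform(nn.Module):
    """t(.): shared MLP mapping a canonicalized 3D point to M unit quaternions."""
    def __init__(self, n_out_capsules=64, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4 * n_out_capsules))
        self.n_out = n_out_capsules

    def forward(self, pts):                        # pts: (K, 3) canonicalized points
        t = self.mlp(pts).view(-1, self.n_out, 4)  # (K, M, 4)
        return t / t.norm(dim=-1, keepdim=True)    # unit-normalize into quaternions

def qe_votes(canon_points, lrf_quats, t_net):
    """Votes v_ij = q_i * t_ij from canonicalized points and input LRFs q_i (K, 4)."""
    t = t_net(canon_points)                        # (K, M, 4)
    return qmul(lrf_quats.unsqueeze(1), t)         # (K, M, 4)
```

The returned votes, together with the input activations, are then passed to the routing procedure of Algorithm 1 to produce the layer's output capsules and activations.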
Specifically, for l = 1, we use K = 9, N l c = 1, M l = 64 and for l = 2, K = 64, N l c = 64, M l = C = 40. This way, in this last layer all the pooling centers act as a single patch (K = 64). A single QE-network acts on this patch to create the final C × 4 capsules and C activations. More details are reported in Alg. 3 of the appendix. Implementation Details We implement our network in PyTorch and use the ADAM optimizer with a learning rate of 0.001. The point-transformation mapping network is implemented by two FC-layers composed of 64 hidden units. We set the initial activation of the input LRF to 1.0. In each layer, we use 3 iterations of DR. For classification we use the spread loss and the rotation loss is identical to δ(·). Surface normals are computed by local plane fits . We compute the second axis of the LRF, ∂ 2, by FLARE , that uses the normalized projection of the point within the periphery of the support showing the largest distance, onto the tangent plane of the center: ∂ 2 = pmax−p pmax−p. Note that using other LRFs such as SHOT or the more modern GFrames of is possible. We found FLARE to be sufficient for our experiments. Prior to all operations, we flip all the LRF quaternions such that they lie on the northern hemisphere: {q i ∈ S 3 : q w i > 0}. (a), Point2Seq (b), Spherical CNNs , PRIN and PPF-FoldNet (PPF) (a). We also present a version of our algorithm (Var) that avoids the canonicalization within the QEnetwork. This is a non-equivariant network that we still train without data augmentation. While this version gets comparable to the state of the art for the NR/NR case, it cannot handle random SO variations (AR). Note that PPF uses the point-pair-feature encoding and hence creates invariant input representations. For the scenario of NR/AR, our equivariant version outperforms all the other methods, including equivariant spherical CNNs by a significant gap of at least 5% even when uses the mesh. The object rotational symmetries in this dataset are responsible for a significant portion of the errors we make. It is worth mentioning that we also trained TFNs for that task, but their memory demand made it infeasible to scale to this application. Number of Parameters. Use of LRFs helps us to restrict the rotation group to certain elements and thus we can use networks with significantly less parameters (as low as 0.44M) compared to others as shown in Tab. 1. Number of parameters in our network depends upon the number of classes, e.g. for ModelNet10 we have 0.047M parameters. Rotation estimation in 3D point clouds. Our network can estimate both the canonical and relative object rotations without pose-supervision. To evaluate this desired property, we used the well classified shapes on ModelNet10 dataset, a sub-dataset of Modelnet40 . We generate multiple instances per shape by transforming the instance with five arbitrary SO rotations. As we are also affected by the sampling of the point cloud, we resample the mesh five times and generate different pooling graphs across all the instances of the same shape. Our QE-architecture can estimate the pose in two ways: 1) by directly using the output capsule with the highest activation, 2) by a siamese architecture that computes the relative quaternion between the capsules that are maximally activated as shown in Fig. 4. Both modes of operation are free of the data augmentation and we give further schematics of the latter in our appendix Fig. 5. 
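A sketch of the second mode of operation, the siamese relative-pose estimate, and of the angular error used to score it; the capsule/activation array layout is an assumption.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product for (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def relative_pose(caps_x, act_x, caps_y, act_y):
    """Relative rotation between the maximally activated capsules of two
    point sets, such that q_rel * q_x ~= q_y."""
    qx = caps_x[np.argmax(act_x)]
    qy = caps_y[np.argmax(act_y)]
    qx_inv = qx * np.array([1.0, -1.0, -1.0, -1.0])   # conjugate of a unit quaternion
    return qmul(qy, qx_inv)

def angular_error_deg(q_pred, q_true):
    """Relative angle between two unit quaternions, in degrees."""
    d = min(1.0, abs(float(np.dot(q_pred, q_true))))
    return float(np.degrees(2.0 * np.arccos(d)))
```

The first mode simply reports caps_x[np.argmax(act_x)] itself as the canonical orientation of the input shape.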
Our against the baselines including a naive averaging of the LRFs (Mean LRF) and principal axis alignment (PCA) are reported in Tab. 2 as the relative angular error (RAE). We further include of and IT-Net , two state of the art 3D networks that iteratively aligns two point sets. These methods are in nature similar to iterative closest point (ICP) algorithm but 1) do not require an initialization (first iteration estimates the pose), 2) learn data driven updates. Methods that use mesh inputs such as Spherical CNNs cannot be included here as the random sampling of the same surface would not affect those. We also avoid methods that are just invariant to rotations (and hence cannot estimate the pose) such as Tensorfield Networks . Finally, note that, IT-net and PointLK need to train a lot of epoches (e.g. 500) with random SO rotation augmentation in order to get the models that cover the full SO, whereas we train only for ∼ 100 epochs. We include more details about the baselines in the appendix under Fig. 8. RAE between the ground truth and the prediction is computed as the relative angle in degrees: d(q 1, q 2)/π. Note that resampling and random rotations render the job of all methods difficult. However, both our version that tries to find a canonical alignment and the siamese variant which seeks a relative rotation are better than the baselines. As pose estimation of objects with rotational symmetry is a challenging task we also report on the non-symmetric subset (No Sym). Robustness against point and LRF resampling. Density changes in the local neighborhoods of the shape are an important cause of error for our network. Hence, we ablate by applying random resamplings (patch-wise dropout) to the objects in ModelNet10 dataset and repeating the pose estimation and classification as described above. The first part (LRF-10K) of Tab. 3 shows our findings against gradual increases of the number of patches. Here, we sample 2K LRFs from the 10K LRFs computed on an input point 10K. 100% dropout corresponds to 2K points in all columns. On second ablation, we reduce the amount of points on which we compute the LRFs, to 2K and 1K respectively. As we can see from the table, our network is robust towards the changes in the LRFs as well as the density of the points. Deep learning on point sets. The capability to process raw, unordered point clouds within a neural network is introduced by the prosperous PointNet (a) thanks to the point-wise convolutions and the permutation invariant pooling functions. Many works have extended PointNet primarily to increase the local receptive field size (b; b; ;). Point-clouds are generally thought of as sets. This makes any permutationinvariant network that can operate on sets an amenable choice for processing points . Unfortunately, common neural network operators in this category are solely equivariant to permutations and translations but to no other groups. Equivariance in Neural Networks. The early attempts to achieve invariant data representations usually involved data augmentation techniques to accomplish tolerance to input transformations (; ; 2017a). Motivated by the difficulty associated with augmentation efforts and acknowledging the importance of theoretically equivariant or invariant representations, the recent years have witnessed a leap in theory and practice of equivariant neural networks (; . While laying out the fundamentals of the group convolution, G-CNNs guaranteed equivariance with respect to finite symmetry groups. 
Similarly, Steerable CNNs and its extension to 3D voxels considered discrete symmetries only. Other works opted for designing filters as a linear combination of harmonic basis functions, leading to frequency domain filters (; b). Apart from suffering from the dense coverage of the group using group convolution, filters living in the frequency space are less interpretable and less expressive than their spatial counterparts, as the basis does not span the full space of spatial filters. Achieving equivariance in 3D is possible by simply generalizing the ideas of the 2D domain to 3D by voxelizing 3D data. However, methods using dense grids suffer from increased storage costs, eventually rendering the implementations infeasible. An extensive line of work generalizes the harmonic basis filters to SO by using e.g., a spherical harmonic basis instead of circular harmonics (b; ;). In addition to the same downsides as their 2D, these approaches have in common that they require their input to be projected to the unit sphere , which poses additional problems for unstructured point clouds. A related line of research are methods which define a regular structure on the sphere to propose equivariant convolution operators (a;) To learn a rotation equivariant representation of a 3D shape, one can either act on the input data or on the network. In the former case, one either presents augmented data to the network (a;) or ensures rotation-invariance in the input (a; b;). In the latter case one can enforce equivariance in the bottleneck so as to achieve an invariant latent representation of the input (; ; . Further, equivariant networks for discrete sets of views (b) and crossdomain views (a) have been proposed. Here, we aim for a different way of embedding equivariance in the network by means of an explicit latent rotation parametrization in addition to the invariant feature. Vector Field Networks, which was followed by the 3D Tensor Field Networks (TFN) that are closest to our work. Based upon a geometric algebra framework, the authors did achieve localized filters that are equivariant to rotations, translations and permutations. Moreover, they are able to cover the continuous groups. However, TFN are designed for physics applications, is memory consuming and a typical implementation is neither likely to handle the datasets we consider nor can provide orientations in an explicit manner. Capsule Networks. The idea of capsule networks was first mentioned by , before proposed the dynamic routing by agreement, which started the recent line of work investigating the topic. Since then, routing by agreement has been connected to several well-known concepts, e.g. the EM algorithm , clustering with KL divergence regularization and equivariance. They have been extended to autoencoders and. Further, capsule networks have been applied for specific kinds of input data, e.g. graphs , 3D point clouds or medical images . In this work, we have presented a new framework for achieving permutation invariant and SO equivariant representations on 3D point clouds. Proposing a variant of the capsule networks, we operate on a sparse set of rotations specified by the input LRFs thereby circumventing the effort to cover the entire SO. Our network natively consumes a compact representation of the group of 3D rotations -quaternions, and we have theoretically shown its equivariance. We have also established convergence for our Weiszfeld dynamic routing by making connections to the literature of robust optimization. 
Our network is among the few for having an explicit group-valued latent space and thus naturally estimates the orientation of the input shape, even without a supervision signal. Limitations. In the current form our performance is severely affected by the shape symmetries. The length of the activation vector depends on the number of classes and for achieving sufficiently descriptive latent vectors we need to have a significant number of classes. On the other side, this allows us to perform with merit on problems where the number of classes are large. Although, we have reported robustness to those, the computation of LRFs are still sensitive to the point density changes and resampling. LRFs themselves are also ambiguous and sometimes non-unique. Future work. Inspired by and our feature work will involve establishing invariance to the direction in the tangent plane. We also plan to apply our network in the broader context of 3D object detection under arbitrary rotations and look for equivariances among point resampling. A PROOF OF PROPOSITION 1 Before presenting the proof we recall the three individual statements contained in Prop. 1: Operator A is invariant under permutations: A({q σ,..., q σ(Q) }, w σ ) = A({q 1, . . ., q Q}, w). 3. The transformations g ∈ H 1 preserve the geodesic distance δ(·). Proof. We will prove the propositions in order. 1. We start by transforming each element and replace q i by (g • q i) of the cost in Eq: where M i = w i q i q i and p = G q. From orthogonallity of G it follows p = G −1 q =⇒ g • p = q and hence g • A(S, w) = A(g • S, w). 2. The proof follows trivially from the permutation invariance of the symmetric summation operator over the outer products in Eq. 3. It is sufficient to show that |q 1 q 2 | = |(g • q 1) (g • q 2)| for any g ∈ H 1: where g • q ≡ Gq. The is a direct consequence of the orthonormality of G. We will begin by recalling some preliminary definitions and that aid us to construct the connection between the dynamic routing and the Weiszfeld algorithm. Definition 9 (Affine Subspace). A d-dimensional affine subspace of R N is obtained by a translation of a d-dimensional linear subspace V ⊂ R N such that the origin is included in S: Simplest choices for S involve points, lines and planes of the Euclidean space. Definition 10 (Orthogonal Projection onto an Affine Subspace). An orthogonal projection of a point x ∈ R N onto an affine subspace explained by the pair (A, c) is defined as: c denotes the translation to make origin inclusive and A is a projection matrix typically defined via the orthonormal bases of the subspace. Definition 11 (Distance to Affine Subspaces). Distance from a given point x to a set of affine subspaces {S 1, S 2 . . . S k} can be written as: Lemma 2. Given that all the antipodal counterparts are mapped to the northern hemisphere, we will now think of the unit quaternion or versor as the unit normal of a four dimensional hyperplane h, passing through the origin: q d is an added term to compensate for the shift. When q d = 0 the origin is incident to the hyperplane. With this perspective, quaternion q i forms an affine subspace with d = 4, for which the projection operator takes the form: Proof. We consider Eq for the case where c = 0 and A = (I − qq). The former follows from the fact that our subspaces by construction pass through the origin. Thus, we only need to show that the matrix A = I−qq is an orthogonal projection matrix onto the affine subspace spanned by q. 
To this end, it is sufficient to validate that A is symmetric and idempotent: A A = AA = A 2 = A. Note that by construction q q is a symmetric matrix and hence A itself. Using this property and the unit-ness of the quaternion, we arrive at the proof: = I − 2qq + qq qq It is easy to verify that the projections are orthogonal to the quaternion that defines the subspace by showing proj S (q) q = 0: Also note that this choice corresponds to tr(qq) = d+1 i=1 α i = 1. Lemma 3. The quaternion mean we suggest to use in the main paper is equivalent to the Euclidean Weiszfeld mean on the affine quaternion subspaces. Proof. We now recall and summarize the L q -Weiszfeld Algorithm on affine subspaces , which minimizes a q-norm variant of the cost defined in Eq: Defining M i = I − A i, Alg. 2 summarizes the iterative procedure. Note that when q = 2, the algorithm reduces to the computation of a non-weighted mean (w i = 1 ∀i), and a closed form solution exists for Eq and is given by the normal equations: Compute the weights w t = {w t i}: Solve: For the case of our quaternionic subspaces c = 0 and we seek the solution that satisfies: It is well known that the solution to this equation under the constraint x = 1 lies in nullspace M i and can be obtained by taking the singular vector of M that corresponds to the largest singular value. Since M i is idempotent, the same can also be obtained through the eigendecomposition: which gives us the unweighted Quaternion mean. Once the Lemma 1 is proven, we only need to apply the direct convergence from the literature. Consider a set of points Y = {y 1 . . . y K} where K > 2 and y i ∈ H 1. Due to the compactness, we can speak of a ball B(o, ρ) encapsulating all y i. We also define the D = {x ∈ H 1 | C q (x) < C q (o)}, the region where the loss decreases. We first state the assumptions that permit our theoretical . Note that H5 is not a strict requirement as there are multiple ways to circumvent (simplest being a re-initialization). Under these assumptions, the sequence produced by Eq will converge to a critical point unless x t = y i for any t and i. For q = 1, this critical point is on one of the subspaces specified in Eq and thus is a geometric median. Note that due to the assumption H2, we cannot converge from any given point. For randomly initialized networks this is indeed a problem and does not guarantee practical convergence. Yet, in our experiments we have not observed any issue with the convergence of our dynamic routing. As our is one of the few ones related to the analysis of DR, we still find this to be an important first step. For different choices of q: 1 ≤ q ≤ 2, the weights take different forms. In fact, this IRLS type of algorithm is shown to converge for a larger class of weighting choices as long as the aforementioned conditions are met. That is why in practice we use a simple sigmoid function. For estimation of the relative pose with supervision, we benefit from a Siamese variation of our network. In this case, latent capsule representations of two point sets X and Y jointly contribute to the pose regression as shown in Fig. 5. Figure 5: Our siamese architecture used in the estimation of relative poses. We use a shared network to process two distinct point clouds (X, Y) to arrive at the latent representations (C X, α X) and (C Y, α Y) respectively. We then look for the highest activated capsules in both point sets and compute the rotation from the corresponding capsules. 
Thanks to the rotations disentangled into capsules, this final step simplifies to a relative quaternion calculation. We show additional from the computation of local reference frames and the multi-channel capsules deduced from our network in Fig. 6. Finally, the overall algorithm of our network is summarized under Alg. 3. Details on the evaluation protocol. For Modelnet40 dataset used in Tab. 1, we used the official split with 9,843 shapes for training and 2,468 different shapes for testing. For rotation estimation in Tab. 2, we used the official Modelenet10 dataset split with 3991 for training and 908 shapes for testing. 3D point clouds (10K points) are randomly sampled from the mesh surfaces of each shape Qi et al. (2017a; b). The objects in training and testing dataset are different, but they are from the same categories so that they can be oriented meaningfully. During training, we did not augment the dataset with random rotations. All the shapes are trained with single orientation (well-aligned). We call this trained with NR. During testing, we randomly generate multiple arbitrary SO rotations for each shape and evaluate the average performance for all the rotations. This is called test with AR. This protocol is used in both our algorithms and the baselines. We now report the confusion matrix in the task of classification on the all the objects of ModelNet10. The classification and rotation estimation affects one another. As we can see from Fig. 7, the first five categories that exhibit less rotational symmetry has the higher classification accuracy than their rotationally symmetric counterparts. Distribution of errors reported in Tab. 2. We now provide more details on the errors attained by our algorithm as well as the state of the art. To this end, we report, in Fig. 8 the histogram of errors that fall within quantized ranges of orientation errors. It is noticeable that our Siamese architecture behaves best in terms of estimating the objects rotation. For completeness, we also included the of the variants presented in our ablation studies: Ours-2kLRF, Ours-1kLRF. They evaluate the model on the re-calculated LRFs in order to show the robustness towards to various point densities. We have also modified IT-Net and PointNetLK only to predict rotation because the original works predict both rotations and translations. Finally, note here that we do not use data augmentation for training our networks (see AR), while both for PointNetLK and for IT-Net we do use augmentation. • the worse the performance of the method.
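For reference, the LRF construction used throughout (plane-fit normal, FLARE second axis from the projection of the farthest neighbor onto the tangent plane, flip to the northern hemisphere) can be sketched as follows. The axis ordering in the rotation matrix and the naive quaternion conversion are assumptions, not the authors' exact code.

```python
import numpy as np

def rotmat_to_quat(R):
    """Naive rotation-matrix to (w, x, y, z) conversion (assumes the frame is
    not a near-180-degree rotation; a robust version would branch on the trace)."""
    w = np.sqrt(max(1e-12, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    return np.array([w, x, y, z])

def flare_lrf(center, neighbors):
    """Local reference frame at `center` from its neighborhood of shape (N, 3)."""
    # Normal: smallest principal direction of the local plane fit.
    centered = neighbors - neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # FLARE second axis: projection of the farthest periphery point onto the
    # tangent plane of the center, then normalized.
    p_max = neighbors[np.argmax(np.linalg.norm(neighbors - center, axis=1))]
    v = p_max - center
    v = v - np.dot(v, normal) * normal
    x_axis = v / np.linalg.norm(v)
    y_axis = np.cross(normal, x_axis)
    R = np.stack([x_axis, y_axis, normal], axis=1)   # columns are the frame axes
    q = rotmat_to_quat(R)
    # Flip to the northern hemisphere (q_w > 0) to remove the antipodal ambiguity.
    return q if q[0] > 0 else -q
```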
Deep architectures for 3D point clouds that are equivariant to SO(3) rotations, as well as translations and permutations.
1,350
scitldr
Vector semantics, especially sentence vectors, have recently been used successfully in many areas of natural language processing. However, relatively little work has explored the internal structure and properties of spaces of sentence vectors. In this paper, we will explore the properties of sentence vectors by studying a particular real-world application: Automatic Summarization. In particular, we show that cosine similarity between sentence vectors and document vectors is strongly correlated with sentence importance and that vector semantics can identify and correct gaps between the sentences chosen so far and the document. In addition, we identify specific dimensions which are linked to effective summaries. To our knowledge, this is the first time specific dimensions of sentence embeddings have been connected to sentence properties. We also compare the features of different methods of sentence embeddings. Many of these insights have applications in uses of sentence embeddings far beyond summarization. Vector semantics have been growing in popularity for many other natural language processing applications. Vector semantics attempt to represent words as vectors in a high-dimensional space, where vectors which are close to each other have similar meanings. Various models of vector semantics have been proposed, such as LSA BID10, word2vec BID14, and GLOVE BID17, and these models have proved to be successful in other natural language processing applications. While these models work well for individual words, producing equivalent vectors for sentences or documents has proven to be more difficult. In recent years, a number of techniques for sentence embeddings have emerged. One promising method is paragraph vectors (Also known as Doc2Vec), described by BID12. The model behind paragraph vectors resembles that behind word2vec, except that a classifier uses an additional'paragraph vector' to predict words in a Skip-Gram model. Another model, skip-thoughts, attempts to extend the word2vec model in a different way BID9. The center of the skip-thought model is an encoder-decoder neural network. The , skip-thought vectors, achieve good performance on a wide variety of natural language tasks. Simpler approaches based on linear combinations of the word vectors have managed to achieve state-of-the-art for non-domain-specific tasks BID20. Arora et al. BID1 offer one particularly promising such approach, which was found to achieve equal or greater performance in some tasks than more complicated supervised learning methods. The large volume of news articles published every day motivates effective forms of automatic summarization. The most basic and most well-studied form of automatic summarization is sentence extraction, in which entire unmodified sentences are selected from the original document. These selected sentences are concatenated to form a short summary, which ideally contains the most important information from the original document while avoiding redundancy. Any practical summarization system would likely include other steps after sentence selection, such as sentence reordering or text-simplification. However, effective algorithms for these tasks already exist, and we are primarily concerned with sentence selection in this paper. We are primarily concerned with multi-document summarization algorithms. The nature of summarization requires some specific properties from sentence vectors, making it a practical real world example for which current sentence embedding methods can be easily used. 
In particular, news summarization may not require some of the nuances in meaning to be represented, as we are primarily concerned with the topic, not the meaning, of a sentence. In particular, news articles will rarely contain sentences expressing contradictory views on the same subject. For instance, our algorithm will rarely need to differentiate between the sentence embeddings for sentences such as "John loves Mary" and "Mary loves John", which have divergent meanings but the same words. This is in sharp contrast to typical testing cases for sentence embeddings, such as the detection of paraphrased sentences. Then, summarization gives us an opportunity to compare the effectiveness of different sentence embeddings in practice. State-of-the-art extractive summarization techniques have been achieved with a wide variety of methods. Here, we provide a short overview of some recent successful techniques techniques. Cao et al. BID2 used a recursive neural network, which operates on a parsing tree of a sentence, to rank sentences for summarization. BID3 successfully used a neural-network-based sentence extractor, which considered the document encodings and the previously selected sentences, as well as the current sentence in making its decisions. Parveen et al. BID16 used a graph-based approach, modeling a document as "a bipartite graph consisting of sentence and entity nodes".Ren et al. Ren et al. achieved state-of-the-art through a regression-based approach. A variety of engineered features are used as inputs into a regression model. The single highest rated sentence is selected, and the process is repeated. To our knowledge, no one has explored the use of modern sentence embedding methods, such as Paragraph Vectors or Skip-Thought vectors, in summarization. However, some work has been done on summarization using word2vec representations. BID6 presented a version of text summarization based on vector semantics. However, instead of using the embeddings of a word as determined by a larger corpus, they attempted to calculate the embedding of a word based off of analysis only on the document in question. In addition they used LSA instead of newer techniques such as word2vec. Kageback et al. BID8 used cosine similarity between the sum of word2vec embeddings, as well as a recursive auto-encoder, to modify another standard summarization algorithm (sub-modular optimization) Ren et al. Ren et al. used "Average Word Embedding" as one of many independent variables in a regression model, but the degree to which word embeddings effected the final model was not clear. BID3 used word embeddings as the input to a neural network as part of summarization, but they did not directly compare embeddings. They used a single-layer convolutional neural network to encode sentences, and a LSTM neural network to encode documents. BID15 recently achieved state-of-the-art using a modified version of LexRank, using a combination of cosine similarity of weighted averages of vectors and named entity overlap. To explore potential sentence embeddings, we implement the four sentence embeddings above as vector functions, which convert sentences or documents to vectors. • SIF Average: The most basic sentence embedding is simply the weighted average of word vectors from Arora et al. BID1, without the common component removal. We use the Brown corpus (Francis & BID5 for word frequency information.• Arora: This method is simply the method described in Arora et al. 
It is equivalent to the one above, except with common component removal added. We use the Brown corpus both to compute the common component vector, and for word frequency information.• Paragraph Vectors: The paragraph vector approach described above. We used the 300-dimensional DBOW model pretrained by Lau et al. BID11 on the wikipedia corpus.• Skip-Thought Vectors The skip-thought vector approach described above. We used the 4800-dimensional combined-skip model BID9 BID0 .All sentence embeddings are normalized to produce unit vectors To explore the design space, we consider combinations of vector functions and selector functions, functions which, given vector representations for sentences, extracts a summary. We present a large variety of example selector functions in order to allow us to explore interaction effects between selector functions and vector functions.• Near: The most basic selector function, Near, selects the sentences whose sentence vectors have the highest cosine similarity with the document vector• Near Nonredundant: An attempt at balancing redundancy with salience, Near Nonredundant down-weights the cosine similarity scores by their average cosine similarity the sentences selected so far. Because this redundancy measure is strongly (quadratically) correlated with cosine similarity to the document, we fit a regression for redundancy for each vector function, and use the residual on this regression for the final algorithm.• LexRank: Our next selector is based off of the classical LexRank algorithm BID4. We use a modified version of this algorithm, where the weights of edges are determined by the cosine similarity between sentence embeddings.• Cluster: We use an Agglomerative Clustering algorithm (using cosine similarity as its distance metric) to find clusters in the set of sentence embeddings. We then find the sentence closest to the average of each cluster and add it to the summary. To ensure we find summaries which meet the word-length requirement, we increase the number of clusters we search for until we have selected sentences totaling 100 words.• Greedy: The greedy selector, at each step, selects the sentence such that the cosine similarity of the new summary (including previously selected sentences) is maximized. This is subtly different than the Near selector for average-based vector functions, but significantly different for Paragraph Vectors.• Brute Force: Another attempt at optimizing the cosine similarity between the summary and the document, this selector creates a pool of the 20 sentences with the highest cosine similarity. From this pool, every combination of sentences (with an appropriate word count) is tried, and the combination with the highest cosine similarity is selected as the summary.• Max Similarity: A proof-of-concept selector which computes for both the Greedy and Brute Force selectors and then selects the with the highest cosine similarity to the document vector.• Near-then-Redundancy: Starting with the pool from the Brute Force algorithm, this algorithm optimizes via brute force to minimize redundancy (defined as the average cosine similarity between pairs of sentences). Note that the size of the sentence pool, which is essentially a computational shortcut in the Brute Force selector, is now a performance-critical hyper-parameter.• PCA: This selector performs Principal Component Analysis (PCA) on the set of sentence vectors in a document. 
Then, the algorithm selects the one sentence closest to the first component, one sentence closest to the second component, and so on, until the length capacity is met.• Random: This selector simply selects sentences at random, until the word limit is reached. This provides a lower-bound on the performance of an effective algorithm, and is used for baseline comparisons Because evaluation of summarization is fundamentally a subjective task, human evaluations are ideal. However, human evaluations are often expensive and time-consuming to obtain. Luckily, some metrics of automatically evaluating summaries, by comparison to a human-written summary, have been developed. Traditionally, various forms of the ROUGE metric, which compare shared n-grams, have been used. BID13. ROUGE has been shown to correlate strongly with human judgments BID18, and is our primary metric for evaluating summaries 1. We report ROUGE-1 and ROUGE-2 statistics, which correspond to unigrams and bigrams, respectively. We split the document clusters in the DUC 2004 dataset into a testing set and a validation set of approximately equal sizes. The pre-defined training set of the DUC 2001 dataset was used as a training set for some of the graphs and data analysis presented here. We present for Multi-Document Summarization on the DUC 2004 dataset TAB1. A few notes on the :• The best performing selector, Greedy, is both very simple and based on fundamental principles of vector semantics. • Paragraph Vectors work much worse with the Clustering and Greedy algorithms, and work much better with Near and SVMs.• Many combinations of selector function and vector function do not work above the level of random chance.• In general, despite their sophistication, Paragraph Vectors and Skip-Thought vectors perform worse than much more basic approaches. Despite the poor performance of our models compared to the baselines, analyses of the underlying data provide many useful insights into the behavior of vector semantics in real-world tasks. The cosine scores between all sentence vectors and the corresponding document vectors follow a normal distribution for all vector functions FIG0 ), but this effect is most pronounced for paragraph vectors (r 2 = .996). In addition, the sentence embeddings for paragraph vectors and skip-thought vectors are far closer to the document embedding than would be expected from a random distribution, with mean cosine similarities of.65 and.84, respectively (Unsurprisingly, this also holds for Average and Arora vectors). By identifying the sentences present in an optimal summarization, we show that optimal sentences have higher cosine scores, and that this effect is increased after adjusting cosine scores for word length (Fig. 2). However, there is a lot of overlap, implying that, although this method has some power to discern good summaries from bad summaries, the power of this method alone is not high enough to product good summaries. We calculated the isolated ROUGE score of each individual sentence in the training set, and the sentence embeddings for these sentences on all four vector functions. To partially eliminate the effects of the sentence's context, we subtract the corresponding document vector from all sentence vectors before regression. Due to the large number of predictor variables, we use a Bonferroni correction, considering values significant only if they have p-values of α n, which, for α =.05, corresponds approximately to p <.00001 for the skip-thought vectors, and p <.00016 for all other vectors. 
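A sketch of such a per-dimension test is shown below; it uses independent single-variable regressions with a Bonferroni-corrected threshold, which is a simplification of the joint regression described above, and it assumes the document vector has already been subtracted from the sentence embeddings.

```python
import numpy as np
from scipy import stats

def significant_dimensions(embeddings, rouge_scores, alpha=0.05):
    """Regress per-sentence ROUGE scores on each embedding dimension and
    report the dimensions that survive a Bonferroni correction."""
    n_dims = embeddings.shape[1]
    threshold = alpha / n_dims          # Bonferroni-corrected significance level
    significant = []
    for d in range(n_dims):
        result = stats.linregress(embeddings[:, d], rouge_scores)
        if result.pvalue < threshold:
            significant.append((d, result.pvalue))
    return significant
```

With alpha = .05 this threshold is roughly .00001 for the 4800-dimensional skip-thought vectors and .00016 for the 300-dimensional vectors, matching the cutoffs quoted above.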
Three dimensions are significant at this level for SIF Average vectors. No dimensions are significant at this level for Arora vectors, though the three values significant for SIF Average achieve p-values of .0021, .0485 and .0006. 29 dimensions are significant for Paragraph Vectors. 5 dimensions are significant, at the much higher threshold, for Skip-Thought Vectors. It appears that these specific dimensions correspond to aspects of a sentence that make it somehow more suited for a summary. Despite the theoretical implications of this result, the regression models do not have enough predictive power to create good summaries by themselves. The Greedy algorithm is the most successful algorithm among those we have explored. As its name implies, the Greedy algorithm appears to be simply an attempt at maximizing the following objective function: f_cos(summary) = vector(summary) · vector(document). Of course, any simple objective function can only be an approximation to the informally-defined criteria for good summaries. Even so, TAB1 suggests that the performance of the greedy algorithm is not based on the accuracy of the corresponding objective function. In particular, the two other strategies which try to maximize the same objective function (Brute Force and Maximum Similarity) consistently and significantly create summaries with higher cosine similarity to the document, outperforming the Greedy selector on its own objective function, yet both of these algorithms perform much worse than the Greedy algorithm. Deeper analysis into the decisions of the Greedy algorithm reveals some reasons for this discrepancy. It appears that the good performance of the Greedy algorithm results not from the associated objective function, but from the way in which it maximizes this objective function. In particular, the Greedy algorithm selects sentences with low cosine similarity scores in a vacuum, but which increase the cosine similarity of the overall summary. To understand why this is true, we consider the step-by-step behavior of the Greedy algorithm, which can be understood as maximizing the following equation (derived in Appendix A): f(s) = (i (d · s_p) + d · s) / ||i s_p + s||, where s_p = (Σ_{j=0}^{i} s_j) / ||Σ_{j=0}^{i} s_j||. Note that this equation consists of three parts: i (d · s_p), a constant with respect to s; d · s, which is simply the salience of a sentence measured by cosine similarity; and the denominator, which is essentially a measure of redundancy. Not only does this simple metric lead to a 'natural' penalty for redundancy, it performs better than our handcrafted redundancy penalties. The way this expression scales when picking the i-th sentence is particularly noteworthy: the behavior of this function changes as i increases. In particular, the function becomes more sensitive to redundancy, and less sensitive to salience, as the algorithm selects more sentences. In other words, the algorithm will first try to select important sentences, and then select sentences to fill in the gaps. This result, and the success of the resulting algorithm, has implications for balancing salience and redundancy in future summarization algorithms. In general, there are two ways to compute a document vector. The most obvious is to pass the entire text of the document into the vector function. This has two theoretical problems. The first is that the 'documents' in our algorithms are really clusters of documents, and are therefore non-coherent. The second is that Skip-thought vectors are not designed to handle text longer than a sentence.
However, an alternative document vector, docvec-avg, is defined as the mean of the (normalized) sentence vectors. This corresponds to treating the document as a collection of sentences, instead of a collection of words. We compare the two here, and present full in Appendix B.As expected, Skipthought vectors, which are not designed for text larger than a sentence, perform significantly better with the docvec-avg strategy. More notable is the poor performance of the docvec-avg strategy with Paragraph Vectors. The size of the performance gap here implies that Paragraph Vectors can combine information from multiple sentences in a manner more sophisticated than simple averaging. More interesting is the performance for SIF Average and Arora vectors. For these vector functions, which are based on taking the average of words, docvec-avg very closely resembles the simple strategy. And yet there is a small but significant performance gap. The difference between the two document vectors is the weighting. Docvec-avg, which normalizes vectors before adding them together, removes some weighting information present in the simple strategy. In particular, the simple strategy assigns more weight to sentences with a lot of highly-weighted words. Presumably, docvec-avg, by ignoring this weighting, leaves out useful information. This hypothesis is supported by the greater performance gap for Arora vectors, which effectively downweights certain common words and therefore could be expected to carry more information in word weightings. Similar, but much smaller, gaps exist when computing the vectors for summaries at each step in the greedy algorithm. We present a broad comparison of the properties of different sentence embedding schemes.6.6.1 SIF AVERAGE/ARORA VECTORS Three selector functions perform better with both SIF Average and Arora vectors: Near Nonredundant, Greedy, and PCA. These functions seem to be unified by their comparisons between the vectors of sentence embeddings (implicitly, in the case of the greedy algorithm). These selector functions correspond to the most basic test for sentence embeddings: Judging the similarity of two sentences. The exact difference the common component removal makes is less clear. Arora vectors hold a slight performance edge for all selectors except for Near and Greedy (the Greedy algorithm loses a full 2 points). Two selector functions perform better with Paragraph Vectors: Near and Brute Force. Both of these are very similar: They require comparing sentence vectors to the document vector. The poor performance on algorithms such as Near Nonredundant suggests that Paragraph Vectors are especially poor at comparing sentence vectors to each other. These suggest that Paragraph Vectors are especially good at computing document vectors, a hypothesis also implied by the of Section 6.5. The other distinguishing property of Paragraph Vectors is their very high correlation when regressing on the individual features. It is hard to disentangle the properties of Skipthought vectors from the high dimensionality of the pretrained vectors we used. In general, Skipthought vectors performed poorly. They only performed better than other vector functions with one selector, Clustering, although their performance with this selector was significant. We have identified differences in different forms of sentence vectors when applied to real-world tasks. In particular, each sentence vector form seems to be more successful when used in a particular way. 
Roughly speaking, Arora's vectors excel at judging the similarity of two sentences, while Paragraph Vectors excel at representing document vectors and at representing features as dimensions of vectors. While we do not have enough data to pinpoint the strengths of Skip-thought vectors, they seem to work well in specific contexts that our work did not fully explore. These differences are extremely significant, and will likely make or break real-world applications. Therefore, special care should be taken when selecting the sentence vector method for a real-world task. This material is based upon work supported by the National Science Foundation under Grants No. 1659788 and 1359275. APPENDIX A: The first choice of the greedy algorithm is simple: it chooses the sentence with maximum cosine similarity to the document vector, s_1 = argmax_{s ∈ S} s · d (recall that all vectors have unit length, so cosine similarity is equivalent to the dot product). To select the second vector, the greedy algorithm maximizes the same objective with the first selected sentence included in the summary; expanding this expression yields the equation given in the main text.
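To make the Greedy selector concrete, the sketch below shows one way it could be implemented, assuming sentences are already embedded as unit-length vectors and the document vector is the normalized mean of those vectors; the function names (greedy_select, doc_vector) and the handling of the 100-word budget are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def doc_vector(sent_vecs):
    """Document vector as the normalized mean of unit-length sentence vectors."""
    v = sent_vecs.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def greedy_select(sent_vecs, sentences, doc_vec, budget=100):
    """Greedily add the sentence that maximizes the cosine similarity between
    the running summary vector and the document vector, until the word budget
    is reached."""
    selected, remaining = [], list(range(len(sentences)))
    summary_vec = np.zeros_like(doc_vec)
    while remaining and sum(len(sentences[i].split()) for i in selected) < budget:
        best_i, best_score = None, -np.inf
        for i in remaining:
            cand = summary_vec + sent_vecs[i]
            score = cand.dot(doc_vec) / (np.linalg.norm(cand) + 1e-12)
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
        remaining.remove(best_i)
        summary_vec += sent_vecs[best_i]
    return [sentences[i] for i in selected]
```

As discussed above, the denominator of the cosine score acts as an implicit redundancy penalty: a candidate that points in the same direction as the sentences already chosen increases the norm of the summary vector without adding new salience.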
A comparison and detailed analysis of various sentence embedding models through the real-world task of automatic summarization.
1,351
scitldr
We present Value Propagation (VProp), a parameter-efficient differentiable planning module built on Value Iteration which can successfully be trained in a reinforcement learning fashion to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments. We evaluate on configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes. Furthermore, we show that the module enables to learn to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems. Planning is a key component for artificial agents in a variety of domains. However, a limit of classical planning algorithms is that one needs to know how to search for an optimal (or good) solution, for each type of plan. When the complexity of the planning environment and the diversity of tasks increase, this makes writing planners difficult, cumbersome, or entirely infeasible. "Learning to plan" has been an active research area to address this shortcoming BID13 BID5. To be useful in practice, we propose that methods for learning to plan should have at least two properties: they should be traces free, i.e. not require traces from an optimal planner, and they should generalize, i.e. learn planners that generalize to plans of the same type but of unseen instance and/or planning horizons. In a Reinforcement Learning (RL) setting, learning to plan can be framed as the problem of finding a policy that maximises the expected reward, where such policy is a greedy function that selects actions that will visit states with a higher value for the agent. In such cases, Value Iteration (VI) is a algorithm that is naturally used to learn to estimate the value of states, by propagating the rewards and values until a fixed point is reached. When the environment can be represented as an occupancy map (a 2D grid), it is possible to approximate this learning algorithm using a deep convolutional neural network (CNN) to propagate the value on the grid cells. This enables one to differentiate directly through the planner steps and perform end-to-end learning. One way to train such models is with a supervised loss on the trace from a search/planning algorithm, e.g. as seen in the supervised learning section of Value Iteration Networks (VIN) BID17, in which the model is tasked with reproducing the function to iteratively build values aimed at solving the shortest path task. However, this baseline violates our wished trace free property because of the required target values, and it doesn't fully demonstrate the capabilities to deal with interactive and generalized settings. That is what we set out to extend and further study. In this work we extend the formalization used in VIN to more accurately represent the structure of gridworld-like scenarios, enabling Value Iteration modules to be naturally used within the reinforcement learning framework, while also removing some of the limitations and underlying assumptions of the model. Furthermore we propose hierarchical extensions of such a model that allow agents to do multi-step planning, effectively learning models with the capacity to provide useful path-finding and planning capabilities in relatively complex tasks and comparably large scenarios. 
We show that our models can not only learn to plan and navigate in complex and dynamic environments, but that their hierarchical structure provides a way to generalize to navigation tasks where the required planning and the size of the map are much larger than the ones seen at training time. Our main contributions include: introducing VProp, a network module which successfully learns to solve pathfinding via reinforcement learning, demonstrating the ability to generalize, leading our models to solve large unseen maps by training exclusively on much smaller ones, and showing that our modules can learn to navigate environments with more complex dynamics than a static grid-world. We consider the control of an agent in a "grid world" environment, in which entities can interact with each other. The entities have some set of attributes, including a uniquely defined type, which describes how they interact with each other, the immediate rewards of such interactions, and how such interactions affect the next state of the world. The goal is to learn to plan through reinforcement learning, that is learning a policy trained on various configurations of the environment that can generalize to arbitrary other configurations of the environment, including larger environments, and ones with a larger number of entities. In the case of a standard navigation task, this boils down to learning a policy which, given an observation of the world, will output actions that take the agent to the goal as quickly as possible. An agent may observe such environments as 2D images of size d x × d y, with d pix input panes, which are then potentially passed through a local embedding function Φ: R dpix×dx×dy → R drew×dx×dy. The function could for instance be implemented as a 2D convolutional network that extracts the type and position of entities. This formalisation of the navigation problem is often employed in robotics to represent 3D and 2D surfaces, finding its use in, but not limited to, frameworks such as Simultaneous Localization and Mapping (SLAM), physics simulators, and more generic planning environments. This makes it appropriate for developing models and evaluating their impact on real-world planning tasks BID18. The problem of reinforcement learning is typically formulated in terms of the computation of optimal policies for a Markov Decision Problem (MDP) BID16 ). An MDP is defined by the tuple (S, A, T, R, γ), where S is a finite set of states, A is the set of actions a that the agent can take, T: s → a → s is a function describing the state-transition matrix, R is a reward function, and γ is a discount factor. In this setting, an optimal policy π * is a distribution over the state-action space that maximises in expectation the discounted sum of rewards k γ k r k, where r k is the single-step reward. A standard method (such as the one used in Q-Learning BID20) to find the optimal policy π: s → a is to iteratively compute the value function, Q π (s, a), updating it based on rewards received from the environment. Using this framework, we can view learning to plan as a structured prediction problem with the planning algorithm Value Iteration as inference procedure. When doing so, the problem generalises to learning a model of the environment that is approximating the state-transition probabilities and the immediate reward function. 
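As a rough illustration of this recurrence, the following PyTorch-style sketch alternates a convolution over the concatenated (reward, value) maps with a max over the action channel. The layer shapes, names, and the single-convolution form of h are simplifications chosen for clarity, not the exact architecture of the original VI module.

```python
import torch
import torch.nn as nn

class VIModule(nn.Module):
    """Minimal convolutional value-iteration module (illustrative sketch)."""
    def __init__(self, d_rew, n_actions, k_iters):
        super().__init__()
        assert k_iters >= 1
        self.k_iters = k_iters
        # h maps the reward features plus the current value map to per-action q-maps
        self.h = nn.Conv2d(d_rew + 1, n_actions, kernel_size=3, padding=1, bias=False)

    def forward(self, r):
        # r: (batch, d_rew, dx, dy) reward/feature maps, e.g. Phi(o)
        v = torch.zeros(r.size(0), 1, r.size(2), r.size(3), device=r.device)
        for _ in range(self.k_iters):
            q = self.h(torch.cat([r, v], dim=1))    # (batch, n_actions, dx, dy)
            v = q.max(dim=1, keepdim=True).values   # max over the action channel
        return q, v

def act(q, x0, y0):
    # greedy policy: argmax over actions at the agent's position (x0, y0)
    return q[:, :, x0, y0].argmax(dim=1)
```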
Beside value-based algorithms, there exist other types which are able to find optimal policies, such as policy gradient methods, which directly regress to the policy function π, instead of approximating the value function. Finally, a third type is represented by the actor-critic algorithms family, which combine the policy gradient methods' advantage of being able to compute the policy directly, with the low-variance performance estimation of value-based RL used as a more accurate feedback signal to the policy estimator BID7. Let us denote by o the current observation of the environment, and q 0 the zero tensor of dimensions (A, d x, d y). Then, the Value Iteration (VI) module proposed by BID17 is a recurrent neural network where each layer first performs max-pooling over the first dimension followed by a layer h: R drew×dx×dy × R dx×dy → R A×dx×dy, giving rise to the following recurrence, for k ≥ 1: DISPLAYFORM0 Given the agent's position (x 0, y 0) and the current observation of the environment o, the control policy π is then defined by π(o, (x 0, y 0)) = argmax a=1.. A q ax0y0.Since the state space can be identified with the coordinates in a 2-dimensional environment of size d x × d y then a discounted MDP on that state is defined by the discount factor γ together with:• a transition probability function DISPLAYFORM1 such that the matrix obtained fixing the last three coordinates sum to 1.• an immediate reward function R:= A × d x × d y, where R a,i,j represents the reward obtained by performing action a in state i, j. Given a starting Q-function Q 0 (with same size as R), value iteration defines a sequence Q k, V k of (state, action)-and state-value functions through: DISPLAYFORM2 It follows that when d rew:= A and Φ(x):= R, h is a linear layer of both of its inputs with T as parameters for the v k input: DISPLAYFORM3 Thus, a linear VI module with K iteration has the capacity to represent the application of K iterations of value iteration in an MDP where the state space is the set of 2D coordinates, requiring in the worst case a number of steps equal to the number of states -d x d y in grid worlds -to evaluate an entire plan. In practice however, much fewer iterations are needed in the considered tasks: for instance path finding in a grid-world requires planning for only the length of the shortest path, which is much smaller unless the configuration corresponds to a very complicated (and unusual) maze. We can also see that corresponds to only a very special case of linear VI modules. For instance, if the recurrence uses a fully connected layer with weights W ∈ R A×dx×dy×(drew+1)×dx×dy and biases b ∈ R A×dx×dy so that DISPLAYFORM4 then not only corresponds to a special case with extremely sparse weights, but also exhibits a specific structure where the dot product is non-trivial only on the recurrent part. This relationship ultimately forms the motivation for the development of VI modules BID17, Section 3.1). Model-based planning with end-to-end architectures has recently shown promising on a variety of tasks and environments, often using Deep Reinforcement Learning as the algorithmic framework BID14 BID11 BID21 BID3. 
3D navigation tasks have also been tackled within the RL framework BID9, with methods in some cases building and conditioning on 2D occupancy maps to aid the process of localization and feature grounding BID1.Other work has furthermore explored the usage of VIN-like architectures for navigation problems: BID10 present a generalization of VIN able to learn modules on more generic graph structures by employing a graph convolutional operator to convolve through each node of the graph. BID12 demonstrate a method for multi-agent planning in a cooperative setting by training multiple VI modules and composing them into one network, while also adding an orientation state channel to simulate non-holonomic constraints often found in mobile robotics. BID4 and BID6 propose to tackle partially observable settings by constructing hierarchical planners that use VI modules in a multi-scale fashion to generate plans and condition the model's belief state. It is clear that all such work can indeed greatly benefit from employing a powerful planner with capable of learning in a larger combination of tasks and environments, motivating us to develop one which can directly substitute VIN. In this section we formalize two novel alternatives to the Value Iteration module, both of which can function as drop-in replacements for VIN and be used as low-level planners in graph-based tasks. A problematic and key limitation of the VI module is that the transition probabilities are encoded as the weights W of the convolution, which are naturally translation-invariant. This means that the transition probabilities at one state, a cell in this case, do not depend on its content or the surrounding ones'. This restriction affects the capability of the model to learn any complex dynamics, while also artificially constraining the design of the architecture and other experimental parameters. For instance, non-achievable state must be implemented so that they can be represented by a sufficiently low cost to compensate for any surrounding positive cost. We can obtain a minimal parametrization of a VI module that represents reasonable prior knowledge on the model by simply observing that the transition probabilities must depend on the state we are in, and that the rewards associated to transitions from state s to s with action a takes an additive form r in s − r out s, where r in and r out depend on the state at hand. We can now therefore consider a variant of the VI module that puts emphasis on conditioning on a different aspect of the grid's environment dynamics: state reachability. Our new module, which we call Value Propagation module (VProp), takes as input three embeddings: a value r in (o), a cost r out (o), and a propagation p(o), all of which are scalar maps of size R dx×dy. Using these functions as input, the ing output is then computed as DISPLAYFORM0, max DISPLAYFORM1 where v (k), r in, r out and p depend on the observation o, and N (i, j) represents the coordinates of the cells adjacent to the cell (i, j). Note that here we made the convolution and the maximum explicit, since they can be done in a single step. VProp's model corresponds to a deterministic model in which the reward propagates from adjacent states to current states, capturing the prior that the reachability of a state does not depend on the adjacent cell the agent is in, but by the underlying -and less constraining -observed transition dynamics. 
As such, instead of setting the probability to get to a unachievable state to 0 from its adjacent cells, the model represents unachievable states by setting their propagation to 0 and r in to some negative value.2 Goal states, which are also absorbing for the underlying MDP, are represented with a propagation close 0, and a positive r in, whereas other types of cell are to have high propagation value bounded by the discount factor, while the cost of their traversal is represented by either a negative r in or a positive r out. Finally, given the agent's coordinates s, the agent policy becomes a function that takes as input the vector DISPLAYFORM2, and either outputs an action or a distribution over actions. Since both VI module and VProp module described above are purely convolutional, they should be able to exploit the lack of size constraints in the input to generalize to larger environments than seen during training time by simply increasing the recursion depth. However, despite the convolutions effectively hard-coding the local planning algorithm, these models cannot be proven to generalize properly across the entire distribution of environment configurations. To see this, let us consider the simple problem of pathfinding. Even in a simple grid-world with blocks and a single goal, there are many sets of parameters of the VI / VProp module that are equally good for pathfinding with a fixed environment size. For instance, in an environment of size 20 × 20 in which paths are of length 40 with very high probability (over the randomized configurations of the environment), we can trivially check that a VProp module with a r in value of 1 for the goal entity and 0 otherwise, r out to −0.021 Note that while we use a minus sign, we do not impose specific sign constraints on r in and r out, since natural environments can easily be represented with both values as positive.2 Absorbing states can be represented in the same way.for empty cells, −1 for blocks and −1 for the goal state, and with propagation set to 1, everywhere, will effectively create a value map v (K) which is affinely related to the shortest path to the goal on empty cells. Thus, as far as the training distribution of environments is concerned, this would be an acceptable set of parameters. However, if we increase the size of the environment such that paths are strictly longer than 50, the goal value would become 1 − 51 × 0.02, which is less than 0, hence stopping the propagation of the path after the 51st step. The model is therefore bound to the capacity of the additive rollout in the value map, which is responsible for both identifying the goal and computing the shortest path. To solve this problem, we further introduce the Max-Value Propagation module (MVProp), which constrains the network to represent goals with a high base reward, propagates them multiplicatively through cells with lower values but high propagation, and learns reward and propagation maps depending on the content of the individual cells in the same fashion as VProp. More precisely, denoting again the initial reward map by r(o) ∈ dx×dy and the propagation map by p(o) ∈ dx×dy, MVProp iteratively computes value maps v (k) (o) as follows: DISPLAYFORM0 The input to the policy, given the agent position s = (i 0, j 0), is then the 3 × 3 map of (v DISPLAYFORM1, padded with 0 at the boundaries. 
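Because the exact VProp/MVProp update equations are not reproduced in the text above, the following numpy sketch only mirrors the qualitative behavior described here: each cell repeatedly takes the best value offered by its neighbors, damped by a per-cell propagation map p, with (1 − p) re-introducing the cell's own reward or cost. The precise update rule and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def propagate_values(r, p, k_iters):
    """Illustrative multiplicative max-propagation on a 2D grid.

    r : (dx, dy) float reward map (high at goal cells, low or negative elsewhere)
    p : (dx, dy) float propagation map in [0, 1] (near 0 for walls and goals,
        close to 1 for traversable cells)
    """
    v = r.astype(float).copy()
    for _ in range(k_iters):
        padded = np.pad(v, 1, constant_values=-1e9)
        # best neighboring value for every cell (8-connected neighborhood)
        neigh = np.max(np.stack([
            padded[1 + di:1 + di + v.shape[0], 1 + dj:1 + dj + v.shape[1]]
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
        ]), axis=0)
        v = np.maximum(v, p * neigh + (1.0 - p) * r)
    return v

# The policy input would then be the 3x3 patch of v (zero-padded at the borders)
# around the agent's position, from which the agent follows the maximum value.
```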
This propagation system guarantees that values are propagated in the direction of lower-value cells at all times (even at the start of training), and that costs of individual cells are dealt with the 1 − p map. In other words, the path length is propagated multiplicatively, whereas the reward map is used to distinguish between goal cells and other cells. Given this setup, the optimal policy should therefore be able to locally follow the direction of maximum value. we focus on evaluating our our modules strictly using Reinforcement Learning, since we are interested in moving towards environments that better simulate tasks requiring interaction with the environment, hence limiting the chance to gather representative datasets. For training, we use an actor-critic architecture with experience replay. We collect transition traces of the form (s t, a t, r t, p t, s t+1), where s t is the state at time step t, a t is the action that was chosen, p t is the vector of probabilities of actions as given by the policy, and r t is the immediate reward. The state is represented by the coordinates of the agent and 2D environment observation, excluding any agent channel. Terminal states are instead represented by some value ∅ when sampled as s t+1. The architecture contains the policy π θ described in the previous sections, together with a value function V w, which takes the same input as the softmax layer of the policy, concatenated with the 3x3 neighborhood of the agent. Note that w and θ share all their weights until the end of the convolutional recurrence. At training time, given the stochastic policy at time step t denoted by π θ t, we sample a minibatch ofB transitions, denoted B, uniformly at random from the last L transitions, and perform gradient ascent over importance-weighted rewards (more details on the constants η, η, λ, C below): DISPLAYFORM0 where 1 {s =∅} is 1 if s is terminal and 0 otherwise. The capped importance weights min DISPLAYFORM1 p(a), C are standard in off-policy policy gradient (see e.g., BID19 and references therein). The capping constant (C = 10 in our experiments) allows control of the variance of the gradients at the expense of some bias. The second term of the update acts as a regularizer and forces the current predictions to be close enough the the ones that were made by the older model. It is supposed to play the role of TRPO-like regularizations as implemented in BID19, where they use probabilities of an average model instead of the previous probabilities. We observed that by keeping only the last 50000 transitions (to avoid trying to fit predictions of a bad model) worked well on our task. The update rule for the parameters of the value function also follows the standard rule for off-policy actor-critic. The learning rates η, λ and η also control the relative weighting of the different objectives when the weights are shared. In practice, we use RMSProp rather than plain SGD, with relative weights λ = η = 100.0η. Comparison between a random map of the VIN dataset, and a few random configuration of our training environment. In our custom grid-worlds, the number of blocks increases with size, but their percentage over the total available space is kept fixed. Agent and goal are shown as circles for better visualization, however they still occupy a single cell. Our grid-world experimental setting is mostly identical to the ones used in previous work on the same topic: an agent and a goal are placed at random positions in a 2d world of fixed dimensions. 
The agent is allowed to move in all 8 directions at each step, and a terminal state is reached when the agent either reaches the goal or hits one of the walls randomly placed when the task is sampled from the task distribution. The ratio of unwalkable blocks over total space is fixed to 30%, and the blocks are sampled uniformly within the space, unless specified otherwise (Figure 1). This setting provides environments with a good distribution of configurations, with chunks of obstacles as well as narrow and dangerous paths. Unless specified otherwise, attempting to walk into walls yields a reward of −1, valid movements give a reward of −0.01 × f (a t, s t), where f (a, s) is the cost of moving in the direction specified by action a in state s, and reaching the goal provides a positive reward of 1 to the agent. In our experiments we define f as the L2 distance between the agent position at state s t, and the one at s t+1. The agent is tasked to navigate the maze as fast as possible, as total cost increases with time since noop actions are not allowed. Episodes terminate whenever the agent takes any illegal action such as hitting a wall, or when the maximum number of steps is reached (which is set to 50). We use MazeBase BID15 to generate the configurations of our world and the agent interface for both training and testing phases. Additionally we also evaluate our trained agents on maps uniformly sampled from the 16 × 16 dataset originally used by BID17, so as to get a direct comparison with the previous work, and to confirm the quality of our baseline. We tested all the models on the other available datasets (8 × 8 and 28 × 28) too, without seeing significant changes in relative performance, so they are omitted from our evaluation. Similar to BID17, we employ a curriculum where the map size doesn't change at training time, but where the average length of the optimal path from the starting agent position is bounded by some value which gradually increases after a few training episodes. This allows the agent to more easily encounter the goal at initial stages of training, allowing for easier conditioning over the goal feature. Across all our tests, both VProp and MVProp greatly outperformed our implementation of VIN (15.6% ± 5.3% 68.8% ± 27.2% 100% 0.8 ± 0.3 0.4 ± 0.3 0.0 ± .0 64×64 4.0% ± 4.1% 53.2% ± 31.8% 100% 1.5 ± 0.4 0.5 ± 0.4 0.0 ± .0 TAB0 : Average performance at the end of training of all tested models on the static grid-worlds with 90% confidence value, across 5 different training runs (with random seeding). v16x16 correspond to the maps sampled from VIN's 16x16 grid test dataset, while the rest of the maps are sampled uniformly from our generator using the same parameters employed at training time. The distance to the optimal path is averaged only for successful episodes.randomly. The original VIN architecture was mostly tested in a fully supervised setting (via imitation learning), where the best possible route was given to the network as target. In the appendix, however, BID17 show that VIN can perform in a RL setting, obtaining an 82.5% success rate, versus the 99.3% success rate of the supervised setting on a map of 16 × 16. The authors do not show for the larger 28 × 28 map dataset, nor do they provide learning curves and variance, however overall these are consistent with the best performance we obtained from testing our implementation. That said, on average the model didn't perform as well as expected. 
Figure 2: Average reward of all the models as they train on our curriculum. Note again that in both cases the map size is 32 × 32. (a) and (c) demonstrate performance respectively on the VIN dataset and our generated 64x64 maps. (b) shows performance on evaluation maps constrained by the curriculum settings (segmented line), and without (continuous line). The final average performances of each model against the static-world experiments (TAB0) clearly demonstrate the strength of VProp and MVProp. In particular, MVProp very quickly learns to correctly approximate the transition dynamics, resulting in strong generalization right from the first few hundred episodes, hence obtaining near-optimal policies during the first thousand training episodes. Finally, we propose a set of experiments in which we allow our environment to spawn dynamic adversarial entities. Such entities at each step query a custom policy, which is executed in parallel to the agent's. Examples of these policies might include an ε-noop strategy, which makes the entity move in a random direction with probability ε or otherwise do nothing, or an ε-direction policy, which makes the entity move in a specific direction with probability ε or otherwise do nothing. We use the first category of policies to augment our standard path-planning experiments, generating enemies_only environments where 20% of the space is occupied by agents with ε = 0.5, and mixed environments with the same amount of entities, half consisting of fixed walls and the remaining half of agents with ε = 0.2. The latter type of policy is instead used to generate a deterministic but continuously changing environment which we call avalanche, in which the agent is tasked to reach the goal as quickly as possible while avoiding "falling" entities (which are also uniformly sampled up to 20% of the walkable area). To deal with these new environments, the agent needs to re-plan at each step because it does not have any prior knowledge about the entities' policies, which forces us to train on 8x8 maps to reduce the time spent rolling out the convolutional modules. This however allows us to train without a curriculum, as the agent is more likely to successfully hit the goal in a smaller area. FIG1 shows that VProp can successfully learn to handle the added stochasticity in the environment dynamics, and its generalization capabilities allow it to also successfully learn to tackle the avalanche maps on larger sizes (FIG2). VIN agents, however, while managing to learn policies able to tackle small sizes, gradually lose performance on larger sizes, which become significantly harder to deal with unless the dynamics are modelled correctly. Architectures that try to solve the large but structured space of navigation tasks have much to benefit from employing planners that can be learnt from data; however, these need to quickly adapt to local environment dynamics so that they can provide a flexible planning horizon without the need to collect new data and train again. Value Propagation modules' performance shows that, if the problem is carefully formalized, such planners can be successfully learnt via Reinforcement Learning, and that great generalization capabilities can be expected when these models are built on convnets and are correctly applied to 2D path-planning tasks. Furthermore, we have demonstrated that our methods can even generalize when the environments are dynamic, enabling them to be employed in complex, interactive tasks.
In future we expect to test our methods on a variety of tasks that can be embedded as graph-like structures (and for which we have the relevant convolutional operators). We also plan to evaluate the effects of plugging VProp into architectures that are employing VI modules (see Section 3), since most of these models could make use of the ability to propagate multiple channels to tackle more complex interactive environments. Finally, VProp architectures could be applied to algorithms used in mobile robotics and visual tracking BID2, as they can learn to propagate arbitrary value functions and model a wide range of potential functions.
We propose Value Propagation, a novel end-to-end planner which can learn to solve 2D navigation tasks via Reinforcement Learning, and that generalizes to larger and dynamic environments.
1,352
scitldr
Recommendation is a prevalent application of machine learning that affects many users; therefore, it is crucial for recommender models to be accurate and interpretable. In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems. In particular, we propose to extract feature interaction interpretations from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes. By not assuming the structure of the recommender system, our approach can be used in general settings. In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction. We found that our interaction interpretations are both informative and predictive, i.e., significantly outperforming existing recommender models. What's more, the same approach to interpreting interactions can provide new insights into domains even beyond recommendation. Despite their impact on users, state-of-the-art recommender systems are becoming increasingly inscrutable. For example, the models that predict if a user will click on an online advertisement are often based on function approximators that contain complex components in order to achieve optimal recommendation accuracy. The complex components come in the form of modules for better learning relationships among features, such as interactions between user and ad features (; ; ; ;). Although efforts have been made to understand the feature relationships, there is still no method that can interpret the feature interactions learned by a generic recommender system, nor is there a strong commercial incentive to do so. In this work, we identify and leverage feature interactions that represent how a recommender system generally behaves. We propose a novel approach, Global Interaction Detection and Encoding for Recommendation (GLIDER), which detects feature interactions that span globally across multiple data-instances from a source recommender model, then explicitly encodes the interactions in a target recommender model, both of which can be black-boxes. GLIDER achieves this by first utilizing feature interaction detection with a data-instance level interpretation method called LIME over a batch of data samples. GLIDER then explicitly encodes the collected global interactions into a target model via sparse feature crossing. In our experiments on ad-click recommendation, we found that the interpretations generated by GLIDER are informative, and the detected global interactions can significantly improve the target model's prediction performance, even in a setting where the source and target models are the same. Because our interaction interpretation method is very general, we also show that the interpretations are informative in domains outside of recommendation, such as image and text classification. Our contributions are as follows: 1. We propose GLIDER to detect and explicitly encode global feature interactions in blackbox recommender systems. Through experiments, we demonstrate the overall interpretability of detected feature interactions and show that they can be leveraged to improve recommendation accuracy. A simplified overview of GLIDER. 1 GLIDER uses interaction detection and LIME together to interpret feature interactions learned by a source black-box (recommender) model at a data instance, denoted by the large green plus sign. 
2 GLIDER identifies interactions that consistently appear over multiple data samples, then explicitly encodes these interactions in a target black-box recommender model f_rec. Interaction Interpretations: A variety of methods exist to detect feature interactions learned in specific models but not black-box models. For example, RuleFit, Additive Groves, and Tree-Shap detect interactions learned in trees, and Neural Interaction Detection detects interactions learned in a multilayer perceptron. Some methods have attempted to interpret feature groups in black-box models, such as Anchors, Agglomerative Contextual Decomposition, and Context-Aware methods; however, these methods were not intended to identify feature interactions. Explicit Interaction Representation: There is an increasing number of methods for explicitly representing interactions in models. Several directly incorporate multiplicative cross terms in neural network architectures, and others use attention as an interaction module, all of which are intended to improve the neural network's function approximation. This line of work found that predictive performance can improve with dedicated interaction modeling. Later work followed up by detecting interactions in data and then explicitly encoding them via feature crossing. Our work approaches this problem from a model interpretation standpoint to show that interaction interpretations are also useful in explicit encoding. Black-Box Local vs. Global Interpretations: Data-instance level local interpretation methods are more flexible at explaining general black-box models; however, global interpretations, which cover multiple data instances, have become increasingly desirable to better summarize model behavior. Local Interpretable Model-Agnostic Explanations (LIME) and Integrated Gradients are some of the most used methods to locally interpret any classifier and neural predictor, respectively. There are some methods for global black-box interpretations, such as shuffle-based feature importance, submodular pick, and visual concept extraction. §4.1 of this paper discusses local interaction interpretations, and §4.2-4.4 explain how we extract and utilize global interaction interpretations. Notations: Vectors are represented by boldface lowercase letters, such as x or w. The i-th entry of a vector x is denoted by x_i. For a set S, its cardinality is denoted by |S|. Let d be the number of features in a dataset. An interaction, I, is the indices of a feature subset: I ⊆ {1, 2, ..., d}, where interaction order |I| is always ≥ 2. A higher-order interaction always has order ≥ 3. For a vector x ∈ R^d, let x_I ∈ R^{|I|} be x restricted to the dimensions specified by I. Let a black-box model be f(·): R^p → R. A black-box recommender model uses tabular feature types, as discussed later in this section. In classification tasks, we assume f is a class logit. p and d may be different depending on feature transformations. Feature Interactions: By definition, a model f learns a statistical (non-additive) feature interaction I if and only if f cannot be decomposed into a sum of |I| arbitrary subfunctions δ_i, each not depending on its corresponding interaction variable, i.e., f(x) ≠ Σ_{i∈I} δ_i(x_{{1,...,d}\i}). For example, a multiplication between two features, x_1 and x_2, is a feature interaction because it cannot be represented as an addition of univariate functions, i.e., x_1 x_2 ≠ δ_1(x_2) + δ_2(x_1).
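As a small, self-contained illustration of this definition (not a component of GLIDER), the mixed finite difference is exactly zero for any additive function of two features and nonzero when the features interact:

```python
def mixed_difference(f, a, b, da=1.0, db=1.0):
    # zero for any f(x1, x2) = g(x1) + h(x2); nonzero signals an interaction
    return f(a + da, b + db) - f(a + da, b) - f(a, b + db) + f(a, b)

print(mixed_difference(lambda x1, x2: x1 + x2, 0.5, 0.3))  # 0.0 -> purely additive
print(mixed_difference(lambda x1, x2: x1 * x2, 0.5, 0.3))  # 1.0 -> interaction
```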
Recommendation Systems: A recommender system, f rec (·), is a model of two feature types: dense numerical features and sparse categorical features. Since the one-hot encoding of categorical feature x c can be high-dimensional, it is commonly represented in a low-dimensional embedding e c = one hot(x c)v c via embedding matrix v c. We now discuss the different components of GLIDER, starting from data-instance level (local) interpretations of interactions in §4.1, then global interaction detection in §4.2, and finally explicitly encoding the global interactions in §4.3. While our methodology is focused on recommender systems, it is not necessarily limited to this model type. Nonetheless, recommender systems are interesting because they have pervasive application in real-world systems, and their features are often very sparse. By sparse features, we mean features with many categories, e.g., millions of user IDs. The sparsity makes interaction detection challenging especially when applied directly on raw data because the one-hot encoding of sparse features creates an extremely large space of potential feature combinations . We start by explaining how to detect feature interactions in a black-box model at the data-instance level via interaction detection on feature perturbations. LIME Perturbation and Inference: Given a data instance x ∈ R p, LIME proposed to perturb the data instance by sampling a separate binary representation x ∈ {0, 1} d of the same data instance. Let ξ: {0, 1} d → R p be the map from the binary representation to the perturbed data instance. Starting from a binary vector of all ones that map to the original features values in the data instance, LIME uniformly samples the number of random features to switch to 0 or the "off" state. In the data instance, "off" could correspond to a 0 embedding vector for categorical features or mean value over a batch for numerical features. It is possible for d < p by grouping features in the data instance to correspond to single binary features in x. A key step in LIME interpretations is obtaining black-box predictions for the perturbed data instances to generate a dataset with binary inputs and prediction targets: Feature Interaction Detection: Feature interaction detection is concerned with identifying feature interactions in a dataset (; ; ;). Typically, proper interaction detection requires an expensive pre-processing step of feature selection to remove correlated features that adversely affect detection performance . Since the features in D are sampled randomly, they are uncorrelated by default, so we can directly use dataset D to detect feature interactions from black-box model f at a data instance x. f can be an arbitrary function and can generate highly nonlinear targets in D, so we focus on detecting interactions that could have generic forms. In light of this, we use a state-of-the-art method called Neural Interaction Detection (NID), which accurately and efficiently detects generic nonadditive and arbitrary-order statistical feature interactions . NID detects these interactions by training a lasso-regularized multilayer perceptron (MLP) on a dataset, then identifying the features that have high-magnitude weights to common hidden units. NID is efficient by greedily testing the top-interaction candidates of every order at each of h first-layer hidden units, enabling arbitrary-order interaction detection in O(hd) tests within one MLP. 
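The perturbation-and-inference step described above can be sketched as follows; here f is the black-box prediction function (e.g., a class logit), xi is the user-supplied map ξ from a binary vector back to a perturbed instance built around the instance being explained, and all names and defaults are illustrative assumptions. NID would then be trained on the returned pairs.

```python
import numpy as np

def perturbation_dataset(f, xi, d, n_samples=5000, seed=0):
    """Build the dataset D of (binary perturbation, black-box prediction) pairs."""
    rng = np.random.default_rng(seed)
    Z, y = [], []
    for _ in range(n_samples):
        n_off = rng.integers(0, d + 1)          # number of features to switch "off"
        z = np.ones(d, dtype=int)
        if n_off > 0:
            z[rng.choice(d, size=n_off, replace=False)] = 0
        Z.append(z)
        y.append(f(xi(z)))                      # prediction on the perturbed instance
    return np.asarray(Z), np.asarray(y)
```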
We can now define a function, MADEX(f, x), that inputs a black-box f and data instance x, and outputs S = {I_i}, a set of top-k detected feature interactions. MADEX stands for "Model-Agnostic Dependency Explainer". As the name suggests, MADEX is not limited to recommender models; it can also be used for general black-box models. In some cases, it is necessary to identify a k threshold. Because of the importance of speed for local interpretations, we simply use a linear regression with additional multiplicative terms to approximate the gains given by interactions in S, where k starts at 0 and is incremented until the linear model's predictions stop improving. In this section, we discuss the first step of GLIDER. As defined in §4.1, MADEX takes as input a black-box model f and data instance x. In the context of this section, MADEX inputs a source recommender system f_rec and data instance x = [x_1, x_2, ..., x_p]. x_i is the i-th feature field and is either a dense or sparse feature. p is both the total number of feature fields and the number of perturbation variables (p = d). We define global interaction detection as repeatedly running MADEX over a batch of data instances, then counting the occurrences of the same detected interactions, shown in Alg. 1, which collects global interactions I_i and their counts c_i over a dataset B:

Algorithm 1: Global interaction detection
1: G ← initialize occurrence dictionary for global interactions
2: for each data sample x within dataset B do
3:   S ← MADEX(f_rec, x)
4:   G ← increment the occurrence count of I_j ∈ S, ∀j = 1, 2, ..., |S|
5: sort G by most frequently occurring interactions
6: [optional] prune subset interactions in G within a target number of interactions K

The occurrence counts are not only a useful way to rank global interaction detections, but also a sanity check to rule out the chance that the detected feature combinations are random selections. One potential concern with Alg. 1 is that it could be slow depending on the speed of MADEX. In our experiments, the entire process took less than 3 hours when run in serial over a batch of 1000 samples with ∼40 features on a 32-CPU server. In addition, this algorithm is fully parallelizable and only needs to be run once to obtain the summary of global interactions. Each global interaction I_i from Alg. 1 is used to create a synthetic feature x_{I_i} for a target recommender system. The synthetic feature x_{I_i} is created by explicitly crossing the sparse features indexed in I_i. If interaction I_i involves dense features, we bucketize the dense features before crossing them. The synthetic feature is sometimes called a cross feature or conjunction feature. In this context, a cross feature is an n-ary Cartesian product among n sparse features. If we denote X_1, X_2, ..., X_n as the set of IDs for each respective feature x_1, x_2, ..., x_n, then their cross feature x_{1,...,n} takes on all possible values in X_1 × X_2 × · · · × X_n. Accordingly, the cardinality of this cross feature is |X_1| × · · · × |X_n| and can be extremely large, yet many combinations of values in the cross feature are likely unseen in the training data. Therefore, we generate a truncated form of the cross feature with only seen combinations of values, x_I^(j), where j is a sample index in the training data, and x_I^(j) is represented as a sparse ID in the cross feature x_I. We further reduce the cardinality by requiring the same cross feature ID to occur more than T times in a batch of samples, or setting it to a default ID otherwise.
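A minimal sketch of this truncated feature crossing is shown below; the dict-of-rows input format, the helper name, and the default-ID convention are assumptions for illustration.

```python
from collections import Counter

def build_cross_feature(train_rows, fields, T=100, default_id=0):
    """Map a tuple of sparse IDs (one per field in the interaction) to a cross-feature ID.

    Only ID combinations seen more than T times in the training batch receive
    their own cross-feature ID; all other combinations share default_id.
    """
    counts = Counter(tuple(row[f] for f in fields) for row in train_rows)
    kept = [combo for combo, c in counts.items() if c > T]
    vocab = {combo: i + 1 for i, combo in enumerate(kept)}

    def encode(row):
        return vocab.get(tuple(row[f] for f in fields), default_id)

    return encode

# The encoded cross feature is then added to the target recommender as one more
# sparse input field with its own embedding table.
```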
These truncation steps significantly reduce the embedding sizes of each cross feature while maintaining their representation power. Once cross features {x Ii} i are included in a target recommender system, it can be trained as per usual. There are dual perspectives of GLIDER: as a method for model distillation or model enhancement. If a strong source model is used to detect global interactions which are then encoded in more resourceconstrained target models, then GLIDER adopts a teacher-student type distillation process. If interaction encoding augments the same model where the interactions were detected from, then GLIDER tries to enhance the model's ability to represent the interactions. In our experiments, we study the effectiveness of GLIDER on real-world data. The hyperparameters for local interaction interpretation in our experiments are as follows. For all experiments, we use 5000 perturbation samples to train the models used for interaction detection. We use NID as the interaction detector, which requires training an MLP to detect each set of interactions. The MLPs for §5.3 have architectures of 50-30-10 first-to-last hidden layer sizes, and in §5.2, architectures of 256-128-64. We apply an 1 regularization of λ 1 = 5e−5, and the learning rate is 5e−3. In general, models are trained with early stopping on the validation set. For LIME perturbations, we need to establish what a binary 0 maps to via ξ in the raw data instance (§4.1). In domains involving embeddings, i.e., sparse features and word embeddings, the 0 ("off") state is the zeroed embedding vector. For dense features, it is the mean feature value over a batch; for images, the mean of each RGB of the image. For our DNA experiment, we use a random nucleotide other than the original one. These settings correspond to what is used in literature (; . In our graph experiment, the nodes within the neighborhood of a test node are perturbed, where each node is zeroed during perturbation. In this section, we provide experiments with GLIDER on models trained for clickthrough-rate (CTR) prediction. The recommender models we study include commonly reported baselines, which all use neural networks: Wide&Deep , DeepFM , Deep&Cross , xDeepFM , and AutoInt . AutoInt is the reported state-of-the-art in academic literature, so we use the model settings and data splits provided by AutoInt's official public repository 1. For all other recommender models, we use public implementations 2 with the same original architectures reported in literature, set all embedding sizes to 16, and tune the learning rate and optimizer to try to reach or surpass the test logloss reported by the AutoInt paper (on AutoInt's data splits). From tuning, we use the Adagrad optimizer with learning rates in {0.01, 0.001}. The datasets we use are benchmark CTR datasets with the largest number of features: Criteo 3 and Avazu 4, whose data statistics are shown in Table 1. Criteo and Avazu both contain 40+ millions of user records on clicking ads, with Criteo being the primary benchmark in CTR research (Cheng Table 3 : Test prediction performance by encoding top-K global interactions in baseline recommender systems on the Criteo and Avazu datasets (5 trials). K are 40 and 20 for Criteo and Avazu respectively. "+ GLIDER" means the inclusion of detected global interactions to corresponding baselines. The "Setting" column is labeled relative to the source of detected interactions: AutoInt.;;;; ). 
For each dataset, we train a source AutoInt model, f_rec, then run global interaction detection via Algorithm 1 on a batch of 1000 samples from the validation set. A full global detection experiment finishes between 2-3 hours when run in serial on either the Criteo or Avazu dataset on a 32-CPU Intel Xeon E5-2640 v2 @ 2.00GHz server, and significant speed-ups can be achieved by fully parallelizing Algorithm 1. The detection results across datasets are shown in Figure 2 as plots of detection counts versus rank. Because the Avazu dataset contains non-anonymized features, we directly show its top-10 detected global interactions in Table 2. From Figure 2, we see that the top interactions are detected very frequently across data instances, with one appearing across more than half of the batch. In Table 2, the top interactions can be explained. For example, the interaction between "hour" (in UTC time) and "device ip" makes sense because users - here identified by an IP address - have ad-click behaviors dependent on their time zones. We hypothesize that the global interaction detections are also informative for modeling purposes. Based on our results from the previous section (§5.2.1), we turn our attention to explicitly encoding the detected global interactions in target baseline models via truncated feature crosses (detailed in §4.3). In order to generate valid cross feature IDs, we bucketize dense features into a maximum of 100 bins before crossing them and require that final cross feature IDs occur more than T = 100 times over a training batch of one million samples. We take AutoInt's top-K global interactions on each dataset from §5.2.1 with subset interactions excluded (Algorithm 1, line 6) and encode the interactions in each baseline model, including AutoInt itself. Table 5: Prediction performance (mean-squared error; lower is better) with (k > 0) and without (k = 0) interactions for random data instances in the test sets of respective black-box models. k = L corresponds to the interaction at a rank threshold. 2 ≤ k < L are excluded because not all instances have 2 or more interactions. Only results with detected interactions are shown. At least 80% (≥ 320) of the data instances possessed interactions over 10 trials for each model/performance statistic. Rows of Table 5 - linear (LIME), k = 0: 9.8e−3 ± 9e−4, 0.101 ± 7e−3, 0.25 ± 0.07, 0.080 ± 3.0e−4; MADEX, k = 1: 8e−3 ± 1e−3, 0.056 ± 9e−3, 0.22 ± 0.06, 0.062 ± 8.1e−3; MADEX, k = L: 6e−3 ± 1e−3, 0.024 ± 7e−3, 0.16 ± 0.05, 0.038 ± 9.6e−3 - with one column per black-box model. There is consensus that 0.001 logloss or AUC improvements are significant in CTR prediction tasks. K is tuned on validation sets, and model hyperparameters are the same between a baseline and one with encoded interactions. We set K = 40 for Criteo and K = 20 for Avazu. We found that using GLIDER can often reach or exceed the 0.001 significance level, especially for the main Criteo benchmark dataset, as shown in Table 3. These performance gains are obtained at limited cost of extra model parameters (Table 4) thanks to the truncations applied to our feature crosses. In Figure 3, we also show how the test performance of AutoInt varies with different K on the Criteo dataset. On one hand, the evidence that AutoInt's detected interactions can improve other baselines' performance suggests the viability of interaction distillation. On the other hand, evidence that AutoInt's performance on Criteo can improve using its own detected interactions suggests that AutoInt may benefit from learning interactions more explicitly.
In either model distillation or enhancement settings, we found that GLIDER performs especially well on industry production models trained on large private datasets with thousands of features. Since the proposed interaction interpretations by GLIDER are not entirely limited to recommender systems, we demonstrate interpretations on more general black-box models. Specifically, we experiment with the function MADEX(·) defined in §4.1, which inputs a black-box f and a data instance x, and outputs a set of interactions. The models we use are trained on very different tasks: ResNet152, an image classifier pretrained on ImageNet'14; Sentiment-LSTM, a 2-layer bi-directional long short-term memory network (LSTM) trained on the Stanford Sentiment Treebank (SST); DNA-CNN, a 2-layer 1D convolutional neural network (CNN) trained on MYC-DNA binding data; and GCN, a 3-layer Graph Convolutional Network trained on the Cora dataset. We first provide quantitative validation for the detected interactions of all four models in §5.3.1, followed by qualitative results for ResNet152, Sentiment-LSTM, and DNA-CNN in §5.3.2. To provide quantitative validation of interaction interpretations of black-box models, we evaluate the predictive power of the interactions at the data instance level. As suggested in §4.1 and §4.3, encoding feature interactions is a way to increase a model's representational capacity, which also means that prediction performance gains over simpler first-order models (e.g., linear regression) are a way to test the significance of the detected interactions. In this section, we use neural network function approximators for each top interaction from the ranking {I_i} given by MADEX's interaction detector (in this case NID). Similar to the k-thresholding description in §4.1, we start at k = 0, which is a linear regression, then increment k, adding an MLP for each interaction, until validation performance stops improving, denoted k = L. The MLPs all have architectures of 30-10 first-to-last hidden layer sizes and use the binary perturbation dataset D (introduced in §4.1). Test prediction performances are shown in Table 5 for k ∈ {0, 1, L}. The average number of features of D among the black-box models ranges from 16 to 189. Our quantitative validation shows that adding feature interactions for DNA-CNN, Sentiment-LSTM, and ResNet152, and adding node interactions for GCN, results in significant performance gains when averaged over 40 randomly selected data instances in the test set. Figure 4: Qualitative results of the detected interactions by MADEX and the selected features by LIME's original linear regression ("LIME selection") on (a) ResNet152 and (b) Sentiment-LSTM. The interpretations between MADEX and LIME selection are complementary. For our qualitative analysis, we provide interaction interpretations via MADEX(·) of ResNet152, Sentiment-LSTM, and DNA-CNN on test examples. The interpretations are given by S, a set of k detected interactions, which are shown in Figures 4a and 4b for ResNet152 and Sentiment-LSTM respectively. Interactions that have majority overlap among S are merged, i.e., overlap coefficient > 0.5. For reference, we also show the features selected by LIME's original linear regression, which takes the top-5 features that attribute towards the predicted class 5. In Figure 4a, the MADEX columns show selected features from the detected interactions between Quickshift superpixels.
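As an illustration of the k-thresholding validation, the sketch below fits a linear model on the binary perturbation data and then greedily adds one small MLP per ranked interaction, stopping when validation error stops improving. It is a simplified, residual-fitting approximation (the paper may train the components jointly), and all names are ours:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def k_threshold_fit(Z_tr, y_tr, Z_va, y_va, interactions):
    """Start from a linear model on the binary perturbation data (k = 0), then add one
    30-10 MLP per ranked interaction I_i until validation MSE stops improving (k = L)."""
    lin = LinearRegression().fit(Z_tr, y_tr)
    preds_tr, preds_va = lin.predict(Z_tr), lin.predict(Z_va)
    best_mse, best_k = mean_squared_error(y_va, preds_va), 0
    for k, I in enumerate(interactions, start=1):
        I = list(I)
        mlp = MLPRegressor(hidden_layer_sizes=(30, 10), max_iter=500)
        mlp.fit(Z_tr[:, I], y_tr - preds_tr)          # fit this interaction on the residuals
        preds_tr = preds_tr + mlp.predict(Z_tr[:, I])
        preds_va = preds_va + mlp.predict(Z_va[:, I])
        mse = mean_squared_error(y_va, preds_va)
        if mse >= best_mse:
            break                                     # validation stopped improving
        best_mse, best_k = mse, k
    return best_k, best_mse
```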
We see that the interactions can form a single region or multiple regions of the image, and they are complementary to LIME's feature selection. For example, the interpretations of the "desktop computer" classification show that interaction detection finds one of the computers and feature selection finds the other. For Sentiment-LSTM interpretations in Figure 4b, we also see that MADEX's interactions can complement LIME's selected features. Here, the interactions show salient combinations of words, such as "science fiction" and "I like pug". In our experiments on DNA-CNN, we consistently detected the interaction between "CACGTG" nucleotides, which form a canonical DNA sequence. The interaction was detected in 76.5% of the 187 CACGTG appearances in the test set. We proposed GLIDER, which detects and explicitly encodes global feature interactions in black-box recommender systems. In our experiments, we found that the detected global interactions are informative and that explicitly encoding interactions can improve the accuracy of CTR predictions. We further validated interaction interpretations on image, text, and graph classifiers. We hope GLIDER encourages investigation into the complex interaction behaviors of recommender models to understand why certain feature interactions are very predictive. For future research, we wish to understand how feature interactions play a role in the integrity of automatic recommendations. In this section, we study whether increasing embedding size can obtain similar prediction performance gains as explicitly encoding interactions via GLIDER. We increase the embedding dimension sizes of every sparse feature in baseline recommender models to match the total number of model parameters of baseline + GLIDER as closely as possible. The embedding sizes we used to obtain similar parameter counts are shown in Table 6. For the Avazu dataset, most of the embedding sizes remain unchanged because they were already the target size. The corresponding prediction performances of all models are shown in Table 7. We observed that directly increasing embedding size / parameter counts generally did not give the same level of performance gains that GLIDER provided. Table 6: Comparison of # model parameters between baseline models with enlarged embeddings and original baselines + GLIDER (from Tables 3 and 4). The models with enlarged embeddings are denoted by the asterisk (*). The embedding dimension of sparse features is denoted by "emb. size". Percent differences are relative to baseline* models. M denotes million, and the ditto mark (") means no change from the line above. Table 7: Test prediction performance corresponding to the models shown in Table 6. We examine the effect of dense feature bucketization on the parameter efficiency and prediction performance of AutoInt. Results are provided for the Criteo dataset, which contains 13 dense features. Figure 5 shows the effects of varying the number of dense feature buckets on the total number of parameters and the test logloss of AutoInt. Figure 6 shows the effects of varying the number of dense buckets on the embedding sizes of the cross features involving dense features. Both the effects on the average and on individual embedding sizes are shown. 20 of the cross features involved a dense feature. Patterns to note include the largely asymptotic behavior of the parameter plots as the number of buckets increases (Figures 5a and 6).
Our requirement that a valid cross feature ID occurs more than T times (§4.3) restricts the growth in parameters. In some cases, the number of cross feature IDs (embedding size) decreases (Figure 6b), which happens when the dense bucket size becomes too small to satisfy the T-occurrence restriction. In Figure 5b, prediction performance degrades beyond 100 buckets, yet it is still an improvement over the baseline without cross features (0.4434 in Table 3). The degradation may be caused by overfitting. In this section, we show ranked results for interactions discovered by MADEX in Sentiment-LSTM. The top-1 interactions are provided on random sentences from the SST test set. For every sentence, we preprocess it to remove stop words. If interactions are not detected in a sentence, that sentence is excluded. Results are shown in Table 8. We use the same stop words suggested in prior work, i.e., {a, an, and, are, as, at, be, by, for, from, has, he, in, is, it, its, of, on, that, the, to, was, were, will, with}. In order to obtain occurrence counts of word interactions, we need to detect the same word interactions across different sentences. Naturally, different sentences will often have different words, so it is nontrivial to identify consistent word interactions. We start by collecting interaction candidates by running MADEX over all sentences in the SST test set, then identifying the word interactions that appear multiple times. Here, we make the assumption that word interactions are ordered but not necessarily adjacent or positionally bound. Therefore, a hypothetical interaction, (not, good), could be found in either sentence: "This is not good", or "This movie is not so good". The order of the words matters, so (not, good) ≠ (good, not). After collecting interaction candidates, we then look for sentences in the larger IMDB dataset that contain the same ordered words of an interaction candidate. Therefore, each interaction candidate I_i will have its own set of sentences W_i that could potentially yield that interaction. Let W_i contain a random selection of 40 viable sentences for I_i. The interactions {I_i} with the highest detection counts across the corresponding W_i are then reported. We compare the detection performances of MADEX and baselines on identifying feature interactions learned by complex models, i.e., XGBoost, Multilayer Perceptron (MLP), and Long Short-Term Memory Network (LSTM). The baselines are Tree-Shap, a method to identify interactions in tree-based models like XGBoost; MLP-ACD+, a modified version of ACD that searches all pairs of features in an MLP to find the best interaction candidate; and LSTM-ACD+, the same as MLP-ACD+ but for LSTMs. All baselines are local interpretation methods. For MADEX, we sample continuous features from a truncated normal distribution N(x, σ²I) centered at a specified data instance x and truncated at σ. We evaluate interaction detection performance by using synthetic data where ground truth interactions are known. We generate 10e3 samples of synthetic data using functions F1 − F4 (Table 10) with continuous features uniformly distributed between −1 and 1. Next, we train complex models (XGBoost, MLP, and LSTM) on this data. Lastly, we run MADEX and the baselines on 10 trials of 20 data instances at randomly sampled locations on the synthetic function domain. Between trials, the complex models are trained with different random initializations to test the stability of each interpretation method.
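The ordered-but-not-adjacent matching rule used to count word interactions across sentences can be written compactly; this is an illustrative sketch of ours, not the authors' code:

```python
def contains_ordered(words, interaction):
    """True if the words of `interaction` appear in `words` in the same order,
    allowing arbitrary gaps (ordered, but not adjacent or positionally bound)."""
    it = iter(words)
    return all(w in it for w in interaction)

# The hypothetical interaction (not, good) from the text:
print(contains_ordered("this is not good".split(), ("not", "good")))           # True
print(contains_ordered("this movie is not so good".split(), ("not", "good")))  # True
print(contains_ordered("this movie is not so good".split(), ("good", "not")))  # False: order matters
```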
Interaction detection performance is computed by the average R-precision 6 of interaction rankings across the sampled data instances. Results are shown in Table 11. On the tree-based model, MADEX can compete with the tree-specific baseline Tree-Shap, which only detects pairwise interactions. On MLP and LSTM, MADEX performs significantly better than ACD+. The performance gain is especially large in the LSTM setting.
Table 11: Detection performance in R-precision (higher is better). σ = 0.6 (max: 3.2). "Tree" is XGBoost. *Does not detect higher-order interactions. †Requires an exhaustive search of all feature combinations.
       | Tree-Shap | MADEX       | MLP-ACD+    | MADEX       | LSTM-ACD+   | MADEX
F1(x)  | 1 ± 0     | 1 ± 0       | 0.63 ± 0.08 | 1 ± 0       | 0.3 ± 0.2   | 1 ± 0
F2(x)  | 1 ± 0     | 0.14 ± 0.09 | 0.41 ± 0.06 | 0.97 ± 0.03 | 0.01 ± 0.02 | 0.96 ± 0.03
F3(x)  | 1 ± 0     | 1 ± 0       | 0.3 ± 0.2   | 1 ± 0       | 0.05 ± 0.08 | 1 ± 0
F4(x)  | *         | 0.2 ± 0.1   | †           | 0.61 ± 0.07 | †           | 0.54 ± 0.09
This section shows how often higher-order interactions are identified by GLIDER / MADEX. Figure 8 plots the occurrence counts of global interactions detected in AutoInt for the Criteo and Avazu datasets, which correspond to the results in Figure 2. Here we only show the occurrence counts of higher-order interactions, where the exact interaction order is annotated beside each data point. 3rd-order interactions are the most common type, and interestingly, an 8th-order interaction appears 13 times in the Avazu dataset. Figure 9 plots histograms of interaction orders for all interactions detected from ResNet152 and Sentiment-LSTM across 1000 random samples in their test sets. The average numbers of features are 67 and 19 for ResNet152 and Sentiment-LSTM, respectively. Higher-order interactions are especially common in ResNet152.
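For reference, R-precision of an interaction ranking can be computed as follows (this is the standard definition; the variable names and the toy example are ours):

```python
def r_precision(ranked, ground_truth):
    """R-precision: with R ground-truth interactions, the fraction of the
    top-R ranked candidates that are true interactions."""
    truth = {frozenset(g) for g in ground_truth}
    R = len(truth)
    hits = sum(1 for cand in ranked[:R] if frozenset(cand) in truth)
    return hits / R

# Example: ground truth {x1, x2} and {x3, x4, x5}; a detector's ranked candidates
print(r_precision([(1, 2), (2, 3), (3, 4, 5)], [(1, 2), (3, 4, 5)]))  # 0.5
```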
Proposed a method to extract and leverage interpretations of feature interactions
1,353
scitldr
Rectified linear units, or ReLUs, have become a preferred activation function for artificial neural networks. In this paper we consider the problem of learning a generative model in the presence of nonlinearity (modeled by the ReLU functions). Given a set of signal vectors $\mathbf{y}^i \in \mathbb{R}^d, i =1, 2, \dots, n$, we aim to learn the network parameters, i.e., the $d\times k$ matrix $A$, under the model $\mathbf{y}^i = \mathrm{ReLU}(A\mathbf{c}^i +\mathbf{b})$, where $\mathbf{b}\in \mathbb{R}^d$ is a random bias vector, and $\mathbf{c}^i \in \mathbb{R}^k$ are arbitrary unknown latent vectors. We show that it is possible to recover the column space of $A$ within an error of $O(d)$ (in Frobenius norm) under certain conditions on the distribution of $\mathbf{b}$. The Rectified Linear Unit (ReLU) is a basic nonlinear function ReLU: R → R+ ∪ {0} defined as ReLU(x) ≡ max(0, x). For any matrix X, ReLU(X) denotes the matrix obtained by applying the ReLU function to each of the coordinates of the matrix X. ReLUs are building blocks of many nonlinear data-fitting problems based on deep neural networks (see the standard references for a good exposition). In particular, prior work showed that supervised training of very deep neural networks is much faster if the hidden layers are composed of ReLUs. Let Y ⊂ R^d be a collection of signal vectors that are of interest to us. Depending on the application at hand, the signal vectors, i.e., the constituents of Y, may range from images, speech signals, and network access patterns to user-item rating vectors and so on. We assume that the signal vectors satisfy a generative model, where each signal vector can be approximated by a map g: R^k → R^d from the latent space to the ambient space, i.e., for each y ∈ Y, y ≈ g(c) for some c ∈ R^k. In this paper we consider the following specific model (a single-layer ReLU network), with weight (generator) matrix A ∈ R^{d×k} and bias b ∈ R^d: y = ReLU(Ac + b). This generative model raises multiple interesting questions that play a fundamental role in understanding the underlying data and designing systems and algorithms for information processing. Here, we consider the following network parameter learning problem under this specific generative model. Learning the network parameters: Given the n observations {y^i}_{i∈[n]} ⊂ R^d from the model, recover the parameters of the model, i.e., A ∈ R^{d×k}, such that y^i = ReLU(Ac^i + b) with latent vectors {c^i}_{i∈[n]} ⊂ R^k. We assume that the bias vector b is a random vector comprising i.i.d. coordinates, with each coordinate distributed according to the probability density function p(·). This question is closely related to the dictionary-learning problem. We also note that this question is different from the usual task of training a model, in which case the set {c^i}_{i∈[n]} is also known (and possibly chosen accordingly) in addition to {y^i}_{i∈[n]}. Related works. There has been a recent surge of interest in learning ReLUs, and the above question is of basic interest even for a single-layer network (i.e., nonlinearity comprising a single ReLU function). It is conceivable that understanding the behavior of a single-layer network would allow one to use some iterative peeling-off technique to develop a theory for generative models comprising multiple layers.
To the best of our knowledge, the network parameter learning problem, even for single-layer networks has not been studied as such, i.e., theoretical guarantees do not exist. Only in a very recent paper the unsupervised problem was studied when the latent vectors {c i} i∈ [n] are random Gaussian. The principled approaches to solve this unsupervised problem in practice reduce this to the'training' problem, such as the autoencoders that learn features by extensive end-to-end training of encoderdecoder pairs; or use the recently popular generative adversarial networks (GAN) that utilize a discriminator network to tune the generative network. The method that we are going to propose here can be seen as an alternative to using GANs for this purpose, and can be seen as an isolated'decoder' learning of the autoencoder. Note that the problem bears some similarity with matrix completion problems, a fact we greatly exploit. In matrix completion, a matrix M is visible only partially, and the task is to recover the unknown entries by exploiting some prior knowledge about M. In the case of, we are more likely to observe the positive entries of the matrix M, which, unlike a majority of matrix completion literature, creates the dependence between M and the sampling procedure.. With this notion, we can concisely represent the n observation vectors as where M = AC and 1 ∈ R n denotes the all-ones vector and ⊗ denotes the Kronecker product. Recall our assumption that b is a random vector comprising of i.i.d. coordinates with each coordinate distributed according to the probability density function p(·). Note that this model ensures that the bias corresponding to each coordinate is random, but does not change over different signal vectors. We employ a natural approach to learn the underlying weight matrix A from the observation matrix Y. As the network maps a lower dimensional latent vector c ∈ R k to obtain a signal vector y = ReLU(Ac + b) in dimension d > k, the matrix M = AC (cf.) is a low-rank matrix as long as k < min{d, n}. In our quest of recovering the weight matrix A, we first focus on estimating the matrix M, when given access to Y. This task can be viewed as estimating a low-rank matrix from its partial (randomized) observations. One of the main challenges that we face here is that while an entry of the matrix Y is a random variable (since b is a random bias), whether that is being observed or being cut-off by the ReLU function (for being negative) depends on the value of the entry itself. In general matrix completion literature, the entries of the matrix being observed are sampled independent of the underlying matrix itself (see, e.g., and references therein). For this reason, we cannot use most of these off-the-shelf. However, similar predicament is (partially) present in, where entries are quantized while being observed. This motivates us to employ a maximum-likelihood method inspired by. That said, our observation model differs from in a critical way: in our case the bias vector, while random, does not change over observations. This translates to less freedom during the transformation of the original matrix to the observed matrix, leading to dependence among the elements in a row. Furthermore, the analysis becomes notably different since the positive observations are not quantized. Furthermore, although our formulation is close to, because of the aforementioned differences in the observation models, we get much stronger guarantee on the recovery of the matrix. 
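A small sketch of sampling from this observation model (assuming, purely for illustration, Gaussian A, C, and b; the paper allows arbitrary latent vectors and a general bias density) may help fix the notation Y = ReLU(M + b ⊗ 1ᵀ):

```python
import numpy as np

def generate_relu_observations(d=100, k=10, n=500, seed=0):
    """Sample observations from the single-layer ReLU generative model
    y^i = ReLU(A c^i + b); the Gaussian choices below are illustrative only."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, k))        # unknown weight (generator) matrix
    C = rng.normal(size=(k, n))        # latent vectors c^i stacked as columns
    b = rng.normal(size=(d, 1))        # random bias: i.i.d. coordinates, fixed across samples
    M = A @ C                          # the low-rank matrix to be recovered
    Y = np.maximum(M + b, 0.0)         # ReLU applied entrywise; negative entries are cut off
    return Y, A, C, b
```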
Indeed, our results are comparable to analogous results of prior works that also study the quantized matrix completion problem. We show that our method guarantees the recovery of the matrix AC from Y with an error in Frobenius norm of at most O(√d) with high probability (see Theorem 1 for the formal statement). Then, leveraging well-known results on matrix perturbation, it is possible to also recover the column space of A with a similar guarantee. Extension to multi-layer networks. Our aim is to use a 'peeling' technique to extend our results to multi-layer networks. While rigorous theoretical guarantees for the network parameter learning problem do not extend to multi-layer networks as of now, we can still use the peel-off decoding for this case. Note that, even for the 'training' problem of such a network, which is a less demanding task than our problem, no rigorous guarantee exists beyond two layers. For example, state-of-the-art theoretical results for training hold only for one layer of nonlinearity (the second layer is assumed to have a linear activation function). In fact, most recent theoretical guarantees in this domain are restricted to one- or two-layer networks. Given this, obtaining provable guarantees for the multi-layer case of the network parameter learning problem seems to be challenging future work. We now focus on the task of estimating M from the observation matrix Y, where M_{i,j} denotes the (i, j)-th entry of M. For i ∈ [d] and j ∈ [n], let M_{i,(j)} denote the j-th largest element of the i-th row of M. Several simple properties of the observed entries follow directly from this model. Based on these observations, we define the set of matrices X_{Y,ν,γ} ⊂ R^{d×n}. Recall that p: R → R denotes the probability density function of each bias random variable B. We use the notation F(x1, x2) = P(−x1 ≤ B ≤ −x2) and X*_i = max_{j∈[n]} X_{i,j}. With this notation, the (normalized) log-likelihood of observing Y given that X is the original matrix can be written down explicitly. In order to recover the matrix M from the observation matrix Y, we employ a program that maximizes this likelihood over X_{Y,ν,γ}. Define ω_{p,γ,ν} to be such that F(x, y) ≥ ω_{p,γ,ν} for all x, y ∈ [−γ, γ] with |x − y| > ν. In what follows, we simply refer to this quantity as ω_p, given that γ and ν are clear from context. Further define the flatness and Lipschitz parameters β_γ(p) and L_γ(p) associated with a function f: R → R. The following result characterizes the performance of the proposed program. Theorem 1. Assume that ‖M‖_∞ ≤ γ, the observation matrix Y is related to M according to the model above, and the bias density function p(x) is differentiable with bounded derivative. Let M̂ be the solution of the proposed program. Then, with high probability, the recovery error ‖M̂ − M‖_F is bounded as stated above (of order √d), where C0 is a constant appearing in the bound. The quantities β_γ(p) and L_γ(p) depend on the distribution of the bias and are the flatness and Lipschitz parameters defined above. The full proof of Theorem 1 is omitted due to the page limit. In the first step of the proof, we show that, given the observation matrix Y, a suitable likelihood bound holds for any X ∈ X_{Y,ν,γ}. This fact follows from the regularity assumptions on the probability density p and a sequence of inequalities similar to the ones that relate the KL divergence and the Hellinger divergence. In the next step, we bound the remaining supremum term; to upper bound the right-hand side, we resort to standard techniques for bounding the supremum of an empirical process, such as symmetrization and the contraction principle. The details can be found in the full version. Recovering the network parameters.
Let us denote the recovered matrix by M̂ = M + E, where E is a perturbation matrix with bounded Frobenius norm (cf. Theorem 1). Now the task of recovering the parameters of the single-layer ReLU network is equivalent to solving for A given M̂ = M + E = AC + E. Note that, even without the perturbation E, we could only hope to recover the column space of A and not the exact matrix A. Let U_k and Û_k be the top-k left singular vectors of M and M̂, respectively. Suppose σ_k, the smallest non-zero singular value of M, is at least δ > 0. Then, it follows from standard results in matrix-perturbation theory that there exists an orthogonal matrix O ∈ R^{k×k} aligning Û_k with U_k, which is a guarantee that the column space of A is recovered by the column space of Û_k within an error of O(d) in Frobenius norm. Note that σ1 is the largest singular value of M. Future directions: Extension to multi-layer networks. To learn a multi-layer ReLU network, we propose a 'peeling' decoder as defined below. For better understanding, let us consider a two-layer model of the form Y = ReLU(M2 + b2 ⊗ 1ᵀ), where M2 = A2 ReLU(A1C + b1 ⊗ 1ᵀ). Our 'peeling' decoder will approach this layer by layer. First we use the likelihood-based matrix completion technique outlined above to recover M2, as in the case of a one-layer network. Note that we will not be able to recover M2 exactly, as Theorem 1 guarantees recovery only up to a Frobenius-norm error. This creates a hurdle to recovering A1C, since our current method does not handle dense bounded-norm noise while learning network parameters. Also, it will not be realistic to assume a probabilistic model for this bounded noise, since in the peeling-off process the noise comes from the decoding of the previous layer. Under certain coherence conditions on the matrix to be learned, it is possible to handle such situations, as in the matrix completion literature. Note that such a coherence condition must be satisfied by the parameters of every layer of the network. An amenable yet practical algorithm that is resilient to dense noise, under reasonable assumptions on the network parameters, is an immediate area of interest. Even when this first hurdle can be crossed, we still have the task of factorizing M2 to find A2 and ReLU(A1C + b1 ⊗ 1ᵀ), and in general we cannot find a unique factorization. Here as well, with additional reasonable assumptions the factorization can be made unique. Note that this might be simpler than the factorization step of the one-layer network, since there is already a lot of structure in the latter matrix (such as nonnegativity, due to being the output of a ReLU, and about ∼50% sparsity). Provided this can be made to work analytically for two layers, there should not be any theoretical issue left in extending the process to multiple layers.
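The SVD step for recovering the column space, together with the orthogonal alignment O used in the perturbation bound, can be sketched as follows (our code, using a standard orthogonal Procrustes alignment to measure the subspace error):

```python
import numpy as np

def recover_column_space(M_hat, k):
    """Estimate the column space of A from the recovered matrix M_hat = AC + E
    by taking its top-k left singular vectors."""
    U, _, _ = np.linalg.svd(M_hat, full_matrices=False)
    return U[:, :k]

def subspace_error(U_k, U_hat_k):
    """Frobenius-norm distance between two k-dimensional column spaces up to the best
    orthogonal alignment O, i.e. min_O ||U_hat_k O - U_k||_F (orthogonal Procrustes)."""
    W, _, Vt = np.linalg.svd(U_hat_k.T @ U_k)
    O = W @ Vt
    return np.linalg.norm(U_hat_k @ O - U_k, ord="fro")
```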
We show that it is possible to recover the parameters of a 1-layer ReLU generative model from looking at samples generated by it
1,354
scitldr
Methods that calculate dense vector representations for features in unstructured data—such as words in a document—have proven to be very successful for knowledge representation. We study how to estimate dense representations when multiple feature types exist within a dataset for supervised learning where explicit labels are available, as well as for unsupervised learning where there are no labels. Feat2Vec calculates embeddings for data with multiple feature types enforcing that all different feature types exist in a common space. In the supervised case, we show that our method has advantages over recently proposed methods; such as enabling higher prediction accuracy, and providing a way to avoid the cold-start problem. In the unsupervised case, our experiments suggest that Feat2Vec significantly outperforms existing algorithms that do not leverage the structure of the data. We believe that we are the first to propose a method for learning unsuper vised embeddings that leverage the structure of multiple feature types. Informally, in machine learning a dense representation, or embedding of a vector x ∈ R n is another vector y ∈ R r that has much lower dimensionality (r n) than the original representation, and can be used to replace the original vector in downstream prediction tasks. Embeddings have multiple advantages, as they enable more efficient training BID17, and unsupervised learning BID25. For example, when applied to text, semantically similar words are mapped to nearby points. We consider two kind of algorithms that use embeddings: Word2Vec BID17, are designed to provide embeddings that are useful for a wide-array of predictions tasks. For example, the loss function of the continuous bag of words (CBOW) algorithm of Word2Vec is tuned to predict the next word of a sequence; however, in practice, the embeddings produced are mostly used for other tasks, such as analogy solving BID17, or sentiment analysis BID15. In the context of this paper, we refer to the embeddings of an unsupervised method that can be used for a variety of auxiliary prediction tasks as general-purpose. 2. Supervised methods, like matrix factorization, produce embeddings that are highly tuned to a prediction task. These embeddings may be interpretable but do not usually generalize to other tasks. We refer to these embeddings as task-specific. Matrix factorization and Word2Vec are unable to calculate embeddings for items that are not available during training ("cold-start" problem). While recent work using n-gram features BID2 have addressed this limitation for supervised and unsupervised tasks, it can only be used for a single feature type-words. In this paper we propose Feat2Vec as a novel method that allows calculating embeddings of arbitrary feature types from both supervised and unsupervised data. Our main contributions are:• Unsupervised Feat2Vec. Existing general-purpose dense representation methods are largely restricted to one or two feature types. For example, the Word2Vec methods can only calculate embeddings for words, while follow-up work has enabled embeddings for both words and documents BID15. To our knowledge, Feat2Vec is the first algorithm that is able to calculate general-purpose embeddings that are not tuned for a single specific prediction task for arbitrary feature types.• Supervised Feat2Vec. 
Task-specific methods can use arbitrary feature types, but are restricted in that embeddings must be calculated for each individual feature, while sometimes higher-level of abstractions may be desirable-for example, we may want to have embeddings of documents instead of simply words. This capability makes Supervised Feat2Vec extremely flexible. We demonstrate that our method can be used to calculate embeddings of unseen (cold-start) items when there is an alternative textual description. Factorization Machine BID22 ) is one of the most successful methods for general-purpose factorization. BID22 formulated it as an extension to polynomial regression. Consider a degree-2 polynomial (quadratic) regression, where we want to predict a target variable y from a vector of inputs x ∈ R n: DISPLAYFORM0 In words, n is the total number of features, the term b 0 is an intercept, b i is the strength of the i-th feature, and w i,j is the interaction coefficient between the i-th and j-th feature. The function ω is an activation. Choices for ω include a linear link (ω(x) = x) for continuous outputs, or a logistic link (ω(x) =exp (x) exp(x)+1 ) for binary outputs. Factorization Machine replaces the two-way individual pairwise parameters w i,j for each interaction with a vector of parameters w i for each feature. This is a rank-r vector of latent factors-embeddings in the neural literature-that encode the interaction between features and replaces the quadratic regression model with the following: DISPLAYFORM1 Intuitively, the dot product (·) returns a scalar that measures the (dis)similarity between the latent factors of features x i and x j. Polynomial regression has n 2 interaction parameters, and Factorization Machine has n×r. While setting r n makes the model less expressive, factorization will typically exploit features having some shared latent structure. Factorization Machine may dramatically reduce the number of parameters to estimate. BID22 shows that when the feature vector x consists only of two categorical features in one-hot encoding, Factorization Machine is equivalent to the popular Matrix Factorization algorithm BID14 ). We now describe how Feat2Vec extends the Factorization Machine model by allowing grouping of features, and enabling arbitrary feature extraction functions (§ 3.1). We also report a supervised method to learning Feat2Vec (§ 3.2), as well as a novel unsupervised training procedure (§ 3.3). We propose a framework for extending factorization machine with neural methods, by introducing structure into the feature interactions. Specifically, we do this by defining feature groups, κ, where each group contains features of a particular type. Explicitly, κ is a partition of the set of feature columns in a dataset and each set within the partition is a feature group. The embeddings of a feature group are then learned via a feature extraction function, φ i, defined for each feature group. Feat2Vec will then extract features from each feature group, and build r latent factors from them. In Factorization Machine, all the feature embeddings interact with each other, while in Feat2Vec, the interactions only occur between different feature groups. Formally, the addition of deep extraction methods yields the following statistical model: DISPLAYFORM0 In this notation, x κi is a subvector that contains all of the features that belong to the group κ i. Thus, DISPLAYFORM1 The intuition is that by grouping (sub-)features as a single entity, we can can reason on a higher level of abstraction. 
Instead of individual sub-features interacting among each other, the embeddings of feature groups interact with those of other groups. φ i is a feature extraction that inputs the i-th feature group of the instance, and returns an r-dimensional embedding. The feature extraction function φ i can allow for an arbitrary processing of its subfeatures. Across groups, entities interact with each other via the output of φ only. As a concrete example of an application of this grouping/feature extraction, we might group the individual words of a document into a "document" feature group, and allow this document embedding to then interact with learned embeddings of other document metadata (such as author id). We might expect the extraction function φ for the words in a document to extract features that characterize the attributes of the document taken as a whole, rather than simply the sum of its individual words. Figure 1 compares existing factorization methods with our novel model. In this example, Feat2Vec is using two feature groups: the first group only has a single feature which is projected to an embedding (just like a regular Factorization Machine); the second group has multiple features, which are together projected to a single embedding. The simplest implementation for φ i is a linear fully-connected layer, where the output of the r-th entry is: DISPLAYFORM2 DISPLAYFORM3 Note that without loss of generality, we could define a model that is equivalent to a shallow Factorization Machine by allowing each feature group to be a singleton: κ = {{x 1}, {x 2}... {x n}} and the linear extraction function presented in Equation 4.We can use Feat2Vec to both use large feature sets and overcome the cold-start problem. This is only possible when there is an alternative description of the item available (for example an image or a passage of text). In FIG1, we show how we address this problem by treating the words as indexed features, but placed within a structured feature group κ w, the group of word features. A feature extraction function φ acts on the features in κ w, and the other features interact with the words only via the output of φ. Notice that this implies we can precompute and store the latent factors of the target task seen during training, so that predictions during inference can be sped-up. For example if we have two feature groups (e.g, a label and an item), first we compute the feature extraction function to the unseen items and their embeddings, and then we simply apply a dot product over the stored vectors of the labels. FIG0 shows an approach of using neural networks within factorization machines that has been proposed multiple times BID5 BID7. It replaces the dot product of factors with a learned neural function, which has been shown to improve predictive accuracy for various tasks. In this case, fast inference for cold-start documents using pre-computed label embeddings is no longer possible. It needs to store the entire neural function that takes the embeddings as inputs. Another shortcoming of replacing the dot product with a neural function is that it would no longer be possible to interpret the embeddings as containing latent factors related to the target task; There may be highly complex mappings from the embeddings to the final output via this neural function. However, it would be straightforward to combine this approach with Feat2Vec. This is not explored in this work. 
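The difference between the flat Factorization Machine interaction term and the grouped Feat2Vec score can be sketched as follows. This is an illustrative implementation of ours, not the paper's code: the link function ω and the Feat2Vec first-order terms are omitted, and a naive O(n²) loop is used instead of the usual algebraic speed-up.

```python
import numpy as np

def fm_pairwise_score(x, V, b0, b):
    """Second-order Factorization Machine term: every pair of individual features
    interacts through the dot product of its r-dimensional factor vectors V[i], V[j]."""
    n = len(x)
    s = b0 + float(b @ x)
    for i in range(n):
        for j in range(i + 1, n):
            s += float(V[i] @ V[j]) * x[i] * x[j]
    return s

def feat2vec_score(groups, extractors, b0=0.0):
    """Feat2Vec-style grouped score: each feature group is mapped to one r-dimensional
    embedding by its own extraction function phi_i, and only group-level embeddings
    of *different* groups interact via dot products."""
    embs = [phi(g) for phi, g in zip(extractors, groups)]
    s = b0
    for i in range(len(embs)):
        for j in range(i + 1, len(embs)):
            s += float(embs[i] @ embs[j])
    return s

# Toy usage: an item-id group (embedding lookup) and a bag-of-words group (summed embeddings)
r = 8
item_table, word_table = np.random.randn(50, r), np.random.randn(1000, r)
phi_item = lambda i: item_table[i]
phi_words = lambda ids: word_table[ids].sum(axis=0)
print(feat2vec_score([3, np.array([10, 42, 7])], [phi_item, phi_words]))
```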
We can learn the the parameters of a deep factorization model θ using training data by minimizing a loss function L: DISPLAYFORM0 Here, y(x) is the true target value for x obtained from training data, andŷ(x) is the one estimated by the model; the hyperparameter γ controls the amount of regularization. For the labeling and classification tasks, we optimize the binary cross-entropy for y ∈ {0, 1}: DISPLAYFORM1 For the regression tasks where the target value is continuous, we optimize the mean squared error (MSE): DISPLAYFORM2 Neural models are typically learned using mini-batch updates, where the incremental descent is performed with respect to several instances at a time. For the implementation of this paper, we built our models using the Keras programming toolkit BID3, that is now part of Tensorflow BID0. It enables automatic differentiation, and is bundled with a generalpurpose optimization algorithm called ADAM BID12 ) that needs no tuning of gradient step sizes. It is straightforward to optimize Equation 5 directly for multiclass or binary classification. However, when the number of labels is very large, it is common practice use a binary classifier and sample the negative examples BID4. For the multi-label classification tasks, we use Feat2Vec with a binary output. In this case we would have at least two feature groups-one of the feature groups is the label that we want to predict, and the other group(s) is the input from which we want to make the prediction. The output indicates whether the label is associated with the input (y = +1), or not (y = 0). The datasets we use for our labeling experiments only contains positive labels, thus for each training example we sample a set of negative labels equal to the number of positive labels. It is typical to use one of the following sampling strategies according to the best validation error, in each case excluding the actual positive labels for each training example -(i) uniformly from all possible labels, or (ii) from the empirical distributions of positive labels. Other sampling strategies have been proposed BID24 BID23. We now discuss how Feat2Vec can be used to learn embeddings in an unsupervised setting with no explicit target for prediction. The training dataset for a Feat2Vec model consists of only the observed data. In natural language, these would be documents written by humans. Since Feat2Vec (Equation 3) requires positive and negative examples, we also need to supply unobserved data as negative examples. Consider a feature group κ i, that exists in very high dimensional space. For example, this could happen because we are modeling with one-hot encoding a categorical variable with large number of possible values. In such scenario, it is overwhelmingly costly to feed the model all negative labels, particularly if the model is fairly sparse. A shortcut around this is a concept known as implicit sampling, where instead of using all of the possible negative labels, one simply samples a fixed number (k) from the set of possible negative labels for each positively labelled record. Word2Vec makes use of an algorithm called Negative Sampling, that has little theoretical guarantees BID4. In short, their approach samples a negative observation from a noise distribution Q w2v, that is proportional to the empirical frequency of a word in the training data. We introduce a new implicit sampling method that enables learning unsupervised embeddings for structured feature sets. 
We can learn the correlation of features within a dataset by imputing negative labels, simply by generating unobserved records as our negative samples. Unlike Word2Vec, we do not constraint features types to be words. Features groups can be individual columns in a data matrix, but they need not to be. By grouping subfeatures using the parameter κ in Equation 3, the model can reason on more abstract entities in the data. By entity, we mean a particular feature group value. For example, in our experiments on a movie dataset, we use a "genre" feature group, where we group non-mutually exclusive indicators for movie genres including comedy, action, and drama films. We start with a dataset S + of records with | κ| feature groups. We then mark all observed records in the training set as positive examples. For each positive record, we generate k negative labels using the following 2-step algorithm:Algorithm 1 Implicit sampling algorithm for unsupervised Feat2Vec: DISPLAYFORM0 for j ∈ {1, . . ., k} do 6:x − ← x + set initially to be equal to the positive sample 7:Draw a random feature group valuex ∼ Q 2 (Xκ i, α2) 8:x − κ i ←x substitute the i-th feature type with the sampled one 9: DISPLAYFORM1 end for 11:end for 12:return S − Explained in words, our negative sampling method for unsupervised learning iterates over all of the observations of the training dataset. For each observation x +, it randomly selects the i-th feature group from a noise distribution Q 1 (·). Then, it creates a negative observation that is identical to x +, except that its i-th feature group is replaced by a value sampled from a noise distribution Q 2 (·). In our application, we use the same class of noise distributions (flattened multinomial) for both levels of sampling, but this need not necessarily be the case. We now describe the two noise distributions that we use. We use P Q (x) to denote the probability of x under a distribution Q.Sampling Feature Groups. The function params calculates the complexity of a feature extraction function φ i. To sample a feature group, we choose a feature group κ i from a multinomial distribution with probabilities proportional a feature's complexity. By complexity, we mean the number of parameters we need to learn that are associated with a particular feature group. This choice places more weight on features that have more parameters and thus are going to require more training iterations to properly learn. The sampling probabilities of each feature group are: DISPLAYFORM0 For categorical variables using a linear fully-connected layer, the complexity is simply proportional to the number of categories in the feature group. However, if we have multiple intermediate layers for some feature extraction functions (e.g., convolutional layers), these parameters should also be counted towards a feature group's complexity. The hyper-parameter α 1 helps flatten the distribution. When α 1 = 0, the feature groups are sampled uniformly, and when α 1 = 1, they are sampled proportional to their complexity. Figure A.1 in the Appendix provides a visualization of how the feature sampling rate varies with the hyperparameter for features with differing levels of complexity. Sampling Feature Group Values. To sample a value from within a feature groups κ i, we use a similar strategy to Word2Vec and use the empirical distribution of values: DISPLAYFORM1 Here, count(x) is the number of times a feature group value x appeared in the training dataset S +, and α 2 is again a flattening hyperparameter. 
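A compact sketch of the two-step implicit sampling is given below. This is our illustrative code, not the released implementation; the flattening follows the Q1 and Q2 noise distributions above with hyperparameters α1 and α2.

```python
import numpy as np

def flatten(counts, alpha):
    """Flattened multinomial: probabilities proportional to counts**alpha."""
    p = np.asarray(counts, dtype=float) ** alpha
    return p / p.sum()

def sample_negatives(x_pos, group_names, group_probs, value_dists, k=5, rng=None):
    """For one positive record x_pos (a dict keyed by feature group), draw k negatives:
    step 1 picks a feature group with probability proportional to its complexity
    (already flattened by alpha_1 in group_probs); step 2 resamples that group's value
    from its flattened empirical distribution value_dists[group] = (values, probs)."""
    rng = rng or np.random.default_rng()
    negatives = []
    for _ in range(k):
        g = rng.choice(len(group_names), p=group_probs)           # step 1: choose a group
        values, probs = value_dists[group_names[g]]
        x_neg = dict(x_pos)                                       # copy the positive record
        x_neg[group_names[g]] = values[rng.choice(len(values), p=probs)]  # step 2: swap the value
        negatives.append(x_neg)
    return negatives
```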
This method will sometimes by chance generate negatively labeled samples that do exist in our sample of observed records. The literature offers two possibilities: in the Negative Sampling that Word2Vec follows, the duplicate negative samples are simply ignored BID4. Alternatively, it is possible to account for the probability of random negative labels that are identical to positively labeled data using Noise Contrastive Estimation (NCE) BID8. For our unsupervised learning of embeddings, we optimize a NCE loss function, to adjust the structural statistical modelŷ = p(y = 1| x, φ, θ), expressed in Equation 3 to account for the possibility of random negative labels that appear identical to positively labeled data. θ here represents the parameters learned in during training (i.e. the b i terms and parameters associated with the extraction functions φ i in Equation 3). Since we only deal with a dichotomous label, indicating a positive or negative sample, for unsupervised learning, we restrict our attention to usage of Equation 3 with ω as a logistic link function. An additional burden of NCE is that we need to calculate a partition function Z x for each unique record type x in the data that transforms the probabilityŷ of a positive or negative label into a wellbehaved distribution that integrates to 1. Normally, this would introduce an astronomical amount of computation and greatly increase the complexity of the model. As a work-around, we appeal to the work of BID18, who showed that in the context of language models that setting the Z x = 1 in advance effectively does not change the performance of the model. The intuition is that if the underlying model has enough free parameters that it will effectively learn the probabilities itself. Thus, it does not over/under predict the probabilities on average (since that will in penalties on the loss function).Written explicitly, the new structural probability model is: DISPLAYFORM0 where s denotes the score of a record x given parameter values/extraction functions: DISPLAYFORM1 and P Q denotes the total probability of a record x i being drawn from our negative sampling algorithm, conditional on the positively labeled record x + the negative sample is drawn for: DISPLAYFORM2 Our loss function L optimizes θ, the parameters of the feature extraction functions φ, while accounting for the probability of negative samples. DISPLAYFORM3 Feat2Vec has interesting theoretical properties. For example, it is well known that Factorization Machines can be used as a multi-label classifier: with at least two features, one can use one of the feature as the target label, and the other as the input feature to make a prediction. In such setting, the output indicates whether the label is associated with the input (y = +1), or not (y = 0), and therefore the input can be associated with more than one label. With n feature types, Feat2Vec is equivalent to optimizing a convex combination of the loss functions from n individual Factorization Machines. In other words, it optimizes n multi-label classifiers, where each classifier is optimized for a different target (i.e.,a specific feature group). We show the proof of this in the Appendix 1. We now address our working hypotheses for evaluating supervised embeddings. For all our experiments we define a development set and a single test set which is 10% of the dataset, and a part of the development set is used for early stopping or validating hyper-parameters. 
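Returning to the NCE-corrected objective above: assuming the standard NCE form with the partition function fixed to 1, the per-record probability and loss could look like the following sketch. The exact expressions in the paper's equations are not reproduced here, and the names are ours.

```python
import numpy as np

def nce_prob_positive(score, noise_prob, k):
    """Probability that a record is a true (positive) record under the standard NCE form
    with Z fixed to 1: P(y=1 | x) = exp(s(x)) / (exp(s(x)) + k * P_Q(x)).
    `score` is s(x) from the factorization model, `noise_prob` is P_Q(x) from the
    two-step sampler, and k is the number of negatives drawn per positive."""
    e = np.exp(score)
    return e / (e + k * noise_prob)

def nce_loss(pos_scores, pos_noise, neg_scores, neg_noise, k):
    """Binary cross-entropy over positives (label 1) and sampled negatives (label 0)."""
    p_pos = nce_prob_positive(pos_scores, pos_noise, k)
    p_neg = nce_prob_positive(neg_scores, neg_noise, k)
    return -(np.log(p_pos).sum() + np.log(1 - p_neg).sum())
```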
Since these datasets are large and require significant time to train on an Nvidia K80 GPU cluster, we report on only a single training-test split. For the multi-label classification task in 4.1.1 we predict a probability for each document-label pair and use an evaluation metric called Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC). Since we only observe positive labels, for each positive label in the test set we sample negative labels according to the label frequency. This ensures that if a model merely predicts the labels according to their popularity, it would have an AUC of 0.5. A caveat of our evaluation strategy is that we could be underestimating the performance of our models-there is a small probability that the sampled negatives labels are false negatives. However, since we apply the same evaluation strategy consistently across our methods and baselines, the relative difference of the AUC is meaningful. We choose the AUC as a metric because it is popular for both classification and ranking problems. For the regression task in 4.1.2, we use mean squared error (MSE) as the evaluation metric. In preliminary experiments we noticed that regularization slows down convergence with no gains in prediction accuracy, so we avoid overfitting only by using early stopping. We share most of the code for the experiments online 1 for reproducibility. For our feature extraction function φ for text, we use a Convolutional Neural Network (CNN) that has been shown to be effective for natural language tasks BID11 BID28. In Appendix B we describe this network and its hyper-parameters. Instead of tuning the hyper-parameters, we follow previously published guidelines BID30. We compare Feat2Vec with an extension of matrix factorization that can generalize to unseen items for text documents, Collaborative Topic Regression , a method with an open-source Python implementation 2. We evaluate them on the CiteULike dataset which consists of pairs of scientific articles and the users who have added them to their personal libraries, and it contains 16,980 unique articles and 5,551 unique users. We use the models to predict users who For CTR we use the hyper-parameters reported by the authors as best, except for r which we found had a significant impact on training time. We only consider r ∈ {5, 10, 15} and choose the value which gives the best performance for CTR (details in Appendix A.2). On the warm-start condition, CTR has an AUC of 0.9356; however, it shows significant degradation in performance for unseen documents and it only performs slightly better than random chance with an AUC of 0.5047. On the other hand, Feat2Vec achieves AUC of 0.9401 on the warm-start condition, and it only degrades to 0.9124 on unseen documents. Feat2Vec can also be trained over ten times faster, since it can leverage GPUs. 4 We also note that we have not tuned the architecture or hyper-parameters of the feature extraction function φ and greater improvements are possible by optimizing them. We now compare with a method called DeepCoNN, a deep network specifically designed for incorporating text into matrix factorization BID31 -which reportedly, is the state of the art for predicting customer ratings when textual reviews are available. For Feat2Vec we use the same feature extraction function (see Appendix B.1 for details) used by DeepCoNN. We evaluate on the Yelp dataset 5, which consists of 4.7 million reviews of restaurants. 
For each user-item pair, DeepCoNN concatenates the text from all reviews for that item and all reviews by that user. The concatenated text is fed into a feature extraction function followed by a factorization machine. In contrast, for Feat2Vec, we build 3 feature groups: item identifiers (in this case, restaurants), users and review text. TAB0 compares our methods to DeepCoNN's published because a public implementation is not available. We see that Feat2Vec provides a large performance increase when comparing the reported improvement, over Matrix Factorization, of the mean squared error. Our approach is more general, and we claim that it is also more efficient. Since DeepCoNN concatenates text, when the average reviews per user isn u and reviews per item isn i, each text is duplicated on averagen i ×n u times per training epoch. In contrast, for Feat2Vec each review is seen only once per epoch. Thus it can be 1-2 orders of magnitude more efficient for datasets wheren i ×n u is large. Ex ante, it is unclear to us how to evaluate the performance of an unsupervised embedding algorithm, but we felt that a reasonable task would be a ranking task one might practically attempt using our datasets. This task will assess the similarity of trained embeddings using unseen records in a left-out dataset. In order to test the relative performance of our learned embeddings, we train our unsupervised Feat2Vec algorithm and compare its performance in a targeted ranking task to Word2Vec's CBOW algorithm for learning embeddings. In our evaluation approach, we compare the cosine similarity of the embeddings of two entities where these entities are known to be associated with each other since they appear in the same observation in a test dataset. In particular, in the movie dataset 3 For warm-start we use https://www.cs.cmu.edu/˜chongw/data/citeulike/folds/ cf-train-1-items.dat and for cold-start predictions, we use the file ofm-train-1-items.dat and the corresponding test sets for each 4 Feat2Vec and MF were trained on an Nvidia K80 GPU, while CTR was trained on a Xeon E5-2666 v3 CPU.5 https://www.yelp.com/dataset/challenge we compare the similarity of movie directors to those of actors who were cast in the same film for a left-out set of films. For our educational dataset, we compare rankings of textbooks by evaluating the similarity of textbook and user embeddings. We evaluate the rankings according to their mean percentile rank (MPR): DISPLAYFORM0 where R i is the rank of the entity under our evaluation procedure for observation i. This measures on average how well we rank actual entities. A score of 0 would indicate perfect performance (i.e. top rank every test sample given), so a lower value is better under this metric. See the appendix §A.1 for further details on the experimental setup. Movies The Internet Movie Database (IMDB) is a publicly available dataset 6 of information related to films, television programs and video games. Though in this paper, we focus only on data on its 465,136 movies. TAB2.1 in the appendix (§A.1) summarizes the feature types we use. It contains information on writers, directors, and principal cast members attached to each film, along with metadata. Education We use a dataset from an anonymized leading technology company that provides educational services. 
In this proprietary dataset, we have 57 million observations and 9 categorical feature types which include textbook identifier, user identifier, school identifier, and course the book is typically used with, along with other proprietary features. Here, each observation is an "interaction" a user had with a textbook. After training, we use the cast members associated with the movies of the test set and attempt to predict the actual director the film was directed. We take the sum of the cast member embeddings, and rank the directors by cosine similarity of their embeddings to the summed cast member vector. If there is a cast member in the test dataset who did not appear in the training data, we exclude them from the summation. For the educational dataset, we simply use the user embedding directly to get the most similar textbooks. TAB1 presents the from our evaluation. Feat2Vec sizably outperforms CBOW in the MPR metric. In fact, Feat2Vec predicts the actual director 2.43% of the times, while CBOW only does so 1.26% of the time, making our approach almost 2 times better in terms of Top-1 Precision metric. We explore in greater detail the distribution of the rankings in the appendix in §A.2. We now focus on how well Feat2Vec performs on a real-valued feature with a complex feature extraction function. We expect this task to highlight Feat2Vec's advantage over token-based embedding learning algorithms, such as Word2Vec, since our rating embedding extraction function will require embeddings of numerically similar ratings to be close, while Word2Vec will treat two differing ratings tokens as completely different entities. We evaluate the prediction of the real-valued rating of movies in the test dataset by choosing the IMDB rating embedding most similar 7 to the embedding of the movie's director, and compute the Root Mean Squared Error (RMSE) of the predicted rating in the test dataset. We also vary α 1, the flattening hyperparameter for feature group 6 http://www.imdb.com/interfaces/ 7 As before, the metric is cosine similarity. sampling, to see what effect this hyperparameter has on our performance. Intuitively, a low α 1 will greatly improve the quality of the ratings embeddings learned, since it has relatively few parameters and is otherwise sampled infrequently. At the same time, with low α 1 the director feature will be sampled less since it is one of the most complex features to learn, so the learned director embeddings may be of poorer quality. FIG2 displays the of our experiment, benchmarked against the performance of Word2Vec's CBOW algorithm in the prediction task. We also show as a baseline the RMSE of a random uniform variable over the range of possible ratings (0 to 10). As is evident from the plot, CBOW performs a bit better than a random prediction, but is also handily outperformed by Feat2Vec across all hyper-parameter settings. The algorithm's performance does not seem very sensitive to the hyperparameter choice. The original Factorization Machine formulation has been extended for multiple contexts. For example, Field-Aware Factrorization Machine BID10 allows different weights for some feature interactions, but does not allow feature groups or feature extraction functions like Feat2Vec does. Algorithms that calculate continuous representations of entities other than words have been proposed for biological sequences BID1, of vertices in network graphs BID19 or in machine translation for embeddings of complete sentences BID13. 
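Returning to the ranking evaluation described earlier (summing cast-member embeddings and ranking all director embeddings by cosine similarity, summarized by the mean percentile rank), a sketch is given below. The MPR normalization by the number of candidates is our assumption, since the exact formula appears only in the paper's equation; all names are ours.

```python
import numpy as np

def cosine_rank(query_vec, candidate_matrix, true_index):
    """Rank all candidates by cosine similarity to the query embedding and return
    the 1-based rank of the true candidate."""
    q = query_vec / np.linalg.norm(query_vec)
    C = candidate_matrix / np.linalg.norm(candidate_matrix, axis=1, keepdims=True)
    order = np.argsort(-(C @ q))                      # candidate indices, most similar first
    return int(np.where(order == true_index)[0][0]) + 1

def mean_percentile_rank(ranks, n_candidates):
    """Average of R_i / |candidates| over test observations; lower is better."""
    return float(np.mean([r / n_candidates for r in ranks]))

# Example usage for one film: sum its cast-member embeddings, then rank director embeddings
# rank = cosine_rank(cast_embeddings.sum(axis=0), director_embeddings, true_director_index)
```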
Generative Adversarial Networks (GANs) BID6 have been used to produce unsupervised embeddings of images effective for classification BID21 and for generating natural language BID20. To our knowledge, GANs have not been used for jointly embedding multiple feature types. Adversarial training could be an alternative to NCE for unsupervised learning, but we leave this for future study. We recently discovered a promising direction in an algorithm still in development called StarSpace BID29 with similar goals to ours. Even though they intend to be able to embed all types of features, at the time of the writing of this paper, their pre-print method was limited to working only with bags of words. While Feat2Vec can jointly learn embeddings for all feature values in a dataset, StarSpace samples a single arbitrary feature. Our preliminary experiments suggest that sampling a single feature does not produce embeddings that generalize well. Nonetheless, a limitation of our work is that we do not compare with StarSpace, which future work may decide to do. Embeddings have proven useful in a wide variety of contexts, but they are typically built from datasets with a single feature type, as in the case of Word2Vec, or tuned for a single prediction task, as in the case of Factorization Machine. We believe Feat2Vec is an important step towards general-purpose methods, because it decouples feature extraction from prediction for datasets with multiple feature types, it is general-purpose, and its embeddings are easily interpretable. In the supervised setting, Feat2Vec is able to calculate embeddings for whole passages of text, and we show experimental results outperforming an algorithm specifically designed for text, even when using the same feature extraction CNN. This suggests that the need for ad-hoc networks should be situated in relationship to the improvements over a general-purpose method. In the unsupervised setting, Feat2Vec's embeddings are able to capture relationships across features up to twice as well as Word2Vec's CBOW algorithm on some evaluation metrics. Feat2Vec exploits the structure of a dataset to learn embeddings in a way that is structurally more sensible than existing methods. The sampling method and loss function that we use have interesting theoretical properties. To the extent of our knowledge, Unsupervised Feat2Vec is the first method able to calculate continuous representations of data with arbitrary feature types. Future work could study how to reduce the amount of human knowledge our approach requires; for example, by automatically grouping features into entities, or by automatically choosing a feature extraction function. These ideas can extend to our codebase that we make available 8. Overall, we evaluate supervised and unsupervised Feat2Vec on 2 datasets each. Though further experimentation is necessary, we believe that our results are an encouraging step towards general-purpose embedding models. Feature-type table (partial): Bag of categories, 244,241, "George Johnson", "Jack Russell"; Principal cast members (actors), Bag of categories, 1,104,280, "George Clooney", "Brad Pitt", "Julia Roberts". A APPENDIXES A.1 UNSUPERVISED RANKING EXPERIMENT DETAILS For our evaluation, we define a testing set that was not used to tune the parameters of the model. For the IMDB dataset, we randomly select a 10% sample of the observations that contain a director that appears at least twice in the database 9.
We do this to guarantee that the set of directors in the left-out dataset appear during training at least once, so that each respective algorithm can learn something about the characteristics of these directors. For the educational dataset, our testing set only has observations of textbooks and users that appear at least 10 times in training. For both Feat2Vec and CBOW, we perform cross-validation on the loss function, by splitting the 10% of the training data randomly into a validation set, to determine the number of epochs to train, and then train the full training dataset with this number of epochs.10 While regularization of the embeddings during training is possible, this did not dramatically change , so we ignore this dimension of hyperparameters. We rank left-out entity pairs in the test dataset using the ordinal ranking of the cosine similarity of target and input embeddings. For the IMDB dataset, the target is the director embedding, and the input embedding is the sum of the cast member embeddings. For the educational dataset, the target is the textbook embedding, and the input embedding is the user embedding. For training Feat2Vec we set α 1 = α 2 = 3/4 in the IMDB dataset; and α 1 = 0 and α 2 = 0.5 for the educational. In each setting, α 2 is set to the same flattening hyperparameter we use for CBOW to negatively sample words in a document. We learn r = 50 dimensional embeddings under both algorithms. Below we describe how CBOW is implemented on our datasets for unsupervised experiments and what extraction functions are used to represent features in the IMDB dataset. Word2Vec For every observation in each of the datasets, we create a document that tokenizes the same information that we feed into Feat2Vec. We prepend each feature value by its feature name, and we remove spaces from within features. In Figure A.2 we show an example document. Some features may allow multiple values (e.g., multiple writers, directors). To feed these features into the models, for convenience, we constraint the number of values, by truncating each feature to no more than 10 levels (and sometimes less if reasonable). This in retaining the full set of information for well over 95% of the values. We pad the sequences with a "null" category whenever necessary to maintain a fixed length. We do this consistently for both Word2Vec and Feat2Vec. We use the CBOW Word2Vec algorithm and set the context window to encompass all other tokens in a document during training, since the text in this application is unordered. Here, we explain how we build these functions:• Bag of categories, categorical, and boolean: For all of the categorical variables, we learn a unique r-dimensional embedding for each entity using a linear fully-connected layer (Equation 4). We do not require one-hot encodings, and thus we allow multiple categories to be active; ing in a single embedding for the group that is the sum of the embeddings of the subfeatures. This is ordering-invariant: the embedding of "Brad Pitt" would be the same when he appears in a movie as a principal cast member, regardless whether he was 1st or 2nd star. Though, if he were listed as a director it may in a different embedding.• Text: We preprocess the text by removing non alpha-numeric characters, stopwords, and stemming the remaining words. We then follow the same approach that we did for categorical variables, summing learned word embeddings to a "title embedding" before interacting. 
It would be easy to use more sophisticated methods (e.g, convolutions), but we felt this would not extract further information.• Real-valued: For all real-valued features, we pass these features through a 3-layer feedforward fully connected neural network that outputs a vector of dimension r, which we treat as the feature's embedding. Each intermediate layer has r units with relu activation functions. These real-valued features highlight one of the advantages of the Feat2Vec algorithm: using a numeric value as an input, Feat2Vec can learn a highly nonlinear relation mapping a real number to our high-dimensional embedding space. In contrast, Word2Vec would be unable to know ex ante that an IMDB rating of 5.5 is similar to 5.6. Figure A.3 shows the full distribution of rankings of the IMDB dataset, rather than summary statistics, in the form of a Cumulative Distribution Function (CDF) of all rankings calculated in the test dataset. The graphic makes it apparent for the vast majority of the ranking space, the rank CDF of Feat2Vec is to the left of CBOW, indicating a greater probability of a lower ranking under Feat2Vec. This is not, however, the case at the upper tail of ranking space, where it appears CBOW is superior. However, when we zoom-in on the absolute upper region of rankings (1 to 25), which might be a sensible length of ranks one might give as actual recommendatiosn, it is the case that up until rank 8 or so, Feat2Vec outperforms CBOW still. Intermediate rankings are still strong signals that our Feat2Vec algorithm is doing a better job of extracting information into embeddings, particularly those entities that appear sparsely in the training data and so are especially difficult to learn. Theorem 1. The gradient for learning embeddings with Feat2Vec is a convex combination of the gradient from n targeted Factorization Machines for each feature in the data when each feature group is a singleton, where n is the total number of features in the dataset. Proof. Let S + κi denote the positively labeled records whose corresponding negative samples resample feature κ i. For convenience, suppress the inclusion of learned parameters θ in the notation in this section while understanding the feature extraction functions φ implicitly include these parameters. We can express the loss function L, the binary cross-entropy of the data given the Feat2Vec model, as follows: DISPLAYFORM0 is simply the probability of the record's feature value x f under the second step noise distribution Q 2 (X f, α 2): DISPLAYFORM1 We now drop the term containing the probability of assignment to a feature group p(x + ∈ S + κi) since it is outside of the learned model parameters φ and fixed in advance: DISPLAYFORM2 Thus, the loss function is just a convex combination of the loss functions of the targeted classifiers for each of the p features, and by extension so is the gradient since: DISPLAYFORM3 Thus the algorithm will, at each step, learn a convex combination of the gradient for a targeted classifier on feature f, with weights proportional to the feature group sampling probabilities in step 1 of the sampling algorithm. Note that if feature groups are not singletons, the gradient from unsupervised Feat2Vec will analogously be a convex combination of n gradients learned from supervised learning tasks on each of the n feature groups. Figure A.4: Feature extraction network used for labelling tasks. 
We use f=1000 convolutional filters each of width 3 (words) Here we describe the details of the feature extraction function φ used in our experiments for supervised tasks in §4.1. An overview of the network is given in Fig. A.4. We choose the most common words of each dataset to build a vocabulary of size n, and convert the words of each document to a sequence of length t of one-hot encodings of the input words. If the input text is shorter than t, then we pad it with zeros; if the text is longer, we truncate it by discarding the trailing words. Therefore, for a vocabulary size n, the input has dimensions t × n. These t× dimensional matrix is then passed through the following layers:1. We use an embedding layer to assign a d-dimensional vector to each word in the input passage of text. This is done through a d × n-dimensional lookup table, which in an t × d matrix.2. We extract features from the embeddings with functions called convolutional filters BID16 (also called feature maps). A convolutional filter is simply a matrix learned from an input. We learn f filters that are applied on groups of m adjacent word embeddings, thus each of our filters is a d × m matrix of learned parameters. Filters are applied by computing the element-wise dot product of the filter along a sliding window of the entire input. The ing output for each filter is a vector of length t − m + 1. We also apply a ReLU activation to the output of each filter.3. Consider the case of inputs of different lengths. For very short texts, the output of the filters will be mostly zero since the input is zero-padded. To enforce learning from the features of the text, and not just its length we apply a function called 1-max pooling to the output of the filters: from the t − m + 1 output vector of each filter, we select the maximum value. This yields a vector of length F, a representation of the passage which is independent of its length.4. We learn higher-level features from the convolutional filters. For this, we use a fully connected layer with p units and a ReLU activation, 5. During training (not in inference), we prevent the units from co-adapting too much with a dropout layer BID26. Dropout is a form of regularization that for each mini-batch randomly drops a specified percentage of units.6. the final embedding for x j (that is used in the factorization) is computed by a dense layer with r output units and an activation function, where r is the embedding size of our indexable items. We set the maximum vocabulary size n to 100,000 words, and input embedding size d to 50 for all experiments. We initialize the input word embeddings and the label embeddings using Word2Vec BID17 We have have not evaluated multiple architectures or hyperparameter settings and obtain good on diverse datasets with the same architecture, which was designed followed recommendations from a large scale evaluation of CNN hyper parameters BID30. We set the number of convolutional filters f to 1,000, and the dropout rate to 0.1. The maximum sequence length t was chosen according to the typical document length (350 words for CiteULike and 250 for Yelp). For the CTR dataset, because we use very small values of r, due to the tendency of the ReLU units to'die' during training (output zero for all examples), which can have a significant impact, we used instead PReLU activations BID9 for the final layer, since they do not suffer from this issue. The CNN architecture used for DeepCoNN BID31 ) is similar to the previous section. 
It consists of a word embedding lookup table, a convolutional layer, 1-max pooling, and a fully connected layer. We use the hyper-parameters that the authors report as best: 100 convolutional filters and 50 units for the fully connected layer. We set the word embedding size to 100, the vocabulary size to 100,000, and the maximum document length to 250. To compare Feat2Vec with Collaborative Topic Regression, we choose the embedding size r ∈ {5, 10, 15} for which CTR performs best. The results are shown in TAB2.2.
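As a concrete illustration of the CNN feature extraction function φ described in this appendix, the following is a minimal PyTorch sketch. The hidden-layer width p, the omission of the final activation, and all module names are our own assumptions for illustration; only the overall pipeline (embedding, convolution, 1-max pooling, fully connected layer, dropout, final dense layer) follows the text.

import torch
import torch.nn as nn

class TextFeatureExtractor(nn.Module):
    """CNN feature extractor for a passage of text, following the six steps above."""
    def __init__(self, n=100_000, d=50, f=1000, m=3, p=100, r=50, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(n, d, padding_idx=0)   # step 1: word embedding lookup table
        self.conv = nn.Conv1d(d, f, kernel_size=m)       # step 2: f filters of width m words
        self.fc = nn.Linear(f, p)                        # step 4: higher-level features (p is assumed)
        self.drop = nn.Dropout(dropout)                  # step 5: dropout regularization
        self.out = nn.Linear(p, r)                       # step 6: final r-dimensional embedding
                                                         # (final activation omitted in this sketch)
    def forward(self, tokens):                           # tokens: (batch, t) integer word ids
        x = self.embed(tokens).transpose(1, 2)           # (batch, d, t)
        x = torch.relu(self.conv(x))                     # (batch, f, t - m + 1)
        x = x.max(dim=2).values                          # step 3: 1-max pooling over time
        x = torch.relu(self.fc(x))
        x = self.drop(x)
        return self.out(x)                               # passage embedding used in the factorization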
Learn dense vector representations of arbitrary types of features in labeled and unlabeled datasets
1,355
scitldr
We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language. Specifically, we train a RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language. Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an {\em abstraction} obtained by clustering small sets of MDFA states into ``''superstates''. A qualitative analysis reveals that the abstraction often has a simple interpretation. Overall, the suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure. Recurrent neural networks (RNNs) seem "unreasonably" effective at modeling patterns in noisy realworld sequences. In particular, they seem effective at recognizing grammatical structure in sequences, as evidenced by their ability to generate structured data, such as source code (C++, LaTeX, etc.), with few syntactic grammatical errors BID9. The ability of RNNs to recognize formal languages -sets of strings that possess rigorously defined grammatical structure -is less well-studied. Furthermore, there remains little systematic understanding of how RNNs recognize rigorous structure. We aim to explain this internal algorithm of RNNs through comparison to fundamental concepts in formal languages, namely, finite automata and regular languages. In this paper, we propose a new way of understanding how trained RNNs represent grammatical structure, by comparing them to finite automata that solve the same language recognition task. We ask: Can the internal knowledge representations of RNNs trained to recognize formal languages be easily mapped to the states of automata-theoretic models that are traditionally used to define these same formal languages? Specifically, we investigate this question for the class of regular languages, or formal languages accepted by finite automata (FA).In our experiments, RNNs are trained on a dataset of positive and negative examples of strings randomly generated from a given formal language. Next, we ask if there exists a decoding function: an isomorphism that maps the hidden states of the trained RNN to the states of a canonical FA. Since there exist infinitely many FA that accept the same language, we focus on the minimal deterministic finite automaton (MDFA) -the deterministic finite automaton (DFA) with the smallest possible number of states -that perfectly recognizes the language. Our experiments, spanning 500 regular languages, suggest that such a decoding function exists and can be understood in terms of a notion of abstraction that is fundamental in classical system theory. An abstraction A of a machine M (either finite-state, like an FA, or infinite-state, like a RNN) is a machine obtained by clustering some of the states of M into "superstates". Intuitively, an abstraction Figure 1: t-SNE plot (Left) of the hidden states of a RNN trained to recognize a regular language specified by a 6-state DFA (Right). Color denotes DFA state. The trained RNN has abstracted DFA states 1(green) and 2(blue) (each independently model the pattern * ) into a single state. 
A loses some of the discerning power of the original machine M, and as such recognizes a superset of the language that M recognizes. We observe that the states of a RNN R, trained to recognize a regular language L, commonly exibit this abstraction behavior in practice. These states can be decoded into states of an abstraction A of the MDFA for the language, such that with high probability, A accepts any input string that is accepted by R. Figure 1 shows a t-SNE embedding BID13 of RNN states trained to perform language recognition on strings from the regex [(( {2}+)?)3+]. Although the MDFA has 6 states, we observe the RNN abstracting two states into one. Remarkably, a linear decoding function suffices to achieve maximal decoding accuracy: allowing nonlinearity in the decoder does not lead to significant gain. Also, we find the abstraction has low "coarseness", in the sense that only a few of the MDFA states need be clustered, and a qualitative analysis reveals that the abstractions often have simple interpretations. RNNs have long been known to be excellent at recognizing patterns in text BID10 BID9. Extensive work has been done on exploring the expressive power of RNNs. For example, finite RNNs have been shown to be capable of simulating a universal Turing machine BID16. BID4 showed that the hidden state of a RNN can approximately represent dynamical systems of the same or less dimensional complexity. In particularly similar work, BID19 showed that second order RNNs with linear activation functions are expressively equivalent to weighted finite automata. Recent work has also explored the relationship between RNN internals and DFAs through a variety of methods. Although there have been multiple attempts at having RNNs learn a DFA structure based on input languages generated from DFAs and push down automata BID3 BID5 BID7 BID14 BID17, most work has focused on extracting a DFA from the hidden states of a learned RNN. Early work in this field BID6 demonstrated that grammar rules for regular grammars could indeed be extracted from a learned RNN. Other studies BID18 tried to directly extract a DFA structure from the internal space of the RNN, often by clustering the hidden state activations from input stimuli, noting the transitions from one state to another given a particular new input stimuli. Clustering was done by a series of methods, such as K-Nearest Neighbor BID2, K-means BID11, and Density Based Spatial Clustering of Applications with Noise (DBSCAN) BID2 BID12. Another extraction effort BID1 uses spectral algorithm techniques to extract weighted automata from RNNs. Most recently, BID21 have achieved state-of-the-art accuracy in DFA extraction by utilizing the L* query learning algorithm. Our work is different from these efforts in that we directly relate the RNN to a ground-truth minimal DFA, rather than extracting a machine from the RNN's state space. The closest piece of related work is by BID20. Like our work, this seeks to relate a RNN state with the state of a DFA. However, the RNN in BID20 exactly mimics the DFA; also, the study is carried out in the context of a few specific regular languages that are recognized by automata with 2-3 states. In contrast, our work does not require exact behavioral correspondence between RNNs and DFAs: DFA states are allowed to be abstracted, leading to loss of information. Also, in our approach the mapping from RNN states to FA states can be approximate, and the accuracy of the mapping is evaluated quantitatively. 
We show that this allows us to establish connections between RNNs and DFAs in the setting of a broad class of regular languages that often demand significantly larger automata (with up to 14 states) than those studied by BID20. We start by introducing some definitions and notation. A formal language is a set of strings over a finite alphabet Σ of input symbols. A Deterministic Finite Automaton (DFA) is a tuple A = (Q, Σ, δ, q 0, F) where Q is a finite set of states, Σ is a finite alphabet, δ: Q × Σ → Q is a deterministic transition function, q 0 ∈ Q is a starting state and F ⊂ Q is a set of accepting states. A reads strings over Σ symbol by symbol, starting from the state q 0 and making state transitions, defined by δ, at each step. It accepts the string if it reaches a final accepting state in F after reading it to the end. The set of strings accepted by a DFA is a special kind of formal language, known as a regular language. A regular language L can be accepted by multiple DFAs; such a DFA A L is minimal if there exists no other DFA A = A L such that A exactly recognizes L and has fewer states than A L. It can be shown that this minimal DFA (MDFA), which we denote by A 0 L, is unique BID8.Abstractions. A Nondeterministic Finite Automaton (NFA) is similar to a DFA, except that the deterministic transition function δ is now a non-deterministic transition relation δ N F A. This means that for a state q in the NFA and a ∈ Σ, we have that δ N F A (q, a) is now a subset of NFA states. For a given regular language L we denote by A n L a Nondeterministic Finite Automaton (NFA) with n states that recognizes a superset of the language L. An abstraction map is a map α: DISPLAYFORM0 DISPLAYFORM1 We define the coarseness of an abstraction A n L, as the number of applications of α on the MDFA required to arrive at A n L. Intuitively, repeated applications of α create NFAs that accept supersets of the language L recognized by the MDFA, and can hence be seen as coarse-grained versions of the original MDFA. The coarsest NFA, given by A DISPLAYFORM2, is a NFA with only one accepting node and it accepts all strings on the alphabet Σ.Given a regular language L, we define R L to be a RNN that is trained to recognize the language L, with a certain threshold accuracy. Each RNN R L will have a corresponding set of hidden states denoted by H. More details about the RNN are provided in §4.1. Note that a RNN can also be viewed as a transition system with 5-tuple R = (H, Σ, δ R, h 0, F R), where H is a set of possible'hidden' states (typically H ⊂ R K), δ R is the transition function of the trained RNN, h 0 is the initial state of the RNN, and F R is the set of accepting RNN states. The key distinction between a DFA and a RNN is that the latter has a continuous state space, endowed with a topology and a metric. Decoding DFA States from RNNs. Inspired by methods in computational neuroscience BID0, we can define a decoding function or decoder f: H → Q 0 as a function from the hidden states of a RNN R L to the states of the corresponding DISPLAYFORM3. We are interested in finding decoding functions that provide an insight into the internal knowledge representation of the RNN, which we quantify via the decoding and transitional accuracy metrics defined below. DISPLAYFORM4 repeatedly n times, and let Q n be the set of states of A n L. We can define an abstraction decoding functionf: DISPLAYFORM5, that is the composition of f with α |n|. 
The function α |n| is the function obtained by taking n compositions of α with itself. Given a dataset of input strings D ⊂ Σ *, we can define the decoding accuracy of a mapf for an abstraction DISPLAYFORM6 where 1(C) is the boolean indicator function that evaluates to 1 if condition C is true and to 0 otherwise, h t+1 = δ R (h t, a t) and q t+1 = δ 0 (q t, a t). Note in particular, that for decoding abstraction states the condition is only checking if the (t + 1) RNN state is mapped to the (t + 1) NFA state byf, which may be true even if the (t + 1) RNN state is not mapped to the (t + 1) MDFA state by the decoding function f. Therefore a functionf can have a high decoding accuracy even if the underlying f does not preserve transitions. Decoding Abstract State Transitions. We now define an accuracy measure that takes into account how well transitions are preserved by the underlying function f.Intuitively, for a given decoding functionf and NFA A n L, we want to check whether the RNN transition on a is mapped to the abstraction of the MDFA transition on a. We note that in the definition of the decoding function, we take into account only the states at (t + 1) and not the underlying transitions in the original MDFA A 0 L, unlike we do here. More precisely, the transitional accuracy of a mapf for a given RNN and abstraction, with respect to a data-set D, is defined as: DISPLAYFORM7 Our experiments in the next section demonstrate that decoding functions with high decoding and transitional accuracies exist for abstractions with relatively low coarseness. Our goal is to experimentally test the hypothesis that a high accuracy, low coarseness decoder exists from R L to A ∈ L) example strings (see Appendix for details). We then train R L with D on the language recognition task: given an input string x ∈ Σ *, is x ∈ L? Thus, we have two language recognition models corresponding to state transition systems from which state sequences are extracted. Given a length T input string x = (x 1, x 2, x t, ..., x T) ∈ D, let the categorical states generated by A 0 L be denoted by q = (q 0, q 1, q t, ..., q T) and continuous states generated by the R L be h = (h 0, h 1, h t, ..., h T). The recorded state trajectories (q and h) for all input strings x ∈ D are used as inputs into our analysis.. For our experiments, we sample a total of ∼ 500 unique L, and thus perform an analysis of ∼ 500 recognizer MDFAs and ∼ 500 trained recognizer RNNs. As mentioned in the beginning of §4, we must first determine what is a reasonable form for the decoders f andf to ensure high accuracy on the decoding task. FIG4 shows decoding accuracy DISPLAYFORM0 for several different decoding functions f. We test two linear classifiers (Multinomial Logistic Regression and Linear Support Vector Machines (SVM)) and two non-linear classifiers (SVM with a RBF kernel, Multi-Layer Perceptrons with varying layers and hidden unit sizes). In order to evaluate whether accuracy varies significantly amongst all decoders, we use a statistically appropriate F-test. Surprisingly, we find there to be no statistical difference umong our sampled languages: the nonlinear decoders achieve no greater accuracy than the simpler linear decoders. We also observe in our experiments that as the size of the MDFA M increases, the decoding accuracy decreases for all decoders in a similar manner. FIG4 shows this relationship for the multinomial logistic regression classifier. Taken together, these have several implications. 
First, we find that a highly expressive nonlinear decoder does not yield any increase in decoding accuracy, even as we scale up in MDFA complexity. We can conclude from this finding and our extensive hyperparameter search for each decoder model that the decoder models we chose are expressive enough for the decoding task. Second, we find that decoding accuracy for MDFA states is in general not very high. These two observations suggest linear decoders are sufficient for the decoding task, but also suggests the need for a different interpretation of the internal representation of the trained RNN. Given the information above, how is the hidden state space of the R L organized? One hypothesis that is consistent with the observations above is that the trained RNN reflects a coarse-grained abstraction of the state space Q 0 (Figure 1), rather than the MDFA states themselves. To test this hypothesis, we propose a simple greedy algorithm to find an abstraction mapping α: (a) given an NFA A n L with n unique states in Q n, consider all (n − 1)-partitions of Q n−1 (i.e. two NFA states s, s have merged into a single superstate {s, s}); (b) select the partition with the highest decoding accuracy; (c) Repeat this iterative merging process until only a 2-partition remains. We note that this algorithm does not explicitly take into consideration the transitions between states which are essential to evaluating φf (R L, A n L). Instead, the transitions are taken into account implicitly while learning the decoder f at each iteration of the abstraction algorithm. Decreasing the number of states in a classification trivially increases DISPLAYFORM0. We compare to a baseline where the states abstracted are random to validate our method. We compute the normalized Area Under the Curve (AUC) of a decoder accuracy vs coarseness plot. Higher normalized AUC indicates a more accurate abstraction process. We argue through FIG5 that our method gives a non-trivial increase over the abstraction performance of a random baseline. The abstraction algorithm is greedy in the sense that we may not find the globally optimal partition (i.e. with the highest decoding accuracy and lowest coarseness), but an exhaustive search over all partitions is computationally intractable. The greedy method we have proposed has O(M 2) complexity instead, and in practice gives satisfactory . Despite it being greedy, we note that the ing sequence of clusterings are stable with respect to randomly chosen initial conditions and model parameters. Recognizer RNNs with a different number of hidden units in clustering sequences that are consistent with each other in the critical first few abstractions. Once an abstraction α has been found, we can evaluate whether the learned abstraction decoderf is of high accuracy, and whether the α found is of low coarseness. Results showing the relationship between high decoding accuracy ρf (R L, A n L) as a function of coarseness is presented in FIG6 conditioned on the number of nodes in the original MDFA. As stated in §4.2, as M increases, ρf (R L, A n L) decreases on the MDFA (i.e. n = 0). We attribute this to two factors, as M increases, the decoding problem naturally increases in difficulty, and R L abstracts multiple states of A L into a single state in H as can be seen empirically from Figure 1. We validate the second factor by training a overparameterized non-linear decoder on the decoding task and find no instances where the decoder obtains 0% training error. 
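Before turning to the transitional-accuracy results, here is a minimal sketch of the greedy abstraction search described above, using a linear (logistic-regression) decoder. The function names, the use of scikit-learn, and scoring on a single data split are illustrative assumptions; in practice a held-out split would be used to evaluate each candidate merge.

from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_abstraction(H, q, n_states):
    """Greedily merge MDFA states, at each step keeping the merge with highest decoding accuracy.

    H: (T, hidden_dim) array of RNN hidden states; q: length-T array of aligned MDFA state ids.
    Returns the sequence of partitions (maps from MDFA state to superstate)."""
    mapping = {s: s for s in range(n_states)}            # identity partition (coarseness 0)
    history = [dict(mapping)]
    clusters = set(mapping.values())
    while len(clusters) > 2:
        best_acc, best_pair = -1.0, None
        for a, b in combinations(sorted(clusters), 2):   # all ways to merge two superstates
            trial = {s: (a if c == b else c) for s, c in mapping.items()}
            labels = np.array([trial[s] for s in q])
            clf = LogisticRegression(max_iter=200).fit(H, labels)
            acc = clf.score(H, labels)                   # decoding accuracy under this merge
            if acc > best_acc:
                best_acc, best_pair = acc, (a, b)
        a, b = best_pair
        mapping = {s: (a if c == b else c) for s, c in mapping.items()}
        clusters = set(mapping.values())
        history.append(dict(mapping))
    return history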
Alongside the decoding accuracy, we also present transitional accuracy φf (R L, A n L) as a function of coarseness FIG6. Both of these figures showcase that for a given DFA, in general we can find a low coarseness NFA that the hidden state space of R L can be decoded to with high accuracy. FIG5 shows the average ratio of abstractions relative to M needed to decode to 90% accuracy, indicating low coarseness relative to a random baseline. For completeness, we also present decoder and transition accuracy for a nonlinear decoder in FIG7 showing similar as the linear decoder. Our fundamental work shows a large scale analysis of how RNNs R L relate to abstracted NFAs A n L for hundreds of minimal DFAs, most of which are much larger and more complex than DFAs typically used in the literature. By evaluating the transition accuracy between R and A n L we empirically validate our claim. We show that there does exist high accuracy decoders from R to an abstracted NFA A n L. 4.5 INTERPRETING THE RNN HIDDEN STATE SPACE WITH RESPECT TO THE MINIMAL DFA With an established high accuracyf with low coarseness α reveals a unique interpretation of H with respect to A 0 L. Using α and f to relate the two, we uncover an interpretation of how R organizes H with respect to A n L ∀ n ∈ [M]. We can then determine the appropriate level of abstraction the network uses to accomplish the logical language recognition task in relation to the underlying MDFA. We provide two example'real-world' DFAs to illustrate this interpretation and show several interesting patterns. We present in FIG9 the clustering sequences of two regular expressions that have real-world interpretations, namely the SIMPLE EMAILS and DATES languages that recognize simple emails and simple dates respectively. To explain, FIG9 shows the DATES language with its clustering sequence superimposed on the MDFA in the form of a dendrogram. The dendrogram can be read in a top-down fashion, which displays the membership of the MDFA states and the sequence of abstractions up to n = M − 1. A question then arises: How should one pick a correct level of abstraction n?. The answer can be seen in the corresponding accuracies ρf (R L, A n L) in FIG10. As n increases and the number of total NFA states decreases, the linear decoding (LDC) prediction task obviously gets easier (100% accuracy when the number of NFA states Q |Q|−1 is 1), and hence it is important to consider how to choose the number of abstractions in the final partition. We typically set a threshold for ρf (R L, A n L) and select the minimum n required to achieve the threshold accuracy. Consider the first two abstractions of the SIMPLE EMAILS DFA. We notice that both states 2 and 5 represent the pattern matching task [a-d] *, because they are agglomerated by the algorithm. Once two abstractions have been made, the decoder accuracy is at a sufficient point, as seen in FIG10. This suggests that the collection of hidden states for the two states are not linearly separable. One possible and very likely reason for this is the network has learned an abstraction of the pattern [a-d] * and uses the same hidden state space regardless of location in string to recognize this pattern, which has been indicated in past work BID9. This intuitive example demonstrates the RNN's capability to learn and abstract patterns from the DFA. This makes intuitive sense because R L does not have any direct access to A 0 L, only to samples generated from A 0 L. The flexibility of RNNs allows such abstractions to be created easily. 
The second major pattern that arises can be seen in the dendrogram in the bottom row of FIG9. We notice that, generally, multiple states that represent the same location in the input string get merged (1 and 4, 3 and 6, 0 and 5). The SIMPLE EMAILS dendrogram shows patterns that are location-independent, while the fixed length pattern in the DATES regex shows location-dependent patterns. We also notice that the algorithm tends to agglomerate states that are within close sequential proximity to each other in the DFA, again indicating location-dependent hierarchical priors. Overall, our new interpretation of H reveals some new intuitions, empirically backed by our decoding and transitional accuracy scores, regarding how the RNN R L structures the hidden state space H in the task of language recognition. We find patterns such as these in almost all of the DFA's tested. We provide five additional random DFA's in the appendix FIG4 ) to show the wide variability of the regular expressions we generate/evaluate on. We have studied how RNNs trained to recognize regular formal languages represent knowledge in their hidden state. Specifically, we have asked if this internal representation can be decoded into canonical, minimal DFA that exactly recognizes the language, and can therefore be seen to be the "ground truth". We have shown that a linear function does a remarkably good job at performing such a decoding. Critically, however, this decoder maps states of the RNN not to MDFA states, but to states of an abstraction obtained by clustering small sets of MDFA states into "abstractions". Overall, the suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure. We see our work as a fundamental step in the larger effort to study how neural networks learn formal logical concepts. We intend to explore more complex and richer classes of formal languages, such as context-free languages and recursively enumerable languages, and their neural analogs. In order to generate a wide variety of strings that are both accepted and rejected by the DFA corresponding to a given regex R, we use the Xeger Java library, built atop the dk.brics.automaton library BID15. The Xeger library, given a regular expression, generates strings that are accepted by the regular expression's corresponding DFA. However, there is no standard method to generate examples that would be rejected by the DFA. These rejected examples need to be diverse to properly train an acceptor/rejector model: if the rejected examples are completely different from the accepted examples, the model will not be able to discern between similar input strings, even if one is an accepted string and the other is a rejected string. However, if the rejected examples were too similar to the accepted examples, the model would not be able to make a judgment on a completely new string that does not resemble any input string seen during training. In other words we want the rejected strings to be drawn from two distinct distributions, one similar and one independent compared to the distribution of the accepted strings. In order to achieve this, we generate negative examples in two ways: First, we randomly swap two characters in an accepted example enough times until we no longer have an accepted string. And secondly, we take an accepted string and randomly shuffle the characters, adding it to our dataset if the ing string is indeed rejected. 
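A minimal sketch of the two negative-example generators described above follows. The helper `accepts` is assumed to be a membership test for the DFA (e.g., backed by dk.brics.automaton); the function names and retry limit are our own.

import random

def negative_by_swapping(accepted, accepts, max_tries=100):
    """Repeatedly swap two characters of an accepted string until the result is rejected."""
    s = list(accepted)                     # assumes len(accepted) >= 2
    for _ in range(max_tries):
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        candidate = "".join(s)
        if not accepts(candidate):
            return candidate
    return None                            # could not produce a rejected string

def negative_by_shuffling(accepted, accepts):
    """Shuffle an accepted string and keep it only if the result is rejected."""
    s = list(accepted)
    random.shuffle(s)
    candidate = "".join(s)
    return candidate if not accepts(candidate) else None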
In our experiments, we generate 1,000 training examples with a 50:50 accept/reject ratio. When applicable, we generate strings of varying length capped at some constant; for example, for the SIMPLE EMAILS language we generate strings of at most 20 characters.
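To make the decoding evaluation used throughout this paper concrete, here is a minimal sketch of fitting a linear decoder from RNN hidden states to MDFA states and computing the decoding accuracy. The paper evaluates multinomial logistic regression among other decoders; the scikit-learn usage and data layout below are our own assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_linear_decoder(hidden_states, dfa_states):
    """Fit the decoding function f: RNN hidden state -> MDFA state.

    hidden_states: list of (T_i, hidden_dim) arrays, one per training string.
    dfa_states:    list of length-T_i arrays with the aligned MDFA state ids."""
    H = np.concatenate(hidden_states)
    q = np.concatenate(dfa_states)
    return LogisticRegression(max_iter=500).fit(H, q)

def decoding_accuracy(decoder, hidden_states, dfa_states):
    """Average per-string fraction of time steps whose RNN state decodes to the correct MDFA state."""
    per_string = [np.mean(decoder.predict(h) == q)
                  for h, q in zip(hidden_states, dfa_states)]
    return float(np.mean(per_string))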
Finite automata can be linearly decoded from language-recognizing RNNs using low-coarseness abstraction functions and high-accuracy decoders.
1,356
scitldr
Designing accurate and efficient convolutional neural architectures for vast amount of hardware is challenging because hardware designs are complex and diverse. This paper addresses the hardware diversity challenge in Neural Architecture Search (NAS). Unlike previous approaches that apply search algorithms on a small, human-designed search space without considering hardware diversity, we propose HURRICANE that explores the automatic hardware-aware search over a much larger search space and a multistep search scheme in coordinate ascent framework, to generate tailored models for different types of hardware. Extensive experiments on ImageNet show that our algorithm consistently achieves a much lower inference latency with a similar or better accuracy than state-of-the-art NAS methods on three types of hardware. Remarkably, HURRICANE achieves a 76.63% top-1 accuracy on ImageNet with a inference latency of only 16.5 ms for DSP, which is a 3.4% higher accuracy and a 6.35x inference speedup than FBNet-iPhoneX. For VPU, HURRICANE achieves a 0.53% higher top-1 accuracy than Proxyless-mobile with a 1.49x speedup. Even for well-studied mobile CPU, HURRICANE achieves a 1.63% higher top-1 accuracy than FBNet-iPhoneX with a comparable inference latency. HURRICANE also reduces the training time by 54.7% on average compared to SinglePath-Oneshot. Neural Architecture Search (NAS) is a powerful mechanism to automatically generate efficient Convolutional Neural Networks (CNNs) without requiring huge manual efforts of human experts to design good CNN models (; ;). However, most existing NAS methods focus on searching for a single DNN model of high accuracy but pay less attention on the performance of executing the model on hardware, e.g., inference latency or energy cost. Recent NAS methods (; ; b;) start to consider model-inference performance but they use FLOPs 1 to estimate inference latency or only consider the same type of hardware, e.g., smartphones from different manufacturers but all ARM-based. However, the emerging massive smart devices are equipped with very diverse processors, such as CPU, GPU, DSP, FPGA, and various AI accelerators that have fundamentally different hardware designs. Such a big hardware diversity makes FLOPs an improper metric to predict model-inference performance and calls for new trade-offs and designs for NAS to generate efficient models for diverse hardware. To demonstrate it, we conduct an experiment to measure the performance of a set of widely used neural network operators (a.k.a. operations) on three types of mobile processors: Hexagon TM 685 DSP, Snapdragon 845 ARM CPU, and Movidius TM Myriad TM X Vision Processing Unit (VPU). Figure 1 shows the and we make the following key observations. First, from Figure 1 (a), we can see that even the operators have similar FLOPs, the same operator may have very different inference latency on different processors. For example, the latency of operator SEP 5 is nearly 12× higher than that of operator Choice 3 on the ARM CPU, but the difference on the VPU is less than 4×. Therefore, FLOPs is not the right metric to decide the inference latency on different hardware. Second, the relative effectiveness of different operators on different processors is also different. For example, operator SEP 3 has the smallest latency on the DSP, but operator Choice 3 has the 1 In this paper, the definition of F LOP s follows, i.e., the number of multiply-adds. smallest latency on the VPU. 
Thus, different processors should choose different operators for the best trade-off between model accuracy and inference latency. Furthermore, as shown in Figure 1 (b), the computational complexity and latency of the same operator are also affected by the execution context, such as input feature map shapes, number of channels, etc. Such a context is determined by which layer the operator is placed on. As a , even on the same hardware, optimal operators may change at different layers of the network. In addition, we observe that the existing NAS methods tends to handle all the layers equally. For instance, the uniform sampling in one-shot NAS will give the same sampling opportunities to every layer. However, not all the layers are the same: different layers may have different impacts on inference latency and model accuracy. Indeed, some previous works (D.;) have revealed different behaviors between the earlier layers (close to data input) and the latter layers (close to classification output) in CNN models. The earlier layers extract low-level features from inputs (e.g., edges and colors), are computation intensive and demands more data to converge, while the latter layers capture high-level class-specific features but are less computation intensive. From these findings, we argue that exploring more architecture selections in the latter layers may help find better architectures with the limited sampling budget, and limiting the latency in the earlier layers is critical to search for low-latency models. To this end, it is desirable to explore how to leverage this layer diversity for better architecture sampling in NAS. Motivated by these observations, we argue that there is no one-size-fits-all model for different hardware, and thus propose and develop a novel hardware-aware method, called HURRICANE (Hardware aware one-shot neUral aRchitecture seaRch In Coordinate AsceNt framEwork), to tackle the challenge of hardware diversity in NAS. Different from the existing hardware-aware NAS methods that use a small set of operators (e.g., 6 or 9) manually selected for a specific hardware platform, HURRICANE is initialized with a large-size (32 in our implementation) candidate operators set to cover the diversity of hardware platforms. However, doing so increases the search space by many orders of magnitude and thus leads to unacceptable search and training cost and may even cause non-convergence problem. To reduce the cost, we propose hardware-aware search space reduction at both operator level and layer level. In the operator-level search space reduction, a toolkit is developed to automatically score every layer's candidate operators on target hardware platforms, and choose a sub-set of them with low latency for further utilization. In the layer-level search space reduction, we split the layers into two groups, the earlier group and the latter group according to their locations in the network. Based on a coordinate ascent framework (Appendix A), we propose a multistep search scheme, which searches the complete architecture by a sequence of simpler searching of sub-networks. In each iteration (step), we alternatively fix one group of layers and optimize the other group of layers to maximize the validation accuracy by a one-shot NAS 2. The searching of sub-networks is much easier to complete because of the much smaller size of search space, and the better architectures are reached by a sequence of iterations. This layer-level search space reduction is inspired by the layer diversity mentioned above. 
We choose most latencyeffective operators for earlier layers and allocate more sampling opportunities to latter layers. As a , we are able to search for models with both low latency and high accuracy. We evaluate the effectiveness of our proposed approach on ImageNet 2012 dataset and a small OUIAdience-Age dataset with the above three mobile hardware platforms (DSP/CPU/VPU). Under all the three platforms, HURRICANE consistently achieves the same level (or better) accuracy with much lower inference latency than state-of-the-art hardware-aware NAS methods. Remarkably, HURRICANE reduces the inference latency by 6.35× on DSP compared to FBNet-iPhoneX and 1.49× On VPU compared to Proxyless-mobile, respectively. Compared to Singlepath-Oneshot, on average HURRICANE reduces the training time by 54.7% on ImageNet. Neural Architecture Search (NAS). first proposed to use reinforcement learning (RL) to search for competitive architectures with low FLOPs. As a full configurable architecture search space grows exponentially, early works; ) search for a cell level structure as the building block and the same cell is reused in all layers. However, many cell structures are very complicated and fragmented, and thus are slow when deployed to a device. Recent methods;;; ) adopt a layer-level hierarchical search space with a back-bone structure allowing different layer structures at different resolution blocks of a network. The goal becomes searching operators for each layer so that the architecture achieves competitive accuracy under given constraints. To search hardware efficient architectures, the search spaces have been built on increasingly more efficient building blocks. MnasNet, ProxylessNAS and Single-path NAS (b) built upon the MobileNetV2 structure (MB k e). FBNet and Singlepath-Oneshot built search space by ShuffleNetV1 and ShuffleNetV2 (Choice k). As these structures are primarily designed for mobile CPU, the efficiency of such manually-designed search space is unknown for other hardware. Hardware aware NAS. Early NAS methods (; adopt hardwareagnostic metric FLOPs to measure the efficiency. However, architecture with lower FLOPs is not necessarily faster (a). Recently, many methods (; b;) adopt direct metrics such as measured latency but only for mobile CPUs. builds a latency prediction model, (b;) profiles every operator's latency, and then the latency metric is viewed as regularization loss. Such design is also not optimized for small mobile models because accuracy changes much more dramatically with latency for small models, as pointed out. One-Shot NAS. Starting from ENAS , weight sharing became popular as it accelerates the search process and makes search cost feasible. Recent one-shot methods encode the search space into an over-parameterized supernet, where each path is a stand-alone model. During the supernet training, architectures are sampled by different proxies (e.g., reinforcement learning, gradient-based) with weights updated. and Fair-NAS observe that such coupled architecture search and weight sharing could be problematic to fairly evaluate the performance of candidate architectures. Our work is built upon Singlepath-Oneshot . It decouples the supernet training and evolution-based architecture search by uniform sampling. In this paper, HURRICANE aims to search the following architectures for a given hardware platform h (any of CPU, DSP, NPU, VPU, etc.) 
and the latency constant LatC h: HURRICANE can find a architecture a that achieves the maximum accuracy ACC val (a) on the validation set and the inference latency Latency(a, h) is under the constraint LatC h. Without loss of generality, A denotes the network architecture search space. Like other recent works (; ; ;, A is constructed by a back-bone network architecture, n layers of which could be chosen from a set of candidate operators O. denotes the l-th layer of architecture a, and O l is the set of candidate operators specialized for l-th layer according to its context (e.g. input shape, number of channels). Algorithm 1 briefs the hardware aware one-shot neural architecture search in coordinate ascent framework. We employ the same back-bone architecture with but a much bigger collection of candidate operators from the primary blocks of off-the-shelf networks. The set of candidate operators (O) contains up to 32 operators detailed in Section 3.3. Require: h (hardware platform), LatC h (latency constraint), O (candidate operators) Require: I (number of iterations), t, e (hyper parameter) Operator level search space reduction 1: for l ← 1 to n do 2:, e is index of extra exploring operator and e ≥ 4 Layer level search space reduction and coordinate ascent 6: Initialize winning architecture awin with Firstly, we leverage the hardware performance to reduce the search space on operator level. We develop a toolkit to automatically score the candidate operators O l on different hardware platforms. The toolkit profiles the real performances (including but may not limited to inference latency), and give a comprehensive score for every candidate operator. Then we select top 4 or 5 operators (O l) in the non-increasing order of scores for l-th layer. Secondly, we reduce the search space on layer level. At the beginning of coordinate ascent, we setup the winning architecture with the operators of highest scores in every layer. In first iteration, we mark the latter n − t layers as active and the earlier t layers as non-active. The non-active layers are fixed to the corresponding layer structures of current winning architecture, while the active layers will be chosen from the 4 highest scored operators (an extra operator for exploring is added in the last 4 layers) (Line 8 and Line 9). The one-shot NAS method itself is similar to the work , except that we constraint the search space with a hardware latency 3 other than FLOPs. We encode all candidate architectures of search space A into a over parameterized supernet (and its weights W), and sample a specific architecture a (and its weights W (a)). The sampled network is denoted by N (a, W (a)). For every mini batch B in training dataset D tr, the supernet weights W is updated by the gradient (∇L B (N (a, W i (a)))) of the training loss of network N (a, W (a)) . After the supernet is trained (only once), all candidate architectures a inherit weights from the supernet W A, and thus weight W A (a) is used to approximate the optimal weight of architecture a. The weights will not be updated during the process of architecture searching. We adopt evolutionary search to find the winning architecture with the highest validation accuracy (Line 11). After a complete process of one-shot NAS, a new winning architecture would be generated. The new winning architecture would remain the same structure in the non-active layers and update to new operators in the active layers. 
A new iteration is started with the new winning architecture until the termination test is satisfied. Candidate Operator Pool O. As is known, the computation complexity (e.g., FLOPs) and memory access cost of an architecture are key factors in hardware latency efficiency . To cover the hardware diversity, our initial search space consists of 32 candidate operators (detailed structures are in Appendix Figure 3) that leverage different computation and memory complexity. They are built upon the following 4 basic structures from current efficient models: • SEP: depthwise-separable convolution. Following DARTS, we applied the depthwise-separable convolution twice. We allow choosing kernel size k of 3, 5 or 7 for the depthwise convolution and generate 3 operators SEP k. This rule with different kernel size for depthwise convolution also applies to other operator generation. SEP k has a larger FLOP count than others, but less memory access complexity. • MB: mobile inverted bottleneck convolution in MobileNetV2 . It's widely used in recent one-shot ). We generate 9 operators MB k e based on it, that we allow choosing k of 3, 5, 7 and channel expansion rate e of 1, 3, 6. MB k e has a medium memory access cost due to its shortcut and add operation. Its computation complexity is decided by the kernel size k and expansion rate e. • Choice: basic building block in ShuffleNetV2 . We construct 3 operators Choice k (k=3, 5, 7). Following , we also adds a similar operator ChoiceX. Choice k and ChoiceX have much smaller FLOPs than the others, but the memory complexity is high due to the channel split and concat operation. • SE: squeeze-and-excitation network . To balance the impacts in latency and accuracy, we follow the settings in MobileNetV3. We set the reduction ratio r to 4, and replace the original sigmoid function with a hard version of swish ReLU 6(x+3) 6. We apply SE module to the above operators and generate new 16 operators. The computation complexity of SE is decided by its insert position, while the memory access cost is relatively lower. As a , our global search space consists of n |O| =20 32 (n=20 layers) candidate architectures, which is exponentially larger than current one-shot NAS. For each searchable layer, there are total of 32 operators to choose: SEP k, SEP k SE,MB k e, MB k e SE, Choice k,Choice k SE,ChoiceX, ChoiceX SE, where k=3,5,7, and e=1,3,6. Hardware aware profiling and sorting. As shown in Figure 1(a), the real performances of operators vary significantly on different hardware platforms. Without loss of generality, the scoring function should consider both representation capacity and real hardware performance. There are many methods to approximate the scoring function, such as a customized weighted product of FLOPs and number of parameters (#Params) where α and β are non-negative constants. Construct search space with high scored operators. For each layer, we filter out the top 4 operators with highest scores first. This make the search space reduced to a comparable size with previous works, but specialized for the target platform. Another exploring operator is added to the candidate operator set of the last 4 layers. Only adding to the last layers is because the latency degradations of these layers are acceptable due to their smallest feature size; sophisticated operators could help these layers better capture high-level class-specific patterns. Hyper-parameters. 
There are two important hyper-parameters in Algorithm 1: the layer grouping boundary t and the number of iterations I. We set t = 8 (only searchable layers counted) according to the natural resolution changes of the supernet (Table 4 in Appendix). Experiment discussed in Appendix C shows the rationality of it. For simplicity, we tested the simplest one-iteration case (I = 1) and adding an extra iteration (I = 2). Diverse hardware platforms. HURRICANE targets three representative mobile hardware that is widely used for CNN deployment: DSP (Qualcomm's Hexagon TM 685 DSP),: CPU (Qualcomm's Snapdragon 845 ARM CPU),: VPU (Intel's Movidius TM Myriad TM X Vision Processing Unit). To make full utilization of these hardware at inference, we use the particular inference engine provided by the hardware vendor. Specifically, DSP and CPU latency are measured by Snapdragon Neural Processing Engine SDK , while VPU latency is measured by Intel OpenVINO TM Toolkit . Operators profiling and sorting. Table 1 lists the top scored operators on different hardware platforms when α = 0. The hardware-aware search space constructed by scoring the operators on target hardware platforms and then selecting the best 4 operators (layer 17-20 has another extra exploring operator with e = 10). We share new important insights from hardware profiling: (i) depthwise convolutions with kernel size k ≤ 3 are well optimized on Hexagon TM 685 DSP. As a , all the operators are of k=3 in search space. (ii) SE module is not supported by the AI accelerator of Myriad TM X VPU, and thus rolled back to relatively slow CPU execution. (iii) Even with complex memory operator, Choice 3 (i.e., ShuffleNetV2 unit) is the most efficient operator on VPU and CPU due to its much smaller FLOPs count. Latency predictor. To avoid measuring the latency of every candidate architecture, we build a latency-prediction model with high accuracy: the RMSE (root mean square error) of prediction model is 0.82ms, 21.84ms, and 0.03ms on DSP, CPU, VPU, respectively, which means an average 4.7%, 4.2%, and 0.08% latency estimated error for DSP, CPU and VPU. It suggests the latency prediction model can be used to replace the expensive direct hardware measurement with little error introduced. More details of latency predictor is in Appendix D. Latency constraints. As discussed in Appendix B, the latency constraint on a given platform should be in a meaningful range. For better comparison with other works, we set the latency constraints to be smaller than the best latency of models from other works, which are 310 ms (CPU), 17 ms (DSP) and 36 ms (VPU). Over-parameterized Supernet and Architecture Search. HURRICANE is built on top of Singlepath-Oneshot (; † : In the form "x(y)", where "x" means the training time reduction and "y" means the accuracy achieved. Table 4 ) consists of stem layers and n=20 searchable layers. Once the supernet training finishes, we perform a 20-iterations evolution search for total 1,000 architectures as Singlepath-Oneshot. For better fairness, the supernet is re-initialized randomly before every iteration. , we randomly split the original training set into two parts: 50,000 images for validation (50 images for each class exactly) and the rest as the training set. The original validation set is used for testing, on which all the evaluation are reported. Training Details. 
We follow most of the training settings and hyper-parameters used in SinglepathOneshot , with two exceptions: (i) For supernet training, the epochs change with different hardware-aware search spaces (listed in Table 1), and we stop at the same level training loss as Singlepath-Oneshot. (ii) For architecture retraining, we change linear learning rate decay to cosine decay from 0.4 to 0. The batch size is 1,024. Training uses 4 NVIDIA V100 GPUs. Results and Search Cost Analysis. Table 2 summarizes our experiment on ImageNet. HUR-RICANE surpasses state-of-the-art models, both manually and automatically designed: compared to MobileNetV2 (top-1 accuracy 72.0%), HURRICANE improves the accuracy by 2.59% to 4.03% on all target hardware platforms. Compared to models searched automatically, HURRICANE demonstrates that it's essential to leverage hardware diversity in NAS to achieve the best efficiency on different hardware platforms. Specially, compared to the most efficient models searched by NAS, HURRICANE (DSP) reaches a 6.35× inference speedup than FBNet-iPhoneX with 3.43% accuracy improvement, HURRICANE (CPU) achieves 1.39% higher accuracy with 11.7ms latency reduction, HURRICANE (VPU) achieves 0.83% higher accuracy with 3.1ms latency reduction. Remarkably, HURRICANE is the only hardware-aware NAS method that searches the better accuracy with much lower latency on all diverse hardware platforms. To compare the search cost, we report supernet training time reduction compared with SinglepathOneshot instead of exact GPU search days as for two reasons: (i): the GPU search days are highly relevant with the experiment environments (e.g., different GPU hardware) and the code implementation (e.g., ImageNet distributed training). (ii): The primary time cost comes from supernet training in Singlepath-Oneshot, as the evolution search is fast that architectures only perform inference. Table 2 shows that our method reduces an average 54.7% time if executes one step search in coordinate ascent (I = 1), which is almost a 2x training time speedup. In addition, HURRICANE already achieves better classification accuracy than other NAS methods at this step. It demonstrates the effectiveness of exploring more architecture selections in the latter layers. With one more step search, HURRICANE usually search better architectures but takes longer tme. Results suggest our method can still save an average 30.4% time (I = 2). FLOPs (higher accuracy) and the DSP hardware characteristics leverage in search space (lower latency). It also demonstrates again that FLOPs is an improper metric for hardware aware NAS. To further demonstrate the effectiveness HURRICANE, we compare it with recent work on a sequence of tasks. These tasks include the original FLOPs (330M) constrained architecture search in , the hardware latency constrained architecture search for different hardware platforms described in this paper. We do the experiments on ImageNet and also the OUIAdience-Age (OUI) for simplicity. For the fairness, we let the methods use the same search space for a single task. For the FLOPs constrained tasks, we use the original search space of . For those hardware latency constrained tasks, we let the methods use our specialized search space listed in Table 1. Dataset and Training Details. OUI-Adience-Age is a small 8-class dataset consisting of 17,000 face images. We split the images into training and testing test by 8:2 for architecture retraining. 
For architecture search, we randomly split the training set into two parts: 5,567 images for validation and the rest as the training set for the supernet. We adopt the same hyperparameter settings as Singlepath-Oneshot, except that we reduce the initial learning rate from 0.5 to 0.1 and the batch size from 1,024 to 64. The supernet is trained until convergence. For architecture retraining, we train for 400 epochs and change the original linear learning rate decay to cyclic decay with a bound. We use one NVIDIA Tesla P100 for training. Results and Search Cost Analysis. Table 3 summarizes the experimental results. The searched models outperform manually designed lightweight models such as MobileNetV2 (top-1 acc: 72.00% on ImageNet, 85.67% on OUI-Adience-Age). For every row of the table, the proposed method achieves not only higher accuracy but also better hardware efficiency on all the tasks. As shown in the Singlepath-Oneshot column, our hardware-aware search space can also improve the accuracy (0.6%-0.8% on the OUI-Adience-Age dataset). To illustrate the cost of supernet training, we list the number of training iterations consumed. As shown in Table 3, in most tasks only one iteration (I = 1) of HURRICANE achieves comparable (or, in some tasks, even better) top-1 accuracy, while the number of training iterations is significantly reduced (by 50%-77.5%). If the computation budget (e.g., training time) allows, HURRICANE can benefit from another iteration (I = 2): the accuracy is improved by 0.15%-1.06% at an additional cost of only 4.0%-23.3% of the training iterations. In this paper, we propose HURRICANE to address the challenge of hardware diversity in NAS. By exploring hardware-aware search spaces and a multistep search scheme based on a coordinate ascent framework, our solution achieves better accuracy and much lower latency on three hardware platforms than state-of-the-art hardware-aware NAS. The search cost (search time) is also significantly reduced. For future work, we plan to support more diverse hardware and to speed up more NAS methods. Coordinate descent (CD) and coordinate ascent (CA) are optimization algorithms that find the minimum (or maximum) of a function by successively performing approximate minimization (maximization) along coordinate directions or coordinate hyperplanes. They are iterative methods, in which we optimize along a selected coordinate direction or hyperplane at a time while keeping all others fixed. Though not sophisticated, CD and CA methods can be surprisingly efficient and scalable in many real-world scenarios; a minimal numerical sketch is given below. As it is extremely difficult to derive an optimal sequence of layer-sampling rates with respect to different layers' importance for final accuracy, a CA algorithm makes it much easier to achieve competitive results with a careful implementation. In the body of this paper, we assume that the latency constraint should be in a meaningful range, which means the constraint should neither be too small to be achievable nor so large that the problem degrades to an unbounded optimization for accuracy only. The proposed solution does not guarantee performance in case the latency constraint falls outside this meaningful range. However, this can be patched:
• for constraints that are too small to achieve, we could reduce the number of layers by adding a skip operator to some of the layers;
• for constraints that are too large, we could adjust the scoring function to let the algorithm care more about representation capacity rather than latency, or even add additional layers.
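The coordinate-ascent loop referred to above can be illustrated with a minimal, self-contained numerical sketch. It only shows the generic alternating-maximization pattern, not HURRICANE's actual layer-group search; the toy two-variable objective and the brute-force 1-D step are stand-ins introduced here for illustration.

```python
# Minimal coordinate-ascent sketch: maximize f(a, b) by optimizing one coordinate
# at a time while the other is held fixed. Illustrative only.

def f(a, b):
    return -(a - 1.0) ** 2 - (b + 2.0) ** 2 - 0.5 * a * b

def argmax_1d(obj, lo=-10.0, hi=10.0, steps=2001):
    """Brute-force 1-D maximization over a grid (stand-in for a per-group search step)."""
    best_x, best_v = lo, obj(lo)
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        v = obj(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

a, b = 0.0, 0.0
for it in range(3):  # a few coordinate-ascent iterations
    a = argmax_1d(lambda x: f(x, b))   # optimize the first coordinate, second fixed
    b = argmax_1d(lambda x: f(a, x))   # optimize the second coordinate, first fixed
    print(f"iteration {it}: a={a:.3f}, b={b:.3f}, f={f(a, b):.4f}")
```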
As illustrated in Section 3, we group the 20 layers into earlier t layers and latter 20-t layers. As the changes of resolution are natural boundary due to the feature map size changes, we split the groups at the second resolution downsampling and set t=8. In a constrained optimization problem, it is often required to check whether an architecture exceeds the latency constraint. To reduce the cost and complexity of connecting hardware frequently, we developed the latency predictor, which consists of multiple independent hardware specific predictors. Each predictor takes the sequence of operators in all the layers as the input and predict the real latency of the whole architecture on the target hardware platform. To build the latency prediction model, we uniformly sample 2,000 candidate architectures from our three hardware-aware search space, where 80% of them are used to build the latency model and the rest are used for test. We encode the architectures into a 84-dimension binary vector, where the binary value indicates the occurrence of a corresponding operator. Different regression models are selected for latency prediction on diverse hardware. Specifically, We build GaussianProcessRegressor with Matern kernel (length scale=1.5, nu=0.35) , Lasso Regression model (alpha=0.01) and Bayesian Ridge Regression Table 4: Supernet architecture. Column-"Block" denotes the block type. "TBS" means layer type needs to be searched. The "stride" column represents the stride of the first block in each repeated group. In our paper, we search the operations for total 20 layers. Table 2.
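As a rough illustration of the latency predictors described above, the sketch below fits the three regressor types named in this appendix (a Gaussian process with a Matern kernel, Lasso with alpha = 0.01, and Bayesian ridge regression) on binary operator-occurrence encodings. The synthetic "latency" data, the 80/20 split, and the pairing of each regressor with a particular hardware platform are placeholders, since the real profiling data is not available in this text.

```python
# Sketch of per-hardware latency predictors fit on 84-dimensional binary operator
# encodings of candidate architectures. Random data is used only as a stand-in.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.linear_model import Lasso, BayesianRidge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 84)).astype(float)                    # binary operator encodings
y = X @ rng.uniform(0.5, 5.0, size=84) + rng.normal(0, 1.0, size=2000)   # fake latencies (ms)
X_tr, X_te, y_tr, y_te = X[:1600], X[1600:], y[:1600], y[1600:]          # 80/20 split

predictors = {
    "DSP-like (GP, Matern)": GaussianProcessRegressor(kernel=Matern(length_scale=1.5, nu=0.35)),
    "CPU-like (Lasso)": Lasso(alpha=0.01),
    "VPU-like (BayesianRidge)": BayesianRidge(),
}
for name, model in predictors.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: test RMSE = {rmse:.2f} ms")
```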
We propose HURRICANE to address the challenge of hardware diversity in one-shot neural architecture search
1,357
scitldr
In this paper, we present Neural Phrase-based Machine Translation (NPMT). Our method explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method. To mitigate the monotonic alignment requirement of SWAN, we introduce a new layer to perform (soft) local reordering of input sequences. Different from existing neural machine translation (NMT) approaches, NPMT does not use attention-based decoding mechanisms. Instead, it directly outputs phrases in a sequential order and can decode in linear time. Our experiments show that NPMT achieves superior performances on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese machine translation tasks compared with strong NMT baselines. We also observe that our method produces meaningful phrases in output languages. A word can be considered as a basic unit in languages. However, in many cases, we often need a phrase to express a concrete meaning. For example, consider understanding the following sentence, "machine learning is a field of computer science". It may become easier to comprehend if we segment it as " [machine learning] [is] [a field of] [computer science]", where the words in the bracket'[]' are regarded as "phrases". These phrases have their own meanings, and can often be reused in other contexts. The goal of this paper is to explore the use of phrase structures aforementioned for neural networkbased machine translation systems BID22 BID0. To this end, we develop a neural machine translation method that explicitly models phrases in target language sequences. Traditional phrase-based statistical machine translation (SMT) approaches have been shown to consistently outperform word-based ones (; ; BID15 . However, modern neural machine translation (NMT) methods BID22 BID0 do not have an explicit treatment on phrases, but they still work surprisingly well and have been deployed to industrial systems BID31 BID28. The proposed Neural Phrase-based Machine Translation (NPMT) method tries to explore the advantages from both kingdoms. It builds upon Sleep-WAke Networks (SWAN), a segmentation-based sequence modeling technique described in BID25, where segments (or phrases) are automatically discovered given the data. However, SWAN requires monotonic alignments between inputs and outputs. This is often not an appropriate assumption in many language pairs. To mitigate this issue, we introduce a new layer to perform (soft) local reordering on input sequences. Experimental show that NPMT outperforms attention-based NMT baselines in terms of the BLEU score BID19 on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese translation tasks. We believe our method is one step towards the full integration of the advantages from neural machine translation and phrase-based SMT. This paper is organized as follows. Section 2 presents the neural phrase-based machine translation model. Section 3 demonstrates the usefulness of our approach on several language pairs. We conclude our work with some discussions in Section 4. We first give an overview of the proposed NPMT architecture and some related work on incorporating phrases into NMT. We then describe the two key building blocks in NPMT: 1) SWAN, and 2) the soft reordering layer which alleviates the monotonic alignment requirement of SWAN. In the context of machine translation, we use "segment" and "phrase" interchangeably. Figure 1(a) shows the overall architecture of NPMT. 
The input sequence is first turned into embedding representations and then they go through a (soft) reordering layer (described below in Section 2.3). We then pass these "reordered" activations to the bi-directional RNN layers, which are finally fed into the SWAN layer to directly output target language in terms of segments (or phrases). While it is possible to replace bi-directional RNN layers with other layers BID8, in this paper, we have only explored this particular setting to demonstrate our proposed idea. There have been several works that propose different ways to incorporate phrases into attentionbased neural machine translation, such as BID23 BID26; BID6. These approaches typically use predefined phrases (obtained by external methods, e.g., phrase-based SMT) to guide or modify the existing attention-based decoder. The major difference from our approach is that, in NPMT, we do not use attention-based decoding mechanisms, and our phrase structures for the target language are automatically discovered from the training data. Another line of related work is the segment-to-segment neural transduction model (SSNT), which shows promising on a Chinese-to-English translation task under a noisy channel framework BID30. In SSNT, the segments are implicit, and the monotonic alignments between the inputs and outputs are achieved using latent variables. The latent variables are marginalized out during training using dynamic programming. BID25. Symbol $ indicates the end of a segment. Given a sequence of inputs x1,..., x5, which is from the outputs from the bi-directional RNN of FIG0 (a), SWAN emits one particular segmentation of y1:3 = π(a1:5), where DISPLAYFORM0 Here x1 wakes (emitting segment a1) and x4 wakes (emitting segment a4) while x2, x3 and x5 sleep (emitting empty segments a2, a3 and a5 respectively). Here we review the SWAN model proposed in BID25. SWAN defines a probability distribution for the output sequence given an input sequence. It models all valid output segmentations of the output sequence as well as the monotonic alignments between the output segments and the input sequence. Empty segments are allowed in the output segmentations. It does not make any assumption on the lengths of input or output sequence. Assume input sequence for SWAN is x 1:T, which is the outputs from bi-directional RNN of FIG0 (a), and output sequence is y 1:T. Let S y denote the set containing all valid segmentations of y 1:T, with the constraint that the number of segments in a segmentation is the same as the input sequence length, T. Let a t denote a segment or phrase in the target sequence. Empty segments are allowed to ensure that we can correctly align segment a t to input element x t. Otherwise, we might not have a valid alignment for the input and output pair. See FIG1 for an example of the emitted segmentation of y 1:T. The probability of the sequence y 1:T is defined as the sum of the probabilities of all the segmentations in S y {a 1:T : π(a 1:T) = y 1:T }, DISPLAYFORM0 where the p(a t |x t) is the segment probability given input element x t, which is modeled using a recurrent neural network (RNN) with an additional softmax layer. π(·) is the concatenation operator and the symbol $, end of a segment, is ignored in the concatenation operator π(·). (An empty segment, which only contains $ will thus be ignored as well.) SWAN can be also understood via a generative model, 1. For t = 1,..., T: (a) Given an initial state of x t, sample words from RNN until we reach an end of segment symbol $. 
This gives us a segment a t. 2. Concatenate {a 1, ..., a T} to obtain the output sequence via π(a 1:T) = y 1:T.Since there are more than one way to obtain the same y 1:T using the generative process above, the probability of observing y 1:T is obtained by summing over all possible ways, which is Eq. 1.Note that |S y | is exponentially large, direct summation quickly becomes infeasible when T or T is not small. Instead, BID25 developed an exact dynamic programming algorithm to tackle the computation challenges. 3 The key idea is that although the number of possible segmentations is exponentially large, the number of possible segments is polynomial-O(T 2). In other words, it is possible to first compute all possible segment probabilities, p(a t |x t), ∀a t, x t, and then use dynamic programming to calculate the output sequence probability p(y 1:T |x 1:T) in Eq.. The feasibility of using dynamic programming is due to a property of segmentations-a segmentation of a subsequence is also part of the segmentation of the entire sequence. In practice, a maximum length L for a segment a t is enforced to reduce the computational complexity, since the length of useful segments is often not very long. BID25 also discussed a way to carry over information across segments using a separate RNN, which we will not elaborate here. We refer the readers to the original paper for the algorithmic details. SWAN defines a conditional probability for an output sequence given an input one. It can be used in many sequence-to-sequence tasks. In practice, a sequence encoder like a bi-directional RNN can be used to process the raw input sequence (like speech signals or source language) to obtain x 1:T that is to be passed into SWAN for decoding. For example, BID25 demonstrated the usefulness of SWAN in the context of speech recognition. Greedy decoding for SWAN is straightforward. We first note that p(a t |x t) is modeled as an RNN with an additional softmax layer. Given each p(a t |x t), ∀t ∈ 1,..., T, is independent of each other, we can run the RNN in parallel to produce an output segment (possibly empty) for each p(a t |x t). We then concatenate these output segments to form the greedy decoding of the entire output sequence. The decoding satisfies the non-autoregressive property BID9 and the decoding complexity is O(T L). See BID25 for the algorithmic details of the beam search decoder. We finally note that, in SWAN (thus in NPMT), only output segments are explicit; input segments are implicitly modeled by allowing empty segments in the output. This is conceptually different from the traditional phrase-based SMT where both inputs and outputs are phrases (or segments). We leave the option of exploring explicit input segments as future work. SWAN assumes a monotonic alignment between the output segments and the input elements. For speech recognition experiments in BID25, this is a reasonable assumption. However, for machine translation, this is usually too restrictive. In neural machine translation literature, attention mechanisms were proposed to address alignment problems BID0 BID20 BID24. But it is not clear how to apply a similar attention mechanism to SWAN due to the use of segmentations for output sequences. One may note that in NPMT, a bi-directional RNN encoder for the source language can partially mitigate the alignment issue for SWAN, since it can access every source word. However, from our empirical studies, it is not enough to obtain the best performance. 
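The dynamic program sketched above (first tabulate per-segment probabilities, then sum over all valid segmentations) can be written compactly as follows. This is a schematic re-implementation based on the description in the text, not the authors' code; `seg_prob` is a toy stand-in for the RNN-with-softmax segment model p(a_t | x_t), and the toy values are not normalized distributions.

```python
# alpha[t][j] = probability of emitting the first j output words using inputs x_1..x_t,
# where each input x_t emits exactly one (possibly empty) segment of length 0..L.
# Overall cost is O(T * T' * L) once segment probabilities are available.
import numpy as np

def swan_marginal(T_in, T_out, L, seg_prob):
    """seg_prob(t, start, length) -> p(y[start:start+length] as the segment emitted at x_t)."""
    alpha = np.zeros((T_in + 1, T_out + 1))
    alpha[0, 0] = 1.0
    for t in range(1, T_in + 1):
        for j in range(T_out + 1):
            for l in range(0, min(L, j) + 1):          # segment length, 0 = empty segment
                alpha[t, j] += alpha[t - 1, j - l] * seg_prob(t, j - l, l)
    return alpha[T_in, T_out]

# Toy example: 5 inputs, 3 output words, maximum segment length 2.
rng = np.random.default_rng(0)
table = rng.uniform(0.01, 0.3, size=(6, 4, 3))         # fake p(segment | x_t) values
print(swan_marginal(5, 3, 2, lambda t, s, l: table[t, s, l]))
```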
Here we augment SWAN with a reordering layer that does (soft) local reordering of the input sequence. This new model leads to promising on the IWSLT 2014 German-English/English-German, and IWSLT 2015 EnglishVietnamese machine translation tasks. One additional advantage of using SWAN is that since SWAN does not use attention mechanisms, decoding can be done in parallel with linear complexity, as now we remove the need to query the entire input source for every output word BID20 BID9.We now describe the details of the local reordering layer shown in FIG2. Denote the input to the local reordering layer by e 1:T, which is the output from the word embedding layer of FIG0, and the output of this layer by h 1:T, which is fed as inputs to the bi-directional RNN of FIG0. We compute h t as DISPLAYFORM0 where σ(·) is the sigmoid function, and 2τ + 1 is the local reordering window size. weight of e t−τ +i through the gate σ w T i [e t−τ ; . . . ; e t ; . . . ; e t+τ]. The final output h t is a weighted linear combination of the input elements, e t−τ,..., e t,..., e t+τ, in the window followed by a nonlinear transformation by the tanh(·) function. Figure 3(b) illustrates how local reordering works. Here we want to (softly) select an input element from a window given all information available in this window. Suppose we have two adjacent windows, (e 1, e 2, e 3) and (e 2, e 3, e 4). If e 3 gets the largest weight (e 3 is picked) in the first window and e 2 gets the largest weight (e 2 is picked) in the second window, e 2 and e 3 are effectively reordered. Our layer is different from the attention mechanism BID0 BID20 BID24 in following ways. First, we do not have a query to begin with as in standard attention mechanisms. Second, unlike standard attention, which is top-down from a decoder state to encoder states, the reordering operation is bottom-up. Third, the weights {w i} 2τ i=0 capture the relative positions of the input elements, whereas the weights are the same for different queries and encoder hidden states in the attention mechanism (no positional information). The reordering layer performs locally similar to a convolutional layer and the positional information is encoded by a different parameter w i for each relative position i in the window. Fourth, we do not normalize the weights for the input elements e t−τ,..., e t,..., e t+τ. This provides the reordering capability and can potentially turn off everything if needed. Finally, the gate of any position i in the reordering window is determined by all input elements e t−τ,..., e t,..., e t+τ in the window. We provide a visualizing example of the reordering layer gates that performs input swapping in Appendix A.One related work to our proposed reordering layer is the Gated Linear Units (GLU) which can control the information flow of the output of a traditional convolutional layer. But GLU does not have a mechanism to decide which input element from the convolutional window to choose. From our experiments, neither GLU nor traditional convolutional layer helped our NPMT. Another related work to the window size of the reordering layer is the distortion limit in traditional phrase-based statistical machine translation methods BID2. Different window sizes restrict the context of each position to different numbers of neighbors. We provide an empirical comparison of different window sizes in Appendix B. 
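The gating computation of the reordering layer can be sketched directly from the equation above: every position i in a window of size 2τ+1 receives a scalar gate σ(w_i^T [e_{t−τ}; …; e_{t+τ}]), and h_t is the tanh of the gate-weighted sum of the window embeddings. The NumPy sketch below follows that equation; how sentence boundaries are padded is not specified in the text, so zero padding is assumed here.

```python
import numpy as np

def local_reordering(E, W, tau):
    """E: (T, d) input embeddings. W: (2*tau+1, (2*tau+1)*d) gate weight vectors w_i."""
    T, d = E.shape
    padded = np.vstack([np.zeros((tau, d)), E, np.zeros((tau, d))])   # zero padding (assumption)
    H = np.zeros_like(E)
    for t in range(T):
        window = padded[t:t + 2 * tau + 1]            # e_{t-tau}, ..., e_{t+tau}
        concat = window.reshape(-1)                   # [e_{t-tau}; ...; e_{t+tau}]
        gates = 1.0 / (1.0 + np.exp(-W @ concat))     # sigma(w_i^T concat), one gate per position
        H[t] = np.tanh(gates @ window)                # gate-weighted sum, then tanh
    return H

rng = np.random.default_rng(0)
E = rng.normal(size=(6, 4))                           # 6 tokens, embedding dim 4
W = rng.normal(scale=0.1, size=(7, 7 * 4))            # window size 2*3+1 = 7
print(local_reordering(E, W, tau=3).shape)            # -> (6, 4)
```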
In this section, we evaluate our model on the IWSLT 2014 German-English BID3, IWSLT 2014 English-German, and IWSLT 2015 English-Vietnamese BID4 machine translation tasks. We note that, in this paper, we limit the applications of our model to relatively small datasets to demonstrate the usefulness of our method. We plan to conduct more large scale experiments in future work. MIXER BID21 20.73 21.83 LL BID27 22.53 23.87 BSO BID27 23.83 25.48 LL BID1 25 We evaluate our model on the German-English machine translation track of the IWSLT 2014 evaluation campaign BID3. The data comes from translated TED talks, and the dataset contains roughly 153K training sentences, 7K development sentences, and 7K test sentences. We use the same preprocessing and dataset splits as in Ranzato et al. We report our IWSLT 2014 German-English experiments using one reordering layer with window size 7, two layers of bi-directional GRU encoder (Gated recurrent unit, BID5) with 256 hidden units, and two layers of unidirectional GRU decoder with 512 hidden units. We add dropout with a rate of 0.5 in the GRU layer. We choose GRU since baselines for comparisons were using GRU. The maximum segment length is set to 6. Batch size is set as 32 (per GPU) and the Adam algorithm BID13 ) is used for optimization with an initial learning rate of 0.001. For decoding, we use greedy search and beam search with a beam size of 10. As reported in BID18; BID1, we find that penalizing candidate sentences that are too short was required to obtain the best . We add the middle term of Eq. to encourage longer candidate sentences. All hyperparameters are chosen based on the development set. NPMT takes about 2-3 days to run to convergence (40 epochs) on a machine with four M40 GPUs. The are summarized in Table 1. In addition to previous reported baselines in the literature, we also explored the best hyperparameter using the same model architecture (except the reordering layer) using sequence-to-sequence model with attention as reported as LL * of Table 1.NPMT achieves state-of-the-art on this dataset as far as we know. Compared to the supervised sequence-to-sequence model, LL BID1, NPMT achieves 2.4 BLEU gain in the greedy setting and 2.25 BLEU gain using beam-search. Our are also better than those from the actor-critic based methods in BID1. But we note that our proposed method is orthogonal to the actor-critic method. So it is possible to further improve our using the actor-critic method. We also run the following two experiments to verify the sources of the gain. The first is to add a reordering layer to the original sequence-to-sequence model with attention, which gives us BLEU scores of 25.55 (greedy) and 26.91 (beam search). Since the attention mechanism and reordering layer capture similar information, adding the reordering layer to the sequence-to-sequence model with attention does not improve the performance. The second is to remove the reordering layer from NPMT, which gives us BLEU scores of 27.79 (greedy) and 29.28 (beam search). This shows that the reordering layer and SWAN are both important for the effectiveness of NPMT.. target ground truth there are tens of thousands of machines around the world that make small pieces of dna -30 to 50 letters -in length -and it's a UNK process, so the longer you make the piece, the more errors there are. Table 2: Examples of German-English translation outputs with their segmentations. 
We label the indexes of the words in the source sentence and we use those indexes to indicate where the output segment is emitted. For example, in greedy decoding , " i word1,..., wordm" denotes i-th word in the source sentence emits words word1,..., wordm during decoding (assuming monotonic alignments). The "•" represents the segment boundary in the target output. See Figure 4 for a visualization of row 1 in this table. In greedy decoding, we can estimate the average segment length 4 for the output. The average segment length is around 1.4-1.6, indicating phrases with more than one word are being decoded. Figure 4 shows an example of the input and decoding with NPMT. We can observe phraselevel translation being captured by the model (e.g., "danke" → "thank you"). The model also knows when to sleep before outputting a phrase (e.g., "das" → "$"). We use the indexes of words in the source sentence to indicate where the output phrases are from. Table 2 shows some sampled exam-ples. We can observe there are many informative segments in the decoding , e.g., "tens of thousands of", "the best thing", "a little", etc. There are also mappings from phrase to phrase, word to phrases, and phrase to word in the examples. Following the analysis, we show the most frequent phrase mappings in Appendix C.We also explore an option of adding a language-model score during beam search as the traditional statistical machine translation does. This option might not make much sense in attention-based approaches, since the decoder itself is usually a neural network language model. In SWAN, however, there is no language models directly involved in the segmentation modeling, 5 and we find it useful to have an external language model during beam search. We use a 4th-order language model trained using the KenLM implementation BID11 for English target training data. So the final beam search score we use is DISPLAYFORM0 where we empirically find that λ 1 = 1.2 and λ 2 = 0.2 give good performance, which are tuned on the development set. The with the external language model are denoted by NPMT+LM in Table 1. If no external language models are used, we set λ 2 = 0. This scoring function is similar to the one for speech recognition in. We also evaluate our model on the opposition direction, English-German, which translates from a more segmented text to a more inflectional one. Following the setup in Section 3.1, we use the same dataset with the opposite source and target languages. We use the same model architecture, optimization algorithm and beam search size as the German-English translation task. NPMT takes about 2-3 days to run to convergence (40 epochs) on a machine with four M40 GPUs. Given there is no previous sequence-to-sequence attention model baseline for this setup, we create a strong one and tune hyperparameters on the development set. The are shown in TAB4. Based on the development set, we set λ 1 = 1 and λ 2 = 0.15 in Eq.. Our model outperforms sequence-to-sequence model with attention by 2.46 BLEU and 2.49 BLEU in greedy and beam search cases. We can also use a 4th-order language model trained using the KenLM implementation for German target training data, which further improves the performance. Some sampled examples are shown in TAB6. Several informative segments/phrases can be found in the decoding , e.g., "some time ago" → "vor enniger zeit". In this section, we evaluate our model on the IWSLT 2015 English to Vietnamese machine translation task. 
The data is from translated TED talks, and the dataset contains roughly 133K training sentence pairs provided by the IWSLT 2015 Evaluation Campaign BID4. Following the same preprocessing steps in; BID20 We use one reordering layer with window size 7, two layers of bi-directional LSTM (Long shortterm memory, BID12) encoder with 512 hidden units, and three layers of unidirectional LSTM decoder with 512 hidden units. We add dropout with a rate of 0.4 in the LSTM layer. We choose LSTM since baselines for comparisons were using LSTM. The maximum segment length is set to 7. Batch size is set as 48 (per GPU) and the Adam algorithm BID13 is used for optimization with an initial learning rate of 0.001. For decoding, we use greedy decoding and beam search with a beam size of 10. The are shown in TAB8. Based on the development set, we set λ 1 = 0.7 and λ 2 = 0.15 in Eq.. NPMT takes about one day to run to convergence (15 epochs) on a machine with 4 M40 GPUs. Our model outperforms sequence-tosequence model with attention by 1.41 BLEU and 1.59 BLEU in greedy and beam search cases. We also use a 4th-order language model trained using the KenLM implementation for Vietnamese target training data, which further improves the BLEU score. Note that our reordering layer relaxes the monotonic assumption as in BID20 and is able to decode in linear time. Empirically we outperform models with monotonic attention. Table 6 shows some sampled examples. Hard monotonic BID20 23.00 -Luong & Manning FORMULA1 FIG4, we show an example that translates from "can you translate it?" to "können man esübersetzen?", where the mapping between words are as follows: "can → können", "you → man", "translate →übersetzen", "it → es" and "? →?". Note that the example needs to be reordered from "translate it" to "esübersetzen". Each row of FIG4 represents a window of size 7 that is centered at a source sentence word. The values in the matrix represent the gate values for the corresponding words. The gate values will later be multiplied with the embedding e t−τ +i of Eq. and contribute to the hidden vector h t. The y-axis represents the word/phrases emitted from the corresponding position. We can observe that the gates mostly focus on the central word since the first part of the sentence only requires monotonic alignment. Interestingly, the model outputs "$" (empty) when the model has the word "translate" in the center of the window. Then, the model outputs "es" when the model encounters "it". Finally, in the last window (top row), the model not only has a large gate value to the center input "?", but the model also has a relatively large gate value to the word "translate" in order to output the translation "übersetzen?". This shows an example of the reordering effect achieved by using the gating mechanism of the reordering layer. In this section, we examine the effect of window sizes in the reordering layer. Following the setup in Section 3.2, we evaluate the performance of different window sizes on the IWSLT 2014 EnglishGerman translation task. TAB9 summarizes the . We can observe that the performance reaches the peak with a windows size of 7. With a window size of 5, the performance drops 0.88 BLEU in greedy decoding and 0.72 BLEU using beam search. It suggests that the context window is not large enough to properly perform reordering. When the window sizes are 9 and 11, we do not observe further improvements. It might be because the translation between English and German mostly requires local word reordering. 
Following the examples of Table 2, we analyze the decoding on the test set of the GermanEnglish translation task. Given we do not have explicit input segments in NPMT, we assume input words that emit "$" symbol are within the same group as the next non-'$' word. For example, in Figure 4, input words "das beste" are considered as an input segment. We then can aggregate all the input, output segments (phrases) and sort them based on the frequency. Tables C and C show UNK → the UNK in der → in der UNK → the UNK in diesem → in this und → and und →, and UNK. →. ein UNK → a UNK die welt → the world UNK → UNK das → this is UNK, →, das UNK → the UNK ist es → it's aber → but das, → that's, die → that eine UNK → a UNK ". →. " " → " UNK → a UNK ist. →. in UNK → in UNK ein paar → a few ist → is ich → i think in den → in den UNK → the UNK gibt es → there's der → of es → it was ist, →, wissen sie → you know der welt → the world von → of dies → this is sind. →. in diesem → in this die frage → the question mit → with es → there's, wenn → if dem UNK → the UNK haben wir → we have Table 8: German-English phrase mapping . We show the top 10 input, output phrase mappings in five categories ("One" stands for single word and "Many" stands for multiple words.). In the last column, Many → Many *, we remove the phrases with the "UNK" word as the "UNK" appears often. Phrases with 3 words Phrases with 4 words auf der ganzen → all over the auf der ganzen → a little bit of gibt eine menge → a lot of weiß nicht, was → what's going to be dann hat er→ he doesn't have tun, das wir → we can't do, die man → you can do tat, das ich → i didn't do das können wir → we can do that zu verbessern, die → that can be done Table 9: German-English longer phrase mapping . We show the top 5 input, output phrase mappings for two categories: input and output phrases with three words, and input and output phrases with four words.
Neural phrase-based machine translation with linear decoding time
1,358
scitldr
Generative Adversarial Networks (GANs) have shown impressive results in modeling distributions over complicated manifolds such as those of natural images. However, GANs often suffer from mode collapse, which means they are prone to characterizing only a single mode or a few modes of the data distribution. In order to address this problem, we propose a novel framework called LDMGAN. We first introduce a Latent Distribution Matching (LDM) constraint which regularizes the generator by aligning the distribution of generated samples with that of real samples in latent space. To make use of such a latent space, we propose a regularized AutoEncoder (AE) that maps the data distribution to the prior distribution in the encoded space. Extensive experiments on synthetic data and real-world datasets show that our proposed framework significantly improves GANs' stability and diversity. Generative models provide powerful tools for unsupervised learning of probability distributions over difficult manifolds such as those of natural images. Among these models, instead of requiring an explicit parametric specification of the model distribution and a likelihood function, Generative Adversarial Networks (GANs) only have a generating procedure. They generate samples that are sharp and compelling, and have gained great success on image generation tasks recently. GANs are composed of two types of deep neural networks that compete with each other: a generator and a discriminator. The generator tries to map noise sampled from a simple prior distribution, usually a multivariate Gaussian, to the data space with the aim of fooling the discriminator, while the discriminator learns to determine whether a sample comes from the real dataset or from the generated samples. In practice, however, GANs are fragile and in general notoriously hard to train. On the one hand, they are sensitive to architectures and hyper-parameters. For example, the imbalance between discriminator and generator capacities often leads to convergence issues. On the other hand, there is a common failure issue in GANs called mode collapse. The generator tends to produce only a single sample or a few very similar samples in that case, which means GANs put large volumes of probability mass onto a few modes. We conjecture that the mode-missing issue in GANs arises because GANs lack a regularization term that can lead the generator to produce diverse samples. To remedy this problem, in this work, we first propose a regularization constraint called Latent Distribution Matching. It suppresses the mode collapse issue in GANs by aligning the distributions of true data and generated data in the encoded space. To obtain such an encoded space, we introduce a regularized autoencoder which maps the data distribution to a simple prior distribution, e.g., a Gaussian. As shown in Figure 1, we collapse the decoder of the regularized AE and the generator of the GAN into one network and propose LDMGAN. Our framework can stabilize GAN training and reduce the mode collapse issue in GANs. Compared to other AE-based methods on 2D synthetic, MNIST, Stacked-MNIST, CIFAR-10 and CelebA datasets, our method obtains better stability, diversity and competitive standard scores. GANs were initially proposed by Goodfellow et al.. They contain two neural networks: a generator and a discriminator. Let {x_i}_{i=1}^N denote the training data, where each x_i ∈ R^D is drawn from the unknown data distribution p_d(x).
The generator is a neural network G_γ that maps a noise vector z ∈ R^K, typically drawn from a multivariate Gaussian, to the data space, producing x̂ ∈ R^D. The discriminator D_ω is a classification network that distinguishes real samples from generated samples. The parameters of these networks are optimized by solving the following minimax game: min_{G_γ} max_{D_ω} E_x[log D_ω(x)] + E_z[log(1 − D_ω(G_γ(z)))], where E_x indicates an expectation over the data distribution p_d(x) and E_z indicates an expectation over the prior noise distribution p(z). Given a generator G, the optimal discriminator is D*(x) = p_d(x) / (p_d(x) + p_g(x)), where p_g(x) is the model distribution. With the help of the optimal discriminator, the GAN objective actually minimizes the Jensen-Shannon divergence between the model distribution and the data distribution, and the global minimum is achieved if and only if p_g = p_d. There are many works that try to stabilize GAN training and alleviate mode collapse in GANs. DCGAN designed a class of deep convolutional architectures which has become the standard architecture for training GANs. Improved GANs proposed several techniques like feature matching, mini-batch discrimination and historical averaging, which stabilized GAN training and reduced mode collapse. Unrolled GAN proposed unrolling the optimization of the discriminator objective to train the generator. TTUR introduced a two time-scale update rule and proved its convergence to a local Nash equilibrium. The authors of WGAN analyzed the convergence properties of GANs and proposed to leverage the Wasserstein distance, demonstrating better convergence than the Jensen-Shannon divergence. However, WGAN required that the discriminator lie in the space of 1-Lipschitz functions; it used a weight clipping trick to enforce that constraint. WGAN-GP stabilized WGAN by replacing the weight clipping with a penalty on the gradient norm of interpolated samples. SN-GAN proposed spectral normalization, which controls the Lipschitz constant of network layers. Some works make efforts towards integrating AEs into GANs. AEs can be used in discriminators: EBGAN employed an autoencoder structure in its discriminator and introduced a repelling regularization to prevent mode collapse. BEGAN extended EBGAN by optimizing the Wasserstein distance between the AE loss distributions of real and generated samples. MDGAN contained a manifold step and a diffusion step that combined a plain autoencoder with GANs to suppress the mode-missing problem. However, MDGAN required two discriminators and did not impose regularization on its AE. VAEGAN unified VAE and GAN into one model, and utilized a feature-wise distance on intermediate features learned by the discriminator to replace the data-space similarity metric in the VAE and avoid blur. AAE used adversarial learning in its encoded space to perform variational inference. BiGAN and Adversarial Learned Inference (ALI) jointly trained an inference model and a generative model through an adversarial game. VEEGAN employed a reconstructor network to both map the true data distribution p_d(x) to a Gaussian and approximately invert the generator network. AGE performed an adversarial game between a generator and an encoder, and proposed to align the model distribution with the data distribution in the encoded space; it eliminated the discriminator in its framework and used an encoder to extract statistics to align the model distribution with the data distribution. The most related works to ours are VAEGAN and VEEGAN.
VAEGAN's motivation was to combine VAE with GAN, and they relied on an ELBO, namely While we perform marginal distribution matching, namely E q(z|x) [log p(x|z)] − D KL (q(z) p(z)), and view the second term as a regularization term. Therefore there is no need of re-parameterization trick in our framework. Also we compare the distributions of the real samples and generated samples in the encoded space to suppress mode collapse in GANs which is different from VAEGAN. VEEGAN had a reconstructor network which mapped the data distribution to a gaussian as well. However, VEEGAN autoencoded noise vectors rather than data items, and its training objective was formed as an adversarial game in joint space (x, z) like BiGAN and ALI. Our regularized autoencoder is similar to AAE. But instead of using adversarial learning in encoded space which also has possibility of falling into mode collapse as in data space, we use explicit divergence to align aggregated posterior with the prior distribution. GAN's training can be viewed as an adversarial game between two players, in which the discriminator tries to distinguish between real and generated samples, while the generator tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. However, there are many points in data manifold that have high discrimination values. When collapse to a single mode is imminent, the gradient of the discriminator may point in similar directions for many similar points . Since the discriminator iteself processes each sample independently, there is no mechanism to tell the outputs of the generator to become more dissimilar to each other. In this paper, we propose a framework called LDMGAN to address this problem. See Figure 1 for a glance of our model's architecture. Instead of exposing the dissimilarity of generated samples to the discriminator , we introduce a regularized autoencoder in our framework. We perform data dimensionality reduction on the training data to obtain a well-shaped manifold in encoded space, e.g., a factorized gaussian, by learning a regularized autoencoder. Then when sampling noise from a factorized gaussian to feed into the generator to produce generated samples, we encode the generated samples to latent space to enforce them to be coincident with the factorized gaussian. In this way, the generator receives a signal from that constraint to tell it to produce diverse samples. We call this constraint Latent Distribution Matching (LDM). In the following, we describe LDM and the regularized autoencoder in detail. Figure 1: Our LDMGAN's architecture. The thick lines denote added losses to GANs. The main idea of LDM is to introduce an encoder function F θ that can map the true data distribution p(x) to a prior Gaussian p(z). To understand why such an encoder helps alleviate address mode collapse, consider the example in Figure 2. The middle vertical panel represents the data space, where in this example the true data distribution p d (x) is a mixture of two Gaussians. The bottom panel depicts the input to the generator (latent code space), which is drawn from p(z), and the top panel depicts the of applying the encoder network to the generated data and true data. Assume that the encoder approximately maps the data distribution to the prior gaussian (see q(z) above the top panel in Figure 2 ). The purple arrows above the middle panel show the action of the encoder on the true data, whereas the green arrows show the same action on the generated data. 
If mode collapse occurs as it does in this example that the generator G γ captures only one of the two modes of p d (x), it would cause distribution mismatch between the two marginal distributions q(z) and q (z). The definitions of q(z) and q (z) go as follow: where. This mismatch is evaluated by a divergence measure ∆(q (z) q(z)) in the latent space. We only require this divergence to be nonnegative and zero if and only if the distributions are identical (∆(q (z) q(z)) = 0 ⇐⇒ q (z) = q(z)). Therefore, this mismatch value function is defined as: However, comparing two empirical (hence non-parametric) distributions q (z) and q(z) is difficult. We avoid this issue by introducing an intermediate reference distribution which is also the prior distribution p(z), ing in: As we have assumed that the encoder network can approximately map the data distribution to a gaussian, we can get rid of ∆(q(z) p(z)) with minor error. Finally, this mismatch term turns out to be It can be used to detect mode collapse in generator. We call this mismatch term as Latent Distribution Matching (LDM), and use this term to regularize the generator in GANs to combat mode collapse. AutoEncoders have long been used for data dimensionality reduction and feature learning. We impose the regularization that the distribution in the encoded space of real data is coincident with the prior distribution p(z) on the autoencoder. We call this autoencoder the regularized autoencoer in this paper. There are two ways of utilizing the regularized autoencoder. One is to pretrain a regularized autoencoder, then use the encoder of the regularized autoencoder only. The other is to combine the autoencoder training with GAN objective which has already been proposed in many GAN variants (; ;). We combine autoencoder with GANs in our model for three reasons: to guarantee the learning is grounded on all training samples; to regularize the generator in producing samples to resemble real ones; last but not least, to obtain an encoder that can approximately map the data distribution to a gaussian. In the following, we will explain the reasons for combining autoencoders in our model and describe our framework formally. One of the reasons that mode collapse occurs is probably that the areas near the missing modes are rarely visited by the generator, therefore providing very few samples to improve the generator around those areas. And only when G(z) is very close to the missing modes can the generator get gradients to push itself towards the missing modes. Consider a model combining GANs with reconstruction loss in data space d(x, G(E(x))). Given enough data examples x available in training data for minor mode M 0, we know G • E is a good autoencoder, then G • E(x) will be located very close to the minor mode M 0. Therefore the gradient from the discriminator ∇ log D(G • E(x)) might be able to push these points towards the minor mode M 0 and the learning is grounded on all training samples. For that purpose, the following loss is added to the generator: As the regularized autoencoder combined with GAN has imposed gaussian on the encoded space of real data samples, this guarantees such E(x) for minor modes can be sampled with fair possibility when sampling z from gaussian that are feeded to the generator. In addition, as lots of image synthesis works ) have used L 1 or L 2 loss in data space and achieved successes in producing realistic images. 
We use the autoencoder loss d(x, G • E(x)) in data space as well for the purpose of regularizing the generator in producing samples to resemble real ones. In this section, we formally demonstrate the regularized autoencoder we combine with GANs. Let x be the data sample vector,x be the reconstructed sample vector of x by the autoencoder and z be the latent code vector of the autoencoder. Let p(z) be the prior distribution we want to impose on the codes, q(z|x) be an encoding distribution and p(x|z) be the decoding distribution. Also let p d (x) be the data distribution, and p g (x) be the model distribution. We desire an autoencoder whose aggregated posterior q(z) is coincident with a prior distribution p(z). In this paper, we use a factorized gaussian as the prior distribution. The encoding function of the autoencoder q(z|x) defines the aggregated posterior in Equation 2. We choose a deterministic function form for q(z|x) which means the stochasticity of q(z) comes only from the the data distribution p d (x). We impose a regularization term on the encoded space of the autoencoder to minimize: ∆(q(z) p(z)). The autoencoder, meanwhile, attempts to minimize the reconstruction error d(x,x) (here d is some similarity metric, besides L 1 or L 2 distance in data space, it can also be distance metric in the feature-wise space like the intermediate layers of well-known VGG network or the discriminator of GANs). Therefore,the regularized autoencoder is trained in two phases: the reconstruction phase and the regularization phase. In the reconstruction phase, the regularized autoencoder uses SGD to update the encoder and decoder's parameters to reconstruct the input data vector. In the regularization phase, the autoencoder uses SGD to update the encoder's parameters to match the aggregated posterior with the prior distribution. Implemented by deep neural networks, with arbitrary capacity, the universal approximation theorem guarantees the regularized autoencoder is capable of approximately matching the aggregated posterior q(z) with the prior distribution p(z). Given the encoder E θ which is capable of approximately mapping the data distribution to a gaussian, consider a standard GAN training loss plus the LDM (Latent Distribution Matching) constraint on the generator: This training objective Equation 8 is what we apply in GAN framework when not concerning combining the regularized autoencoder into GAN. And after unifying the regularized autoencoder and GAN described above, the objective becomes: where λ 1 and λ 2 are hyper-parameters that control the loss weights of reconstruction error and LDM constraint respectively. Note that the LDM term ∆(q (z) p(z)) is minimized only with respect to G γ, which means the parameters of the encoder E are fixed during backpropagating the LDM term. The divergence measure between the empirical distribution and the prior distribution in the latent M -dimensional space that we use in this paper goes as follow: where µ i and σ i are the means and the standard deviations of the fitted empirical distribution Q along various dimensions. The training algorithm and architecture of our proposed model is depicted in Algorithm 1 and Figure 1 respectively. 
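The divergence of Equation 10 is computed from the per-dimension means and standard deviations of an encoded batch; its exact algebraic form is not reproduced in the text above, so the sketch below assumes the standard closed-form KL divergence between a fitted factorized Gaussian and the standard normal prior, summed over dimensions, and should be read as an illustration rather than the paper's exact expression.

```python
import numpy as np

def ldm_penalty(z_batch, eps=1e-6):
    """z_batch: (batch, M) latent codes of generated samples, E(G(z))."""
    mu = z_batch.mean(axis=0)
    sigma = z_batch.std(axis=0) + eps
    # Per-dimension KL( N(mu_i, sigma_i^2) || N(0, 1) ) = (sigma^2 + mu^2)/2 - log(sigma) - 1/2
    kl = 0.5 * (sigma ** 2 + mu ** 2) - np.log(sigma) - 0.5
    return kl.sum()

rng = np.random.default_rng(0)
print(ldm_penalty(rng.normal(size=(256, 64))))                        # near 0 for well-matched codes
print(ldm_penalty(rng.normal(loc=2.0, scale=0.1, size=(256, 64))))    # large for collapsed codes
```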
Algorithm 1 Our LDM-GAN's training algorithm 1: Dω, E θ, Gγ ← initialize network parameters respectively 2: repeat 3: X ← Random mini-batch of m data points from dataset 4: Xrec ← Gγ(Z) 8: Zrec ← E θ (X) 9: // Training discriminator D 10: // Training encoder E 12: // Training generator G 14: gγ ← ∇γ log(1 − Dω(X)) + log(1 − Dω(Xrec)) + λ1d(X, Xrec) + λ2DKL(q (Zrec) p(z)) 15: // Using SGD to update model's parameters 16: ω ← ω − ηgω; θ ← θ − ηg θ; γ ← γ − ηgγ 17: until convergence 18: return D, E, G In this section, we compare our model with four different models on 2D Synthetic, MNIST, Stacked-MNSIT, CIFAR-10 and CelebA datasets. We do not finetune the hyper-parameters λ 1 and λ 2 in our model, and by default set λ 1 = 1 and λ 2 = 1. Evaluation of mode collapse in GANs is very difficult especially when trained on natural images. However, the missing modes can be calculated precisely when using synthetic data for training. In this section we compare five different models on two synthetic datasets: a mixture of eight 2D Gaussian distributions arranged in a ring, and a mixture of twenty-five 2D Gaussian distributions arranged in a grid. The covariance matrices and centroids have been chosen such that the distributions exhibit lots of modes separated by large low-probability regions, which make them decently hard tasks despite the 2D nature of the dataset. All models share the same network architectures for fair comparison. The encoder and the generator network are consisted of two hidden layers, while the discriminator network only has one hidden layer. All networks are implemented by fully-connected layers and each layer has 64 neurons. The dimension of the prior input is 2. For VEEGAN and BiGAN, the discriminator's input is augumented by the size of the latent code. Note that the VAEGAN we used here is slightly different from the original paper, since the network architectures are shallow and the data is low-dimensional: we used L 2 loss on the data space for simplicity instead of on features learned by the discriminator. To quantify the mode collapse behavior we report two metrics: the number of modes that each model captures and the percentage of high quality samples. We draw 2048 samples from each model for evaluation. The high quality samples are defined as the points which are within three standard deviations of the nearest mode, and if there are more than twenty such samples registered to a mode, the mode is considered as captured. The are depicted in Table 1 and the numbers are averaged over eight runs. We also show the generated distribution of each model in Figure 3 As we can see from Table 1, our model captures the greatest number of modes on both synthetic datasets. The generator learned by our model is sharper and closer to the true distribution as shown in Figure 3. Though other models may obtain greater percentage of high quality samples, for example, vanilla GAN generates 99.96% high quality samples on 2D Ring dataset. However, it runs into severe mode missing problem. Vanilla GAN only captures one mode on 2D Ring dataset. 1. VEEGAN is able to capture most of the modes in some settings (LDMGAN also performs as well as VEEGAN under those settings), but it fails to generalize well under our settings. In particular, while our model resembles VAEGAN in some ways, VAEGAN suffers from some degree of mode missing problem as well and it is far from competitive compared to our proposed model. 
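The 2D-ring evaluation protocol just described (samples within three standard deviations of the nearest mode count as high quality, and a mode counts as captured once at least twenty such samples register to it) can be sketched as follows. The ring radius and the per-mode standard deviation are illustrative values, since the text does not state them, and the generator is replaced by fake samples drawn directly from the modes so that the snippet runs on its own.

```python
import numpy as np

def ring_modes(n_modes=8, radius=2.0):
    angles = 2 * np.pi * np.arange(n_modes) / n_modes
    return np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)

def coverage(samples, modes, std=0.05, min_count=20):
    d = np.linalg.norm(samples[:, None, :] - modes[None, :, :], axis=2)  # (N, n_modes)
    nearest = d.argmin(axis=1)
    high_quality = d.min(axis=1) < 3 * std                               # within 3 std of a mode
    counts = np.bincount(nearest[high_quality], minlength=len(modes))
    return (counts >= min_count).sum(), high_quality.mean()

modes = ring_modes()
rng = np.random.default_rng(0)
fake_samples = modes[rng.integers(0, 8, 2048)] + rng.normal(scale=0.05, size=(2048, 2))
n_captured, hq_frac = coverage(fake_samples, modes)
print(f"modes captured: {n_captured}/8, high-quality fraction: {hq_frac:.2%}")
```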
In order to systemically explore the effect of our proposed LDMGAN on alleviating mode collapse in GANs, we use a large scale grid search of different GAN hyper-parameters on the MNIST dataset. We use the same hyper-parameters settings for both vanilla GAN and our LDMGAN, and the encoder in our model is the "inverse" of the generator. The search range is listed in Table 2. Our grid search is similar to those proposed in;. Please refer to it for detailed explanations regarding these hyper-parameters. In order to estimate both the missing modes and the sample qualities, we first train a regular CNN classifier on the MNIST digits, and then apply it to compute the MODE Score which is proposed in. MODE Score is a variant of Inception Score (IS) and its definition goes as follow: where p(y|x) is the softmax output of a trained classifier of the labels, and p * (y) and p(y) are the marginal distributions of the label of the generated samples and training samples respectively. We train each architecture for 50 epochs and after that draw 10K samples for evaluation. The ing distribution of MODE Score is shown in Figure 4. It can be seen that our model clearly improves the sample quality and diversity compared to GANs. , we evaluate our method on the stacked MNIST dataset, a variant of the MNIST data specifically designed to increase the number of discrete modes. The data is synthesized by stacking three randomly sampled MNIST digits along the color channel ing in a 28×28×3 image. We now expect 1000 modes on this dataset, corresponding to the number of possible triples of digits. We generate a dataset of 50000 images for training. And for fair comparation we use the same architecture for all models. The generator and discriminator networks are implemented following DCGAN. For encoder network, we use a simple two layer MLP network without any regularization layers. For BiGAN and VEEGAN, we concatenate the last convolutional layer of the discriminator with the latent code which follows by two fully-connected layers in order to perform joint space discrimination. The dimension of the prior input is 64. Each model is trained for 50 epochs. As the true locations of the modes in this data are unknown, the number of modes are estimated using a trained classifier as described in subsection 5.2. We draw 26000 samples from each model for evaluation. As a measure of quality, following , we also report the KL divergence between the model distribution and the data distribution. The are averaged over 5 runs. As reported in Table 3, our model recovers greatest number of modes and better matches the data distribution than any other methods. Table 3: Degree of mode collapse, measured by modes captured and sample quality (as measured by KL) on Stacked-MNIST. Our model captures the most modes and also achieves the highest quality. For CIFAR-10 and CelebA datasets, we measure the performance with FID scores . FID can detect intra-class mode dropping, and measure the diversity as well as the quality of generated samples. FID is computed from 10K generated sampels and the pre-calculated statistics on all training data provided in the code repository 2. Our default parameters are used for all experiments. And the model architecture is the same as the standard DCGAN. We train each model for 50 epochs on CIFAR-10 and 30 epochs on CelebA. The dimension of the prior input is 100. All of our experiments are conducted using the unsupervised setting. 
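The MODE Score formula itself is not reproduced in the text above; the sketch below uses the commonly cited formulation exp(E_x[KL(p(y|x) ‖ p*(y))] − KL(p(y) ‖ p*(y))) and should be treated as an assumption about the intended definition rather than the exact expression used in this paper.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1), np.clip(q, eps, 1)
    return (p * np.log(p / q)).sum(axis=-1)

def mode_score(softmax_gen, label_marginal_train):
    """softmax_gen: (N, K) classifier outputs on generated samples; label_marginal_train: (K,)."""
    p_y_gen = softmax_gen.mean(axis=0)                       # p(y) of generated samples
    term1 = kl(softmax_gen, label_marginal_train).mean()     # E_x KL(p(y|x) || p*(y))
    term2 = kl(p_y_gen, label_marginal_train)                # KL(p(y) || p*(y))
    return float(np.exp(term1 - term2))

rng = np.random.default_rng(0)
fake_softmax = rng.dirichlet(alpha=np.full(10, 0.1), size=10000)  # sharp fake class predictions
print(mode_score(fake_softmax, np.full(10, 0.1)))                  # uniform label marginal assumed
```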
We evaluate the FID score for each model at every epoch during training, and keep the best one per run. As we cannot reproduce VEEGAN and BiGAN on these datasets using the same architecture, we do not report FID scores for these two models. The reported results are depicted in Table 4 and are averaged over 5 runs. It is important to mention that we do not pursue state-of-the-art FID scores on these datasets, as we only want to show that our model improves GANs. It can be seen that our model has both the best FID scores and the best averaged FID scores on CIFAR-10 and CelebA. The improvement on CelebA is smaller compared to that on CIFAR-10. We suspect that this is due to the less complicated structure of face images, but our model still consistently performs better than DCGAN and VAEGAN. The random samples generated by the different models on both datasets are shown in Appendix A. Finally, to show that our proposed model is not simply memorizing the training data, we interpolate between random latent vectors for the generator of our model trained on the CelebA dataset, and the results are depicted in Appendix A. Table 4: FID scores for different models on CIFAR-10 and CelebA, lower is better. Best FID is the best run out of 5 trials. We propose a robust AE-based GAN model with a novel Latent Distribution Matching constraint, called LDMGAN, that addresses mode collapse effectively. Our model is different from previous works: We propose a novel regularization constraint called Latent Distribution Matching to align the distributions of real data and generated data in the encoded space. We propose a new regularized autoencoder for approximately mapping the data distribution to a Gaussian. We design a novel AE-based GAN that drastically alleviates mode collapse in GANs. Extensive experiments demonstrate that our method is stable and does not suffer from the mode collapse problem on the synthetic 2D, MNIST, and Stacked-MNIST datasets. Furthermore, we achieve better FID scores on the CIFAR-10 and CelebA datasets compared to baseline models, without any additional tricks or techniques. These results demonstrate that our proposed LDMGAN indeed improves GAN stability and diversity. A APPENDIX
We propose an AE-based GAN that alleviates mode collapse in GANs.
1,359
scitldr
Achieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks. However, the current paradigm of executing neural networks relies either on hand-optimized libraries, traditional compilation heuristics, or, very recently, genetic algorithms and other stochastic methods. These methods suffer from frequent costly hardware measurements, rendering them not only too time-consuming but also suboptimal. As such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance. This solution, dubbed CHAMELEON, leverages reinforcement learning, whose search takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses the costly samples (real hardware measurements) on representative points but also uses a domain-knowledge-inspired logic to improve the samples themselves. Experimentation with real hardware shows that CHAMELEON provides a 4.45× speedup in optimization time over AutoTVM, while also improving the inference time of modern deep networks by 5.6%. The enormous computational intensity of DNNs has resulted in the development of hand-optimized kernels, such as NVIDIA cuDNN or Intel MKL, that serve as backends for a variety of programming environments such as TensorFlow and PyTorch. However, the complexity of the tensor operations in DNNs and the volatility of algorithms, which has led to an unprecedented rate of innovation, call for developing automated compilation frameworks. To imitate or even surpass the success of hand-optimized libraries, recent research has developed stochastic optimization passes for general code (STOKE) and for neural network code (TVM and TensorComprehensions). TVM and TensorComprehensions are based on random or genetic algorithms to search the space of optimized code for neural networks. AutoTVM builds on top of TVM and leverages boosted trees as part of the search cost model to avoid measuring the fitness of each solution (optimized candidate neural network code), instead predicting its fitness. However, even with these innovations the optimizing compilation time can be around 10 hours for ResNet-18, and even more for deeper or wider networks. Since the general objective is to unleash new possibilities by developing automatic optimization passes, long compilation time hinders innovation and could put the current solutions in a position of questionable utility. To solve this problem, we first question the very statistical guarantees on which the aforementioned optimization passes rely. The current approaches are oblivious to the patterns in the design space of schedules that are available for exploitation, which causes inefficient search or even convergence to suboptimal solutions. Also, we notice that current approaches rely on greedy sampling that neglects the distribution of the candidate solutions (configurations). While greedy sampling that passively filters samples based on the fitness estimates from the cost model works, many of the resulting hardware measurements (required for optimization) tend to be redundant and wasteful. Moreover, we found that current solutions that rely on greedy sampling lead to significant fractions of the candidate configurations being redundant over iterations, and that optimizing compilers are prone to invalid configurations, which significantly prolong the optimization time.
As such, this work sets out to present an adaptive approach to significantly reduce the compilation time and offer automation while avoiding dependence on hand-optimization, enabling far more diverse tensor operations in next-generation DNNs. We tackle this challenge from two fronts with the following contributions: (1) devising an Adaptive Exploration module that utilizes reinforcement learning to adapt to the unseen design spaces of new networks, reducing search time while achieving better performance; and (2) proposing an Adaptive Sampling algorithm that utilizes clustering to adaptively reduce the number of costly hardware measurements, and devising a domain-knowledge-inspired Sample Synthesis to find configurations that would potentially yield better performance. Real hardware experimentation with modern DNNs (AlexNet, VGG-16, and ResNet-18) on a high-end GPU (Titan Xp) shows that the combination of these two innovations, dubbed CHAMELEON, yields a 4.45× speedup over the leading framework, AutoTVM. CHAMELEON is anonymously available at https://github.com/anony-sub/chameleon, which will be made public. The general life-cycle of deep learning models, from their birth to deployment, comprises two major steps. The first stage is the design and training of a deep learning model by a research scientist, with the primary goal of achieving the highest feasible accuracy. Then, with a general demand to enable intelligence on a wide range of devices (from mobile CPUs at the edge to cloud-scale GPUs), the second stage has emerged for the deployment of the pre-trained deep learning model to a target hardware by a deployment engineer. These stages are each iterative processes: research scientists iterate until the model reaches the target performance in terms of accuracy, whereas deployment engineers iterate until the performance in terms of inference speed on a given hardware satisfies the given constraints. Importantly, these two stages are most often separate processes, and this paper mainly focuses on the second stage (deployment) of the cycle, with an overarching goal of accelerating the overall deployment cycle by reducing the optimizing compilation time without compromising the performance of the output code. Our work builds on TVM, using which we implement CHAMELEON; it is available as a separate package for adoption in other frameworks as well. The first phase of the workflow is the frontend compiler, which performs the translation of the input model and applies target-independent and white-box target-dependent optimizations that do not incorporate a measure of runtime. Target-independent passes transform the input DNN model without specificity to the target hardware. Operator fusion and data layout transformation in TVM are some examples of these passes, which lie in the same category as dead-code elimination or loop-invariant code motion in GCC or LLVM. In target-dependent passes, on the other hand, the compiler takes the hardware architecture (target) into account while optimizing the program; however, this also does not actively leverage runtime measures. The last stage is a black-box optimization pass, called the optimizing compiler, that, given a measure of performance at runtime from the hardware, can further optimize the code. CHAMELEON falls in this class by offering an optimizing compiler that adapts to different design spaces to be more swift in optimizing deep neural networks compared to conventional approaches.
Optimizing compilers usually take a black-box approach and use hardware measurements to configure the optimization based on a measure of fitness f of each solution. In order to make the problem tractable, optimizing compilers for deep neural networks reduce the problem down to tuning the knobs θ for the output code template τ, which can be formulated as

Θ* = argmax_{Θ ∈ D_Θ} f(τ(Θ)).   (1)

A combination of assignments to the knobs is said to be a configuration Θ = (θ_1, θ_2, ..., θ_n), while the dimensions of the design space D_Θ are defined by the knobs. As such, in Equation 1, an optimizing compiler starts from a code template τ for each layer, and makes use of a search algorithm and real hardware measurements to efficiently find the best configuration Θ* ∈ D_Θ. In this context, there are three variables that determine the effectiveness of the optimizing compiler: a large and diverse enough design space (knobs) that covers a variety of transformations, an effective search algorithm to adequately navigate this space, and a mechanism to cut down the number of costly hardware measurements that check the fitness of a solution. Table 1 lists the knobs for performing convolution on a GPU, where it is crucial that the code maximizes data reuse, uses the shared memory wisely, and minimizes bank conflicts:

tile_x, tile_y, ... — factors for tiling and binding the number of filters and the height and width of feature maps.
tile_rc, tile_ry, tile_rx — factors for tiling the reduction axes, such as the number of channels and the height and width of filters.
auto_unroll_max_step — threshold on the number of steps in a loop for it to be automatically unrolled.
unroll_explicit — explicitly unroll the loop; this may let the code generator emit a pragma unroll hint.

The knobs optimize various aspects of the execution, including tiling (e.g., tile_x, tile_y, ...) and unrolling (e.g., auto_unroll_max_step and unroll_explicit), and these knobs define a design space with roughly 10^10 possibilities. Given the vastness of the design space, the remaining challenges are designing an effective search algorithm and designing a mechanism that reduces the cost of each step in the search (i.e., reducing the need to measure the hardware). As shown in Figure 2, optimizing compilation for DNNs can take an eon even with the advances from prior works. With active research that has cut the training time down to only a few hours and even minutes on big models (e.g., ResNet-50) for ImageNet, the optimizing compilation time of the current solutions appears even more prominent. Especially since the above-mentioned compilers have been integrated into the deep learning pipelines of major players in the industry, many users of these pipelines, including the deployment engineers in these companies, will be going through the compilation workflow depicted in Figure 1 numerous times. Therefore, the current long compilation times can be a hindrance to deploying DNNs on various hardware, and hence a major bottleneck to enabling intelligence on a wider range of target platforms. Furthermore, as we explore various neural topologies, deeper or wider networks, and new operations to achieve higher performance, we are pushed to optimize the networks more frequently. The long optimization times are multiplied by such trends, leaving the practical utility of the current compiler solutions in question. As such, the primary goal of this work is reducing the optimizing compilation time to meet the immediate needs of the industry for expedited DNN compilation, and to foster further diversity and innovation in designing DNNs.
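To make the black-box formulation of Equation 1 concrete, here is a minimal sketch of the naive search it implies, where every candidate configuration must be compiled and timed on real hardware. The knob names and value ranges, and the compile/measure interfaces, are hypothetical placeholders for illustration, not the actual design space or API.

```python
import random

# Hypothetical knob value ranges; real design spaces reach on the order of 10^10 configurations.
DESIGN_SPACE = {
    "tile_x": [1, 2, 4, 8, 16],
    "tile_y": [1, 2, 4, 8, 16],
    "auto_unroll_max_step": [0, 16, 64, 512],
    "unroll_explicit": [0, 1],
}

def fitness(config, compile_template, measure_on_hardware):
    """f(tau(Theta)): generate code from the template and time it on the device."""
    binary = compile_template(config)          # tau(Theta)
    runtime = measure_on_hardware(binary)      # costly real-hardware measurement
    return 1.0 / runtime                       # one simple choice: shorter runtime -> higher fitness

def naive_search(compile_template, measure_on_hardware, budget=1000):
    """Randomly sample configurations and keep the fittest one (the argmax in Equation 1)."""
    keys = list(DESIGN_SPACE)
    best_cfg, best_fit = None, float("-inf")
    for _ in range(budget):                    # every step costs one hardware measurement
        cfg = {k: random.choice(DESIGN_SPACE[k]) for k in keys}
        fit = fitness(cfg, compile_template, measure_on_hardware)
        if fit > best_fit:
            best_cfg, best_fit = cfg, fit
    return best_cfg
```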
Such long optimization times result from the inefficiency of simulated annealing, which (while it stochastically guarantees a reasonable solution after a huge number of iterations) fails to capture the patterns in the design space that can be exploited during the search. On the other hand, we can see in the figure that the majority of the optimization time is spent on measurements on real hardware that are used as feedback for the aforementioned search. Also, the current approach suffers from numerous invalid configurations that not only waste the limited hardware measurement budget that the compiler starts with, but also incur serious overhead to reset the target hardware for subsequent hardware measurements. As such, it is important that the sampling mechanism that selects potential configurations for hardware measurement be smarter, to ensure that each measurement maximizes the chances of achieving a good solution and evades invalid configurations. However, the current approaches rely on greedy sampling that passively samples based on the estimates from the cost models. This not only has a tendency to overfit but also neglects that solutions are distributed non-uniformly and that there are numerous invalid configurations. As discussed in Section 2, current solutions fall short of providing a swift optimization framework for optimizing emergent deep neural networks, because of the futility of a random-walk-based search algorithm in adapting to the design space and the inefficiency of the physical hardware measurements caused by greedy sampling. Therefore, developing a new framework that can overcome the current challenges and unfetter neural network innovation from prolonged optimization times boils down to two problems: (1) improving the search algorithm to better adapt to the design space, and (2) improving the sampling algorithm to both better adapt to the distribution of the solutions and decrease the possibility of running into invalid configurations. As such, we make two innovations in the optimizing compiler for deep neural networks to develop CHAMELEON: applying reinforcement learning to the search so that it can adapt to new design spaces (Adaptive Exploration), and devising an Adaptive Sampling that replaces the current greedy sampling. Figure 3 outlines the overall design of our optimizing compiler, dubbed CHAMELEON, and gives an overview of the optimizing compilation process. CHAMELEON takes the code template τ for each layer in the network and the corresponding design space D_Θ as its input, and iteratively optimizes the code for configuration Θ to finally output τ(Θ*). The proposed Adaptive Exploration maneuvers the design space while using a cost model as a proxy for hardware measurements to output the set of candidate configurations S_Θ. These configurations are then sampled with Adaptive Sampling so that the sampled configurations subsume the initial candidate configurations while reducing their number significantly. The sampled configurations are then passed to the code generator, which combines the input template τ with each configuration to create a set of τ(Θ) that are sent to real hardware for runtime measurements. Runtimes from the hardware are used as the measure of fitness f and update the cost model to enhance the exploration of the subsequent iterations. After multiple iterations, the τ(Θ*) with the best fitness f (shortest runtime) is selected as the output for the layer.
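The following is a minimal sketch of the iterative workflow just described (Figure 3). The names explore, cost_model, measure, and is_valid are placeholders rather than the paper's API; sklearn's KMeans is used for the clustering step only as an illustration; and the fixed cluster count stands in for the threshold-based meta-search over cluster counts described later. The mode-based synthesis helper anticipates the Sample Synthesis component detailed in the following sections.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def sample_synthesis(candidates, is_valid):
    """Build a new configuration from the per-knob mode (most frequent value) of the valid candidates."""
    valid = [c for c in candidates if is_valid(c)] or list(candidates)
    return tuple(Counter(knob_values).most_common(1)[0][0] for knob_values in zip(*valid))

def adaptive_sampling(candidates, num_clusters):
    """Pick one representative configuration (nearest to each centroid) per cluster."""
    X = np.asarray(candidates, dtype=float)          # assumes more candidates than clusters
    km = KMeans(n_clusters=num_clusters, n_init=10).fit(X)
    reps = []
    for center in km.cluster_centers_:
        reps.append(candidates[int(np.argmin(((X - center) ** 2).sum(axis=1)))])
    return reps

def optimize_layer(template, explore, cost_model, measure, is_valid, iterations=16):
    """One CHAMELEON-style loop: explore -> sample -> measure on hardware -> update cost model."""
    best_cfg, best_time = None, float("inf")
    for _ in range(iterations):
        candidates = explore(cost_model)                           # agent guided by the cost model
        sampled = adaptive_sampling(candidates, num_clusters=8)    # fixed count only for this sketch
        if len(set(sampled)) < len(sampled):                       # redundancy triggers synthesis
            sampled.append(sample_synthesis(sampled, is_valid))
        runtimes = [measure(template, cfg) for cfg in sampled]     # costly hardware measurements
        cost_model.update(sampled, runtimes)
        i = int(np.argmin(runtimes))
        if runtimes[i] < best_time:
            best_cfg, best_time = sampled[i], runtimes[i]
    return best_cfg
```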
As stated in Section 2, the current state-of-the-art approach that leverages simulated annealing relies on the stochastic guarantees of its random walks. Therefore, the current approach requires numerous iterations of exploration to converge to a reasonable solution, causing long compilation hours and thus being insufficient to enable disruptive innovations in neural networks. We instead take an approach that avoids naive dependence on the stochastic guarantees of simulated annealing and leverage a technique that can learn to adapt to unseen design spaces, to not only accelerate convergence but also bring some performance gains. As such, we develop Adaptive Exploration by leveraging Reinforcement Learning (RL), which is concerned with learning to maximize reward in a given environment by making good exploration and exploitation tradeoffs; in our case the reward is the fitness f of the explored configurations S_Θ. Reinforcement learning formulation. Our RL-based Adaptive Exploration module uses an actor-critic style of RL, where the Policy Network learns to emit a set of directions (a vector of increment/decrement/stay actions) for each knob in the design space that will increase the f of the next configuration, and the Value Network learns the design space D_Θ to estimate the value of the action. The first layer of these networks, which takes the current configuration Θ as input, is shared to foster information sharing between the two networks, and its output is fed into the subsequent layers of each network. These networks not only learn the dependencies among the different knobs of the design space (which are interrelated), which helps our module navigate through the design space, but also learn the potential gains of the modifications to the configurations. Learning procedure. Having formulated the RL-based Adaptive Exploration module, an iteration of our optimization begins with a set of initial configurations and takes multiple search steps (an episode) for each of the configurations. As shown in Figure 4, the agent makes an action and applies it to the configuration using the configuration updater to get another configuration that potentially has better f. After finishing the search steps in the episode, all configurations S_Θ are evaluated using a cost model, whose return values are used as a surrogate reward to update our agent, in order to reduce the number of costly hardware measurements. By taking this approach, the f of S_Θ improves as our module progresses through the episodes. In other words, by repeating multiple episodes and iterations, our Adaptive Exploration module gradually learns to locate good configurations. Reducing the number of costly hardware measurements. After the exploration step (regardless of the exploration method), we observe that the candidate configurations are clustered in subregions of the design space and that these clusters are non-uniformly distributed (Figure 5). We also find that, while the design space's surface is discrete and non-smooth, a large fraction of configurations within each cluster achieve similar runtime (Figure 6). Utilizing these characteristics of the design space, we devise Adaptive Sampling, which can sample a new set of candidates by adapting to the shape of the design space and the non-uniformity of the distribution while leaving the performance of the optimization intact. We first leverage a clustering algorithm to find configurations that are representative of each cluster; the sampling module uses the centroids as the representative configurations.
Our Adaptive Sampling iterates over different numbers of clusters, computing their respective centroids and the L2 loss. In the context of an optimizing compiler, selecting the number of centroids for clustering entails an important tradeoff between selecting more centroids for better performance and fewer centroids for a reduced number of hardware measurements. As such, we must devise a method that automatically makes this tradeoff in a reasonable manner. We take advantage of the decreasing trend in the aforementioned L2 loss as the number of centroids increases, and devise a threshold-based meta-search to determine the number of clusters. Setting the threshold (a hyperparameter) allows the compiler to determine the point of diminishing returns (the knee of the curve), the inflection point beyond which fewer centroids may lead to performance degradation and more clusters would prolong the optimization substantially. Overall, our sampling curtails the number of hardware measurements so that it is just enough to subsume the entire subspace of the candidate configurations. Improving candidate configurations using sample synthesis. While the above sampling algorithm significantly reduces the number of hardware measurements compared to conventional greedy sampling, without impacting the performance of the output code, we are still left with a critical issue of redundancy among the candidate configurations. We find that the exploration algorithm (regardless of its type) combined with greedy sampling frequently leads to redundancy among the candidate configurations over different iterations of optimization, due to the overfitting of the cost model caused by the greediness of the sampling. Even though the exploration algorithm tries to explore unvisited regions of the design space, these explored (not exploited) configurations are discarded by the greedy sampling, which depends entirely on the cost model for its selection of configurations. Therefore, the current greedy sampling algorithm is limited to focusing the hardware measurements on the same region over and over. On the other hand, from a code optimization point of view, we know that many of the automated approaches to black-box optimization are prone to invalid configurations, which result from too large a tile that goes over the input feature map boundary, or from errors during memory accesses (which cannot be resolved analytically). These invalid configurations not only squander chances for better exploration but also lead to extra optimization time overhead to reset the physical hardware for the subsequent hardware measurement. We overcome both of these limitations by devising Sample Synthesis. When our compiler runs into redundant samples, the proposed synthesis method analyzes the candidate samples to determine the most probable (most frequent, i.e., the mode) valid choice for each knob to come up with a new configuration. This statistical combination of the most frequent knob settings yields configurations that combine the strengths of different knobs and converge to a better overall solution. In spirit, the recombination (crossover) operator in genetic algorithms also tries to combine the best features of solutions with high fitness values. Algorithm 1 presents the integration of our Adaptive Sampling and Sample Synthesis. Architecture exploration for the adaptive exploration.
We use Proximal Policy Optimization (PPO), a policy gradient method that has been shown to adapt to various problems and to have good sample complexity, as our reinforcement learning algorithm. Since reinforcement learning could incur computational overhead that prolongs the optimization time, we optimize the actor-critic networks through architecture exploration to find a good tradeoff between the size of these networks (which determines the computational overhead) and the optimization performance. Design choices for the adaptive sampling. We use K-means clustering to determine the centroids of the configurations, because K-means has been shown to be effective in practice and it only requires K, rather than an error or radius parameter as in other algorithms, which are much more challenging to tune. For example, DBSCAN (Ester et al.) or mean-shift clustering are very sensitive to such hyperparameters. On the other hand, K can be framed as a lever for trading off performance and the speed of the optimization, which abstracts away the aforementioned challenges and enables the swift threshold-based meta-search for detecting the knee (the optimal number of clusters). Hyperparameter tuning. Hyperparameter tuning is a very important task in machine-learning-based tools and models. As such, we present the hyperparameters used for the evaluation in Table 7 (in the appendix), whose tuning took several days. For the hyperparameters in Table 8 (in the appendix), we used the same set of values that were used in the AutoTVM paper in order to conduct a fair comparison with CHAMELEON. Additionally, for the parameters used in the Adaptive Exploration module, which is not present in AutoTVM, we tuned the hyperparameters using the set of layers presented in Table 5 (in the appendix). We emphasize, however, that the hyperparameters were tuned offline before the deployment of CHAMELEON, and they are not changed during the use of the framework or the experimentation. Thus the tuning overhead is not part of the compilation: the Adaptive Exploration module is tuned once before releasing the compiler to deployment practitioners. We integrate CHAMELEON into TVM to perform component evaluation and to compare with AutoTVM. We first evaluate the components of CHAMELEON in Section 4.1 and Section 4.2 on a set of convolution layers sampled from AlexNet, VGG-16, and ResNet-18. Then we provide an end-to-end evaluation of CHAMELEON on both the set of layers and end-to-end deep models in Section 4.3. Due to space limitations, we present only the representative plots in the paper; the complete set of results and the details of the parameters are provided in the appendix. In the previous approach (AutoTVM), the authors built a cost model to estimate fitness instead of performing costly measurements on real hardware, and then used simulated annealing to find potentially optimal configurations. Figure 7(a) compares the number of search steps taken per iteration to reach or converge to the solution with simulated annealing and with reinforcement learning, respectively. Overall, the observation is that CHAMELEON's reinforcement learning agent requires 2.88× fewer search steps than simulated annealing to find a good solution.
This result comes from the reinforcement learning agent's ability to quickly learn about the correlation between different dimensions and to build its search on top of previous iterations, reusing that information rather than starting from scratch and relying on the stochastic guarantees of the simulated annealing process. Figure 7(b) summarizes the effect of applying CHAMELEON's adaptive sampling module to simulated-annealing-based and reinforcement-learning-based search. First, the results show that using adaptive sampling helps the framework make fewer hardware measurements regardless of the search algorithm used. The adaptive sampling algorithm reduces the number of measurements by 1.98× when used with simulated annealing and by 2.33× with reinforcement learning. One observation is that the adaptive sampling is more effective with reinforcement learning search. This comes from the reinforcement learning agent's capacity to better localize the search to meaningful samples (exploitation) while still aiming to find good solutions by making a diverse search (exploration). The diversity exploration of AutoTVM aims to spread out the candidate configurations, with a regularizing effect that fosters uniform sampling. In contrast, our adaptive sampling uses a clustering algorithm to perform more measurements on the regions with a higher likelihood of achieving better output performance, leading to non-uniform sampling. While AutoTVM states that diversity-aware selection had no meaningful impact on most of the evaluated workloads, our adaptive sampling brings significant improvement, as depicted in Figure 8. As shown, adaptive sampling brings an average of 13.5% and 19.0% improvement with simulated annealing and reinforcement learning, respectively. CHAMELEON integrates two components into the workflow: RL-based Adaptive Exploration and Adaptive Sampling. This section compares the performance of CHAMELEON with AutoTVM on both a set of layers and end-to-end deep neural networks. Layer evaluation. Figure 9 shows the trend of output code performance of ResNet-18's 11th layer over the number of hardware measurements during optimization. The figure illustrates that our Adaptive Exploration finds better configurations than simulated annealing, which results in better output code performance, and that the Adaptive Sampling reduces the number of hardware measurements significantly during optimization. Also, CHAMELEON's Adaptive Exploration and Adaptive Sampling working in tandem emit better code with shorter optimization time than the alternatives. As such, Figure 10(a) compares the optimization time and the performance of the output code of CHAMELEON and AutoTVM to confirm this observation. CHAMELEON achieved 1.17× better performance with 4.82× shorter optimization time compared to AutoTVM. Overall, the results suggest that our Adaptive Exploration effectively maneuvers the design space, and Adaptive Sampling reduces hardware measurements and the overall optimization time while even improving output performance. Table 3: End-to-end evaluation of the output performance for deep networks. End-to-end evaluation. Up until now, we have focused on evaluation with a subset of layers. Now we turn our discussion to the applicability of CHAMELEON to the optimization of end-to-end deep neural networks. Figure 10(b) shows that CHAMELEON spends 3.59×, 5.73×, and 4.28× less time than AutoTVM to optimize AlexNet, VGG-16, and ResNet-18, respectively. On average, our work shows a 4.45× optimization time speedup while achieving up to 6.4% improvement in terms of the performance of the output code.
Inference times in Figure 10(b) illustrate the speedup for the optimized code. Raw numbers are available in Table 2 and Table 3. All in all, such improvements result from the efficient Adaptive Exploration and the reduced number of hardware measurements from Adaptive Sampling. CHAMELEON uniquely offers a solution that exclusively enables (i) reinforcement learning and (ii) sampling in the context of (iii) optimizing compilers for neural networks. As such, we discuss the related work from each of these three independent research directions. Optimizing compilers. TensorComprehensions and TVM use genetic algorithms and simulated annealing to choose parameters of polyhedral optimization for neural networks. In a more general context, some computing libraries make use of black-box optimization, and profiling-based compilation passes utilize runtime information to generate optimized code. Later, AutoTVM incorporated learning with boosted trees within the cost model for TVM to reduce the number of real hardware measurements. While CHAMELEON is inspired by and builds on these prior works, unlike them, it is based on reinforcement learning for Adaptive Exploration and further reduces the number of measurements by focusing them through an Adaptive Sampling that leverages clustering. Reinforcement learning for hyper-parameter optimization. There is a growing body of studies on using reinforcement learning to perform various optimizations for a variety of objectives, including hyper-parameter optimization for neural networks. For instance, DeepArchitect and NAS use reinforcement learning to automate the process of designing deep neural network models and their associated parameters. HAQ and ReLeQ use reinforcement learning to choose levels of quantization for the layers of a given deep neural network. AMC formulates neural network compression as an RL problem. Our work exclusively explores a different problem, that of optimizing compilers, using reinforcement learning. Sampling algorithms for learning. Active learning is a broad field that uses a measure of the change in the model to decide which training data elements should be used to update the model. Passive learning is an alternative view that, independent of the model, analyzes the distribution of the training data set and selects a subset. The Adaptive Sampling algorithm of CHAMELEON shares similarities with passive learning but differs in its context: the sampling is designed to reduce the number of samples (configurations) requiring hardware measurement during the exploration of the design space, whilst performing an optimization, to accelerate the process. We present CHAMELEON to allow optimizing compilers to adapt to unseen design spaces of code schedules and thereby reduce the optimization time. This paper is also an initial effort to bring reinforcement learning to the realm of optimizing compilers for neural networks, and we also develop an Adaptive Sampling with domain-knowledge-inspired Sample Synthesis to not only reduce the number of samples required to navigate the design space but also augment their quality in terms of fitness. Experimentation with real-world deep models shows that CHAMELEON not only reduces the compilation time significantly, but also improves the quality of the code. This encouraging result suggests significant potential for various learning techniques applied to optimizing deep learning models.
Reinforcement learning and Adaptive Sampling for Optimized Compilation of Deep Neural Networks.
1,360
scitldr
In this paper, we propose a differentiable adversarial grammar model for future prediction. The objective is to model a formal grammar in terms of differentiable functions and latent representations, so that their learning is possible through standard backpropagation. Learning a formal grammar represented with latent terminals, non-terminals, and production rules allows capturing sequential structures with multiple possibilities from data. The adversarial grammar is designed so that it can learn stochastic production rules from the data distribution. Being able to select multiple production rules leads to different predicted outcomes, thus efficiently modeling many plausible futures. We confirm the benefit of the adversarial grammar on two diverse tasks: future 3D human pose prediction and future activity prediction. For all settings, the proposed adversarial grammar outperforms the state-of-the-art approaches, being able to predict much more accurately and much further into the future than prior work. Future prediction in videos is one of the most challenging visual tasks. Being able to accurately predict future activities and human or object pose has many important implications, most notably for robot action planning. Prediction is particularly hard because it is not a deterministic process: multiple potential 'futures' are possible, and, in the case of human pose, predicting real-valued output vectors is additionally challenging. Given these challenges, we address the long-standing questions: how should the sequential dependencies in the data be modeled, and how can multiple possible long-term future outcomes be predicted at any given time? To address these challenges, we propose an adversarial grammar model for future prediction. The model is a differentiable form of a regular grammar trained with adversarial sampling of various possible futures, which is able to output real-valued predictions (e.g., 3D human pose) or semantic predictions (e.g., activity classes). Learning sequences of actions or other sequential processes with the imposed rules of a grammar is valuable, as it imposes temporal structural dependencies and captures relationships between states (e.g., activities). At the same time, the use of adversarial sampling when learning the grammar rules is essential, as this adversarial process is able to produce multiple candidate future sequences that follow a distribution similar to sequences seen in the data. More importantly, a traditional grammar would need to enumerate all possible rules (growing exponentially in time) to learn multiple futures; this adversarial stochastic sampling process allows for much more memory-efficient learning without enumeration. Additionally, unlike other techniques for future generation (e.g., autoregressive RNNs), we show the adversarial grammar is able to learn long sequences, can handle multi-label settings, and can predict much further into the future. The proposed approach is driven entirely by the structure imposed by learning grammar rules and their relationships to the terminal symbols of the data, and by the adversarial losses which help model the data distribution over long sequences. To our knowledge this is the first approach to adversarial grammar learning, and the first able to successfully produce multiple feasible long-term future predictions for high dimensional outputs. The approach outperforms previous state-of-the-art methods, including RNN/LSTM and memory-based methods.
We evaluate future prediction on high dimensional data and are able to predict much further into the future than prior work. The proposed approach is also general: it is applied to diverse future prediction tasks, 3D human pose prediction and multi-class and multi-label activity forecasting, and on three challenging datasets: Charades, MultiTHUMOS, and Human3.6M. Grammar models for visual data. The notion of grammars in computational science was introduced for the description of language, and has found widespread use in natural language understanding. In the domain of visual data, grammars have been used to parse images of scenes; one position paper presents a comprehensive grammar-based language to describe images and proposes MCMC-based inference. More recently, a recursive neural net based approach was applied to parse scenes; however, this work has no explicit representation of a grammar. In the context of temporal visual data, grammars have been applied to activity recognition and parsing, but not to prediction or generation. Prior work used a traditional stochastic grammar to predict activities, but only within 3 seconds. Generative models for sequences. Generative Adversarial Networks (GANs) are a very powerful mechanism for data generation through an underlying learning of the data distribution with adversarial sampling. GANs have been very popular for image generation tasks, and prior work on using GANs for improved sequence generation has also been successful. Others proposed a stochastic RNN which enables the generation of different sequences from a given state. Differentiable rule learning. Previous approaches that address differentiable rule or grammar learning are most aligned with our work; however, they can only handle rules with very small branching factors and have not been demonstrated in high dimensional output spaces. Future pose prediction. Previous approaches for human pose prediction are relatively scarce. The dominant theme is the use of recurrent models (RNNs or GRUs/LSTMs). Some use attention models specifically to target long-term predictions, up to 1 second in the future; others propose a structural RNN which learns the spatio-temporal relationships of pose joints. The above models, contrary to ours, cannot deal with multi-modality and ambiguity in the predictions and do not produce multiple futures. They also operate only within short-term horizons, and the produced sequences often 'interpolate' actual data examples. Video prediction. Without providing an exhaustive survey of video prediction, we note that our approach is related to the video prediction literature, where adversarial formulations are also common. Overview and main insights. Our approach is driven by learning the production rules of a grammar, with which we can learn the transitions between continuous events in time, for example 3D human pose or activity. While an activity or action may be continuous, it can also spawn into many possible futures at different points, similarly to switching between rules in a grammar. For example, an activity corresponding to 'walking' can turn into 'running', continue the 'walking' behaviour, or change to 'stopping'. These production rules are learned in a differentiable fashion with an adversarial mechanism which allows learning multiple candidate future sequences. This enables robust future prediction which, more importantly, can easily generate multiple realistic futures.
A formal regular grammar is represented as the tuple (N, Σ, P, N_0), where N is a finite non-empty set of non-terminals, Σ is a finite set of terminals (or output symbols), P is a set of production rules, and N_0 is the starting non-terminal symbol, N_0 ∈ N. Production rules in a regular grammar are of the form A → aB, A → b, and A → ε, where A, B ∈ N, a, b ∈ Σ, and ε is the empty string. Figure 1: Overview of the adversarial grammar model. The initial non-terminal is produced by an encoder based on some observations. The grammar then generates multiple possible sequences from the non-terminal. The generated and real sequences are used to train the discriminator. Applying multiple production rules to the starting non-terminal generates a sequence of terminals. Note that we only implement rules of the form A → aB in our grammar, allowing it to generate sequences indefinitely. Our objective is to learn such non-terminals (e.g., A) and terminals (e.g., a) as latent representations directly from training data, and to model the production rules P as a (differentiable) generative neural network function. That is, at the heart of the proposed method is learning a nonlinear function G: N → {(N, Σ)} that maps a non-terminal to a set of (non-terminal, terminal) pairs. We denote each element (i.e., each production rule) derived from the input non-terminal as {(A_i, t_i)}. Note that this mapping to multiple possible elements enables modeling of multiple, different sequences, and is not done by existing models (e.g., RNNs). For any latent non-terminal A ∈ N, the grammar production rules are generated by applying the function G to A as G(A) = {(A_i, t_i)} (here G is a neural network with learnable parameters). Each pair corresponds to a particular production rule for this non-terminal; more specifically, the pair (A_i, t_i) corresponds to the rule A → t_i A_i. This function is applied recursively to obtain a number of output sequences, similar to prior recurrent methods (e.g., RNNs and LSTMs). However, in RNNs, the learned state/memory is required to abstract multiple potential possibilities into a single representation, as the mapping from the state/memory representation to the next representation is deterministic. As a result, when learning from sequential data with multiple possibilities, standard RNNs tend to learn states as a mixture of multiple sequences instead of learning more discriminative states. By learning explicit production rules, our states lead to more salient and distinct predictions, which can be exploited for learning long-term, complex output tasks with multiple possibilities, as shown later in the paper. For example, suppose A is the non-terminal that encodes the activity 'walking'. An output of the rule A → walking A will be able to generate a sequence of continued 'walking' behavior. Additional rules, e.g., A → stopping V, A → running U, can be learned, allowing the activity to switch to 'stopping' or 'running' (with the non-terminals V, U respectively learning to generate their corresponding potential futures). Clearly, for high dimensional outputs, such as 3D human pose, the number and dimensionality of the non-terminals required will be larger. We also note that the non-terminals act as a form of memory, capturing the current state with the Markov property. To accomplish the above task, G has a special structure. The model contains a number of non-terminals and terminals which are learned: |N| non-terminals of dimensionality D, and |Σ| terminals of dimensionality C (the latter naturally correspond to the number and dimensionality of the desired outputs).
G takes as input A ∈ N and, using several nonlinear transformations (e.g., fully connected layers), maps A to a vector r corresponding to a set of rules: r = f_R(A). Here, r is a vector of size |P| whose elements specify the probability of each rule given the input non-terminal. We learn |P| rules which are shared globally, but only a (learned) subset is selected for each non-terminal, as the other rule probabilities become zero. This is conceptually similar to using memory with recurrent neural network methods, but the main difference is that the rule vectors are used to build grammar-like rule structures, which are more advantageous for explicitly modeling temporal dependencies. In order to generate multiple outputs, the candidate rules r are followed by the Gumbel-Softmax function, which allows for stochastic selection of a rule. This function is differentiable and samples a single rule from the candidate rules based on the learned rule probabilities. These probabilities model the likelihood of each generated sequence. Two nonlinear functions f_T and f_N are additionally learned such that, given a rule r, they output the resulting terminal and non-terminal: B = f_N(r), t = f_T(r). These functions are both a sequence of fully-connected layers followed by a non-linear activation function (e.g., softmax or sigmoid, depending on the task). The schematic of G is visualized in Figure 1; more details on the functions are provided in the later sections. The non-terminals and terminals are modeled as sets of high dimensional vectors with pre-specified size and are learned jointly with the rules (all are tunable parameters, and naturally more complex datasets require larger capacity). Difference to stochastic RNNs. Standard recurrent models have a deterministic state given some input, while the grammar is able to generate multiple potential next non-terminals (i.e., states). Stochastic RNNs address this by allowing the next state to be stochastically generated, but this is difficult to control, as the next state now depends on a random value. In the grammar model, the next non-terminal is sampled randomly, but from a set of deterministic candidates. By maintaining a set of deterministic candidates, the next state can be selected randomly or by some other method, giving more control over the generated sequences. Learning the starting non-terminal. Given an initial input data sequence (e.g., a short video or pose sequence), we learn to generate its corresponding starting non-terminal (i.e., root node). This is used as input to G to generate a sequence of terminal symbols starting from the given non-terminal. Concretely, given the initial input sequence X, a function s is learned which gives the predicted starting non-terminal N_0 = s(X). Then the function G is applied recursively to obtain the possible sequences, where j is an index in the sequence and i indexes one of the possible rules. The function G generates a set of (non-terminal, terminal) pairs and is applied recursively to the resulting non-terminals, producing new rules and the next set of (non-terminal, terminal) pairs. Note that in most cases each rule generates a different non-terminal, so sampling G many times will lead to a variety of generated sequences. As a result, an exponential number of sequences would need to be generated during training to cover the possible sequences. For example, consider a branching factor of k rules per non-terminal with a sequence of length L. This results in k^L terminals and non-terminals (e.g., for k = 2 we have ∼1000 and for k = 3 ∼60,000 when L = 10). Thus, enumerating all possible sequences is computationally prohibitive beyond k = 2. Furthermore, this restricts the tasks that can be addressed to ones with lower dimensional outputs because of memory limits. With k = 1 (i.e., no branching), this reduces to a standard RNN during training, unable to generate multiple possible future sequences (i.e., we observed that the rules for each non-terminal become the same). We address this problem by using stochastic adversarial rule sampling. Given the non-terminals, which effectively contain a number of potential 'futures', we learn an adversarial-based sampling, similar to GAN approaches, which learns to sample the most likely rules for the given input. The use of a discriminator network allows the model to generate realistic sequences that may not match the ground truth without being penalized. We use the function G, which is the function modeling the learned grammar described above, as the generator function and build an additional discriminator function D. Following standard GAN training, the discriminator function returns a binary prediction which discriminates examples from the data distribution vs. generated ones. Note that the adversarial process is designed to ultimately generate terminals, i.e., the final output sequence for the model. D is defined as p = D(t, n). More specifically, D is tasked with the prediction of p ∈ {True, False} based on whether the input sequence of terminals, t = t_0 t_1 t_2 ... t_L, is from the data or not (L is the length of the sequence). Note that our discriminator is also conditioned on the non-terminal sequence (n = n_0 n_1 n_2 ... n_L), and thus the distribution of non-terminals is learned implicitly as well. The discriminator function D is implemented as follows: given an input non-terminal and terminal sequence, we apply several 1D convolutional layers to the terminals and non-terminals, then concatenate their representations, followed by a fully-connected layer to produce the binary prediction. (Note that we also tried a GRU/LSTM instead of 1D convolutions, and it did not make a difference.) The discriminator and generator (grammar) functions are trained to work jointly, as is standard in GAN training; this constitutes the adversarial grammar. The optimization objective is defined as

min_G max_D E_{x∼p_data(x)}[log D(x)] + E_z[log(1 − D(G(z)))],

where p_data(x) is the real data distribution (i.e., sequences of actions or human pose) and G(z) is the sequence generated from an initial state based on a sequence of frames X. The sequences generated by G could be compared to the ground truth to compute a loss during training (e.g., maximum likelihood estimation); however, doing so requires enumerating many possibilities in order to learn multiple, distinct possible sequences. Without enumeration, the model converges to a mixture representing all possible sequences. By using the adversarial training of G, the model is able to generate sequences that match the distribution observed in the dataset. This allows for computationally feasible learning of longer, higher-dimensional sequences. Architectures and implementation details. The functions G, f_N, f_T, and f_R mentioned above are networks using several fully-connected layers, which depend on the task and dataset (specific details are provided in the supplemental material).
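The following is a minimal sketch of the rule-selection path of G described above (f_R, Gumbel-Softmax sampling, f_N, f_T), unrolled to generate one candidate sequence. The layer sizes, single-layer weight matrices, and activation choices are illustrative assumptions rather than the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(dim_in, dim_out):
    """A single fully-connected layer (weights, bias) standing in for the learned MLPs."""
    return rng.normal(scale=0.1, size=(dim_in, dim_out)), np.zeros(dim_out)

D, C, P = 64, 10, 4                      # non-terminal dim, terminal dim, rules per non-terminal
W_r, b_r = dense(D, P)                   # f_R: non-terminal -> rule scores
W_n, b_n = dense(P, D)                   # f_N: selected rule -> next non-terminal
W_t, b_t = dense(P, C)                   # f_T: selected rule -> terminal (output)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def gumbel_softmax(logits, temperature=1.0):
    """Differentiable, stochastic (near one-hot) selection of a rule."""
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    return softmax((logits + g) / temperature)

def G(A):
    """One expansion step: select a rule for non-terminal A, emit (next non-terminal, terminal)."""
    rule_logits = A @ W_r + b_r                      # f_R(A): scores over the P candidate rules
    r = gumbel_softmax(rule_logits)                  # stochastic rule selection
    B = np.tanh(r @ W_n + b_n)                       # f_N(r): next non-terminal
    t = softmax(r @ W_t + b_t)                       # f_T(r): terminal, e.g. class probabilities
    return B, t

# Unroll the grammar from a starting non-terminal N0 = s(X) to generate one candidate sequence.
A = rng.normal(size=D)
sequence = []
for _ in range(5):
    A, t = G(A)
    sequence.append(int(t.argmax()))
```

Because the rule selection is stochastic, repeating the unrolling from the same starting non-terminal can yield different sequences, which is what the adversarial training exploits.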
For pose, the function s is implemented as a two-layer GRU module followed by a 1×1 convolutional layer with D_N outputs to produce the starting non-terminal. For activity prediction, s is implemented as two sequential temporal convolutional layers which produce the starting non-terminal. The model is trained for 5000 iterations using gradient descent with a momentum of 0.9 and an initial learning rate of 0.1; we follow a cosine learning rate decay schedule. Our models were trained on a single V100 GPU. We conduct experiments on two sets of problems for future prediction, future activity prediction and future 3D human pose prediction, on three datasets. Our experiments demonstrate strong performance of the proposed approach over the state-of-the-art, as well as its ability to produce multiple future outcomes, to handle multi-label datasets, and to predict further into the future than prior work. We first test the method for video activity anticipation, where the goal is to predict future activities at various time horizons using an initial video sequence as input. We predict future activities up to 45 seconds in the future on well-established video understanding datasets: MultiTHUMOS for multi-class prediction and Charades, which is a multi-class and multi-label prediction task. We note that we predict much further into the future than prior approaches, which reported results within a second or several seconds. Evaluation metric. To evaluate the approaches, we use a standard evaluation metric: we predict the activities occurring T seconds in the future and compute the mean average precision (mAP) between the predictions and the ground truth. As the grammar model is able to generate multiple, different future sequences, we also report the maximum mAP over 10 different future predictions. We compare predictions at 1, 2, 5, 10, 20, 30, and 45 seconds into the future. As little work has explored long-term future activity prediction, we compare against four different baseline methods: (i) repeating the activity prediction of the last seen frame, (ii) using a fully connected layer to predict the next second (applied autoregressively), (iii) using a fully-connected layer to directly predict activities at various future times, and (iv) an LSTM applied autoregressively to future activity predictions. MultiTHUMOS dataset. The MultiTHUMOS dataset is a popular video understanding benchmark with 400 videos spanning about 30 hours of video and 65 action classes. Table 1 shows activity prediction accuracy for the MultiTHUMOS dataset. In the table, we denote our approach as 'Adversarial Grammar - max', but we also report our approach when limited to generating a single outcome ('Adversarial Grammar'), to be consistent with previous methods which are not able to generate more than one outcome. We also compare to the grammar without adversarial learning. As seen, our approach outperforms alternative methods including LSTMs. We observe that the gap to other approaches widens further into the future: 3.9 mean accuracy for the LSTM vs. 11.2 for ours at 45 seconds in the future, as these autoregressive approaches become noisy. Due to the structure of the grammar model, we are able to generate better long-term predictions. We also find that predicting multiple futures and taking the max improves performance, confirming that the grammar model is generating different sequences, some of which more closely match the ground truth. Charades dataset.
Charades is a challenging video dataset containing longer-duration activities recorded in home environments. Charades is a multi-label dataset in which multiple activities often co-occur; we use it to demonstrate the ability to handle such complex data. It consists of 9858 videos (7990 training, 1868 test) over 157 activity classes. Table 2 shows the future activity prediction results for Charades. Similarly, we observe that the adversarial grammar model provides more accurate future prediction than previous work, though it is slightly outperformed by the grammar-only variant. We note that Charades is more challenging than other datasets on both recognition and prediction tasks, and that the grammar-only variant, while performing well here, is not feasible for high dimensional tasks. Figure 2 shows a true sequence and several other sequences generated by the adversarial grammar. As Charades contains many different possible sequences, generating multiple futures is beneficial. We further evaluate the approach on forecasting 3D human pose, a high dimensional structured-output problem. This is a challenging task, but it is of high importance, e.g., for motion planning in robotics. It also showcases the use of the adversarial grammar, as using the standard grammar is not feasible here. Human 3.6M dataset. We conduct experiments on a well-established future pose prediction benchmark, the Human 3.6M dataset, which has 3.6 million 3D human poses of 15 activities. The goal is to predict the future 3D locations of 32 joints in the human body. We use quaternions to represent each joint location, allowing for a more continuous joint representation space. We also predict differences, rather than absolute positions, which we found leads to more stable learning. Previous work demonstrated prediction up to a second on this dataset; this work can generate future sequences for longer horizons, 4 seconds into the future. We compare against the state-of-the-art methods on the Human 3.6M benchmark using the Mean Angle Error (MAE) metric. Table 3 shows the average MAE over all activities compared to the state-of-the-art methods, and Table 4 shows results on several activities, consistent with the protocol in prior work. As seen from the tables, our work outperforms all prior methods. Furthermore, we are able to generate at larger time horizons of four seconds into the future. In Fig. 3, we show some predicted future poses for several different activities, confirming that the results reflect the characteristics of the actual behaviors. In Fig. 4, we show the grammar's ability to generate different sequences from a given starting state. Here, given a starting state, we select different rules, which lead to different sequences corresponding to walking, eating, or sitting. We propose a novel differentiable adversarial grammar and apply it to several diverse future prediction and generation tasks. Because of the structure we impose for learning grammar-like rules for sequences and the adversarial fashion of the learning, we are able to generate multiple sequences that follow the distribution seen in the data. Our work outperforms prior approaches on all tasks and is able to generate sequences much further into the future. We plan to release the code. Activity prediction. For activity prediction, the number of non-terminals (|N|) was set to 64, and the number of terminals (|Σ|) was set to the number of classes in the dataset (e.g., 65 in MultiTHUMOS and 157 in Charades). We used 4 rules for each non-terminal (a total of 256 rules).
G, f_N, and f_T each used one fully connected layer with sizes matching the desired inputs/outputs. s is implemented as two sequential temporal convolutional layers with 512 channels. 3D pose estimation. For 3D pose, the number of non-terminals (|N|) was set to 1024 and the number of terminals (|Σ|) was set to 1024, where each terminal has a size of 128 (32 joints in a 4D quaternion representation). The number of rules was set to 2 per non-terminal (a total of 2048 rules). G was composed of 2 fully connected layers; f_N and f_T each used three fully connected layers with sizes matching the desired inputs/outputs. s was implemented as a 2-layer GRU using a representation size of 1024. A.2 SUPPLEMENTAL RESULTS. Table 5 provides results of our approach for future 3D human pose prediction for all activities in the Human3.6M dataset. Figure 5 shows more examples of future predicted 3D pose at different timesteps.
We design a grammar that is learned in an adversarial setting and apply it to future prediction in video.
1,361
scitldr
We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods. Pooling is a fundamental operation in deep learning architectures BID23. The role of pooling is to merge a collection of related features into a single, possibly vector-valued, summary feature. A prototypical example is in convolutional neural networks (CNNs) BID22, where linear activations of features in neighborhoods of image locations are pooled together to construct more abstract features. A more modern example is in neural networks for graphs, where each layer pools together embeddings of neighbors of a vertex to form a new embedding for that vertex, see for instance, BID20 BID0 BID15; BID28; BID26 BID25 van den; BID12 BID13; ).A common requirement of a pooling operator is invariance to the ordering of the input features. In CNNs for images, pooling allows invariance to translations and rotations, while for graphs, it allows invariance to graph isomorphisms. Existing pooling operators are mostly limited to predefined heuristics such as max-pool, min-pool, sum, or average. Another desirable characteristic of pooling layers is the ability to take variable-size inputs. This is less important in images, where neighborhoods are usually fixed a priori. However in applications involving graphs, the number of neighbors of different vertices can vary widely. Our goal is to design flexible and learnable pooling operators satisfying these two desiderata. Abstractly, we will view pooling as a permutation-invariant (or symmetric) function acting on finite but arbitrary length sequences h. All elements h i of the sequences are features lying in some space H (which itself could be a high-dimensional Euclidean space R d or some subset thereof). The sequences h are themselves elements of the union of products of the H-space: h ∈ ∞ j=0 H j ≡ H ∪. Throughout the paper, we will use Π n to represent the set of all permutations of the integers 1 to n, where n will often be clear from the context. In addition, h π, π ∈ Π |h|, will represent a reordering of the elements of a sequence h according to π, where |h| is the length of the sequence h. We will use the double bar superscript f to indicate that a function is permutation-invariant, returning the same value no matter the order of its arguments: f (h) = f (h π), ∀π ∈ Π |h|. We will use the arrow superscript f to indicate general functions on sequences h which may or may not be permutationinvariant 1. 
Functions f without any markers are'simple' functions, acting on elements in H, scalars or any other argument that is not a sequence of elements in H.Our goal in this paper is to model and learn permutation-sensitive functions f that can be used to construct flexible and learnable permutation-invariant neural networks. A recent step in this direction is work on DeepSets by , who argued for learning permutation-invariant functions through the following composition: DISPLAYFORM0 f (|h|, h; θ (f) ) = |h| j=1 f (h j ; θ (f) ) and h ≡ h(x; θ (h) ).Here, (a) x ∈ X is one observation in the training data (X itself may contain variable-length sequences), h ∈ H is the embedding (output) of the data given by the lower layers h: X × R a → H ∪, a > 0 with parameters θ (h) ∈ R a; (b) f: H × R b → F is a middle-layer embedding function with parameters θ (f) ∈ R b, b > 0, and F is the embedding space of f; and (c) ρ: F × R c → Y is a neural network with parameters θ (ρ) ∈ R c, c > 0, that maps to the final output space Y. Typically H and F are high-dimensional real-valued spaces; Y is often R d in d-dimensional regression problems or the simplex in classification problems. Effectively, the neural network f learns an embedding for each element in H, and given a sequence h, its component embeddings are added together before a second neural network transformation ρ is applied. Note that the function h may be the identity mapping h(x; ·) = x that makes f act directly on the input data. argue that if ρ is a universal function approximator, the above architecture is capable of approximating any symmetric function on h-sequences, which justifies the widespread use of average (sum) pooling to make neural networks permutation-invariant in BID12, BID15, BID20, BID0, among other works. We note that focus on functions of sets but the work was extended to functions of multisets by and that Janossy pooling can be used to represent multiset functions. The embedding h is permuted in all |h|! possible ways, and for each permutation h π, f (|h|, h π ; θ (f) ) is computed. These are summed and passed to a second function ρ(·; θ (ρ) ) which gives the final permutation-invariant output y(x; θ (ρ), θ (f), θ (h) ); the gray rectangle represents Janossy pooling. We discuss how this can be made computationally tractable. In practice, there is a gap between flexibility and learnability. While the architecture of equations 1 and 2 is a universal approximator to permutationinvariant functions, it does not easily encode structural knowledge about y. Consider trying to learn the permutation-invariant function y(x) = max i,j≤|x| |x i − x j |. With higherorder interactions between the elements of h, the functions f of equation 2 cannot capture any useful intermediate representations towards the final output, with the burden shifted entirely to the function ρ. Learning ρ means learning to undo mixing performed by the summation layer f (|h|, h; θ (f) ) = |h| j=1 f (h j ; θ (f) ). As we show in our experiments, in many applications this is too much to ask of ρ. Contributions. We investigate a learnable permutation-invariant pooling layer for variable-size inputs inspired by the Janossy density framework, widely used in the theory of point processes (, Chapter 7). This approach, which we call Janossy pooling, directly allows the user to model what higher-order dependencies in h are relevant in the pooling. 
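A minimal PyTorch sketch of the DeepSets composition in equations 1 and 2 above; the layer sizes and the choice of the identity mapping for h are illustrative, not the exact architecture of Zaheer et al.:

import torch
import torch.nn as nn

class DeepSets(nn.Module):
    """y(x) = rho( sum_j f(x_j) ), with h taken as the identity mapping."""
    def __init__(self, in_dim, embed_dim=100, out_dim=1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.Tanh())
        self.rho = nn.Linear(embed_dim, out_dim)

    def forward(self, x):
        # x: (batch, set_size, in_dim); the sum over set elements gives permutation invariance
        return self.rho(self.f(x).sum(dim=1))

y = DeepSets(in_dim=4)(torch.randn(2, 7, 4))   # works for any set_size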
FIG0 summarizes a neural network with a single Janossy pooling layer f (detailed in Definition 2.1 below): given an input embedding h, we apply a learnable (permutation-sensitive) function f to every permutation h π of the input sequence h. These outputs are added together, and fed to the second function ρ. Examples of function f include feedforward and recurrent neural networks (RNNs). We call the operation used to construct f from f the Janossy pooling. Definition 2.1 gives a more detailed description. We will detail three broad strategies for making this computation tractable and discuss how existing methods can be seen as tractability strategies under the Janossy pooling framework. Thus, we propose a framework and tractability strategies that unify and extend existing methods in the literature. We contribute the following analysis: (a) We show DeepSets is a special case of Janossy pooling where the function f depends only on the first element of the sequence h π. In the most general form of Janossy pooling (as described above), f depends on its entire input sequence h π. This naturally raises the possibility of intermediate choices of f that allow practitioners to trade between flexibility and tractability. We will show that functions f that depend on their first k arguments of h π allow the Janossy pooling layer to capture up to k-ary dependencies in h. (b) We show Janossy pooling can be used to learn permutation-invariant neural networks y(x) by sampling a random permutation of h during training, and then modeling this permuted sequence using a sequence model such as a recurrent neural network (LSTMs BID17, GRUs BID6) or a vector model such as a feedforward network. We call this permutation-sampling learning algorithm π-SGD (π-Stochastic Gradient Descent). Our analysis explains why this seemingly unsound procedure is theoretically justified, which sheds light on the recent puzzling success of permutation sampling and LSTMs in relational models BID29 BID15. We show that this property relates to randomized model ensemble techniques. (c) , the authors describe a connection between DeepSets and infinite de Finetti exchangeabilty. We provide a probabilistic connection between Janossy pooling and finite de Finetti exchangeabilty BID11. We first formalize the Janossy pooling function f. Start with a function f, parameterized by θ (f), which can take any variable-size sequence as input: a sequence of matrices (such as images), a sequence of vectors (such as a sequence of vector embeddings), or a variable-size sequence of features or embeddings representing the neighbors of a node in an attributed graph. In practice, we implement f with a neural network. Formalizing FIG0 from Section 1, we use f to define f: DISPLAYFORM0 where Π |h| is the set of all permutations of the integers 1 to |h|, and h π represents a particular reordering of the elements of sequence h according to π ∈ Π |h|. We refer the operation used to construct f from f as Janossy pooling. ♦ Definition 2.1 provides a conceptually simple approach for constructing permutation-invariant functions from arbitrary and powerful permutation-sensitive functions such as feedforward networks, recurrent neural networks, or convolutional neural networks. If f is a vector-valued function, then so is f, and in practice, one might pass this vector output of f through a second function ρ (e.g. 
a neural network parameterized by θ (ρ) ): DISPLAYFORM1 Equation FORMULA2 can capture any permutation-invariant function g for a flexible enough family of permutation-sensitive functions f (for instance, one could always set f = g). Thus, at least theoretically, ρ in equation 4 provides no additional representational power. In practice, however, ρ can improve learnability by capturing common aspects across all terms in the summation. Furthermore, when we look at approximations to equation 3 or restrictions of f to more tractable families, adding ρ can help recover some of the lost model capacity. Overall then, equation 4 represents one layer of Janossy pooling, forming a constituent part of a bigger neural network. FIG0 summarizes this. Janossy pooling, as defined in equation 3 and 4 is intractable; the computational cost of summing over all permutations (for prediction), and backpropagating gradients (for learning) is likely prohibitive for most problems of interest. Nevertheless, it provides an overarching framework to unify existing methods, and to extend them. In what follows we present strategies for mitigating this, allowing novel and effective trade-offs between learnability and computational cost. A simple way to achieve permutation-invariance without the summation in equation 3 is to order the elements of h according to some canonical ordering based on its values, and then feed the reordered sequence to f. More precisely, one defines a function CANONICAL: H ∪ → H ∪ such that CANONICAL(h) = CANONICAL(h π)∀π ∈ Π |h| and only considers functions f based on the composition f = CANONICAL• f. Note that specifying a permutation-invariant CANONICAL is not equivalent to the original problem since one may define a function of only the data and not of learnable parameters (e.g. sort). This input constraint then allows the use of complex f models, such as RNNs, that can capture arbitrary relationships in the canonical ordering of h without the need to sum over all permutations of the input. Examples of the canonical ordering approach already exist in the literature, for example, BID30 order nodes in a graph according to a user-specified ranking such as betweenness centrality (say from high to low). This approach is useful only if the canonical ordering is relevant to the task at hand. BID30 acknowledges this shortcoming and BID29 demonstrates that an ordering by Personalized PageRank BID32 BID18 achieves a lower classification accuracy than a random ordering. As an idealized example, consider DISPLAYFORM0, with (h i,1, h i,2) ∈ H = R 2, and components h i,1 and h i,2 sampled independently of each other. Choosing to sort h according to h ·,1 when the task at hand depends on sorting according to h ·,2 can lead to poor prediction accuracy. Rather than pre-defining a good canonical order, one can try to learn it from the data. This requires searching over the discrete space of all |h|! permutations of the input vector h. In practice, this discrete optimization relies on heuristics (; BID36 . Alternatively, instead of choosing a single canonical ordering, one can choose multiple orderings, ing in ensemble methods that average across multiple permutations. These can be viewed as more refined (possibly data-driven) approximations to equation 3. Here, we provide a different spectrum of options to trade-off flexibility, complexity, and generalizability in Janossy pooling. 
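For reference, a brute-force sketch of the Janossy pooling of Definition 2.1 and equation 4, feasible only for very short sequences; the GRU playing the role of f and the linear rho are illustrative choices of ours:

import itertools
import torch
import torch.nn as nn

class ExactJanossy(nn.Module):
    def __init__(self, in_dim, hid=32, out_dim=1):
        super().__init__()
        self.f = nn.GRU(in_dim, hid, batch_first=True)   # permutation-sensitive f
        self.rho = nn.Linear(hid, out_dim)

    def forward(self, h):
        # h: (set_size, in_dim); average f over all |h|! reorderings, then apply rho (equation 4)
        outs = []
        for pi in itertools.permutations(range(h.size(0))):
            _, last = self.f(h[list(pi)].unsqueeze(0))
            outs.append(last[-1].squeeze(0))
        return self.rho(torch.stack(outs).mean(dim=0))

y = ExactJanossy(in_dim=4)(torch.randn(5, 4))   # 5! = 120 forward passes through f

And a sketch of the canonical-ordering strategy of Section 2.1, sorting by a fixed, data-only key (here the first feature, an arbitrary choice of ours) before applying a permutation-sensitive LSTM:

class CanonicalLSTM(nn.Module):
    def __init__(self, in_dim, hid=32):
        super().__init__()
        self.f = nn.LSTM(in_dim, hid, batch_first=True)

    def forward(self, h):
        # h: (batch, set_size, in_dim); CANONICAL = sort by h[..., 0], which ignores input order
        order = h[..., 0].argsort(dim=1)
        h_sorted = torch.gather(h, 1, order.unsqueeze(-1).expand_as(h))
        _, (h_T, _) = self.f(h_sorted)
        return h_T[-1]    # identical output for any permutation of the input set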
Now, to simplify the sum over permutations in equation 3, we impose structural constraints where f (h) depends only on the first k elements of its input sequence. This amounts to the assumption that only k-ary dependencies in h are relevant to the task at hand. Definition 2.2: [k-ary Janossy pooling] Fix k ∈ N. For any sequence h, define ↓ k (h) as its projection to a length k sequence; in particular, if |h| ≥ k, we keep the first k elements. Then, a k-ary permutation-invariant Janossy function f is given by DISPLAYFORM0 Note that if some of the embeddings have length |h| < k, then we can zero pad to form the length-k sequence (↓ k (h π), 0,..., 0). Proposition 2.1 shows that if |h| > k, equation 5 only needs to sum over |h|!/(|h| − k)! terms, which can be tractable for small k. Proposition 2.1. The Janossy pooling in equation 5 requires summing over only |h|! (|h|−k)! terms, thus saving computation when k < |h|. In particular, equation 5 can be written as DISPLAYFORM1, where I |h| is the set of all permutations of {1, 2, . . ., |h|} taken k at a time, and h j is the j-th element of h. Note that the value of k balances computational savings and the capacity to model higher-order interactions; it can be selected as a hyperparameter based on a-priori beliefs or through typical hyperparameter tuning strategies. Remark 2.1 (DeepSets is a 1-ary (unary) Janossy pooling). Equation 5 represented with k = 1 and composing with ρ as in equation 4 yields the model ρ DISPLAYFORM2 and thus equations 1 and 2 for an appropriate choice of f.Not surprisingly, the computational savings obtained from k-ary Janossy pooling come at the cost of reduced model flexibility. The next formalizes this. Theorem 2.1. For any k ∈ N, define F k as the set of all permutation-invariant functions that can be represented by Janossy pooling with k-ary dependencies. Then, F k−1 is a proper subset of F k if the space H is not trivial (i.e. if the cardinality of H is greater than 1). Thus, Janossy pooling with k-ary dependencies can express any Janossy pooling function with (k − 1)-ary dependencies, but the converse does not hold. The proof is given in the Supplementary Material. Theorem 2.1 has the following implication: Corollary 2.1. For k > 1, the DeepSets function in equation 1 pushes the modeling of k-ary relationships to ρ. Proof. DeepSets functions can be expressed via Janossy pooling with k = 1. Thus, by Theorem 2.1, f in equation 2 cannot express all functions that can be expressed by higher-order (i.e. k > 1) Janossy pooling operations. Consequently, if the DeepSets function can express any permutationinvariant function, the expressive power must have been pushed to ρ. Another approach to tractable Janossy pooling samples random permutations of the input h during training. Like the canonical ordering approach of Section 2.1, this offers significant computational savings, allowing more complex models for f such as LSTMs and GRUs. However, in contrast with that approach, this is considerably more flexible, avoiding the need to learn a canonical ordering or to make assumptions about the dependencies between the elements of h and the objective function. Rather, it can be viewed as implicitly assuming simpler structure in these functions. The approach of sampling random permutations has been previously used in relational learning tasks BID29 BID15 as a heuristic with an LSTM as f. Both these papers report that permutation sampling outperforms or closely matches other tested neural network models they tried. 
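Returning to the k-ary pooling of Definition 2.2 and equation 5, a sketch in which f sees only ordered k-tuples, so the sum has |h|!/(|h| − k)! terms; the small MLP standing in for f is illustrative:

import itertools
import torch
import torch.nn as nn

class KAryJanossy(nn.Module):
    def __init__(self, in_dim, k=2, hid=30, out_dim=16):
        super().__init__()
        self.k = k
        self.f = nn.Sequential(nn.Linear(k * in_dim, hid), nn.Tanh(),
                               nn.Linear(hid, out_dim))          # f acts on k-tuples only

    def forward(self, h):
        # h: (set_size, in_dim); average over all ordered k-tuples of distinct indices
        outs = [self.f(h[list(idx)].reshape(-1))
                for idx in itertools.permutations(range(h.size(0)), self.k)]
        return torch.stack(outs).mean(dim=0)

pooled = KAryJanossy(in_dim=8, k=2)(torch.randn(5, 8))   # 5*4 = 20 terms; k=1 recovers DeepSets-style pooling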
Therefore, this section not only proposes a tractable approximation for equation 3 but also provides a theoretical framework to understand and extend such approaches. For the sake of simplicity, we analyze the optimization with a single sampled permutation. However, note that increasing the number of sampled permutations in the estimate of f decreases variance, and we recover the exact algorithm when all |h|! permutations are sampled. We assume a supervised learning setting, though our analysis easily extends to unsupervised learning. We are given training data D ≡ {(x, y),..., (x(N), y(N))}, where y(i) ∈ Y is the target output and x(i) its corresponding input. Our original goal was to minimize the empirical loss DISPLAYFORM0, where DISPLAYFORM1 and DISPLAYFORM2 Computing the gradient of equation 6 is intractable for large inputs h (i), as the backpropagation computation graph branches out for every permutation in the sum. To address this computational challenge, we will turn our attention to stochastic optimization. Permutation sampling. Consider replacing the Janossy sum in equation 7 with the estimatê DISPLAYFORM3 where s is a random permutation sampled uniformly, s ∼ Unif(Π |h|). The estimator in equation 8 is unbiased: DISPLAYFORM4. Note however that when f is chained with another nonlinear function ρ and/or nonlinear loss L, the composition is no longer unbiased: DISPLAYFORM5. Nevertheless, we use this estimate to propose the following stochastic approximation algorithm for gradient descent: DISPLAYFORM6 uniformly from the training data D. At step t, consider the stochastic gradient descent update DISPLAYFORM7 where DISPLAYFORM8 is the random gradient, where h DISPLAYFORM9 Effectively, this is a Robbins-Monro stochastic approximation algorithm of gradient descent BID37 BID3 and optimizes the following modified objective: DISPLAYFORM10 Observe that the expectation over permutations is now outside the L and ρ functions. Like equation 6, the loss in equation 10 is also permutation-invariant, though we note that π-SGD, after a finite number of iterations, returns a ρ(f (· · ·, h (i), · · · )) sensitive to the random input permutations of h (i) presented to the algorithm. Further, unless the function f itself is permutation-invariant (f = f), the optima of J are different from those of the original objective function L. Instead, J is an upper bound to L via Jensen's inequality if L is convex and ρ is the identity function (equation 3); minimizing this upper bound forms a tractable surrogate to the original Janossy objective. If the function class used to model f is rich enough to include permutation-invariant functions, then the global minima of J will include those of L. In general, minimizing the upper bound implicitly regularizes f to return functions that are insensitive to permutations of the training data. While a general ρ no longer upper bounds the original objective, the implicit regularization of permutationsensitive functions still applies to the composition f ≡ ρ • f and we show competitive . It is important to observe that the function ρ plays a very different role in our π-SGD formulation compared to k-ary Janossy pooling. Previously ρ was composed with an average over f to model dependencies not captured in the average-and was in some sense separate from f -whereas here it becomes absorbed directly into f = ρ • f.The next , which we state and prove more formally in the Supplementary Material, provides some insight into the convergence properties of our algorithm. 
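Concretely, one π-SGD step (equations 8 and 9) can be sketched as below, with a single permutation sampled per training example per step; the model is any permutation-sensitive composition rho ∘ f, e.g. an LSTM followed by a linear layer, and all names are ours:

import torch

def pi_sgd_step(model, loss_fn, optimizer, batch):
    """batch: list of (h, y) pairs with h of shape (set_size, in_dim)."""
    optimizer.zero_grad()
    loss = 0.0
    for h, y in batch:
        s = torch.randperm(h.size(0))                 # s ~ Unif(Pi_|h|), resampled every step
        loss = loss + loss_fn(model(h[s].unsqueeze(0)), y)
    loss = loss / len(batch)
    loss.backward()   # stochastic gradient of J; unbiased for f alone, approximate once rho/loss are nonlinear
    optimizer.step()
    return loss.item()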
Although the conditions are difficult to check, they are similar to those used to demonstrate the convergence of SGD, which has been empirically demonstrated to yield strong performance in practice. Proposition 2.2. [π-SGD Convergence] The optimization of π-SGD enjoys properties of almost sure convergence to the optimal θ under similar conditions as SGD.Variance reduction. Variance reduction of the output of a sampled permutation f (|h|, DISPLAYFORM11, inducing a nearequivalence between optimizing equation 6 and equation 10. Possible approaches include importance sampling (used by BID5 for 1-ary Janossy), control variates (also used by BID4 also used for 1-ary Janossy), Rao-Blackwellization (, Section 1.7), and an output regularization, which includes a penalty for two distinct sampled permutations s and s, f (|h|, h s ; DISPLAYFORM12, so as to reduce the variance of the sampled Janossy pooling output (used before to improve Dropout masks by Zolna et al. FORMULA0).Inference. The use of π-SGD to optimize the Janossy pooling layer optimizes the objective J, and thus has the following implication on how outputs should be calculated at inference time: Remark 2.2 (Inference). Assume L(y,ŷ) is convex as a function ofŷ (e.g., L is the L 2 norm, crossentropy, or negative log-likelihood losses). At test time we estimate the output y(i) of input x(i) by computing (or estimating) DISPLAYFORM13 Combining π-SGD and Janossy with k-ary Dependencies. In some cases one may consider k-ary Janossy pooling with a moderately large value of k in which case even the summation over |h|! (|h|−k)! terms (see proposition 2.1) becomes expensive. In these cases, one may sample s ∼ Unif Π |h| and computef k = f (|h|, ↓ k (h s); θ (f) ) in lieu of the sum in equation 5. Note that equation 5 defining k-ary Janossy pooling constitutes exact inference of a simplified model whereas π-SGD with k-ary dependencies constitutes approximate inference. We will return to this idea in our section where we note that the GraphSAGE model of BID15 can be cast as a π-SGD approximation of k-ary Janossy pooling. In what follows we empirically evaluate two tractable Janossy pooling approaches, k-ary dependencies (section 2.2) and sampling permutations for stochastic optimization (section 2.3), to learn permutation-invariant functions for tasks of different complexities. One baseline we compare against is DeepSets ; recall that this corresponds to unary (k = 1) Janossy pooling (Remark 2.1). Corollary 2.1 shows that explicitly modeling higher-order dependencies during pooling simplifies the task of the upper layers (ρ) of the neural network, and we evaluate this experimentally by letting k = 1, 2, 3, |h| over different arithmetic tasks. We also evaluate Janossy pooling in graph tasks, where it can be used as a permutation-invariant function to aggregate the features and embeddings of the neighbors of a vertex in the graph. Note that in graph tasks, permutation-invariance is required to ensure that the neural network is invariant to permutations in the adjacency matrix (graph isomorphism). The code used to generate the in this section are available on GitHub 2. We first consider the task of predicting the sum of a sequence of integers and extend it to predicting other permutation-invariant functions: range, unique sum, unique count, and variance. In the sum task we predict the sum of a sequence of 5 integers drawn uniformly at random with replacement from {0, 1, . . 
., 99}; the range task also receives a sequence 5 integers distributed the same way and tries to predict the range (the difference between the maximum and minimum values); the unique sum task receives a sequence of 10 integers, sampled uniformly with replacement from {0, 1, . . ., 9}, and predicts the sum of all unique elements; the unique count task also receives a sequence of repeating elements from {0, 1, . . ., 9}, distributed in the same was as with the unique sum task, and predicts the number of unique elements; the variance task receives a sequence of 10 integers drawn uniformly with replacement from {0, 1, . . ., 99} and tries to predict the variance DISPLAYFORM0 2, wherex denotes the mean of x. Unlike Zaheer et al. FORMULA0, we choose to work with the digits themselves, to allow a more direct assessment of the different Janossy pooling approximations. Note that the summation task of Zaheer et al. FORMULA0 is naturally a unary task that lends itself to the approach of embedding individual digits then adding them together while the other tasks require exploiting high-order relationships within the sequence. Following Zaheer et al. FORMULA0, we report accuracy (0-1 loss) for all tasks with an integer target; we report root mean squared error (RMSE) for the variance task. Here we explore two Janossy pooling tractable approximations: (a) (k-ary dependencies) Janossy (k = 1) (DeepSets), and Janossy k = 2, 3 where f is a feedforward network with a single hidden layer comprised of 30 neurons. As detailed in the Supplementary Material, the models are constructed to have the same number of parameters regardless of k by modifying the embedding (output) dimension of h. In the Supplementary Material, we also show for experiments that relax this constraint. (b) (π-SGD) Full k = |h| Janossy pooling where f is an LSTM or a GRU that returns the shortterm hidden state of the last temporal unit (the h t of Cho et al. with t = |h|). The LSTM has 50 hidden units and the GRU 80, trained with the π-SGD stochastic optimization. The number of hidden units was chosen to be consistent with. At test time, we experiment with approximating (estimating) equation 11 using 1 and 20 sampled permutations. FORMULA0 ] a feedforward network with one hidden layer using tanh activations and 100 units. Choosing a simple and complex form for ρ allows insight into the extent to which ρ supplements the capacity of the model by capturing relationships not exploited during pooling, and serves as an evaluation of the strategy of optimizing J as a tractable approximation of L.Much of our implementation, architectural, and experimental design are based on the DeepSets code 3 of , see the Supplementary Material for details. We tuned the Adam learning rate for each model and report the using the rate yielding top performance on the validation set. TAB0 shows the accuracy (average 0-1 loss) of all tasks except variance, for which we report RMSE in the last column. Performance was similar between the LSTM and GRU models, with the GRU performing slightly better, thus we moved the LSTM to Table 3 in the Supplementary Material for the sake of clarity. We trained each model with 15 random initializations of the weights to quantify variability. TAB2 in the Supplementary Material shows the same measured by mean absolute error. The data consists of 100,000 training examples and 10,000 test examples. 
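For concreteness, the variance-task data can be generated roughly as follows; we assume the population variance is the intended target, which is our reading of the garbled formula above:

import torch

def make_variance_task(num_examples, seq_len=10, high=100):
    x = torch.randint(0, high, (num_examples, seq_len)).float()   # uniform with replacement from {0, ..., 99}
    y = x.var(dim=1, unbiased=False)                              # 1/n * sum((x - mean(x))^2)
    return x, y

x_train, y_train = make_variance_task(100_000)
x_test, y_test = make_variance_task(10_000)

And the test-time estimate of equation 11 with 1 or 20 sampled permutations can be sketched as:

@torch.no_grad()
def janossy_predict(model, h, num_perms=20):
    # Monte-Carlo average of rho(f(h_s)) over sampled permutations s
    outs = [model(h[torch.randperm(h.size(0))].unsqueeze(0)) for _ in range(num_perms)]
    return torch.stack(outs).mean(dim=0)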
The in TAB0 show that: models trained with π-SGD using LSTMs and GRUs as f typically achieve top performance or are comparable to the top performer (within confidence intervals) on all tasks, for any choice of ρ. We also observe for LSTMs and GRUs that adding complexity to ρ can yield small but meaningful performance gains or maintain similar performance, lending credence to the approach of optimizing J as a tractable approximation to L. Specifically, in the variance task, GRUs and LSTMs with π-SGD provide significant accuracy gains over k ∈ {1, 2, 3}, showing that modeling full-dependencies can be advantageous even if model training with π-SGD is approximate. For a more complex ρ (MLP as opposed to Linear), lowercomplexity Janossy pooling achieves consistently better : k ∈ {2, 3} gives good when ρ is linear but poorer when ρ is an MLP (as these models are more expressive, the only feasible explanation is an optimization issue since we also observed poorer performance on the training data). We also note that when ρ is an MLP, it takes significantly more epochs for k ∈ {2, 3} to find the best model (2000 epochs) while k = 1 finds good models much quicker (1000 epochs). The we report come from training with 1000 epochs on all models with a linear ρ and 2000 epochs for all models where ρ is an MLP. We observe that for k = 1 (DeepSets), a more complex ρ (MLP) is required as the pooling pushes the complexity of modeling high-order interactions over the input to ρ. The converse is also true, if ρ is simple (Linear) then a Janossy pooling that models high-order interactions k ∈ {2, 3, |h|} gives higher accuracy, as shown in the range, unique sum, unique count, and variance tasks. Here we consider Janossy pooling in the context of graph neural networks to learn vertex representations enabling vertex classification. The GraphSAGE algorithm BID15 consists of sampling vertex attributes from the neighbor multiset of each vertex v before performing an aggregation operation which generates an embedding of v; the authors consider permutation-invariant operations such as mean and max as well as the permutation-sensitive operation of feeding a randomly permuted neighborhood sequence to an LSTM. The sample and aggregate procedure is repeated twice to generate an embedding. Each step can be considered as Janossy pooling with π-SGD and k-ary subsequences, where k l, l ∈ {1, 2} is the number of vertices sampled from each neighborhood and f is for instance a mean, max, or LSTM. However, at test time, GraphSAGE only samples one permutation s of each neighborhood to estimate equation 11.In our experiments, we also consider computing the mean of the entire neighborhood. Here we say k = 1 to reinforce the connection to unary Janossy pooling whereas with the LSTM model, k refers to the number of samples of the neighborhood. In this section we investigate two conditions: (a) the impact of increasing k in the k-ary dependencies; and (b) the benefits of increasing the number of sampled permutations at inference time. To implement the model and design our experiments, we modified the reference PyTorch code provided by the authors 4. We consider the three graph datasets considered in BID15: Cora and Pubmed and the larger Protein-Protein Interaction (PPI) . The first two are citation networks where vertices represent papers, edges represent citations, and vertex features are bag-of-words representations of the document text. The task is to classify the paper topic. 
The PPI dataset is a collection of several graphs each representing human tissue; vertices represent proteins, edges represent protein interaction, features include genetic and immunological features, and we try to classify protein roles (there are 121 targets). More details of these experiments are shown in Table 9 in the Supplementary Material.(a) TAB1 shows the impact (on accuracy) of increasing the number of k-ary dependencies. We use k 1, k 2 ∈ {3, 5, 10, 25} for the two pooling layers of our graph neural network (GNN). The function f is an LSTM (except for when we try mean-pooling). Note that for the LSTM, the number of parameters of the model is independent of k. At inference time, we sample 20 random permutations of each sequence and average the predicted probabilities before making a final prediction of the class label. The in TAB1 show that the choice of k 1, k 2 ∈ {3, 5, 10, 25} makes little difference on Cora and Pubmed due to the small neighborhood sizes: k 1, k 2 ≥ 5 often amounts to a Entries denoted by -all differ by less than 0.01. Typical neighborhoods in Cora and Pubmed are small, so that sampling ≥ 5 neighbors is often equivalent to using the entire neighborhood. b Some neighbor sequences in PPI are prohibitively large, so we take k1 = k2 = 100.sampling the entire neighborhood. In PPI, whose average degree is 28.8, increasing k yields consistent improvement. The strong performance of mean-pooling points to both a relatively easy task 5 and the benefits of utilizing the entire neighborhood of each vertex. (b) We now investigate whether increasing the number of sampled permutations used to estimate equation 11 at test (inference) time impacts accuracy. FIG8 in the Supplementary Material shows that increasing the number of sampled permutations from one to three leads to an increase in accuracy in the PPI task (Cora and Pubmed degrees are too small for this test) but diminishing returns set in by the seventh sample. Using paired tests -t and Wilcoxon signed rank -we see that test inference with seven sampled permutations versus one permutation is significant with p < 10 −3 over 12 replicates. Sampling permutations at inference time is thus a cheap method for achieving modest but potentially important gains at inference time. Under the Janossy pooling framework presented in this work, existing literature falls under one of three approaches to approximating to the intractable Janossy-pooling layer: Canonical orderings, k-ary dependencies, and permutation sampling. We also discuss the broader context of invariant models and probabilistic interpretations. Canonical Ordering Approaches. In section 2.1, we saw how permutation invariance can be achieved by mapping permutations to a canonical ordering. Rather than trying to define a good canonical ordering, one can try to learn it from the data, however searching among all |h|! permutations for one that correlates with the task of interest is a difficult discrete optimization problem. Recently, BID36 proposed a method that computes the posterior distribution of all permutations, conditioned on the model and the data. This posterior-sampling approach is intractable for large inputs, unfortunately. We note in passing that BID36 is interested in permutation-invariant outputs, and that Janossy pooling is also trivially applicable to these tasks. proposes a heuristic using ancestral sampling while learning the model. k-ary Janossy Pooling Approaches. 
In section 2.2 we described k-ary Janossy pooling, which considers k-order relationships in the input vector h to simplify optimization. DeepSets can be characterized as unary Janossy pooling (i.e., k-ary for k = 1).. Qi et al. FORMULA0 and BID34 propose similar unary Janossy pooling models. BID8 proposes to add inductive biases to the DeepSets model in the form of monotonicity constraints with respect to the vector valued elements of the input sequence by modeling f and ρ with Deep Lattice Networks ; one can extend BID8 by using higher-order (k > 1) pooling. Exploiting dependencies within a sequence to learn a permutation-invariant function has been discussed elsewhere. For instance BID38 exploits pairwise relationships to perform relational reasoning about pairs of objects in an image and Battaglia et al. FORMULA0 contemplates modeling the center of mass of a solar system by including the pairwise interactions among planets. However, Janossy pooling provides a general framework for capturing dependencies within a permutation-invariant pooling layer. Permutation Sampling Approaches. In section 2.3 we have seen a that permutation sampling can be used as a stochastic gradient procedure (π-SGD) to learn a model with a Janossy pooling layer. The learned model provides only an approximate solution to original permutation-invariant function. Permutation sampling has been used as a heuristic (without a theoretical justification) in both BID29 and BID15, which found that randomly permuting sequences and feeding them forward to an LSTM is effective in relational learning tasks that require permutation-invariant pooling layers. Probabilistic Interpretation and Other Invariances Our work has a strong connection with finite exchangeability. Some researchers may be more familiar with the concept of infinite exchangeability through de Finetti's theorem BID10 BID11, which imposes strong structural requirements: the probability of any subsequence must equal the marginalized probability of the original sequence (projectivity). BID21 noted the importance of this property for generative models and propose a model that learns a distribution without variational approximations. Finite exchangeability drops this projectivity requirement BID11, which in general, cannot be simplified beyond first sampling the number of observations m, and then sampling their locations from some exchangeable but non- BID9. Equivalently, de Finetti's theorem for infinitely exchangeable sequences implies that the joint distribution can represented as a mixture distribution over conditionally independent random variables (given θ) BID10 BID31 whereas the probability distribution of a finitely exchangeability sequence is a mixture over dependent random variables as shown by BID11. DISPLAYFORM0 In comparison, the restrictive assumption of letting k = 1 in k-ary Janossy Pooling yields the form of a log-likelihood of conditionally iid random variables (consider f a log pdf), the strong requirement of de Finetti's theorem for infinitely exchangeable sequences. Conversely, higher-order Janossy pooling was designed to exploit dependencies among the random variables such as those that arise under finitely exchangeable distributions. Indeed, finite exchangeability also arises from the theory of spatial point processes; our framework of Janossy pooling is inspired by Janossy densities BID9, which model the finite exchangeable distributions as mixtures of non-exchangeable distributions applied to permutations. 
This literature also studies simplified exchangeable point processes such as finite Gibbs models (; BID27 that restrict the structure of p exch to fixed-order dependencies, and are related to k-ary Janossy. More broadly, there are other connections between permutation-invariant deterministic functions and exchangeability in probability distributions, as recently discussed by BID2 . There, the authors also contemplate more general invariances through the language of group actions. An example is permutation equivariance: one form of permutation equivariance asserts that f (X π) = f (X) π ∀π ∈ Π |X| where f (X) is a sequence of length greater than 1. BID35 provides a weight-sharing scheme for maintaining general neural network equivariances characterized as automorphisms of a colored multi-edged bipartite graph. BID16 proposes a matrix completion model invariant to (possibly separate) permutations of the rows or columns. Other invariances are studied through a probabilistic perspective in BID31. Our approach of permutation-invariance through Janossy pooling unifies a number of existing approaches, and opens up avenues to develop both new methodological extensions, as well as better theory. Our paper focused on two main approaches: k-ary interactions and random permutations. The former involves exact Janossy pooling for a restricted class of functions f. Adding an additional neural network ρ can recover lost model capacity and capture additional higher-order interactions, but hurts tractability and identifiability. Placing restrictions on ρ (convexity, Lipschitz continuity etc.) can allow a more refined control of this trade-off, allowing theoretical and empirical work to shed light on the compromises involved. The second was a random permutation approach which conversely involves no clear trade-offs between model capacity and computation when ρ is made more complex, instead it modifies the relationship between the tractable approximate loss J and the original Janossy loss L. While there is a difference between J and L, we saw the strongest empirical performance coming from this approach in our experiments (shown in the last row of TAB0 ; future work is required to identify which problems π-SGD is best suited for and when its conver-gence criteria are satisfied. Further, a better understanding how the loss-functions L and J relate to each other can shed light on the slightly black-box nature of this procedure. It is also important to understand the relationship between the random permutation optimization to canonical ordering and how one might be used to improve the other. Finally, it is important to apply our methodology to a wider range of applications. Two immediate domains are more challenging tasks involving graphs and tasks involving non-Poisson point processes. is now a summation over only |h|!/(|h| − k)! terms. We can conclude that Next, we restate and prove the remaining portion of Theorem 2.1. Theorem 2.1. For any k ∈ N, define F k as the set of all permutation-invariant functions that can be represented by Janossy pooling with k-ary dependencies. Then, F k−1 is a proper subset of F k if the space H is not trivial (i.e. if the cardinality of H is greater than 1). Thus, Janossy pooling with k-ary dependencies can express any Janossy pooling function with (k − 1)-ary dependencies, but the converse does not hold. Proof. DISPLAYFORM0 Consider any element f k−1 ∈ F k−1, and write f (|h|, · ; θ (f) ) for its associated Janossy function. 
For any sequence h, f (|h|, DISPLAYFORM1, where the function f + looks at its first k elements. Thus, DISPLAYFORM2 where f k is the Janossy function associated with f + and thus belongs to F k .(F k ⊂ F k−1): the case where k = 1 is trivial, so assume k > 1. We will demonstrate the existence of DISPLAYFORM3 Let f k and f k−1 be associated with f k and f k−1, respectively. Thus, for any f k−1 and any θ DISPLAYFORM0 where Π {1,...,|h|}\j denotes the set of permutation functions defined on {1, 2, . . ., j − 1, j + 1, . . ., |h|} and (h −j)π is a permutation of the sequence h 1,..., h j−1, h j+1,..., h |h|. This can be written as DISPLAYFORM1 Now, f k−1 = f k if and only if their quotient in equation 13 is unity for all h. But this is clearly not possible in general unless H is a singleton, which we have precluded in our assumptions. Proposition 2.2 is repeated below and is followed by a more rigorous restatement. Proposition 2.2. [π-SGD Convergence] The optimization of π-SGD enjoys properties of almost sure convergence to the optimal θ under similar conditions as SGD.The following statement is similar to that in , which also provides intuition behind the theoretical assumptions, which are indeed quite general. See also . This is a familiar application of stochastic approximation algorithms already used in training neural networks. Proposition A.1 (π-SGD Convergence). Consider the π-SGD algorithm in Definition 2.3. If (a) there exists a constant M > 0 such that for all θ, −G DISPLAYFORM2, where G t is the true gradient for the full batch over all permutations, DISPLAYFORM3, and θ is the optimum.(b) there exists a constant δ > 0 such that for all θ, DISPLAYFORM4 2 ), where the expectation is taken with respect to all the data prior to step t. Then, the algorithm in equation 9 converges to θ with probability one. Proof. First, we can show that E t [Z t] = G t by equation 10, the linearity of the derivative operator, and the fact that the permutations are independently sampled for each training example in the minibatch and are assumed independent of θ. That equation 9 converges to θ is a consequence of our conditions and the supermartingale convergence theorem (, pp. 481). The following argument follows. DISPLAYFORM5 t, and DISPLAYFORM6. Note that C t is positive for a sufficiently large t, and ∞ t=1 B t ≤ ∞ by our definition of η t (Definition 2.3). We will demonstrate that E t [A t] ≤ A t−1 +B t−1 −C t−1, for all t, in the Supplementary Material from which it follows that A t converges to zero with probability one and The accuracy scores for all models (including the LSTM) on the sequence arithmetic tasks are shown in TAB0, except here we show additional rows representing models that use LSTM as f. We chose accuracy (0-1 loss) to be consistent with; here we report mean absolute error to evaluate the differences it makes on our . These can be found in TAB2. The message is similar to the one told by accuracy scores; there is a drop in the mean absolute error as the value of k increases and when using more sampled permutations at test-time (e.g., Janossy-20inf-LSTM versus Janossy-1inf-LSTM). Again, the power of using an RNN for f and training with π-SGD is salient on the variance task where it is important to exploit dependencies in the sequence. Beyond the performance gains, we also observe a drop in variance when sampling more permutations at test time. 
Furthermore, as discussed in the implementation section, we constructed k-ary models to have the same number of parameters regardless of k for the reported in the main body. We show where this constraint is relaxed in TAB4. Here we see a modest improvement of k-ary models which stands to reason considering the embedding dimension fed to the Janossy pooling layer was reduced from 100 with k = 1 to 33 with k = 3 (please see the implementation section for details). DISPLAYFORM7 For the graph tasks, the plot of performance as a function of number of inference-time permutations is shown in FIG8. Table 3: Full table showing the Accuracy (and RMSE for, which was written in Keras BID7, and subsequently ported to PyTorch. For k-ary models with k ∈ {2, 3}, we always sort the sequence x beforehand to reduce the number of combinations we need to sum over. In the notation of FIG0, h is an Embedding with dimension of floor(100 k) (to keep the total number of parameters consistent for each k as discussed below), f is either an MLP with a single hidden layer or an RNN depending on the model (k-ary Janossy or full-Janossy, respectively), and ρ is either a linear dense layer or one hidden layer followed by a linear dense layer. The MLPs in f have 30 neurons whereas the MLPs in ρ have 100 neurons, the LSTMs have 50 neurons, and the GRUs have 80 hidden neurons. All activations are tanh except for the output layer which is linear. We chose 100 for the embedding dimension to be consistent with.For the k-ary shown in the body, we made sure the number of parameters was consistent for k ∈ {1, 2, 3} (see Table 8). We unify the number of parameters by adjusting the output dimension of the embedding. We also experimented with relaxing the restriction that k-ary models have the same numbers of parameters TAB4, and the numbers of parameters in these models is also shown in Table 8. For the LSTM than GRU models, we follow the choice of which also reports that the choices were made to keep the numbers of parameters consistent. Optimization is done with Adam with a tuned the learning rate, searching over {0.01, 0.001, 0.0001, 0.00001}. Training was performed on GeForce GTX 1080 Ti GPUs. Graph-based tasks The datasets used for this task are summarized in Table 9. Our implementation is in PyTorch using Python 2.7, following the PyTorch code associated with BID15. That repo did not include an LSTM aggregator, so we implemented our own following the TensorFlow implementation of GraphSAGE, and describe it here. At the beginning of every forward pass, each vertex v is associated with a p-dimensional vertex attribute h (see Table9). For every vertex in a batch, k 1 neighbors of v are sampled, their order is shuffled, and their features are fed through an LSTM. From the LSTM, we take the short-term hidden state associated with the last element in the input sequence (denoted h (T) in the LSTM literature, but this h is not to be confused with a vertex attribute). This short-term hidden state is passed through a fully connected layer to yield a vector of dimension q 2, where q is a user-specified positive even integer referred to as the embedding dimension. The vertex's own attribute h is also fed forward through a fully connected layer with q 2 output neurons. At this point, for each vertex, we have two representation vectors of size q 2 representing the vertex v and its neighbor multiset, which we concatenate to form an embedding of size q. 
This describes one convolution layer, and it is repeated a second time with a distinct set of learnable weights for the fully connected and LSTM layers, sampling k 2 vertices from each neighborhood and using the embeddings of the first layer as features. After each convolution, we may optionally apply a ReLU activation and/or embedding normalization, and we follow the decisions shown in the GraphSAGE code BID15. After both convolution operations, we apply a final fully connected layer to obtain the score, followed by a softmax (Cora, Pubmed) or sigmoid (PPI). The loss function is cross entropy for Cora and Pubmed, and binary cross entropy for PPI. Last, the definition above for f caused difficulties in environments such as figure, so we defined and occasionally used in L A T E X \newcommand{\harrowStable}{\overset{\rightharpoonup}{#1}}.
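A sketch of the single convolution layer described above: sample k neighbors (with replacement here, purely for simplicity), run an LSTM over them in the sampled order, and concatenate a q/2-dimensional projection of the final short-term hidden state with a q/2-dimensional projection of the vertex's own features. The hidden sizes and the replacement sampling are our own simplifications of the reference implementation:

import torch
import torch.nn as nn

class LSTMSageLayer(nn.Module):
    def __init__(self, in_dim, q):
        super().__init__()
        assert q % 2 == 0
        self.lstm = nn.LSTM(in_dim, in_dim, batch_first=True)
        self.neigh_fc = nn.Linear(in_dim, q // 2)   # projects the neighborhood summary
        self.self_fc = nn.Linear(in_dim, q // 2)    # projects the vertex's own attribute

    def forward(self, self_feat, neigh_feats, k):
        # neigh_feats: (num_neighbors, in_dim); the random sample order doubles as the shuffle
        idx = torch.randint(0, neigh_feats.size(0), (k,))
        _, (h_T, _) = self.lstm(neigh_feats[idx].unsqueeze(0))
        neigh = self.neigh_fc(h_T[-1].squeeze(0))
        return torch.cat([self.self_fc(self_feat), neigh], dim=-1)   # q-dimensional embedding

layer = LSTMSageLayer(in_dim=16, q=8)
emb = layer(torch.randn(16), torch.randn(10, 16), k=5)

Two such layers with separate weights, followed by a final fully connected layer and a softmax (or sigmoid for PPI), reproduce the pipeline described above.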
We propose Janossy pooling, a method for learning deep permutation-invariant functions that is designed to exploit relationships within the input sequence, together with tractable inference strategies such as a stochastic optimization procedure we call π-SGD.
1,362
scitldr
While tasks could come with varying the number of instances and classes in realistic settings, the existing meta-learning approaches for few-shot classification assume that number of instances per task and class is fixed. Due to such restriction, they learn to equally utilize the meta-knowledge across all the tasks, even when the number of instances per task and class largely varies. Moreover, they do not consider distributional difference in unseen tasks, on which the meta-knowledge may have less usefulness depending on the task relatedness. To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning within each task. Through the learning of the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or task-specific learning. We formulate this objective into a Bayesian inference framework and tackle it using variational inference. We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on two realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches. Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework. Despite the success of deep learning in many real-world tasks such as visual recognition and machine translation, such good performances are achievable at the availability of large training data, and many fail to generalize well in small data regimes. To overcome this limitation of conventional deep learning, recently, researchers have explored meta-learning approaches, whose goal is to learn a model that generalizes well over distribution of tasks, rather than instances from a single task, in order to utilize the obtained meta-knowledge across tasks to compensate for the lack of training data for each task. However, so far, most existing meta-learning approaches (; ; ; ; ; have only targeted an artificial scenario where all tasks participating in the multi-class classification problem have equal number of training instances per class. Yet, this is a highly restrictive setting, as in real-world scenarios, tasks that arrive at the model may have different training instances (task imbalance), and within each task, the number of training instances per class may largely vary (class imbalance). Moreover, the new task may come from a distribution that is different from the task distribution the model has been trained on (out-of-distribution task) (See (a) of Figure 1 ). Under such a realistic setting, the meta-knowledge may have a varying degree of utility to each task. Tasks with small number of training data, or close to the tasks trained in meta-training step may want to rely mostly on meta-knowledge obtained over other tasks, whereas tasks that are out-of-distribution or come with more number of training data may obtain better solutions when trained in a task-specific manner. Furthermore, for multi-class classification, we may want to treat the learning for each class differently to handle class imbalance. Thus, to optimally leverage meta-learning under various imbalances, it would be beneficial for the model to task-and class-adaptively decide how much to use from the meta-learner, and how much to learn specifically for each task and class. Head class Figure 1: Concept. (a) To handle task imbalance (Task Imbal.), class imbalance (Class Imbal.) 
and outof-distribution tasks (OOD) for each task τ, we introduce task-specific balancing variables γ τ, ω τ and z τ, respectively. (b) With those variables, we learn to balance between the meta-knowledge θ and task-specific update to handle imbalances and distributional discrepancies. To this end, we propose a novel Bayesian meta-learning framework, which we refer to as Bayesian Task-Adaptive Meta-Learning (Bayesian TAML), that learns variables to adaptively balance the effect of meta-and task-specific learning. Specifically, we first obtain set-representations for each task, which are learned to convey useful statistics about the task or class distribution, such as mean, variance, tailedness (kurtosis), and skewness, and then learn the distribution of three balancing variables as the function of the set: 1) task-dependent learning rate decay, which decides how far away to deviate from the meta-knowledge, when performing task-specific learning. Tasks with higher shots could benefit from taking gradient steps afar, while tasks with few shots may need to stay close to the initial parameter. 2) class-dependent learning rate, which decides how much information to use from each class, to automatically handle class imbalance where the number of instances per class can largely vary. 3) task-dependent attention mask, which modifies the shared parameter for each task by learning a set-dependent attention mask to it, such that the task can decide how much and what to use from the initial shared parameter and what to ignore based on its set representation. This is especially useful when handling out-of-distribution task, which may need to ignore some of the meta-knowledge. We validate our model on Omniglot and mini-ImageNet dataset, as well as a new dataset that consists of heterogeneous datasets, under a scenario where every class in each episode can have any number of shots, that leads to task and class imbalance, and where the dataset at meta-test time is different from that of meta-training time. The experimental show that our Bayesian TAML significantly improves the performance over the existing approaches under these realistic scenarios. Further analysis of each component reveals that the improvement is due to the effectiveness of the balancing terms for handling task and class imbalance, and out-of-distribution tasks. To summarize, our contribution in this work is threefold: • We consider a novel problem of meta-learning under a realistic task distribution, where the number of instances across classes and tasks could largely vary, or the unseen task at the meta-test time is largely different from the seen tasks. • For effective meta-learning with such imbalances, we propose a Bayesian task-adaptive meta-learning (Bayesian TAML) framework that can adaptively adjust the effect of the meta-learner and the task-specific learner, differently for each task and class. • We validate our model on realistic imbalanced few-shot classification tasks with a varying number of shots per task and class and show that it significantly outperforms existing meta-learning models. Meta-learning Meta-learning is an approach to learn a model to generalize over a distribution of task. The approaches in general can be categorized into either memory-based, metric-based, and optimization-based methods. A memory-based approach learns to store correct instance and label into the same memory slot and retrieve it later, in a task-generic manner. Metric-based approaches learn a shared metric space . 
defines the distance between the instance and the class prototype, such that the instances are closer to their correct prototypes than to others. As for optimization-based meta-learning, learns a shared initialization parameter that is optimal for any tasks within few gradient steps from the initial parameter. Meta-SGD improves upon MAML by learning the learning rate differently for each parameter. For effective learning of a meta-learner, meta-learning approaches adopt the episodic training strategy which trains and evaluates a model over a large number of tasks, which are called meta-training and meta-test phase, respectively. However, existing approaches only consider an artificial scenario which samples the classification of classes with exactly the same number of training instances, both within each episode and across episodes. On the other hand, we consider a more challenging scenario where the number of shots per class and task could vary at each episode, and that the task given at meta-test time could be an out-of-distribution task. Task-adaptive meta-learning The goal of learning a single meta-learner that works well for all tasks may be overly ambitious and leads to suboptimal performances for each task. Thus recent approaches adopt task-adaptively modified meta-learning models. proposed to learn the temperature scaling parameter to work with the optimal similarity metric. also suggested a model that generates task-specific parameters for the network layers, but it only trains with many-shot classes, and implicitly expects generalization to few-shot cases. proposed a network type task-specific parameter producer, and proposed to differentiate the network weights into task-shared and task-specific weights. Our model also aims to obtain task-specific parameter for each task, but is rather focused on learning how to balance between the meta-learning and task-/class-specific learning. To our knowledge, none of the existing approaches explicitly tackle this balancing problem since they only consider few-shot learning with the fixed number of instances for each class and task. Probabilistic meta-learning Recently, a probabilistic version of MAML has been proposed , where they interpret a task-specific gradient update as a posterior inference process under variational inference framework. proposed Bayesian MAML with a similar motivation but with a stein variational inference framework and chaser loss. proposed a probabilistic meta-learning framework where the paramter for a novel task is rapidly estimated under decision theoretic framework, given a set representation of a task. The motivation behind these works is to represent the inherent uncertainty in few-shot classification tasks. Our model also uses Bayesian modeling, but it focuses on leveraging the uncertainties of the meta-learner and the gradient-direction in order to balance between meta-and task-or class-specific learning. We first introduce notations and briefly recap the model-agnostic meta-learning (MAML) by. Suppose a task distribution p(τ) that randomly generates task τ consisting of a training set Then, the goal of MAML is to meta-learn the initial model parameter θ as a meta-knowledge to generalize over the task distribution p(τ), such that we can easily obtain the task-specific predictor θ τ in a single (or a few) gradient step from the initial θ. 
Toward this goal, MAML optimizes the following gradient-based meta-learning objective: where α denotes stepsize and L denotes empirical loss such as negative log-likelihood of observations. Note that by meta-learning the initial point θ, the task-specific predictor θ even with D τ which only contains few samples. We can easily extend the Eq., such that we obtain θ τ with more than one inner-gradient steps from the initial θ. However, the existing MAML framework has the following limitations that prevent the model from efficiently solving real-world problems involving task/class imbalance and out-of-distribution tasks. 1. Task imbalance. MAML has a fixed number of inner-gradient steps and stepsize α across all tasks, which prevents the model from adaptively deciding how much to use from the meta-knowledge depending on the number of the training examples per task. 2. Class imbalance. The model does not provide any framework to handle class imbalance within each task. Therefore, classes with large number of training instances (head classes) may dominate the task-specific learning during the inner-gradient steps, yielding low performance on classes with fewer shots (tail classes). 3. Out-of-distribution tasks. The model assumes that the meta-knowledge will be equally useful for the unseen tasks, but for unseen tasks that are out-of-distribution, the metaknowledge may be less useful. As shown in Figure 1 for the concepts, we introduce three balancing variables γ τ, ω τ, z τ to tackle each problem mentioned above. How to compute these variables will be described in Section 4. In order to learn with realistic scenarios, we assume that the task distribution p(τ) samples some fixed number of C classes ("way"), and then sample uniform-random number of instances for each class ("shots"), thereby simulating both task and class imbalance at the same time. Tackling task imbalance. To control whether to stay close to the initial parameter or deviate far from it, we introduce a clipping function f (·) = max(0, min(·, 1)) and a task-dependent learning-rate decaying factor f (γ τ), such that the learning rate exponentially decays as to be large for large tasks, such that they rely more on task-specific updates, while small tasks use small f (γ τ) to benefit from the meta-knowledge. Tackling class imbalance. To handle class imbalance, we vary the learning rate of classspecific gradient update for each task-specific gradient update step. Specifically, for class c = 1,..., C, we introduce a non-negative activation function g(·) = SoftPlus(·) and a set of classspecific non-negative scalars g(ω is the set of instances and labels for class c. We expect g(ω τ c) to be large for tail-classes to consider them more in task-specific gradient updates. Tackling out-of-distribution tasks. Lastly, we introduce an additional task-dependent variable z τ with the non-negative activation function g(·) which weights the initial parameter θ according to the usefulness for each task. We expect the variable g(z τ) to heavily emphasize the meta-knowledge θ when D τ is similar to the trained dataset, and use less of it when D τ is unfamilar. This behavior can be implemented with Bayesian modeling on the latent z τ, which we introduce in the next subsection. A unified framework. Finally, we assemble all these components together into a single unified framework. 
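Before specifying the unified update rule, the plain MAML objective of equation 1 can be sketched as follows; the single inner step, the functional model_fn, and the explicit second-order gradients are our simplifications, and the balancing variables below then modify this inner step:

import torch

def maml_outer_loss(theta, tasks, loss_fn, model_fn, alpha=0.01):
    """theta: list of shared initial parameters; tasks: list of ((x_tr, y_tr), (x_te, y_te))."""
    total = 0.0
    for (x_tr, y_tr), (x_te, y_te) in tasks:
        inner = loss_fn(model_fn(theta, x_tr), y_tr)
        grads = torch.autograd.grad(inner, theta, create_graph=True)   # keep the inner step differentiable
        theta_task = [p - alpha * g for p, g in zip(theta, grads)]     # task-specific predictor theta^tau
        total = total + loss_fn(model_fn(theta_task, x_te), y_te)
    return total / len(tasks)   # minimized w.r.t. theta by an outer optimizer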
The update rule for the task-specific θ τ is recursively defined as follows: where the last step θ K corresponds to the task-specific predictor θ τ and α is a multi-dimensional global learning rate vector that is learned such as. As previously mentioned, we need a Bayesian framework for modeling z τ, since it needs a prior in order to prevent the posterior of z τ from overly utilizing the meta-knowledge θ when the task is out-ofdistribution. Moreover, for the learning of balancing variables γ τ and ω τ, Bayesian modeling improve the quality of the inference on them, which we empirically verified through extensive experiments. We allow the three variables to share the same inference network pipeline to minimize the computational cost, and thereby effectively amortize the inference rule across variables as well. for training, and for test. Let φ τ denote the collection of three latent variables, γ τ, ω τ and z τ for uncluttered notation. Then, the generative process is as follows for each task τ (See Figure 2): for the complete data likelihood. Note that the deterministic θ is shared across all the tasks. The goal of learning for each task τ is to maximize the log-likelihood of the joint datasetD τ and. However, solving it involves the true posterior p(φ τ |D τ,D τ), which is intractable. Thus, we resort to amortized variational inference with a tractable form of approximate posterior q(φ τ |D τ,D τ ; ψ) parameterized by ψ. Further, similarly to , we drop the dependency on the test datasetD τ for the approximate posterior, in order to make the two different pipelines consistent; one for meta-training where we observe the whole test dataset, and the other for meta-testing where the test labels are unknown. The form of our approximate posterior is now q(φ τ |D τ ; ψ). It greatly simplifies the inference framework, while ensuring that the following objective is still a valid lower bound of the log evidence. Also, considering that performing the inner-gradient steps with the training dataset D τ automatically maximizes the training log-likelihood in MAML framework, we slightly modify the objective so that the expected loss term only involves the test examples. The ant form of the lower bound that suits for our meta-learning purpose is as follows: We assume q(φ τ |D τ ; ψ) fully factorizes for each variable and also for each dimension as well: where we assume that each single dimension of q(φ τ |D τ ; ψ) follows univariate gaussian having trainable mean and variance. We also let each dimension of prior p(φ τ) factorize into N. The KL-divergence between two univariate gaussians has a simple closed form , thereby we obtain the low-variance estimator for the lower bound L τ θ,ψ. The final form of the meta-training minimization objective with Monte-Carlo approximation for the expection in is as follows:, and T is the number of tasks. We implicitly assume the reparameterization trick for φ τ to obtain stable and unbiased gradient estimate w.r.t. ψ . We set the number of MC samples to S = 1 for meta-training for computational efficiency. When meta-testing, we can set S = 10 or naively approximate the expectation by taking the expectation inside:, which works well in practice. The main challenge in modeling our variational distribution q(φ τ |D τ ; ψ) is how to refine the training dataset D τ into informative representation capturing the dataset as a distribution, which is not trivial. 
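Before describing the dataset encoder, the closed-form Gaussian KL term and the reparameterized sampling used in the bound above can be sketched as follows (illustrative NumPy only; the prior, dimensions and variable names are placeholders rather than the authors' settings).

```python
import numpy as np

def kl_gaussian(mu_q, sigma_q, mu_p=0.0, sigma_p=1.0):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ), element-wise, in closed form."""
    return (np.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2.0 * sigma_p**2) - 0.5)

def reparameterize(mu, sigma, rng):
    """Sample phi = mu + sigma * eps so gradients can flow to (mu, sigma)."""
    return mu + sigma * rng.normal(size=np.shape(mu))

rng = np.random.default_rng(0)
# illustrative posterior over one balancing variable, e.g. z_tau, per dimension
mu, sigma = np.array([0.2, -0.1]), np.array([0.3, 0.5])
samples = [reparameterize(mu, sigma, rng) for _ in range(10)]   # S = 10 at meta-test time
print(kl_gaussian(mu, sigma).sum(), np.mean(samples, axis=0))
```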
This inference network should capture all the necessary statistical information in the dataset D τ to solve both imblanace and out-of-distribution problems. DeepSets is frequently used as a practical set-encoder, where each instance in the set is transformed by the shared nonlinearity, and then summed together to generate a single vector summarizing the set. However, for the classification dataset D τ which is the set of (class) sets, we cannot use DeepSets directly as it will completely ignore the label information. Therefore, we need to stack the structure of DeepSets twice according to the hierarchical set of sets structure of classification dataset. However, there exists additional limitation of DeepSets with sum-pooling when describing the distribution. Suppose that we have a set containing a replication of single instance. Then, its representation will change based on the number of replications, although distribution-wise all sets should be the same. Mean-pooling may alleviate the problem; however, it does not recognize the number of elements in the set, which is a critical limitation in encoding imbalance. To overcome the limitations of the two pooling methods, we propose to use higher-order statistics in addition to the sample mean, namely element-wise sample variance, skewness and kurtosis. For instance, the sample variance could capture task imbalance and skewness will capture class imbalance (imbalance in the number of instances per class). Based on this intuition, we propose the following encoder network StatisticsPooling(·) that generates the concatenation of those statistics (See Figure 3): for classes c = 1,..., C, and X τ c is the collection of class c examples in task t. NN 1 and NN 2 are some appropriate neural networks parameterized by ψ. The vector v τ finally summarizes the whole classification dataset D τ and our balancing variables γ τ, ω τ and z τ are generated from it with an additional affine transformation. See Appendix B for the justification. We validate our method in imbalanced scenarios, where each task, or every class within a task can have different shots, and the tasks at the evaluation time could come from a different task distribution from the seen task distribution. , we use 4-block convolutional neural networks with 64 channels for each layer for Omniglot and MNIST. We reduce the number of channels into 32 for other datasets. Imbalanced Omniglot. This dataset consists of 1623 hand-written character classes, with 20 training instances per class. We consider 10-way classification problems, where we have 5 queries per each class. To generate imbalanced tasks, we randomly set the number of training instances to be sampled within the range of 1 to 15. We set 5 inner-gradient steps for all gradient-based models and train all models on Omniglot, and evaluate on both Omniglot and MNIST, where the latter is used to evaluate the performance on out-of-distribution task. Imbalanced tiered-ImageNet. This is sub-sampled ImageNet dataset including 608 classes . As similarly with the Imbalanced Omniglot dataset, we consider 5-way classification problems, while randomly setting the number of training instances per class within the range of 1 to 50, and use 15 instances per class for test (queries). We set 5 inner-gradient steps for all gradient-based models and train all models on the tiered-ImageNet dataset, and evaluate the model on the test split of tiered-ImageNet and mini-ImageNet, where the latter is used to evaluate on out-of-distribution task. 
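For reference, here is a minimal NumPy sketch of the Set-of-Sets statistics pooling described earlier in this section; the two learned networks NN1 and NN2 are replaced by fixed random maps purely for illustration, and the actual architecture is given in Appendix A.3.

```python
import numpy as np

def stats_pool(H):
    """Element-wise mean, variance, skewness and kurtosis over the set axis (axis 0)."""
    mu = H.mean(0)
    var = H.var(0) + 1e-8
    std = np.sqrt(var)
    skew = (((H - mu) / std) ** 3).mean(0)
    kurt = (((H - mu) / std) ** 4).mean(0)
    return np.concatenate([mu, var, skew, kurt])

rng = np.random.default_rng(0)
d_in, d_hid = 16, 8
W1 = rng.normal(size=(d_in, d_hid))       # stand-in for NN1 (shared instance encoder)
W2 = rng.normal(size=(4 * d_hid, d_hid))  # stand-in for NN2 (class-level encoder)

def encode_dataset(classes):
    """classes: list of arrays, one (n_c, d_in) array of examples per class."""
    class_summaries = [np.tanh(stats_pool(np.tanh(Xc @ W1)) @ W2) for Xc in classes]
    v = stats_pool(np.stack(class_summaries))   # pool again over the set of classes
    return v    # the balancing variables come from an affine map of v in the paper

# imbalanced toy task: 3 classes with 2, 7 and 15 examples respectively
task = [rng.normal(size=(n, d_in)) for n in (2, 7, 15)]
print(encode_dataset(task).shape)
```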
See the Appendix A for more details of the experimental setup. Analysis. Multi-Dataset OVD. We further test our model under a more challenging setting where tasks could come from a highly heterogeneous dataset. To this end, we combine Omniglot, VGG flower , DTD into a single dataset OVD, and randomly sample each class from the combined dataset for every task. We train all models with 10-way any-shot tasks with 3 inner gradient steps and test on OVD and FasionMNIST , where the latter is used to generate out-of-distribution tasks at evaluation time. The in Table 2 shows that under this challenging multi-dataset setting, our Bayesian TAML outperforms all baselines, especially with larger gains on the out-of-distribution tasks (Fashion MNIST) consistent with the of Table 1. We now validate the effectiveness of each balancing variable. For all of the ablations, we set the meta-training condition to Omniglot 5-way any-shot (i.e. 1-to 15-shot) classification with 5 innergradient steps. For the meta-testing condition for each ablation, see each of the tables below. Further, in order to correctly evaluate each variable, we add in each individual component to Meta-SGD, one at a time, dropping all other balancing variables, for both meta-training and meta-testing. We report mean accuracies over 1000 random episodes with 95% confidence intervals. g(z τ) for handling distributional discrepancy. g(z τ) modulates the initial model parameter θ, deciding what and how much to use from the meta-knowledge θ based on the relatedness between θ and the task at hand. Table 3 shows that by adding in Bayesian g(z τ) component to Meta-SGD, we can effectively handle out-of-distribution tasks (MNIST) by significant margin. The histogram in Figure 4 shows that the actual distribution of mask g(z τ) of OOD tasks is more skewed toward zero than the distribution of ID tasks, which agrees with our expectation. f (γ τ) for handling task imbalance. f (γ τ), which is a decaying factor for inner gradient steps, handles inter-task imbalance where each task has different number of examples. Figure 5 shows f (γ τ) values w.r.t. the task size, where it increases monotonically with the number of instances allowing the model to stay close to the initial parameter for few-shot cases and deviate far from it for many-shot cases. Table 4 shows that the larger gains of our model for 1-shot than 5 or 15 shots support that relying on meta-knowledge is useful for improving the performance on smaller tasks. g(ω τ) for handling class imbalance. g(ω τ) rescales the class-specific gradients to handle class imbalance where the number of instances per class largely varies. Table 5 shows the under the varying degree of class imbalance across the task distribution. The degree of class imbalance (×N) means that the maximum number of shots is N times larger than the minimum number of shots within the given task. We observe that our model significantly outperforms baselines especially under the high degree of class imbalance (×5 and ×15). Figure 6 shows that g(ω τ) actually increases the gradient scale of tail-classes (classes with fewer instances), so that we obtain the big improvements on those tail classes. Effectiveness of Bayesian modeling We further demonstrate the effectiveness of Bayesian modeling by comparing it with the deterministic version of our model (Deterministic TAML), where three balancing variables are no longer stochastic and we apply 2 regularization of 10 −3 on them instead of KL-divergence in Eq. 5. 
Table 6 shows the under the same setting of Table 3. The clearly show that the Bayesian modeling greatly contribute to addressing the imbalance problem, especially for the OOD tasks (MNIST). Figure 7 further shows that our balancing variables g(ω τ) and f (γ τ), that are responsible for handling class/task imbalance, more sensitively react to the actual imbalance conditions with the Bayesian modeling (Bayesian TAML) than without Bayesian (Deterministic TAML). Dataset encoding Lastly, we perform an ablation study to validate the effectiveness of the proposed dataset encoding scheme, Set of Sets, for generating the balancing variables. Table 7 shows the performance of various encoding schemes on the imbalanced tiered-ImageNet 5-way classification, the same setting as in Table 1. We see that Set of Sets, equipped with higher-order statistics and hierarchical set encoding, is far more effective than simple mean-pooling method (; ;). We propose Bayesian TAML that learns to balance the effect of meta-learning and task-adaptive learning, to consider meta-learning under a more realistic task distribution where each task and class can have varying number of instances. Specifically, we encode the dataset for each task into hierarchical set-of-sets representations, and use it to generate attention mask for the original parameter, learning rate decay, and the class-specific learning rate. We use a Bayesian framework to infer the posterior of these balancing variables, and propose an effective variational inference framework to solve for them. Our model outperforms existing meta-learning methods when validated on imbalanced few-shot classification tasks. Further analysis of each balancing variable shows that each variable effectively handles task imbalance, class imbalance, and out-of-distribution tasks respectively. We believe that our work makes a meaningful step toward application of meta-learning to real-world problems. A EXPERIMENTAL SETUP A.1 BASELINES AND NETWORK ARCHITECTURE. We describe baseline models and our task-adaptive learning to balance model. Note that all gradientbased models can be extended to take K inner-gradient steps for both meta-training and meta-testing. 1) Meta-Learner LSTM. A meta-learner that learns optimization algorithm with LSTM . The model performs few-shot classification using cosine similarities between the embeddings generated from a shared convolutional network. 2) Prototypical Networks. A metric-based few-shot classification model proposed by . The model learns the metric space based on Euclidean distance between class prototypes and query embeddings. 3) MAML. The Model-Agnostic Meta-Learning (MAML) model by , which aims to learn the global initial model parameter, from which we can take a few gradient steps to get task-specific predictors. A base MAML with the learnable learning-rate vector (without any restriction on sign) element-wisely multiplied to each step inner-gradient. A gradient-based meta-learning model proposed by. The model obtains a task-specific parameter only w.r.t. a subset of the whole dimension (M-Net), followed by a linear transformation to learn a metric space (T-Net). 6) Probabilistic MAML. A probabilistic version of MAML by , where they model task-adaptive inner-gradient steps as a posterior inference process under hierarchical Bayesian framework. 
This model also interprets MAML under hierarchical Bayesian framework, but they propose to share and amortize the inference rules across both global initial parameters as well as the task-specific parameters. 8) Bayesian TAML. Our learning to balance model that can adaptively balance between meta-and task-specific learners for each task and class. A.2 REALISTIC ANY-SHOT CLASSIFICATION. We describe more detailed settings for realistic any-shot classification. Imbalanced Omniglot. We modified the episode generating strategy of C-way classification, which selects the number of shots randomly between 1 to 15 for each of the classes. The metalearning rate β and the total number of iterations are set to 1e-3 and 60000, respectively for all models, and the inner gradient step size α is set to 0.05 for MAML, MT-NET, and ABML and is set to learnable parameters for Meta-SGD and our model. The number of inner-gradient steps is 5 for all gradient-based models and all other components are the same as reported for fixed-way and fixed-shot few-shot classification in the paper for each model. We keep the meta-batch as 1 for all experiments to clearly see the effect of imbalance scenario. We trained models in 10-way 5 inner-gradient steps on the Omniglot, and evaluated with the test split of Omniglot and MNIST. Imbalanced tiered-ImageNet. We modified the episode generating strategy of C-way classification, which selects the number of shots randomly between 1 to 50 for each of the classes. We set the number of query points as 15 and the meta learning rate β is set to 1e-4. Other components are set to the same as referred in Imbalanced Omniglot. We trained models on the tiered-ImageNet, and evaluated with the test split of tiered-ImageNet and mini-ImageNet. We describe the network architecture of the inference network that takes a classification dataset as an input and generates three balancing variables as output. We additionally used two average pooling with 2 × 2 strides before the shared encoder NN 1 with large inputs such as tiered-ImageNet and mini-ImageNet. We empirically found that attaching average pooling reduces computation cost while improving performance. Based on the previous justification of DeepSets , we can easily justify the Set-ofSets structure proposed in the main paper as well, in terms of the two-level permutation invariance properties required for any classification dataset. The main theorem of DeepSets is: Theorem 1. A function f operating on a set X ∈ X is a valid set function (i.e. permutation invariant), iff it can be decomposed as f (X) = ρ 2 (x∈X ρ 1 (x)), where ρ 1 and ρ 2 are appropriate nonlinearities. See for the proof. Here we apply the same argument twice as follows. 1. A function f operating on a set of representations {s 1, . . ., s C} (we assume each s c is an output from a shared function g) is a valid set function (i.e. permutation invariant w.r.t. the order of {s 1, . . ., s C}), iff it can be decomposed as f ({s 1, . . ., s C}) = ρ 2 (C c=1 ρ 1 (s c)) with appropriate nonlinearities ρ 1 and ρ 2. 2. A function g operating on a set of examples {x c,1, . . ., x c,N} is a valid set function (i.e. permutation invariant w.r.t. the order of {x c,1, . . ., x c,N}) iff it can be decomposed as ) with appropriate nonlinearities ρ 3 and ρ 4. Inserting s c = g({x c,1, . . 
., x c,N}) into the expression of f, we arrive at the following valid composite function operating on a set of sets: Let F denote the composite of f and (multiple) g and let NN 2 denote the composite of ρ 1 and ρ 4. Further define NN 1:= ρ 3 and NN 3:= ρ 2. Then, we have where C is the number of classes and N is the number of examples per class. See Section A.3 for the correspondence between Eq. and the actual encoder structure. We provide the comparison between the two approximation schemes for evaluating the expectation of the test example predictions at meta-testing time. Naive approximation means that we take the expectation inside (i.e. we do not sample) and MC approximation means that we perform MonteCarlo integration with sample size S = 10 1. We see from the Table 8 that MC ingetration performs better than the naive approximation, especially with OOD tasks (e.g. MNIST, mini-ImageNet). This is because the predictive distributions involve higher uncertainty for OOD tasks, hence there exists more benefit from considering the large variance than simply ignoring it. and tiered-ImageNet(right) with imbalanced setting. All reported are average performances over 1000 for Omniglot and MNIST and 600 for tiered-ImageNet and mini-ImageNet randomly selected episodes with standard errors for 95% confidence interval over tasks. We further compare our model with the existing meta-learning approaches on conventional few-shot classification with fixed-way and fixed-shot. Omniglot. We report the 20-way classification performance for this dataset. , we use 4-block convolutional neural network architecture with 64 channels for each layer. We set the number of inner-gradient steps K to 5 for both meta-training and meta-testing. Mini-ImageNet. We report the 5-way classification performance with the meta-batch 4 and 2 for 1-and 5-shot, respectively. We reduce the convolution filter size of the 4-block CNN network into 32 to prevent overfitting. We set K = 5 for multi-step models for both meta-training and meta-testing. 95.80 ± 0.30 98.90 ± 0.20 48.70 ± 1.84 63.11 ± 0.92 Meta-SGD 95. 97.60 ± 0.20 99.10 ± 0.10 50.44 ± 0.82 65.32 ± 0.70 MT-NET 96.20 ± 0.40 -51.70 ± 1.84 -Probabilistic MAML --50.13 ± 1.86 -Reptile 89.43 ± 0.14 97.12 ± 0.32 49.97 ± 0.32 65.99 ± 0.58 BMAML --53.80 ± 1.46 -VERSA 97. We first compare our method on conventional fixed-way fixed-shot classification task against existing meta-learning methods. Though the classification with uniformly distributed instances is not the task we aim to tackle, we find that Bayesian TAML outperforms most baseline models, except for the mini-imagenet 5-way 1-shot experiment where BMAML works the best. Table 9: Classification to θ0 inititalization with g(z). θ 0 inititalization with g(z). Applying g(z) to the shared initial parameter θ may set some of the initial parameters to smaller values. Thus, we further compare our model with applying the linear interpolation between θ and a random initialization vector θ random with coefficient g(z) as θ 0 = g(z) • θ + (1 − g(z) • θ random ) to Bayesian TAML (Bayesian g(z) TAML + Interpolation). As shown in the Table 9, those models has no significant difference in the performance. 1 At meta-training time, we perform MC approximation with a single sample for computational efficiency 2 We adopt the accuracies of the Prototypical Network in the setting which the number of shot and way are the same for training and testing phase for the consistency with other methods. 
In the "higher way" setting where 20-way is used during training for 5-way testing, the reported performance of the model is 68.20 ± 0.66%.
A novel meta-learning model that adaptively balances the effects of meta-learning and task-specific learning, as well as class-specific learning within each task.
1,363
scitldr
Many tasks in artificial intelligence require the collaboration of multiple agents. We examine deep reinforcement learning for multi-agent domains. Recent research efforts often take the form of two seemingly conflicting perspectives: the decentralized perspective, where each agent is supposed to have its own controller, and the centralized perspective, where one assumes there is a larger model controlling all agents. In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework. Such a hierarchical structure naturally lets the two perspectives leverage each other's advantages. The idea of combining both perspectives is intuitive and can be well motivated from many real-world systems; however, out of a variety of possible realizations, we highlight three key ingredients, i.e., composed action representation, learnable communication and independent reasoning. With network designs that facilitate these explicitly, our proposal consistently outperforms the latest competing methods both in synthetic experiments and when applied to challenging StarCraft micromanagement tasks. Reinforcement learning (RL) provides a formal framework concerned with how an agent takes actions in an environment so as to maximize some notion of cumulative reward. Recent years have witnessed successful applications of RL to many challenging problems, ranging from game playing [17; 21] to robotics BID8 and other important artificial intelligence (AI) related fields such as BID19. Most of these works study the problem of a single agent. However, many important tasks require the collaboration of multiple agents, for example, the coordination of autonomous vehicles BID1, multi-robot control BID12, network packet delivery BID31 and multi-player games BID24, to name a few. Although multi-agent reinforcement learning (MARL) methods have historically been applied in many settings [1; 31], they were often restricted to simple environments and tabular methods. Motivated by the success of (single-agent) deep RL, where value/policy approximators are implemented via deep neural networks, recent research efforts on MARL also embrace deep networks and target more complicated environments and complex tasks, e.g., [23; 19; 4; 12]. Regardless, it remains an open challenge how deep RL can be effectively scaled to more agents in various situations. Deep RL is notoriously difficult to train. Moreover, the essential state-action space of multiple agents becomes geometrically large, which further exacerbates the difficulty of training for multi-agent deep reinforcement learning (deep MARL for short). From the viewpoint of multi-agent systems, recent methods often take the form of one of two perspectives. That is, the decentralized perspective, where each agent has its own controller, and the centralized perspective, where there exists a larger model controlling all agents. As a consequence, learning can be challenging in the decentralized setting due to the local viewpoints of agents, which perceive a non-stationary environment because of concurrently exploring teammates. On the other hand, under a centralized perspective, one needs to directly deal with parameter search within the geometrically large state-action space originating from the combination of multiple agents.
BID0 StarCraft and its expansion StarCraft: Brood War are trademarks of Blizzard Entertainment TM In this regard, we revisit the idea of master-slave architecture to combine both perspectives in a complementary manner. The master-slave architecture is a canonical communication architecture which often effectively breaks down the original challenges of multiple agents. Such architectures have been well explored in multi-agent tasks [18; 28; 15; 16]. Although our designs vary from these works, we have inherited the spirit of leveraging agent hierarchy in a master-slave manner. That is, the master agent tends to plan in a global manner without focusing on potentially distracting details from each slave agent and meanwhile the slave agents often locally optimize their actions with respect to both their local state and the guidance coming from the master agent. Such idea can be well motivated from many real world systems. One can consider the master agent as the central control of some organized traffic systems and the slave agents as each actual vehicles. Another instantiation of this idea is to consider the coach and the players in a football/basketball team. However, although the idea is clear and intuitive, we notice that our work is among the first to explicitly design master-slave architecture for deep MARL.Specifically, we instantiate our idea with policy-based RL methods and propose a multi-agent policy network constructed with the master-slave agent hierarchy. For both each slave agent and the master agent, the policy approximators are realized using recurrent neural networks (RNN). At each time step, we can view the hidden states/representations of the recurrent cells as the "thoughts" of the agents. Therefore each agent has its own thinking/reasoning of the situation. While each slave agent takes local states as its input, the master agent takes both the global states and the messages from all slave agents as its input. The final action output of each slave agent is composed of contributions from both the corresponding slave agent and the master agent. This is implemented via a gated composition module (GCM) to process and transform "thoughts" from both agents to the final action. We test our proposal (named MS-MARL) using both synthetic experiments and challenging StarCraft micromanagement tasks. Our method consistently outperforms recent competing MARL methods by a clear margin. We also provide analysis to showcase the effectiveness of the learned policies, many of which illustrate interesting phenomena related to our specific designs. In the rest of this paper, we first discuss some related works in Section 2. In Section 3, we introduce the detailed proposals to realize our master-slave multi-agent RL solution. Next, we move on to demonstrate the effectiveness of our proposal using challenging synthetic and real multi-agent tasks in Section 4. And finally Section 5 concludes this paper with discussions on our findings. Before proceeding, we summarize our major contributions as follows• We revisit the idea of master-slave architecture for deep MARL. 
The proposed instantiation effectively combines both the centralized and decentralized perspectives of MARL.• Our observations highlight and verify that composable action representation, independent master/slave reasoning and learnable communication in-between are key factors to be successful in MS-MARL.• Our proposal empirically outperforms recent state-of-the-art methods on both synthetic experiments and challenging StarCraft micromanagement tasks, rendering it a novel competitive MARL solution in general. Current main stream RL methods apply conventional wisdoms such as Q-learning, policy gradient, actor-critic etc. BID23. Recent progress mainly focuses on practical adaptations especially when applying deep neural networks as value/policy approximators. BID9 provides a recent review on deep RL.Although MARL has been studied in the past, they have been focused on simple tasks BID0. Only until recently, with the encouragement from the successes of deep RL, deep MARL has become a popular research area targeting at more complex and realistic tasks, see e.g. [3; 23; 7; 19; 4; 12] etc. and BID6 are among the first to propose learnable communications via back-propagation in deep Qnetworks. However, due to their motivating tasks, both works focused on a decentralized perspective and usually applies to only a limited number of agents., BID4 and BID18 all proposed practical network structure or training strategies from a centralized perspective of MARL. Specifically, BID18 proposed a bidirectional communication channel among all agents to facilitate effective communication and many interesting designs toward the StarCraft micromanagement tasks. BID26 proposed episodic exploration strategy for deterministic policy search and BID4 proposed the concept of stabilizing experience replay for MARL.Note that the above works take only one of the two perspectives and are then inherently missing out the advantages of the other. Perhaps the most related works are from BID2, BID3 and BID11. BID22 proposed the "CommNet", where a broadcasting communication channel among all agents was set up to share global information realized as summation of the output from all individual agents. This design represents an initial version of the proposed master-slave framework, however it does not facilitate an independently reasoning master agent which takes in messages from all agents step by step and processes such information in an recurrent manner. In BID3 and BID11, a global critic was proposed, which could potentially work at a centralized level, however since critics are basically value networks, they do not provide explicit policy guidance. Therefore they tend to work more like a commentator of a game who job is to analyze and criticize the play, rather than a coach coaching the game. As discussed above, the master-slave architecture has already been studied in several multi-agent scenarios. BID17 utilized the master-slave architecture to resolve conflicts between multiple soccer agents; while [28; 15; 16] explored master-slave hierarchy in RL applied to load-balancing and distributed computing environments. Our proposal can be viewed as a revisit to similar ideas for deep MARL. With the proposed designs, we facilitate independent master reasoning at a global level and each slave agent thinking at a local but focused scale, and collectively achieve optimal rewards via effective communication learned with back propagation. 
Compared with existing works, we emphasize such independent reasoning, the importance of which are well justified empirically in the experiments. We consistently outperform existing MARL methods and achieve state-of-the-art performance on challenging synthetic and real multi-agent tasks. Since the master-slave architecture constructs agent hierarchy by definition, another interesting related field is hierarchical RL, e.g. [8; 29]. However, such hierarchical deep RL methods studies the hierarchy regarding tasks or goals and are usually targeting at sequential sub-tasks where the meta-controller constantly generates goals for controllers to achieve. Master-slave architecture, on the other hand, builds up hierarchy of multiple agents and mainly focuses on parallel agent-specific tasks instead, which is fundamentally different from the problems that hierarchical RL methods are concerned with. We start by reiterating that the key idea is to facilitate both an explicit master controller that takes the centralized perspective and organize agents in a global or high level manner and all actual slave controllers work as the decentralized agents and optimize their specific actions relatively locally while depending on information from the master controller. Such an idea can be realized using either value-based methods, policy-based methods or actor-critic methods. Hereafter we focus on introducing an instantiation with policy gradient methods as an example, which also represents the actual solution in all our experiments. In particular, our target is to learn a mapping from states to actions π θ (a t |s t) at any given time t, where s = {s m, s 1, ..., s C} and a = {a 1, ..., a C} are collective states and actions of C agents respectively and θ = {θ m, θ 1, ..., θ C} represents the parameters of the policy function approximator of all agents, including the master agent θ m. Note that we have explicitly formulated s m to represent the independent state to the master agent but have left out a corresponding a m since the master's action will be merged with and represented by the final actions of all slave agents. This design has two benefits: 1) one can now input independent and potentially more global states to the master agent; and meanwhile 2) the whole policy network can be trained end-to-end with signals directly coming from actual actions. In FIG0, we illustrate the whole pipeline of our master-slave multi-agent architecture. Specifically, we demonstrate the network structure unfolded at two consecutive time steps. In the left part, for example, at time step t, the state s consists of each s i of the i-th slave agent and s m = o t of the master agent. Each slave agent is represented as a blue circle and the master agent is represented as a yellow rectangle. All agents are policy networks realized with RNN modules such as LSTM BID5 cells or a stack of RNN/LSTM cells. Therefore, besides the states, all agents also take the hidden state of RNN h t−1 as their inputs, representing their reasoning along time. Meanwhile the master agent also take as input some information from each slave agent c i and broadcasts back its action output to all agents to help forming their final actions. These communications are represented via colored connections in the figure. To merge the actions from the master agent and those from the slave agents, we propose a gated composition module (GCM), whose behavior resembles LSTM. Figure 4 illustrates more details. 
Specifically, this module takes the "thoughts" or hidden states of the master agent h, which later will be added to independent action proposals from the corresponding slave agents a i t. Since such a module depends on both the "thoughts" from the master agent and that from certain slave agent, it facilitates the master to provide different action proposals to individual slave agents. Moreover, we demonstrate in 4 that the GCM module naturally facilitates our model to generalize to cases involving heterogeneous agents. In certain cases, one may also want the master to provide unified action proposals to all agents. This could easily be implemented as a special case where the gate related to the slave's "thoughts" shuts down, which is denoted as regular MS-MARL. As mentioned above, due to our design, learning can be performed in an end-to-end manner by directly applying policy gradient in the centralized perspective. Specifically, one would update all parameters following the policy gradient theorem BID23 as DISPLAYFORM0 where data samples are collected stochastically from each episode {s 1, a 1, r 2, ..., s T −1, a T −1, r T} ∼ π θ and v t = t j=1 r t. Note that, for discrete action space, we applied softmax policy on the top layer and for continous action space, we adopted Gaussian policy instead. We summarize the whole algorithm in Algorithm 1 as follows. DISPLAYFORM1 Update network parameter θ m, θ 1,..., θ C using; As stated above, our MS-MARL proposal can leverage advantages from both the centralized perspective and the decentralized perspective. Comparing with the latter, we would like to argue that, not only does our design facilitate regular communication channels between slave agents as in previous works, we also explicitly formulate an independent master agent reasoning based on all slave agents' messages and its own state. Later we empirically verify that, even when the overall information revealed does not increase per se, an independent master agent tend to absorb the same information within a big picture and effectively helps to make decisions in a global manner. Therefore compared with pure in-between-agent communications, MS-MARL is more efficient in reasoning and planning once trained. On the other hand, when compared with methods taking a regular centralized perspective, we realize that our master-slave architecture explicitly explores the large action space in a hierarchical way. This is so in the sense that if the action space is very large, the master agent can potentially start searching at a coarse scale and leave the slave agents focus their efforts in a more fine-grained domain. This not only makes training more efficient but also more stable in a similar spirit as the dueling Q-network design BID29, where the master agent works as base estimation while leaving the slave agents focus on estimating the advantages. And of course, in the perspective of applying hierarchy, we can extend master-slave to master-master-slave architectures etc. To justify the effectiveness of the proposed master-slave architecture, we conducted experiments on five representative multi-agent tasks and environments in which multiple agents interact with BID21 and originally proposed in BID22, (c)-(f) are mini-games from StarCraft that is included in the TorchCraft platform BID24 each other to achieve certain goals. 
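Before turning to the tasks, the gated composition of master and slave "thoughts" described above can be sketched roughly as follows. The exact gating equations are given in the authors' Figure 4 and are not reproduced here, so this is only one plausible LSTM-style realization; the weight names, the action count and the softmax output are illustrative assumptions, while the hidden size of 50 follows the implementation details reported later in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d_h, n_actions = 50, 6                 # hidden size 50; action count is illustrative (task-dependent)
Wg = rng.normal(scale=0.1, size=(2 * d_h, d_h))    # gate weights    (invented names/shapes)
Wc = rng.normal(scale=0.1, size=(2 * d_h, d_h))    # content weights
Wo = rng.normal(scale=0.1, size=(d_h, n_actions))  # projection to action logits

def gcm(h_master, h_slave):
    """One plausible LSTM-style gated composition of master and slave hidden states."""
    joint = np.concatenate([h_master, h_slave])
    gate = sigmoid(joint @ Wg)                     # decides how much master advice to pass through
    return (gate * np.tanh(joint @ Wc)) @ Wo       # master's action proposal for this slave

def slave_action_logits(h_master, h_slave, own_proposal):
    """Final action logits = slave's own proposal plus the master's gated proposal."""
    return own_proposal + gcm(h_master, h_slave)

h_m, h_s = rng.normal(size=d_h), rng.normal(size=d_h)
own = rng.normal(size=n_actions)
probs = np.exp(slave_action_logits(h_m, h_s, own))
probs /= probs.sum()                               # softmax policy over discrete actions
print(probs)
```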
These tasks and environments have been widely used for the evaluation of popular MARL methods such as [23; 12; 19; 27; 4; 5].The first two tasks are the traffic junction task and the combat task both originally proposed in BID22.(Examples are shown in FIG2 (a)(b)) These two tasks are based on the MazeBase environment BID21 and are discrete in state and action spaces. Detailed descriptions are given below:The traffic junction task As originally designed in BID22, there are three versions of this task corresponding to different hard levels. In our experiments, we choose the hardest version which consists of four connected junctions of two-way roads (as shown in FIG2). During every time step, new cars enter the grid area with a certain probability from each of the eight directions. Each car occupies a single-cell in the grid. All the cars keep to the right-hand side of the road. Note that there are only three possible routes for each car. A car is allowed to perform two kinds of actions at each time step: advances by one cell while keeping on its route or stay still at the current position. Once a car moves outside of the junction area, it will be removed. A collision happens when two cars move to the same location. In our experiments, following the literature, we apply the same state and reward originally designed in BID22.The combat task This environment simulates a simple battle between two opposing teams. The entire area is a 15 × 15 grid as shown in FIG2. Both teams have 5 members and will be born at a random position. Possible actions for each agent are as follow: move one step towards four directions; attack an enemy agent within its firing range; or keep idle. One attack will cause an agent lose 1 point of health. And the initial health point of each agent is set to 3. If the health point reduces to 0, the agent will die. One team will lose the battle if all its members are dead, or win otherwise. Default settings from BID22 are used in our experiments following the literature. The following StarCraft micromanagement tasks originate from the well-known real-time strategy (RTS) game: StarCraft and was originally defined in [25; 27]. Instead of playing a full StarCraft game, the micromanagement tasks involve a local battle between two groups of units. Similar to the combat task, the possible actions also include move and attack. One group wins the game when all the units of the other group are eliminated. However, one big difference from the combat task is that the StarCraft environment is continuous in state and action space, which means a much larger search space for learning. In other words, the combat task can also be considered as a simplified version of StarCraft micromanagement task with discrete states and actions. FIG2 (c)-(e)). All the three tasks are categorized as "hard combats" in BID18. Thus it is quite hard for an AI bot to win the combat without learning a smart policy. On all the three tasks, independent policy methods have been proved to be less effective. Instead, MARL methods must learn effective collaboration among its agents to win these tasks e.g. [19; 27]. In this task, one needs to control 15 Terran marines to defeat a builtin AI that controls 16 Terran marines (showcased in FIG2). Note that the Marines are ranged ground units. Among the "mXvY" tasks defined in BID24, such as m5v5 (5 Marines vs. 5 Marines), m10v10 (10 Marines vs. 10 Marines) m18v18 (18 Marines vs. 18 Marines), the chosen combat "15 Marines vs. 
16 Marines" represents a more challenging version in the sense that the total number of agents is high and that the controlled team has less units than the enemies. Therefore a model need to learn very good strategies to win the battle. As described in BID26, the key to winning this combat is to focus fire while avoiding "overkill". Besides this, another crucial policy -"Spread Out" is also captured in our experiments.10 Marines vs. 13 Zerglings While we still control Marines in this scenario, the enemies belong to another type of ground unit -Zerglings. Unlike Terran Marines, Zerglings can only attack by direct contact, despite their much higher moving speed. Useful strategies also include "focus fire without overkill", which is similar to the "15 Marines vs. 16 Marines" task. Interestingly, unlike the "15M vs. 16M" task, the "Spread Out" strategy is not effective anymore in this case according to our experiments. (An example of this task is given in FIG2 (d))15 Wraiths vs. 17 Wraiths As a contrast to the above two tasks, this one is about Flying units. 15 Wraiths (ranged flying unit) are controlled to fight against 17 opponents of the same type. An important difference from the ground units is that the flying units will not collide with each other. Hence it is possible for flying units to occupy the same tile. In this case, the "Spread Out" strategy may not be important anymore. But it is essential to avoid "overkill" in this task since Wraiths have much longer "cooldown" time and much higher attack damage. (This task is shown in FIG2 (e))Dragoons and Zealots Unlike the previous three StarCraft tasks, this one involves a heterogeneous multi-agent task where 2 Dragoons(attack from remote) and 3 Zealots(attack by direct contact) need to fight against a group of enemies of the same unit type and number. Typical successful policy for this task is to focus fire on enemy zealots (low attack and high defense) at first and then eliminate the Dragoons (high attack and low defense), which was analyzed in detail by BID26. (Illustrated in FIG2 (f))As presented in BID18, the state-of-the-art performance on these tasks are still quite low compared with others. Amazingly, the proposed MS-MARL method achieves much higher winning rates, as demonstrated in later section. We will refer to these tasks as {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W and 2D3Z} for brevity in the rest of this paper. In order to help reproduce our , we hereby provide our very implementation details. Model Architecture A detailed illustration of our implementation is displayed in Figure 4. As introduced in section 3, in our MS-MARL model, both the master agents and the slave agents are implemented using RNN or LSTM to incorporate temporal recurrence. Here in Figure 4, we use LSTM for the master module, and RNN for the slave module. The dimension of the hidden states in RNN or LSTM (including cell states) are all set to 50 for both the master and slave agents. The GCM module in Figure 4 (c) is noted as the "Gated Composition Module", which is introduced in section 3.1 in detail. Note that the action output is different for discrete and continuous tasks. For the traffic junction task and the combat task, the output of the network is designed as the probability of a number of actions since the action space is discrete (actually a Softmax policy). As a contrast, for "15M vs. 16M" "10M vs. 13Z" and "15W vs. 17W", our network directly generates a continuous action following Gaussian policy as described in section 3. 
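As a minimal sketch of this Gaussian policy for the continuous-action tasks (a 3-dimensional action with a fixed standard deviation of 0.05, per the implementation details below), the snippet replaces the recurrent policy network with a simple linear-plus-tanh map; that substitution and all dimensions are illustrative assumptions rather than the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_action, sigma = 20, 3, 0.05        # 3-dim action: type, relative angle, distance
W = rng.normal(scale=0.1, size=(d_state, d_action))   # stand-in for the policy network

def gaussian_policy(state):
    """Sample a ~ N(mu(state), sigma^2 I); return the action and the log-prob gradient wrt W."""
    mu = np.tanh(state @ W)                   # network output used as the Gaussian mean, kept in [-1, 1]
    action = mu + sigma * rng.normal(size=d_action)
    # score function: d log pi / d mu = (a - mu) / sigma^2, chained through tanh and W
    dlogp_dmu = (action - mu) / sigma**2
    grad_W = np.outer(state, dlogp_dmu * (1.0 - mu**2))
    return action, grad_W

# one REINFORCE-style update: score times cumulative return, as in the policy-gradient update above
state, ret, lr = rng.normal(size=d_state), 1.0, 1e-3
action, grad_W = gaussian_policy(state)
W += lr * ret * grad_W                        # ascend the policy-gradient estimate
print(action)
```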
The Softmax/Gaussian action modules are illustrated by Figure 4 D). State Features For the traffic junction task and the combat task, we just apply the original state features designed in BID22. For "15M vs. 16M", "10M vs. 13Z" and "15W vs. 17W", we adopt one representation similar to existing methods [19; 4; 5]. To be specific, the feature of an agent is a map of a certain-sized circle around it, as illustrated in Figure 3. If there is an agent in a cell, a feature vector shown in Fig. 3 will be included in the final observation map. A slightly different state definition is applied for the "2D3Z" task, where unit type is critical for learning correct collaborative policy. In practice, we directly add the type id as one component to the input state feature for the "2D3Z" task. Note that the proposed GCM module facilitates the learning of different strategy for different unit type, which is essential for winning this task. Action Definition For action definitions of the first two tasks, we use the default ones from BID22.Since "15M vs. 16M", "10M vs. 13Z" and "15W vs. 17M" needs a continuous action space. We apply the design from BID18. It contains a 3-dimension vector, each of which is of the range from -1 to 1, representing action type, relative angle and distance to the target respectively. Reward Definition The reward definition of the traffic junction task and the combat task is the same as in BID22. Here we define the reward for "15M vs. 16M", "10M vs. 13Z" and "15W vs. 17W"by simulating the design of that in the combat task. For each time step, there is a reward formulated in 2 to evaluate how the current action of agent i works. DISPLAYFORM0 Note that here ∆n t j∈Nm(i) = n t j∈Nm(i) − n t−1 j∈Nm(i) (the definition of ∆n t k∈Ne(i) is similar) which is actually the change of the number of our units in the neighboring area. N m (i) and N e (i) refers to the sets containing our units and enemy units within a certain range of the current agent i. In experiments we define the range as a circle of a radius of 7 around agent i, which is exactly the circular range as 3 shows. Besides, we also define a terminal reward (as formulated in ) to add to the final reward at the end of each episode. DISPLAYFORM1 Training Procedure The entire training process is shown in Algorithm 1. Note that we follow the softmax policy as proposed in BID22 for the first two discrete tasks. As for the cotinuous tasks {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W}, a Gaussian policy is adopted for stochastic policy parameterization. Specifically, the output of the network is considered as the mean of a Gaussian policy distribution. By fixing the variance to σ = 0.05, the final actions are sampled as follows DISPLAYFORM2 And the score function in can be computed as DISPLAYFORM3 For the traffic junction task, we use a batch size of 16. For the combat task, a larger batch size of 144 is adopted for the training of both CommNet and our master-slave models. And for {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W}, we find that a small batch size of 4 is good enough to guarantee a successful training. The learning rate is set to 0.001 for the first two tasks and 0.0005 for {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W}. The action variance for {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W} is initialized as 0.01 and drops down gradually as the training proceeds. For all tasks, the number of batch per training epoch is set to 100.Baselines For comparison, we select three state-of-the-art MARL methods that have been tested on these tasks of interest. 
A brief introduction of these baselines are given as follow:• CommNet: This method exploits a simple strategy for multi-agent communication. The idea is to gather a message output from all agents and compute the mean value. Then the mean value is spread to all agents again as an input signal of the next time step. In this way, a communication channel is established for all agents. Such a simple strategy has been shown to work reasonably well for the first two tasks in BID22 and for {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W} when explored in BID18.• GMEZO: The full name of this method is GreedyMDP with Episodic Zero-Order Optimization. And it was proposed particularly to solve StarCraft micromanagement tasks in BID26. The main idea is to employ a greedy update over MDP for inter-agent communications, while combining direct exploration in the policy space and backpropagation.• BiCNet: A bidrectional RNN is proposed to enable communication among multiple agents in BID18. The method adopts an Actor-Critic structure for reinforcement learning. It has been shown that their model is able to handle several different micromanagement tasks successfully as well as learn meaningful polices resembling skills of expert human players. TAB2 demonstrate the performance improvement of our method when compared with the baselines. For CommNet we directly run the released code on the traffic junction task and the combat task using hyper-parameters provided in BID22. We compute the mean winning rates in TAB2 by testing the trained models for 100 rounds. However, since the code of GMEZO and BiCNet is not released yet, there is no report of their performance on traffic junction and combat tasks. Therefore we only compare with CommNet on these two tasks. And it can be seen that our MS-MARL model performs better than CommNet on both of the two tasks. To further clarify the contribution of the proposed master-slave scheme, we add another baseline "CommNet + Occupancy Map" which takes its original state as well as the occupancy map explicitly. As is shown, CommNet performs better with global information. However, with an explicit global planner (the "master" agent) designed in our model, such information seems better utilized to facilitate learning of more powerful collaborative polices. The on the more challenging StarCraft micromanagement tasks {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W, 2D3Z} are displayed in TAB3. Obviously, on all the tasks, our MS-MARL method achieves consistently better performance compared to the available of the three baselines. FIG4 shows the training process of CommNet and our MS-MARL method by plotting win rate curves for the first two tasks as well as "15M vs. 16M". As analyzed above, one key difference between our MS-MARL model and CommNet is that we explicitly facilitate an independent master agent reasoning with its own state and messages from all slave agents. From this plot, our MS-MARL model clearly enjoys better, faster and more stable convergence, which highlights the importance of such a design facilitating independent thinking. In the setting of the combat task, we further analyze how different components of our proposal contribute individually. Specifically, we compare the performance among the CommNet model, our MS-MARL model without explicit master state (e.g. the occupancy map of controlled agents in this case), and our full model with an explicit occupancy map as a state to the master agent. 
As shown in FIG7 (a)(b), by only allowing an independently thinking master agent and communication among agents, our model already outperforms the plain CommNet model which only supports broadcasting the same message to all agents. Further more, by providing the master agent with its unique state, our full model finally achieves a significant improvement over the CommNet model. Note here that, every information revealed from the extra occupancy map is by definition already included to each agents state as their locations. Another interesting observation is how the master agent and each of the slave agents contribute to the final action choices (as shown in FIG6). We observe that the master agent does often learn an effective global policy. For example, the action components extracted from the master agent lead the whole team of agents move towards the enemies regions. Meanwhile, all the slave agents adjust their positions locally to gather together. This interesting phenomenon showcases different functionality of different agent roles. Together, they will gather all agents and move to attack the enemies, which seems to be quite a reasonable strategy in this setting. Our method combines the advantages of both centralized and decentralized perspectives. To better demonstrate this point, we compare a set of baselines by modifying the structure of our model. FIG6 (c) we draw the training curve of the first 300 steps for all the baselines. It is evident that our MS-MARL performs better than the other baselines with a quite clear margin(more than 20 percent higher than the second best one). Also we find that it is very hard for a single master agent to control all units (unfortunately it earns a 0% winning chance in this case). While independent agents do learn some winning policy, their performance is far lower from MS-MARL but comparable to CommNet. From these , we can see that a stronger global planner as well as a more powerful gating mechanism for message collection/dispatching is necessary for multi-agent coordination. Note that the CommNet model has already been verified to have learned meaningful policies in these tasks according to [23; 19]. However, in our experiments, we often observe more successful policies learned by our method which may not be captured very well by the CommNet model. For example, in the case of combat task, we often observe that some agents of the CommNet model just fail to find the enemies (potentially due to their limited local views and ineffective communications) and therefore lose the battle in a shorthanded manner, see e.g. FIG7 (a). As a contrast, the learned model of our MS-MARL method usually gathers all agents together first and then moves them to attack the enemies. Another support case is in the task of "15M vs. 16M". In this task, our model learns a particular policy of spreading the agents into a half-moon shape ("Spread Out") to focus fire and attacking the frontier enemies before the others enter the firing range, as demonstrated in FIG9. Actually, this group behavior is similar to the famous military maneuver "Pincer Movement" which is widely exploited in representative battles in the history. Although CommNet sometimes also follows such kind of policy, it often fails to spread to a larger "pincer" to cover the enemies and therefore loses the battle. FIG9 (a) shows one of such examples. The size of the "pincer" seems especially important for winning the t ask of "15M vs. 16M" where we have less units than the enemies. 
From the above analysis, we see that the master module does contribute to a higher performance. To further explore the key ingredients, we generate a visualization of the hidden state from the master agent (projected to 2D with t-SNE BID10, shown in FIG7 (c)). Specifically, we plot the hidden state of two different versions of our MS-MARL model: the blue points version is coupled with a GCM module as well as occupancy map as master's input (with a 61% win rate), while the red points are just from the original version (with only a 55% win rate). Although an explicit explanation of the meanings of the hidden states cannot be certified at this point, what excites us here is that the distribution of master's message does correspond to several representative successful polices. This interesting phenomenon indicates that the hidden states of the master agent effectively carry meaningful messages, according to which, the distribution of successful policies stands out from that of relatively poor policies. In this paper, we revisit the master-slave architecture for deep MARL where we make an initial stab to explicitly combine a centralized master agent with distributed slave agents to leverage their individual contributions. With the proposed designs, the master agent effectively learns to give high-level instructions while the local agents try to achieve fine-grained optimality. We empirically demonstrate the superiority of our proposal against existing MARL methods in several challenging mutli-agent tasks. Moreover, the idea of master-slave architecture should not be limited to any specific RL algorithms, although we instantiate this idea with a policy gradient method, more existing RL algorithms can also benefit from applying similar schemes.
We revisit the idea of the master-slave architecture in multi-agent deep reinforcement learning and show that it outperforms the state of the art.
1,364
scitldr
We study the implicit bias of gradient descent methods in solving a binary classification problem over a linearly separable dataset. The classifier is described by a nonlinear ReLU model and the objective function adopts the exponential loss function. We first characterize the landscape of the loss function and show that there can exist spurious asymptotic local minima besides asymptotic global minima. We then show that gradient descent (GD) can converge to either a global or a local max-margin direction, or may diverge from the desired max-margin direction in a general context. For stochastic gradient descent (SGD), we show that it converges in expectation to either the global or the local max-margin direction if SGD converges. We further explore the implicit bias of these algorithms in learning a multi-neuron network under certain stationary conditions, and show that the learned classifier maximizes the margins of each sample pattern partition under the ReLU activation. It has been observed in various machine learning problems recently that the gradient descent (GD) algorithm and the stochastic gradient descent (SGD) algorithm converge to solutions with certain properties even without explicit regularization in the objective function. Correspondingly, theoretical analysis has been developed to explain such implicit regularization property. For example, it has been shown in Gunasekar et al. (2018; 2017) that GD converges to the solution with the minimum norm under certain initialization for regression problems, even without an explicit norm constraint. Another type of implicit regularization, where GD converges to the max-margin classifier, has been recently studied in;; Nacson et al. (2018a); Soudry et al. (2017; 2018) for classification problems as we describe below. Given a set of training samples z i = (x i, y i) for i = 1,..., n, where x i denotes a feature vector and y i ∈ {−1, +1} denotes the corresponding label, the goal is to find a desirable linear model (i.e., a classifier) by solving the following empirical risk minimization problem It has been shown in Nacson et al. (2018a); Soudry et al. (2017; 2018) that if the loss function (·) is monotonically strictly decreasing and satisfies proper tail conditions (e.g., the exponential loss), and the data are linearly separable, then GD converges to the solution w with infinite norm and the maximum margin direction of the data, although there is no explicit regularization towards the maxmargin direction in the objective function. Such a phenomenon is referred to as the implicit bias of GD, and can help to explain some experimental . For example, even when the training error achieves zero (i.e., the ing model enters into the linearly separable region that correctly classifies the data), the testing error continues to decrease, because the direction of the model parameter continues to have an improved margin. Such a study has been further generalized to hold for various other types of gradient-based algorithms. analyzed the convergence of GD with no assumption on the data separability, and characterized the implicit regularization to be in a subspace-based form. The focus of this paper is on the following two fundamental issues, which have not been well addressed by existing studies.• Existing studies so far focused only on the linear classifier model. An important question one naturally asks is what happens for the more general nonlinear leaky ReLU and ReLU models. 
Will GD still converge, and if so will it converge to the max-margin direction? Our study here provides new insights for the ReLU model that have not been observed for the linear model in the previous studies.• Existing studies mainly analyzed the convergence of GD with the only exceptions; Nacson et al. (2018b) on SGD. did not establish the convergence to the max-margin direction for SGD, and Nacson et al. (2018b) established the convergence to the max-margin solution only epochwisely for cyclic SGD (not iterationwise for SGD under random sampling with replacement). Moreover, both studies considered only the linear model. Here, our interest is to explore the iterationwise convergence of SGD under random sampling with replacement to the max-margin direction, and our can shed insights for online SGD. Furthermore, our study provides new understanding for the nonlinear ReLU and leaky ReLU models. We summarize our main contributions, where our focus is on the exponential loss function under ReLU model. We first characterize the landscape of the empirical risk function under the ReLU model, which is nonconvex and nonsmooth. We show that such a risk function has asymptotic global minima and asymptotic spurious local minima. Such a landscape is in sharp contrast to that under the linear model previously studied in , where there exist only equivalent global minima. Based on the landscape property, we show that the implicit bias property in the course of the convergence of GD can fall into four cases: converges to the asymptotic global minimum along the max-margin direction, converges to an asymptotic local minimum along a local max-margin direction, stops at a finite spurious local minimum, or oscillates between the linearly separable and misclassified regions without convergence. Such a diverse behavior is also in sharp difference from that under the linear model , where GD always converges to the max-margin direction. We then take a further step to study the implicit bias of SGD. We show that the expected averaged weight vector normalized by its expected l 2 norm converges to the global max-margin direction or local max-margin direction, as long as SGD stays either in the linearly separable region or in a region of the local minima defined by a subset of data samples with positive label. The proof here requires considerable new technical developments, which are very different from the traditional analysis of SGD, e.g.,;;;;; BID0 BID1. This is because our focus here is on the exponential loss function without attainable global/local minima, whereas traditional analysis typically assumed that the minimum of the loss function is attainable. Furthermore, our goal is to analyze the implicit bias property of SGD, which is also beyond traditional analysis of SGD.We further extend our analysis to the leaky ReLU model and multi-neuron networks. Implicit bias of gradient descent: studied the implicit bias of GD and SGD for minimizing the squared loss function under bounded global minimum, and showed that some of these algorithms converge to a global minimum that is closest to the initial point. Another collection of papers showed that AdaBoost converges to an approximate max-margin classifier. Soudry et al. (2017; 2018) studied the convergence of GD in logistic regression with linearly separable data and showed that GD converges in direction to the solution of support vector machine at a rate of 1/ ln(t). Nacson et al. 
(2018a) improved this rate to ln(t)/ √ t under the exponential loss via normalized gradient descent. further showed that steepest descent can lead to margin maximization under generic norms. analyzed the convergence of GD on an arbitrary dataset, and provided the convergence rates along the strongly convex subspace and the separable subspace. Our work studies the convergence of GD and SGD under the nonlinear ReLU model with the exponential loss, as opposed to the linear model studied by all the above previous work on the same type of loss functions. analyzed the average SGD (under random sampling) with fixed learning rate and proved the convergence of the population risk, but did not establish the parameter convergence of SGD in the max-margin direction. Nacson et al. (2018b) established the convergence of cyclic SGD epochwisely in direction to the max-margin classifier at a rate O(1/ ln t). Our work differs from these two studies first in that we study the ReLU model, whereas both of these studies analyzed the linear model. Furthermore, we showed that under SGD with random sampling, the expectation of the averaged weight vector converges in direction to the max-margin classifier at a rate O(1/ √ ln t). There have been extensive studies of the convergence and generalization performance of SGD under various models, of which we cannot provide a comprehensive list due to the space limitations. In general, these type of studies either characterize the convergence rate of SGD or provide the generalization error bounds at the convergence of SGD, e.g.,;; , but did not characterize the implicit regularization property of SGD, such as the convergence to the max-margin direction as provided in our paper. We consider the binary classification problem, in which we are given a set of training samples {z 1, . . ., z n}. Each training sample z i = (x i, y i) contains an input data x i and a corresponding binary label y i ∈ {−1, +1}. We denote I +:= {i : y i = +1} as the set of indices of samples with label +1 and denote I −:= {i : y i = −1} in a similar way. Their cardinalities are denoted as n + and n −, respectively, and are assumed to be non-zero. We consider all datasets that are linearly separable, i.e., there exists a linear classifier w such that y i w x i > 0 for all i = 1,..., n. We are interested in training a ReLU model for the classification task. In specific, for a given input data x, the model outputs σ(w x i), where σ(v) = max{0, v} is the ReLU activation function and w denotes the weight parameters. The predicted label is set to be sgn(w x). Our goal is to learn a classifier by solving the following empirical risk minimization problem, where we adopt the exponential loss. DISPLAYFORM0 The ReLU activation causes the loss function in problem (P) to be nonconvex and nonsmooth. Therefore, it is important to first understand the landscape property of the loss function, which is critical for characterizing the implicit bias property of the GD and SGD algorithms. In order to understand the convergence of GD under the ReLU model, we first study the landscape of the loss function in problem (P), which turns out to be very different from that under the linear activation model. As been shown in; , the loss function in problem (P) under linear activation is convex, and achieves asymptotic global minimum, i.e., ∇L(αw *) α → 0 and L(αw *) α → 0 as the scaling constant α → +∞, only if w * is in the linearly separable region. 
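Before contrasting with the ReLU case, the objective in problem (P) and the subgradient used throughout can be made concrete with a short NumPy sketch: L(w) = (1/n) sum_i exp(-y_i * sigma(w'x_i)) with sigma the ReLU, so only samples with w'x_i > 0 contribute to the (sub)gradient. The helper and variable names are ours.

```python
import numpy as np

def relu_exp_loss_and_grad(w, X, y):
    """L(w) = (1/n) * sum_i exp(-y_i * relu(w . x_i)) and one subgradient.
    X: (n, d) features, y: (n,) labels in {-1, +1}, w: (d,) weights."""
    scores = X @ w
    active = scores > 0                                   # ReLU pass-through
    losses = np.exp(-y * np.where(active, scores, 0.0))   # inactive terms equal 1
    coeff = np.where(active, -y * losses, 0.0)            # only active samples
    grad = (coeff[:, None] * X).mean(axis=0)
    return losses.mean(), grad
```

Samples suppressed by the ReLU contribute a constant 1 to the loss and nothing to the gradient, which is exactly what creates the flat regions and spurious asymptotic local minima discussed next.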
In contrast, under the ReLU model, the asymptotic critical points can be either global minimum or (spurious) local minimum depending on the training datasets, and hence the convergence property of GD can be very different in nature from that under the linear model. The following theorem characterizes the landscape properties of problem (P). Throughout, we denote the infimum of the objective function in problem (P) as L * = n − n. Furthermore, we call a direction w * asymptotically critical if it satisfies ∇L(αw *)→0 as α → +∞.Theorem 3.1 (Asymptotic landscape property). For problem (P) under the ReLU model, any corresponding asymptotic critical direction w * fall into one of the following cases: DISPLAYFORM0 To further elaborate Theorem 3.1, if w * classifies all data correctly (i.e., item 1), then the objective function possibly achieves global minimum L * along this direction. On the other hand, if w * classifies some data with label +1 as −1 (item 2), then the objective function achieves a sub-optimal value along this direction. In the worst case where all data samples are classified as −1 (item 3), the ReLU unit is never activated and hence the corresponding objective function has constant value 1. We note that the cases in items 2 and 3 may or may not take place depending on specific datasets, but if they do occur, the corresponding w * are spurious (asymptotic) local minima. In summary, the landscape under the ReLU model can be partitioned into different regions, where gradient descent algorithms can have different implicit bias as we show next. In this subsection, we analyze the convergence of GD in learning the ReLU model. At each iteration t, GD performs the update DISPLAYFORM0 where η denotes the stepsize. For the linear model whose loss function has infinitely many asymptotic global minima, it has been shown in that GD always converges to the max-margin direction. Such a phenomenon is regarded as the implicit bias property of GD. Here, for the ReLU model, we are also interested in analyzing whether such an implicit-bias property still holds. Furthermore, since the loss function under the ReLU model possibly contains spurious asymptotic local minima, the convergence of GD under the ReLU model should be very different from that under the linear model. Next, we introduce various notions of margin in order to characterize the implicit bias under the ReLU model. The global max-margin direction of samples in I + is defined as DISPLAYFORM1 Such a notion of max-margin is natural because the ReLU activation function can suppress negative inputs. We note that here w + may not locate in the linearly separable region, and hence it may not be parallel to any (asymptotic) global minimum. As we show next, only when w + is in the linearly separable region, GD may converge in direction to such a max-margin direction under the ReLU model. Furthermore, for each given subset J + ⊆ I +, we define the associated local max-margin direction w DISPLAYFORM2 We further denote the set of asymptotic local minima with respect to J + ⊆ I + (see Theorem 3.1 item 2) as DISPLAYFORM3 Theorem 3.2. Apply GD to solve problem (P) with arbitrary initialization and a small enough constant stepsize. Then, the sequence {w t} t generated by GD falls into one of the following cases. DISPLAYFORM4 DISPLAYFORM5 n, and w t =ŵ + J, where J + = ∅, i.e., GD terminates within finite steps. 
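A toy GD run makes these cases tangible. In the sketch below (our own toy dataset and names, not an experiment from the paper), the positive and negative clusters are placed so that GD ends up in the first case of Theorem 3.2: the norm of w grows without bound while its direction approaches the max-margin direction of the samples in I+. Other datasets or initializations can instead realize the local-max-margin, finite-termination, or oscillating cases.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2.0, 1.0], 0.3, size=(20, 2)),     # I+ cluster
               rng.normal([-2.0, -1.0], 0.3, size=(20, 2))])  # I- cluster
y = np.array([1.0] * 20 + [-1.0] * 20)

w, eta = 0.1 * rng.normal(size=2), 0.1
for t in range(20000):
    s = X @ w
    coeff = np.where(s > 0, -y * np.exp(-y * np.where(s > 0, s, 0.0)), 0.0)
    w -= eta * (coeff[:, None] * X).mean(axis=0)

d = w / np.linalg.norm(w)
print("||w_t|| keeps growing:", np.linalg.norm(w))
print("margin of w_t/||w_t|| on I+:", (X[y > 0] @ d).min())
```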
Theorem 3.2 characterizes various instances of implicit bias of GD in learning the ReLU model, which the nature of the convergence is different from that in learning the linear model. In specific, GD can either converge in direction to the global max-margin direction w + that leads to the global minimum, or converge to the local max-margin direction w + J that leads to a spurious local minimum. Furthermore, it may occur that GD oscillates between the linearly separable region and the misclassified region due to the suppression effect of ReLU function. In this case, GD does not have an implicit bias property and convergence guarantee. We provide two simple examples in the supplementary material to further elaborate these cases. In this subsection, we analyze the convergence property and the implicit bias of SGD for solving problem (P). At each iteration t, SGD samples an index ξ t ∈ {1, . . ., n} uniformly at random with replacement, and performs the update DISPLAYFORM0 Similarly to the convergence of GD characterized in Theorem 3.2, SGD may oscillate between the linearly separable and misclassified regions. Therefore, our major interest here is the implicit bias of SGD when it does converge either to the asymptotic global minimum or local minimum. Thus, without loss of generality, we implicitly assume that w + is in the linearly separable region, and the relevant w + J ∈ W + J. Otherwise, SGD does not even converge. The implicit bias of SGD with replacement sampling has not been studied in the existing literature, and the proof of the convergence and the characterization of the implicit bias requires substantial new technical developments. In particular, traditional analysis of SGD under convex functions requires the assumption that the variance of the gradient is bounded; BID1; BID0. Instead of making such an assumption, we next prove that SGD enjoys a nearlyconstant bound on the variance up to a logarithmic factor of t in learning the ReLU model. Proposition 1 (Variance bound). Apply SGD to solve problem (P) with any initialization. If there exists T such that for all t > T, w t either stays in the linearly separable region, or in W + J, then with stepsize η k = (k + 1) −α where 0.5 < α < 1, the variances of the stochastic gradients sampled by SGD along the iteration path satisfy that for all t, DISPLAYFORM1 Proposition 1 shows that the summation of the norms of the stochastic gradients grows logarithmically fast. This implies that the variance of the stochastic gradients is well-controlled. In particular, if we choose η k = (k+1) −1/2, then the bound in Proposition 1 implies that the term E ∇ (w k, z ξ k) 2 stays at a constant level. Based on the variance bound in Proposition 1, we next establish the convergence rate of SGD for learning the ReLU model. Throughout, we denote w t:= then with the stepsize η k = (k + 1) −α, where 0.5 < α < 1, the averaged iterates generated by SGD satisfies DISPLAYFORM2 If there exist T such that for all t > T, w t stays in W + J, then with the same stepsize DISPLAYFORM3 Theorem 3.3 establishes the convergence rate of the expected risk of the averaged iterates generated by SGD. It can be seen that the convergence of SGD achieves different loss values corresponding to global and local minimum in different regions. The stepsize is set to be diminishing to compensate the variance introduced by SGD. 
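For reference, the sketch below sets up exactly this SGD variant: uniform sampling with replacement, the diminishing stepsize eta_k = (k+1)^(-alpha) with 0.5 < alpha < 1, and the running average w_bar_t whose expected direction is analyzed later. Whether the iterates settle in the linearly separable region or in W+_J (and hence which max-margin direction is approached) depends on the data and initialization; the function name and arguments are ours.

```python
import numpy as np

def sgd_relu_exp(X, y, alpha=0.6, num_steps=200_000, seed=0):
    """SGD with replacement on problem (P); returns (w_t, averaged w_bar_t)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = 0.01 * rng.normal(size=d)      # avoid the all-inactive point w = 0
    w_sum = np.zeros(d)
    for k in range(num_steps):
        i = rng.integers(n)                            # uniform, with replacement
        s = float(X[i] @ w)
        grad = -y[i] * np.exp(-y[i] * s) * X[i] if s > 0 else 0.0
        w = w - (k + 1) ** (-alpha) * grad             # eta_k = (k+1)^(-alpha)
        w_sum += w
    return w, w_sum / num_steps
```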
In particular, if α is chosen to be sufficiently close to 0.5, then the convergence rate is nearly of the order O(ln 2 t/ √ t), which matches the standard of SGD in convex optimization up to an logarithmic order. Theorem 3.3 also implies that the convergence of SGD is attained as Ew t → +∞ at a rate of O(ln t). We note that the analysis of Theorem 3.3 is different from that of SGD in traditional convex optimization, which requires the global minimum to be achieved at a bounded point and assumes the variance of the stochastic gradients is bounded by a constant Shalev-Shwartz et al. FORMULA19 −α where 0.5 < α < 1, the sequence of the averaged iterate {w t} t generated by SGD satisfies DISPLAYFORM4 If there exist T such that for all t > T, w t stays in W + J, then with the same stepsize DISPLAYFORM5 Theorem 3.4 shows that the direction of the expected averaged iterate E[w t] generated by SGD converges to the max-margin direction w +, without any explicit regularizer in the objective function. The proof of Theorem 3.4 requires a detailed analysis of the SGD update under the ReLU model and is substantially different from that under the linear model;; Nacson et al. (2018a; b). In particular, we need to handle the variance of the stochastic gradients introduced by SGD and exploit its classification properties under the ReLU model. We next provide an example class of datasets (which has been studied in), for which we show that SGD stays stably in the linearly separable region. Proposition 2. If the linear separable samples {z 1, . . ., z n} satisfy the following conditions given in Combes et al. FORMULA0: DISPLAYFORM6 then there exists at ∈ N such that for all t ≥t the sequence generated by SGD stays in the linearly separable region, as long as SGD is not initialized at the local minima described in item 3 of Theorem 3.1. The leaky ReLU activation takes the form σ(v) = max (αv, v), where the parameter (0 ≤ α ≤ 1). Clearly, leaky ReLU takes the linear and ReLU models as two special cases, respectively corresponding to α = 0 and α = 1. Since the convergence of GD/SGD of the ReLU model is very different from that of the linear model, a natural question to ask is whether leaky ReLU with intermediate parameters 0 < α < 1 takes the same behavior as the linear or ReLU model. It can be shown that the loss function in problem (P) under the leaky ReLU model has only asymptotic global minima achieved by w * in the separable region with infinite norm (there does not exist asymptotic local minima). Hence, the convergence of GD is similar to that under the linear model, where the only difference is that the max-margin classifier needs to be defined based on leaky ReLU as follows. For the given set of linearly separable data samples, we construct a new set of data z * DISPLAYFORM0 +, x * i = αx i, ∀i ∈ I −, and y * i = y i, ∀i ∈ I + ∪ I −. Essentially, the data samples with label −1 are scaled by the parameter α of leaky ReLU. Without loss of generality, we assume that the max-margin classifier for data {x * i} passes through the origin after a proper translation. Then, we define the max-margin direction of data X * as DISPLAYFORM1 Then, following the under the linear model in , it can be shown that GD with arbitrary initialization and small constant stepsize for solving problem (P) under the leaky ReLU model satisfies that L(w) converges to zero, and w converges to the max-margin direction, i.e., lim t→∞ wt wt = w *, with its norm going to infinity. 
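The data rescaling used for this leaky ReLU analysis is simple to state in code: samples with label -1 are scaled by the leaky slope alpha, positively labeled samples and all labels are unchanged, and the (linear) max-margin direction of the rescaled data is the relevant limit direction. The helper name is ours.

```python
import numpy as np

def leaky_relu_rescaled_data(X, y, alpha):
    """x_i* = x_i for y_i = +1 and x_i* = alpha * x_i for y_i = -1; y* = y."""
    scale = np.where(y > 0, 1.0, alpha)
    return X * scale[:, None]
```

The max-margin direction of the rescaled data can then be estimated with any hard-margin linear separator, for example a linear SVM with a large penalty parameter.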
Furthermore, following our of Theorem 3.4, it can be shown that for SGD applied to solve problem (P) with any initialization, if there exists T such that for all t > T w t stays in the linearly separable region, then with the stepsize η k = (k + 1) −α, 0.5 < α < 1, the sequence of the averaged iterate {w t} t generated by SGD satisfies DISPLAYFORM2 Thus, for SGD under the leaky ReLU model, the normalized average of the parameter vector converges in direction to the max-margin classifier. In this subsection, we extend our study of the ReLU model to the problem of training a one-hiddenlayer ReLU neural network with K hidden neurons for binary classification. Here, we do not assume linear separability of the dataset. The output of the network is given by DISPLAYFORM0 where W = [w 1, w 2, · · ·, w K] with each column w k representing the weights of the kth neuron in the hidden layer, DISPLAYFORM1 denotes the weights of the output neuron, and σ(·) represents the entry-wise ReLU activation function. We assume that v is a fixed vector whose entries are nonzero and have both positive and negative values. Such an assumption is natural as it allows the model to have enough capacity to achieve zero loss. The predicted label is set to be the sign of f (x), and the objective function under the exponential loss is given by DISPLAYFORM2 Our goal is to characterize the implicit bias of GD and SGD for learning the weight parameters W of the multi-neuron model. In general, such a problem is challenging, as we have shown that GD may not converge to a desirable classifier even under the single-neuron ReLU model. For this reason, we adopt the same setting as that in (, Corollary 8), which assumes that the activated neurons do not change their activation status and the training error converges to zero after a sufficient number of iterations, but our presented below characterizes the implicit bias of GD and SGD in the original feature space, which is different from that in (, Corollary 8). We define a set of vectors DISPLAYFORM3, where A j i = 1 if the sample x i is activated on the jth neuron, i.e., w j x i > 0, and set A j i = 0 otherwise. Such an A i vector is referred to as the activation pattern of x i. We then partition the set of all training samples into m subsets B 1, B 2, · · ·, B m, so that the samples in the same subset have the same ReLU activation pattern, and the samples in different subsets have different ReLU activation patterns. We call B h, h ∈ [m] as the h-th pattern partition. Let w h = k∈{j:A j h =1} v k w k. Then, for any sample x ∈ B h, the output of the network is given by DISPLAYFORM4 We next present our characterization of the implicit bias property of GD and SGD under the above ReLU network model. We define the corresponding max-margin direction of the samples in B h as DISPLAYFORM5 Then the following theorem characterizes the implicit bias of GD under the multi-neuron network. Theorem 4.1. Suppose that GD optimizes the loss L(W) in eq. to zero and there exists T such that for all t > T, the neurons in the hidden layer do not change their activation status. 
If A h1 ∧ A h2 = 0 (where "∧" denotes the entry-wise logic operator "AND" between digits zero or one) for any h 1 = h 2, then the samples in the same pattern partition of the ReLU activation have the same label, and DISPLAYFORM6 Differently from (, Corollary 8) which studies the convergence of the vectorized weight matrix so that the implicit bias of GD is with respect to features being lifted to an extended dimensional space, Theorem 4.1 characterizes the convergence of the weight parameters and the implicit bias in the original feature space. In particular, Theorem 4.1 implies that although the ReLU neural network is a nonlinear classifier, f (x) is equivalent to a ReLU classifier for the samples in the same pattern partition (that are from the same class), which converges in direction to the maxmargin classifier w h of those data samples. We next letw k=0 w h (t). Then the following theorem establishes the implicit bias of SGD. Theorem 4.2. Suppose that SGD optimizes the loss L(W) in eq. so that there exists T such that for any t > T, L(W) < 1/n, the neurons in the hidden layer do not change their activation status, and for any h 1 = h 2, A h1 ∧ A h2 = 0. Then, for the stepsize η k = (k + 1) −α, 0.5 < α < 1, the samples in the same pattern partition of the ReLU activation have the same label, and DISPLAYFORM7 Similarly to GD, the averaged SGD in expectation maximizes the margin for every sample partition. At the high level, Theorem 4.1 and Theorem 4.2 imply the following generalization performance of the ReLU network under study. After a sufficiently large number of iterations, the neural network partitions the data samples into different subsets, and for each subset, the distance from the samples to the decision boundary is maximized by GD and SGD. Thus, the learned classifier is robust to small perturbations of the data, ing in good generalization performance. In this paper, we study the problem of learning a ReLU neural network via gradient descent methods, and establish the corresponding risk and parameter convergence under the exponential loss function. In particular, we show that due to the possible existence of spurious asymptotic local minima, GD and SGD can converge either to the global or local max-margin direction, which in the nature of convergence is very different from that under the linear model in the previous studies. We also discuss the extensions of our analysis to the more general leaky ReLU model and multi-neuron networks. In the future, it is worthy to explore the implicit bias of GD and SGD in learning multilayer neural network models and under more general (not necessarily linearly separable) datasets. A PROOF OF THEOREM 3.1The gradient ∇L(w) is given by DISPLAYFORM0 If y i w * x i ≥ 0 for all x i ∈ I + ∪ I −, then as α → +∞, we have,, DISPLAYFORM1 and ∇L(αw DISPLAYFORM2 and ∇L(αw DISPLAYFORM3 The proof is now complete. First consider the case when w + is in linearly separable region and the local minimum does not exist along the updating path. We call the region where all vectors w ∈ R d satisfy w x i < 0 for all i ∈ I − as negative correctly classified region. As shown in , L(w) is non-negative and L-smooth, which implies that DISPLAYFORM0 Based on the above inequality, we have DISPLAYFORM1 which, in conjunction with 0 < η < 2/L, implies that DISPLAYFORM2 Thus, we have ∇L(w k) 2 → 0 as k → +∞. 
By Theorem 3.1, ∇L(w k) vanishes only when all samples with label −1 are correctly classified, and thus GD enters into the negative correctly classified region eventually and diverges to infinity. Theorem 3 shows that when GD diverges to infinity, it simultaneously converges in the direction of the max-margin classifier of all samples satisfying w t x i > 0. Thus, under our setting, GD either converges in the direction of the global max-margin classifer w +: DISPLAYFORM3 or the local max-margin classifier w DISPLAYFORM4 Next, consider the case when w + is not in linearly separable region, and the local minimum does not exist along the updating path. In such a case, we conclude that GD cannot stay in the linearly separable region. Otherwise, it converges in the direction of w + that is not in linearly separable region, which leads to a contradiction. If the asymptotic local minimum w + J exists, then GD may converge in its direction. If w + J does not exist, GD cannot stay in both the misclassified region and linearly separable region, and thus oscillates between these two regions. In the case when GD reaches a local minimum, by Theorem 3.1, we have ∇L(w *) = 0, and thus GD stops immediately and does not diverges to infinity. Example 1 FIG4. The dataset consists of two samples with label +1 and one sample with label −1. These samples satisfy x 1 x 3 < 0 and x 1 x 2 < 0.For this example, if we initialize GD at the green classifier, then GD converges to the max-margin direction of the sample (x 1, +1). Clearly, such a classifier misclassifies the data sample (x 2, +1). Example 2 FIG4. The dataset consists of one sample with label +1 and one sample with label −1. These two samples satisfy 0 < x 1 x 2 ≤ 0.5 x 2 2.For this example, if we initialize at the green classifier, then GD oscillates around the direction x 2 / x 2 and does not converge. Consider the first iteration. Note that the sample z 3 has label −1, and from the illustration of FIG4 (left) we have w 0 x 3 < 0, w 0 x 2 < 0 and w 0 x 1 > 0. Therefore, only the sample z 1 contributes to the gradient, which is given by DISPLAYFORM0 By the update rule of GD, we obtain that for all t DISPLAYFORM1 By telescoping eq. FORMULA37, it is clear that any w t x 2 < 0 for all t since x 1 x 2 < 0. This implies that the sample z 2 is always misclassified. Since we initialize GD at w 0 such that w 0 x 1 > 0 and w 0 x 2 < 0, the sample z 2 does not contribute to the GD update due to the ReLU activation. Next, we argue that there must exists a t such that w t x 2 > 0. Suppose such t does not exist, we always have w t x 1 = (w 0 + t−1 k=0 exp(−w k x 1)x 1 ) x 1 > 0. Then, the linear classifier w t generated by GD stays between x 1 and x 2, and the corresponding objective function reduces to a linear model that depends on the sample z 1 (Note that z 2 contributes a constant due to ReLU activation). Following from the in; Soudry et al. FORMULA0 for linear model, we conclude that w t converges to the max-margin direction x1 x1 as t → +∞. Since x 1 x 2 > 0, this implies that w t x 2 > 0 as t → +∞, contradicting with the assumption. Next, we consider the t such that w t x 1 > 0 and w t x 2 > 0, the objective function is given by L(w t) = exp(−w t x 1) + exp(w t x 2), and the corresponding gradient is given by DISPLAYFORM0 Next, we consider the case that w t x 1 > 0 for all t. Otherwise, both of x 1 and x 2 are on the negative side of the classifier and GD cannot make any progress as the corresponding gradient is zero. 
In the case that w t x 1 > 0 for all t, by the update rule of GD, we obtain that DISPLAYFORM1 Clearly, the sequence {w t x 2} t is strictly decreasing with a constant gap, and hence within finite steps we must have w t x 2 ≤ 0. Since SGD stays in the linearly separable region eventually, and hence only the data samples in I + contribute to the gradient update due to the ReLU activation function. For this reason, we reduce the original minimization problem (P) to the following optimization DISPLAYFORM0 which corresponds to a linear model with samples in I +. Similarly, if SGD stays in W + J, only the data samples in J + contribute the the gradient update, the original minimization problem (P) is reduced to DISPLAYFORM1 The proof contains three main steps. Step 1: For any u, bounding the term E w t − u 2: By the update rule of SGD, we have DISPLAYFORM2 where M t = 2η t−1 ∇L(w t−1) − ∇ (w t−1, z ξt), w t−1 − u. By convexity we obtain that ∇L(w t−1), w t−1 − u ≥ L(w t−1) − L(u). Then, eq. further becomes DISPLAYFORM3 Telescoping the above inequality yields that DISPLAYFORM4 Taking expectation on both sides of the above inequality and note that EM t = 0 for all t, we further obtain that DISPLAYFORM5 Note that ≤ 1 whenever the data samples are correctly classified and for all i ∈ I +, x i ≤ B, and without loss of generality, we can assume B < √ 2. Hence, the term E ∇ (w k, z ξ k) 2 can be upper bounded by DISPLAYFORM6 Then, noting that η k ≤ 1, eq. can be upper bounded by DISPLAYFORM7 Next, set u = (ln(t)/γ)ŵ + and note thatŵ + x i ≥ γ for all i ∈ I +, we conclude that DISPLAYFORM8 Substituting this into the above inequality and noting that η k = (k + 1) −α and 0.5 < α < 1, we further obtain that DISPLAYFORM9 Step 2: lower bounding E w t − u 2: Note that only the samples in I + contribute to the update rule. By the update rule of SGD, we obtain that DISPLAYFORM10 which further implies that DISPLAYFORM11 Then, we can lower bound w t − u as DISPLAYFORM12 Taking the expectation of w t − u 2: DISPLAYFORM13 where (i) follows from Jensen's inequality. Step 3: Upper bounding DISPLAYFORM14 Combining the upper bound obtained in step 1 and the lower bound obtained in step 2 yields that DISPLAYFORM15 Solving the above quadratic inequality yields that DISPLAYFORM16 E PROOF OF THEOREM 3.3The proof exploits the iteration properties of SGD and the bound on the variance of SGD established in Proposition 1.We start the proof from eq. FORMULA0, following which we obtain DISPLAYFORM17 Taking the expectation on both sides of the above inequality yields that DISPLAYFORM18 which, after telescoping, further yields that DISPLAYFORM19 DISPLAYFORM20 in conjunction with eq., yields that DISPLAYFORM21 where (i) follows from the fact that DISPLAYFORM22 Thus, we can see that L(w t) decreases to 0 at a rate of O(ln 2 (t)/t 1−α ). If we choose α to be close to 0.5, the best convergence rate that can be achieved is O(ln 2 (t)/ √ t).F PROOF OF THEOREM 3.4 DISPLAYFORM23 We first present four technical lemmas that are useful for the proof of the main theorem. Lemma F.1. Given the stepsize η k+1 = 1/(k + 1) −α and the initialization w 0s, then for t ≥ 1, we have DISPLAYFORM24 Lemma F.2. Let X + represent the data matrix of all samples with the label +1, with each row representing one sample. Then we have: DISPLAYFORM25 ∆ n−1 is the simplex in R n. If the equality holds (i.e., the strong duality holds) at q andŵ +, then they satisfyŵ DISPLAYFORM26 andŵ + is the max-margin classifier of samples with the label +1. 
DISPLAYFORM27 We next apply the lemmas to prove the main theorem. Taking the expectation of the SGD update rule yields that DISPLAYFORM28 ). Applying the above equation recursively, we further obtain that DISPLAYFORM29 which further leads to DISPLAYFORM30 Next, we prove the convergence of the direction of E[w t] to the max-margin direction as follows. 1 2 DISPLAYFORM31 where (i) follows from Lemma F.4, (ii) follows from Lemma F.3 and (iii) is due to eq.. Since following from Lemma F.1 we have that Ew t = O(ln(t)), the above inequality then implies that DISPLAYFORM32 Proof of Lemma F.1. Since x i ≤ B for all i, we obtain that DISPLAYFORM0 By convexity, we have that EL(w t) ≥ L(Ew t), combining which with the above bounds further yields DISPLAYFORM1 That is, the increasing rate of E w t is at least O(ln(t)).Proof of Lemma F.2. Following from the definition of the max-margin, we have DISPLAYFORM2 where f * (a) = max and g(e) = e, respectively, where a, b, c, d are generic vectors. We also denote ∂f (c) and ∂g(e) the subgradient set of f and g at e and c respectively. By the Fenchel-Rockafellar duality , we obtain that DISPLAYFORM3 In particular, the strong duality holds at q andŵ + if and only if −X + w ∈ ∂f (q) andŵ + ∈ ∂g(X + q). Thus, we conclude thatŵ DISPLAYFORM4 Proof of Lemma F.3. By Taylor's expansion and the update of SGD, we obtain that DISPLAYFORM5 where w = θw k−1 + (1 − θ)w k for certain 0 ≤ θ ≤ 1, and is in the linear separable region. Note that for any v, DISPLAYFORM6 where S is the maximum of L(w) in the linearly separable region. We note that S < +∞ because w → ∞ in the linearly separable region and hence L(w) → 0. Taking the expectation on both sides of eq. and recalling that DISPLAYFORM7 we obtain that DISPLAYFORM8 Denote X ∈ R n×d as the data matrix with each row corresponding to one data sample. The derivative of the empirical risk can be written as ∇L(w) = X T l(w)/n, where l(w) = [(w, z 1), (w, z 2),..., (w, z n)]. Then, we obtain that EL(w p) = 1 n + i∈I + E exp(−w p x i) = 1 n + E(l(w p)) 1 and E∇L(w p) = 1 n + i∈I + E exp(−w p x i)x i = 1 n + X + E(l(w p)).Based on the above relationships and Lemma F.2, we obtain that Proof of Lemma F.4. Define h(y) = ln 1 n + i∈I + exp(y i), and then its dual function h * (q) = ln n + + q i ln(q i) ≤ ln n +. Following from Lemma F.2,ŵ + = 1 γ X +T q. Then, by the FenchelYoung inequality, we obtain that DISPLAYFORM9 (ln E(L(w k)) + ln n + ). Under our ReLU model, in the linearly separable region, the gradient ∇L(w) is given by ∇L(w) = − 1 n n i=1 y i 1 {w xi>0} exp(−y i w x i) x i = − 1 n i∈I + exp(−w x i)x i.Thus, only samples with positive classification output, i.e. σ(w t x ξt) > 0, contribute to the SGD updates. We first prove w t < +∞ when there exist misclassified samples. Suppose, toward contradiction, that w t = +∞ as t → +∞ when misclassified samples exist. Note that DISPLAYFORM0 Since w t is infinite, at least one of the coefficients α i, i = 1, · · ·, n is infinite. No loss of generality, we assume α p = +∞. Then, the inner product DISPLAYFORM1 Based on the data selected in Proposition 2, we obtain for ∀i ∈ I − ∪ I + ∀j ∈ I +, y i x i x j > 0 ∀j ∈ I −, y i x i x j < 0, which, in conjunction with eq., implies that, if there exist j ∈ I +, then the first term in the right side of eq. FORMULA19 is finite, the second term is positive, and the third term is positive and infinite. As a , we conclude that for ∀j ∈ I +, w t x j > 0 as t → +∞. 
Similarly, we can prove that for ∀j ∈ I −, w t x j ≤ 0 as t → +∞, which contracts that w t x j > 0. Thus, if there exist misclassified samples, then we have w t < +∞.Based on the update rule of SGD, we have, for any j w t+1 x j − w t x j = η exp(−y ξt w t x ξt) y ξt x ξt x j = ξt,j.It can be shown that ∀j ∈ I − ∪ I +, y ξt ξt,j > 0, which, combined with eq., implies that, if one sample is correctly classified at iteration t, it remains to be correctly classified in the following iterations. Next, we prove that when w t < +∞, all samples are correctly classified within finite steps. Define DISPLAYFORM2 Since w t < ∞, there exists a constant C such that w t < C for all t. Let D = max i∈I + x i. Then, we obtain, for any j ∈ I + and ξ t ∈ I +, ξt,j = η exp(−w t x ξt)x ξt x j ≥ η exp(−CD) ++, and for any j ∈ I + and ξ t ∈ I −, ξt,j = −η exp(w t x ξt)x ξt x j ≥ η +−.Combining the above two inequalities ∀j ∈ I + yields ξt,j ≥ η min {exp(−CD) ++, η +− }.
We study the implicit bias of gradient methods in solving a binary classification problem with nonlinear ReLU models.
1,365
scitldr
Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions in resource-limited scenarios. A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs) that does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computationally difficult and not-always-useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: first to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels to be constant, and then to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the ing compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented our approach through several image learning benchmarks and demonstrate its interest- ing aspects and competitive performance. Not all computations in a deep neural network are of equal importance. In a typical deep learning pipeline, an expert crafts a neural architecture, which is trained using a prepared dataset. The success of training a deep model often requires trial and error, and such loop usually has little control on prioritizing the computations happening in the neural network. Recently researchers started to develop model-simplification methods for convolutional neural networks (CNNs), bearing in mind that some computations are indeed non-critical or redundant and hence can be safely removed from a trained model without substantially degrading the model's performance. Such methods not only accelerate computational efficiency but also possibly alleviate the model's overfitting effects. Discovering which subsets of the computations of a trained CNN are more reasonable to prune, however, is nontrivial. Existing methods can be categorized from either the learning perspective or from the computational perspective. From the learning perspective, some methods use a dataindependent approach where the training data does not assist in determining which part of a trained CNN should be pruned, e.g. BID7 and, while others use a datadependent approach through typically a joint optimization in generating pruning decisions, e.g., BID4 and BID1. From the computational perspective, while most approaches focus on setting the dense weights of convolutions or linear maps to be structured sparse, we propose here a method adopting a new conception to achieve in effect the same goal. Instead of regarding the computations of a CNN as a collection of separate computations sitting at different layers, we view it as a network flow that delivers information from the input to the output through different channels across different layers. We believe saving computations of a CNN is not only about reducing what are calculated in an individual layer, but perhaps more importantly also about understanding how each channel is contributing to the entire information flow in the underlying passing graph as well as removing channels that are less responsible to such process. 
Inspired by this new conception, we propose to design a "gate" at each channel of a CNN, controlling whether its received information is actually sent out to other channels after processing. If a channel "gate" closes, its output will always be a constant. In fact, each designed "gate" will have a prior intention to close, unless it has a "strong" duty in sending some of its received information from the input to subsequent layers. We find that implementing this idea in pruning CNNs is unsophisticated, as will be detailed in Sec 4.Our method neither introduces any extra parameters to the existing CNN, nor changes its computation graph. In fact, it only introduces marginal overheads to existing gradient training of CNNs. It also possess an attractive feature that one can successively build multiple compact models with different inference performances in a single round of resource-intensive training (as in our experiments). This eases the process to choose a balanced model to deploy in production. Probably, the only applicability constraint of our method is that all convolutional layers and fully-connected layer (except the last layer) in the CNN should be batch normalized BID9. Given batch normalization has becomes a widely adopted ingredient in designing state-of-the-art deep learning models, and many successful CNN models are using it, we believe our approach has a wide scope of potential impacts. In this paper, we start from rethinking a basic assumption widely explored in existing channel pruning work. We point out several issues and gaps in realizing this assumption successfully. Then, we propose our alternative approach, which works around several numerical difficulties. Finally, we experiment our method across different benchmarks and validate its usefulness and strengths. Reducing the size of neural network for speeding up its computational performance at inference time has been a long-studied topic in the communities of neural network and deep learning. Pioneer works include Optimal Brain Damage BID11 and Optimal Brain Surgeon BID5. More recent developments focused on either reducing the structural complexity of a provided network or training a compact or simplified network from scratch. Our work can be categorized into the former, thus the literature review below revolves around reducing the structural complexity. To reduce the structural complexity of deep learning models, previous work have largely focused on sparsifying the weights of convolutional kernels or the feature maps across multiple layers in a network BID1 BID4. Some recent efforts proposed to impose structured sparsity on those vector components motivated from the implementation perspective on specialized hardware BID18 BID21 BID0 BID10 ). Yet as argued by BID16, regularization-based pruning techniques require per layer sensitivity analysis which adds extra computations. Their method relies on global rescaling of criteria for all layers and does not require sensitivity estimation, a beneficial feature that our approach also has. To our knowledge, it is also unclear how widely useful those works are in deep learning. In Section 3, we discuss in details the potential issues in regularization-based pruning techniques potentially hurting them being widely applicable, especially for those that regularize high-dimensional tensor parameters or use magnitude-based pruning methods. Our approach works around the mentioned issues by constraining the anticipated pruning operations only to batchnormalized convolutional layers. 
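To make the "gate" intuition concrete before reviewing prior work: in a batch-normalized channel, setting the scale γ to zero pins that channel's output to the constant shift β, regardless of the input. A tiny PyTorch check follows; the formal treatment and the pruning consequences are in Sec 4.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(8)
with torch.no_grad():
    bn.weight[3] = 0.0                 # "close the gate" on channel 3 (gamma = 0)
bn.eval()

x = torch.randn(2, 8, 5, 5)
out = bn(x)
# channel 3 is now the constant beta, independent of the input
print(torch.allclose(out[:, 3], bn.bias[3].expand_as(out[:, 3])))   # True
```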
Instead of posing structured sparsity on kernels or feature maps, we enforce sparsity on the scaling parameter γ in batch normalization operator. This blocks the sample-wise information passing through part of the channels in convolution layer, and in effect implies one can safely remove those channels. A recent work by BID8 used a similar technique as ours to remove unimportant residual modules in ResNet by introducing extra scaling factors to the original network. However, some optimization subtleties as to be pointed out in our paper were not well explained. Another recent work called Network-Slimming BID15 ) also aims to sparsify the scaling parameters of batch normalization. But instead of using off-the-shelf gradient learning like theirs, we propose a new algorithmic approach based on ISTA and rescaling trick, improving robustness and speed of the undergoing optimization. In particular, the work of BID15 was able to prune VGG-A model on ImageNet. It is unclear how their work would deal with the γ-W rescaling effect and whether their approach can be adopted to large pre-trained models, such as ResNets and Inceptions. We experimented with the pre-trained ResNet-101 and compared to most recent work that were shown to work well with large CNNs. We also experimented with an image segmentation model which has an inception-like module (pre-trained on ImageNet) to locate foreground objects. In most regularized linear regressions, a large-norm coefficient is often a strong indicator of a highly informative feature. This has been widely perceived in statistics and machine learning communities. Removing features which have a small coefficient does not substantially affect the regression errors. Therefore, it has been an established practice to use tractable norm to regularize the parameters in optimizing a model and pick the important ones by comparing their norms after training. However, this assumption is not unconditional. By using Lasso or ridge regression to select important predictors in linear models, one always has to first normalize each predictor variable. Otherwise, the might not be explanatory. For example, ridge regression penalizes more the predictors which has low variance, and Lasso regression enforces sparsity of coefficients which are already small in OLS. Such normalization condition for the right use of regularization is often unsatisfied for nonconvex learning. For example, one has to carefully consider two issues outlined below. We provides these two cases to exemplify how regularization could fail or be of limited usage. There definitely exist ways to avoid the described failures. Model Reparameterization. In the first case, we show that it is not easy to have fine-grained control of the weights' norms across different layers. One has to either choose a uniform penalty in all layers or struggle with the reparameterization patterns. Consider to find a deep linear (convolutional) network subject to a least square with Lasso: for λ > 0, DISPLAYFORM0 The above formulation is not a well-defined problem because for any parameter set DISPLAYFORM1, one can always find another parameter set {W i} 2n i=1 such that it achieves a smaller total loss while keeping the corresponding l 0 norm unchanged by actually setting DISPLAYFORM2 where α > 1. 
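In PyTorch terms, the γ parameters referred to above are simply the .weight tensors of the batch-normalization layers, and "enforcing sparsity on γ" amounts to replacing their plain gradient update with a proximal (soft-thresholding) step while every other parameter follows ordinary SGD. A hedged preview is sketched below; the helper names are ours, and the per-layer penalty (written here as penalty, standing for ρλ_l) is defined in the algorithm of Sec 4.

```python
import torch
import torch.nn as nn

def bn_scales(model):
    """Collect the gamma vectors: the .weight tensors of BatchNorm layers."""
    return [m.weight for m in model.modules()
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d))]

@torch.no_grad()
def ista_step_on_gamma(gamma, lr, penalty):
    """gamma <- prox_{lr*penalty}(gamma - lr * gamma.grad),
    with prox_eta(x) = sign(x) * max(|x| - eta, 0)."""
    gamma -= lr * gamma.grad
    gamma.copy_(torch.sign(gamma) * torch.clamp(gamma.abs() - lr * penalty, min=0.0))
```

In a training loop, the γ tensors would be excluded from the regular optimizer and updated with ista_step_on_gamma once per iteration after backpropagation.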
In another word, for any > 0, one can always find a parameter set DISPLAYFORM3 (which is usually non-sparse) that minimizes the first least square loss while having its second Lasso term less than.We note that gradient-based learning is highly inefficient in exploring such model reparameterization patterns. In fact, there are some recent discussions around this BID3. If one adopts a pre-trained model, and augments its original objective with a new norm-based parameter regularization, the new gradient updates may just increase rapidly or it may take a very long time for the variables traveling along the model's reparameterization trajectory. This highlights a theoretical gap questioning existing sparsity-inducing formulation and actual computational algorithms whether they can achieve widely satisfactory parameter sparsification for deep learning models. Transform Invariance. In the second case, we show that batch normalization is not compatible with weight regularization. The example is penalizing l 1 -or l 2 -norms of filters in convolution layer which is then followed by a batch normalization: at the l-th layer, we let DISPLAYFORM4 where γ and β are vectors whose length is the number of channels. Likewise, one can clearly see that any uniform scaling of W l which changes its l 1 -and l 2 -norms would have no effects on the output x l+1. Alternatively speaking, if one is interested in minimizing the weight norms of multiple layers together, it becomes unclear how to choose proper penalty for each layer. Theoretically, there always exists an optimizer that can change the weight to one with infinitesimal magnitude without hurting any inference performance. As pointed by one of the reviewers, one can tentatively avoid this issue by projecting the weights to the surface of unit ball. Then one has to deal with a non-convex feasible set of parameters, causing extra difficulties in developing optimization for data-dependent pruning methods. It is also worth noting that some existing work used such strategy in a layer-by-layer greedy way BID7.Based on this discussion, many existing works which claim to use Lasso, group Lasso (e.g. BID18 ; BID1), or thresholding (e.g. BID16) to enforce parameter sparsity have some theoretical gaps to bridge. In fact, many heuristic algorithms in neural net pruning actually do not naturally generate a sparse parameterized solution. More often, thresholding is used to directly set certain subset of the parameters in the network to zeros, which can be problematic. The reason is in essence around two questions. First, by setting parameters less than a threshold to zeros, will the functionality of neural net be preserved approximately with certain guarantees? If yes, then under what conditions? Second, how should one set those thresholds for weights across different layers? Not every layer contributes equally in a neural net. It is expected that some layers act critically for the performance but only use a small computation and memory budget, while some other layers help marginally for the performance but consume a lot resources. It is naturally more desirable to prune calculations in the latter kind of layers than the former. In contrast with these existing approaches, we focus on enforcing sparsity of a tiny set of parameters in CNN -scale parameter γs in all batch normalization. Not only placing sparse constraints on γ is simpler and easier to monitor, but more importantly, we have two strong reasons:1. 
Every γ always multiplies a normalized random variable, thus the channel importance becomes comparable across different layers by measuring the magnitude values of γ; 2. The reparameterization effect across different layers is avoided if its subsequent convolution layer is also batch-normalized. In other words, the impacts from the scale changes of γ parameter are independent across different layers. Nevertheless, our current work still falls short of a strong theoretical guarantee. We believe by working with normalized feature inputs and their regularized coefficients together, one is closer to a more robust and meaningful approach. Sparsity is not the goal, but to find less important channels using sparsity inducing formulation is. We describe the basic principle and algorithm of our channel pruning technique. Pruning constant channels. Consider convolution with batch normalization: DISPLAYFORM0 For the ease of notation, we let γ = γ l. Note that if some element in γ is set to zero, say, γ[k] = 0, its output image x l+1:,:,:,k becomes a constant β k, and a convolution of a constant image channel is almost everywhere constant (except for padding regions, an issue to be discussed later). Therefore, we show those constant image channels can be pruned while the same functionality of network is approximately kept:• If the subsequent convolution layer does not have batch normalization, DISPLAYFORM1 its values (a.k.a. elements in β) is absorbed into the bias term by the following equation DISPLAYFORM2 new, 0, where * γ denotes the convolution operator which is only calculated along channels indexed by non-zeros of γ. Remark that DISPLAYFORM3 • If the subsequent convolution layer has batch normalization, DISPLAYFORM4 instead its moving average is updated as DISPLAYFORM5 Remark that the approximation (≈) is strictly equivalence (=) if no padding is used in the convolution operator *, a feature that the parallel work Liu et al. FORMULA11 does not possess. When the original model uses padding in computing convolution layers, the network function is not strictly preserved after pruning. In our practice, we fine-tune the pruned network to fix such performance degradation at last. In short, we formulate the network pruning problem as simple as to set more elements in γ to zero. It is also much easier to deploy the pruned model, because no extra parameters or layers are introduced into the original model. To better understand how it works in an entire CNN, imagine a channel-to-channel computation graph formed by the connections between layers. In this graph, each channel is a node, their inference dependencies are represented by directed edges. The γ parameter serves as a "dam" at each node, deciding whether let the received information "flood" through to other nodes following the graph. An end-to-end training of channel pruning is essentially like a flood control system. There suppose to be rich information of the input distribution, and in two ways, much of the original input information is lost along the way of CNN inference, and the useful part -that is supposed to be preserved by the network inference -should be label sensitive. Conventional CNN has one way to reduce information: transforming feature maps (non-invertible) via forward propagation. Our approach introduces the other way: block information at each channel by forcing its output being constant using ISTA.ISTA. 
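Referring back to the "pruning constant channels" computation above, the folding of a constant channel into the next layer can be sketched for the simplest case: the subsequent convolution has a plain bias (no batch normalization) and uses no padding, so a constant input channel β_k contributes exactly β_k times the sum of its kernel slice to every output position and can be moved into the bias before the channel is removed. This is a minimal PyTorch sketch with names of our choosing; when the next layer is batch normalized, its moving mean is updated instead, as described above.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def absorb_constant_channels(conv: nn.Conv2d, beta: torch.Tensor, keep: torch.Tensor):
    """conv: the subsequent convolution (assumed padding=0, plain bias);
    beta: (C_in,) constant outputs of the pruned channels (where gamma == 0);
    keep: (C_in,) boolean mask of surviving input channels."""
    pruned = ~keep
    # per-output-channel contribution of the pruned constant channels
    extra = (conv.weight[:, pruned] * beta[pruned].view(1, -1, 1, 1)).sum(dim=(1, 2, 3))
    new_conv = nn.Conv2d(int(keep.sum()), conv.out_channels, conv.kernel_size,
                         conv.stride, conv.padding, bias=True)
    new_conv.weight.copy_(conv.weight[:, keep])
    old_bias = conv.bias if conv.bias is not None else torch.zeros_like(extra)
    new_conv.bias.copy_(old_bias + extra)
    return new_conv
```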
Despite the gap between Lasso and sparsity in the non-convex settings, we found that ISTA BID2 ) is still a useful sparse promoting method. But we just need to use it more carefully. Specifically, we adopt ISTA in the updates of γs. The basic idea is to project the parameter at every step of gradient descent to a potentially more sparse one subject to a proxy problem: let l denote the training loss of interest, at the (t + 1)-th step, we set DISPLAYFORM6 where ∇ γ l t is the derivative with respect to γ computed at step t, µ t is the learning rate, λ is the penalty. In the stochastic learning, ∇ γ l t is estimated from a mini-batch at each step. Eq. has closed form solution as DISPLAYFORM7 where prox η (x) = max{|x| − η, 0} · sgn(x). The ISTA method essentially serves as a "flood control system" in our end-to-end learning, where the functionality of each γ is like that of a dam. When γ is zero, the information flood is totally blocked, while γ = 0, the same amount of information is passed through in form of geometric quantities whose magnitudes are proportional to γ. Scaling effect. One can also see that if γ is scaled by α meanwhile W l+1 is scaled by 1/α, that is, DISPLAYFORM8 the output x l+2 is unchanged for the same input x l. Despite not changing the output, scaling of γ and W l+1 also scales the gradients ∇ γ l and ∇ W l+1 l by 1/α and α, respectively. As we observed, the parameter dynamics of gradient learning with ISTA depends on the scaling factor α if one decides to choose it other than 1.0. Intuitively, if α is large, the optimization of W l+1 is progressed much slower than that of γ. We describe our algorithm below. The following method applies to both training from scratch or re-training from a pre-trained model. Given a training loss l, a convolutional neural net N, and hyper-parameters ρ, α, µ 0, our method proceeds as follows:1. Computation of sparse penalty for each layer. Compute the memory cost per channel for each layer denoted by λ l and set the ISTA penalty for layer l to ρλ l. Here DISPLAYFORM0 where DISPLAYFORM1 h is the size of input image of the neural network. 3. End-to-End training with ISTA on γ. Train N by the regular SGD, with the exception that γ l s are updated by ISTA, where the initial learning rate is µ 0. Train N until the loss l plateaus, the total sparsity of γ l s converges, and Lasso ρ l λ l γ l 1 converges. 4. Post-process to remove constant channels. Prune channels in layer l whose elements in γ l are zero and output the pruned model N by absorbing all constant channels into subsequent layers (as described in the earlier section.). DISPLAYFORM2 5. γ-W rescaling trick. For γ l s and weights in N which were scaled in Step 2 before training, scale them by 1/α and α respectively (scaling back).6. Fine-tune N using regular stochastic gradient learning. Remark that choosing a proper α as used in Steps 2 and 5 is necessary for using a large µ t · ρ in ISTA, which makes the sparsification progress of γ l s faster. We summarize the sensitivity of hyper-parameters and their impacts for optimization below:• µ (learning rate): larger µ leads to fewer iterations for convergence and faster progress of sparsity. But if if µ too large, the SGD approach wouldn't converge.• ρ (sparse penalty): larger ρ leads to more sparse model at convergence. If trained with a very large ρ, all channels will be eventually pruned.• α (rescaling): we use α other than 1. 
only for pretrained models, we typically choose α from {0.001, 0.01, 0.1, 1} and smaller α warms up the progress of sparsity. We recommend the following parameter tuning strategy. First, check the cross-entropy loss and the regularization loss, select ρ such that these two quantities are comparable at the beginning. Second, choose a reasonable learning rate. Third, if the model is pretrained, check the average magnitude of γs in the network, choose α such that the magnitude of rescaled γ l is around 100µλ l ρ. We found as long as one choose those parameters in the right range of magnitudes, the optimization progress is enough robust. Again one can monitor the mentioned three quantities during the training and terminate the iterations when all three quantities plateaus. There are several patterns we found during experiments that may suggest the parameter tuning has not been successful. If during the first few epochs the Lasso-based regularization loss keeps decreasing linearly while the sparsity of γs stays near zero, one may decrease α and restart. If during the first few epochs the sparsity of γs quickly raise up to 100%, one may decrease ρ and restart. If during the first few epochs the cross-entropy loss keeps at or increases dramatically to a non-informative level, one may decrease µ or ρ and restart. We experiment with the standard image classification benchmark CIFAR-10 with two different network architectures: ConvNet and ResNet-20. We resize images to 32 × 32 and zero-pad them to 40 × 40. We pre-process the padded images by randomly cropping with size 32 × 32, randomly flipping, randomly adjusting brightness and contrast, and standardizing them such that their pixel values have zero mean and one variance. ConvNet For reducing the channels in ConvNet, we are interested in studying whether one can easily convert a over-parameterized network into a compact one. We start with a standard 4-layer convolutional neural network whose network attributes are specified in Table 1. We use a fixed learning rate µ t = 0.01, scaling parameter α = 1.0, and set batch size to 125.Model A is trained from scratch using the base model with an initial warm-up ρ = 0.0002 for 30k steps, and then is trained by raising up ρ to 0.001. After the termination criterion are met, we prune the channels of the base model to generate a smaller network called model A. We evaluate the classification performance of model A with the running exponential average of its parameters. It is found that the test accuracy of model A is even better than the base model. Next, we start from the pre-trained model A to create model B by raising ρ up to 0.002. We end up with a smaller network called model B, which is about 1% worse than model A, but saves about one third parameters. Likewise, we start from the pre-trained model B to create model C. The detailed statistics and its pruned channel size are reported in Table 1. We also train a reference ConvNet from scratch whose channel sizes are 32-64-64-128 with totally 224,008 parameters and test accuracy being 86.3%. The referenced model is not as good as Model B, which has smaller number of parameters and higher accuracy. We have two major observations from the experiment: When the base network is overparameterized, our approach not only significantly reduces the number of channels of the base model but also improves its generalization performance on the test set. 
Performance degradation seems unavoidable when the channels in a network are saturated, and our approach gives satisfactory tradeoff between test accuracy and model efficiency. We also want to verify our second observation with the state-of-art models. We choose the popular ResNet-20 as our base model for the CIFAR-10 benchmark, whose test accuracy is 92%. We focus on pruning the channels in the residual modules in ResNet-20, which has 9 convolutions in total. As detailed in Table 2, model A is trained from scratch using ResNet-20's network structure as its base model. We use a warm-up ρ = 0.001 for 30k steps and then train with ρ = 0.005. We are able to remove 37% parameters from ResNet-20 with only about 1 percent accuracy loss. Likewise, Model B is created from model A with a higher penalty ρ = 0.01. Table 2: Comparisons between ResNet-20 and its two pruned versions. The last columns are the number of channels of each residual modules after pruning. We experiment our approach with the pre-trained ResNet-101 on ILSVRC2012 image classification dataset. ResNet-101 is one of the state-of-the-art network architecture in ImageNet Challenge. We follow the standard pipeline to pre-process images to 224×224 for training ResNets. We adopt the pre-trained TensorFlow ResNet-101 model whose single crop error rate is 23.6% with about 4.47 × 10 7 parameters. 2 We set the scaling parameter α = 0.01, the initial learning rate µ t = 0.001, the sparsity penalty ρ = 0.1, and the batch size = 128 (across 4 GPUs). The learning rate is decayed every four epochs with rate 0.86. We create two pruned models from the different iterations of training ResNet-101: one has 2.36 × 10 7 parameters and the other has 1.73 × 10 7 parameters. We then fine-tune these two models using the standard way for training ResNet-101, and report their error rates. The Top-5 error rate increases of both models are less than 0.5%. The Top-1 error rates are summarized in TAB2. To our knowledge, only a few works have reported their performance on this very large-scale benchmark w.r.t. the Top-1 errors. We compare our approach with some recent works in terms of model parameter size, flops, and error rates. As shown in TAB2, our model v2 has achieved a compression ratio more than 2.5 while maintaining more than 1% lower error rates than that of other state-of-the-art models at comparable size of parameters. In the first experiment (CIFAR-10), we train the network from scratch and allocate enough steps for both γ and W adjusting their own scales. Thus, initialization of an improper scale of γ-W is not really an issue given we optimize with enough steps. But for the pre-trained models which were originally optimized without any constraints of γ, the γs scales are often unanticipated. It actually takes as many steps as that of training from scratch for γ to warm up. By adopting the rescaling trick setting α to a smaller value, we are able to skip the warm-up stage and quick start to sparsify γs. For example, it might take more than a hundred epoch to train ResNet-101, but it only takes about 5-10 epochs to complete the pruning and a few more epochs to fine-tune. network param size. flops error (%) ratio resnet-50 pruned BID8 As we have discussed about the two major observations in Section 5.1, a more appealing scenario is to apply our approach in pruning channels of over-parameterized model. 
It often happens when one adopts a pre-trained network on a large task (such as ImageNet classification) and fine-tunes the model to a different and smaller task BID16. In this case, one might expect that some channels that have been useful in the first pre-training task are not quite contributing to the outputs of the second task. We describe an image segmentation experiment whose neural network model is composed from an inception-like network branch and a densenet network branch. The entire network takes a 224 × 224 image and outputs binary mask at the same size. The inception branch is mainly used for locating the foreground objects while the densenet network branch is used to refine the boundaries around the segmented objects. This model was originally trained on multiple datasets. In our experiment, we attempt to prune channels in both the inception branch and densenet branch. We set α = 0.01, ρ = 0.5, µ t = 2 × 10 −5, and batch size = 24. We train the pre-trained base model until all termination criterion are met, and build the pruned model for fine-tuning. The pruned model saves 86% parameters and 81% flops of the base model. We also compare the fine-tuned pruned model with the pre-trained base model across different test benchmark. Mean IOU is used as the evaluation metric.3 It shows that pruned model actually improves over the base model on four of the five test datasets with about 2% ∼ 5%, while it performs worse than the base model on the most challenged dataset DUT-Omron, whose foregrounds might contain multiple objects. base model pruned model test dataset (#images) mIOU mIOU MSRA10K 83.4% 85.5% DUT-Omron 83.2% 79.1% Adobe Flickr-portrait BID17 FORMULA11 88.6% 93.3% Adobe Flickr-hp BID17 84.5% 89.5% COCO-person BID13 84.1% 87.5% param. size 1.02 × 10 Table 4: mIOU reported on different test datasets for the base model and the pruned model. We proposed a model pruning technique that focuses on simplifying the computation graph of a deep convolutional neural network. Our approach adopts ISTA to update the γ parameter in batch normalization operator embedded in each convolution. To accelerate the progress of model pruning, we use a γ-W rescaling trick before and after stochastic training. Our method cleverly avoids some possible numerical difficulties such as mentioned in other regularization-based related work, hence is easier to apply for practitioners. We empirically validated our method through several benchmarks and showed its usefulness and competitiveness in building compact CNN models. Figure 1: Visualization of the number of pruned channels at each convolution in the inception branch. Colored regions represents the number of channels kept. The height of each bar represents the size of feature map, and the width of each bar represents the size of channels. It is observed that most of channels in the bottom layers are kept while most of channels in the top layers are pruned.
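As a compact illustration of the training procedure described in this section, the sketch below applies the ISTA soft-thresholding step to the batch-normalization scaling factors after a regular backward pass, and reports the quantities suggested above for monitoring (total sparsity of the γs and the weighted Lasso term). It assumes the per-layer penalties λ_l are given; function names are illustrative.

```python
import torch

@torch.no_grad()
def ista_update(bn_gammas, lambdas, lr, rho):
    """One ISTA step on the BN scaling factors, applied after loss.backward().

    bn_gammas : list of BN .weight tensors (the gammas), with gradients populated.
    lambdas   : per-layer penalties lambda_l (e.g. memory cost per channel).
    lr, rho   : learning rate mu_t and global sparsity penalty rho.
    """
    for gamma, lam in zip(bn_gammas, lambdas):
        z = gamma - lr * gamma.grad                  # plain gradient step
        thr = lr * rho * lam                         # soft threshold mu_t * rho * lambda_l
        gamma.copy_(torch.sign(z) * torch.clamp(z.abs() - thr, min=0.0))

def sparsity_report(bn_gammas, lambdas, rho):
    """Monitoring quantities: fraction of zeroed gammas and the weighted Lasso term."""
    zeros = sum(int((g == 0).sum()) for g in bn_gammas)
    total = sum(g.numel() for g in bn_gammas)
    lasso = sum(rho * lam * g.abs().sum().item() for g, lam in zip(bn_gammas, lambdas))
    return zeros / total, lasso
```

In a training loop, all other parameters would be updated by regular SGD while the BN scales are stepped with ista_update; tracking the returned sparsity and Lasso term alongside the cross-entropy loss gives the termination and restart criteria discussed earlier.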
A CNN model pruning method using ISTA and rescaling trick to enforce sparsity of scaling parameters in batch normalization.
1,366
scitldr
Stochastic gradient descent (SGD) has been the dominant optimization method for training deep neural networks due to its many desirable properties. One of the more remarkable and least understood qualities of SGD is that it generalizes relatively well on unseen data even when the neural network has millions of parameters. We hypothesize that in certain cases it is desirable to relax its intrinsic generalization properties, and we introduce an extension of SGD called deep gradient boosting (DGB). The key idea of DGB is that back-propagated gradients inferred using the chain rule can be viewed as pseudo-residual targets of a gradient boosting problem. Thus, at each layer of a neural network the weight update is calculated by solving the corresponding boosting problem using a linear base learner. The resulting weight update formula can also be viewed as a normalization procedure applied to the data that arrives at each layer during the forward pass. When implemented as a separate input normalization layer (INN), the new architecture shows improved performance on image recognition tasks when compared to the same architecture without normalization layers. As opposed to batch normalization (BN), INN has no learnable parameters; however, it matches BN's performance on CIFAR10 and ImageNet classification tasks. Boosting, alongside deep learning, has been a very successful machine learning technique that consistently outperforms other methods on numerous data science challenges. In a nutshell, the basic idea of boosting is to sequentially combine many simple predictors in such a way that their combined performance is better than that of each individual predictor. Frequently, these so-called weak learners are implemented as simple decision trees, and one of the first successful embodiments of this idea was AdaBoost. Not long after, it was observed that AdaBoost in fact performs gradient descent in a functional space, and the algorithm was re-derived as such. Later work went on to define a general statistical framework for training boosting-like classifiers and regressors using arbitrary loss functions, showing that boosting minimizes a loss function by iteratively choosing a weak learner that approximately points in the negative gradient direction in a functional space. Neural networks, in particular deep neural nets with many layers, are also trained using a form of gradient descent. Stochastic gradient descent (SGD) has been the main optimization method for deep neural nets due to its many desirable properties, like good generalization error and the ability to scale well with large data sets. At a basic level, neural networks are composed of stacked linear layers with differentiable non-linearities in between. The output of the last layer is then compared to a target value using a differentiable loss function. Training such a model with SGD involves updating the network parameters in the direction of the negative gradient of the loss function. The crucial step of this algorithm is calculating the parameter gradients, and this is efficiently done by the backpropagation algorithm. Backpropagation has many variations that try to achieve either faster convergence or better generalization through some form of regularization. However, despite superior training outcomes, accelerated optimization methods such as Adam, Adagrad or RMSprop have been found to generalize poorly compared to stochastic gradient descent.
Therefore even before using an explicit regularization method, like dropout or batch normalization , SGD shows very good performance on validation data sets when compared to other methods. The prevalent explanation for this empirical observation has been that SGD prefers "flat" over "sharp" minima, which in turn makes these states robust to perturbations. Despite its intuitive appeal, recent work by cast doubt on this explanation. This work introduces a simple extension of SGD by combining backpropagation with gradient boosting. We propose that each iteration of the backpropagation algorithm can be reinterpreted by solving, at each layer, a regularized linear regression problem where the independent variables are the layer inputs and the dependent variables are gradients at the output of each layer, before non-linearity is applied. We call this approach deep gradient boosting (DGB), since it is effectively a layer-wise boosting approach where the typical decision trees are replaced by linear regressors. Under this model, SGD naturally emerges as an extreme case where the network weights are highly regularized, in the L2 norm sense. We hypothesize that for some learning problems the regularization criteria doesn't need to be too strict. These could be cases where the data domain is more restricted or the learning task is posed as a matrix decomposition or another coding problem. Based on this idea we further introduce INN, a novel layer normalization method free of learnable parameters and show that it achieves competitive on benchmark image recognition problems when compared to batch normalization (BN). Consider a learning problem where training examples (x, y) are generated from a probability distribution P (x, y) where x ∈ R N is the space of measurements and y ∈ R is the space of labels. The goal of gradient boosting is to find an approximationF (x) to a function F (x) that minimizes a specified loss function Gradient boosting solves Eq. 1 by seeking functions h t (x), so called weak learners, from a differentiable class H such thatF = t h t (x). Starting from a constant function F 0 and applying steepest descent we can findF by iteratively updating with learning rate γ: The weak learners h t are thus found by fitting the training data to the pseudo-residuals r t = ∇ Ft−1 L(F t−1 (x), y) hence in practice: Next, consider a neural network F (x) with parameters. Similarly, starting from parameter w 0 and following steepest descent the network weights are updated at iteration t: At each iteration, for a fixed layer l the gradient of the parameter w ij can be calculated using the chain rule: where o [l] i is the output of the layer l − 1 at iteration t and position i and the gradients ∂Lt ∂F [l] t are calculated using the backpropagation algorithm. Alternatively, using the gradient boosting formula from Eq. 3 and plugging in t we obtain, after dropping the layer indexes [l] for simplicity: o We choose the weak learner h(o t) in this case to be a linear transformation as well h(o t) = o T t v t and fit it to the pseudo-residuals r t = ∂Lt ∂F [l] t by solving the associated regularized linear regression: The primal form solution of Eq. 7 isv t = (o T t o t + αI) −1 o t r t and plugging it into Eq. 6 yields the following update rule: −1 o t r t where I ∈ R N ×N is the identity matrix. 
For a given input batch of size B, the update rule becomes: where W t ∈ R n×m is the weight matrix for a given layer, O t ∈ R B×n is the input matrix at the same layer while R t ∈ R B×m is the pseudo-residual matrix ∇ Ft L t. In practice, the inverse of matrix O T t O t can be hard to calculate for neural networks with large number of neurons per layer. In consequence, it is often more convenient to use the dual form formulation of the ridge regression solution of Eq. 7 that involves calculating the inverse of O t O T t instead. This is a well known formulation (e.g. see Chapter 1 of ) and leads to the analogous weight update rule: The weight update formulas from Eq. 8 and 9 have the drawback that they change in magnitude with either the number of neurons in the neural network layer or batch size. This is an undesirable behavior that can lead to differences in scale between weights from layers of different sizes. In consequence, the terms O In the following we will compare the performance of DGB and SGD on a variety of reference data sets and explore the effect of the α parameter using simple dense layer neural networks. A summary of the data sets used is given in The first data set is the MNIST database of handwritten digits comprised of 60,000 28x28 grayscale images for training and 10,000 for testing. In the case of the Higgs data set we used a randomly chosen subset with 1 million samples out of 10 million for training, and 500 thousands samples for testing. This data comes from the field of high energy physics and was generated by Monte Carlo simulations. The third data set (Reuters-21578) is comprised of 11,228 english language news articles from Reuters international news agency, labeled over 46 topics. The forth data set is the Human Resource (HR) data set which was put together by IBM Inc. to predict employee attrition based on 47 features and was randomly split into 1,029 training samples and 441 test samples. The fifth data set is a regression problem for predicting a measure of air quality based on the outputs from an array of chemical sensors together with air temperature and humidity . All experiments were performed until test set convergence using a gradient update momentum of 0.9 and no decay. For simplicity, network architecture was fixed for each data set and all activation functions were rectified linear units (ReLU) . The network weights were initialized using a uniform distribution according to . The loss functions used were cross-entropy loss for classification problems and mean squared loss for regression problems. The performance metrics used were accuracy for classification tasks and root mean squared error (RMSE) for regression tasks. In addition, standard deviations were calculated and reported for all performance measures by repeating each experiment 10 times with different random seeds. For this experiment we found that were relatively invariant to network architecture passed a certain complexity, in consequence we chose a dense neural network with three hidden layers each with 500 ReLU nodes. All models were trained using 100 sample batch size for 25 epochs using a fixed 0.1 learning rate and achieved 100% accuracy on the training set. The two DGB variants were labeled DGB(l) if it used the left pseudo-inverse from Eq. 8 and DGB(r) corresponding to the right pseudo-inverse from Eq. 9. 
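A minimal NumPy sketch of the two batch update rules (Eq. 8 and Eq. 9) is shown below; the function names are illustrative, and the normalization of the O_tᵀO_t and O_tO_tᵀ terms by layer width or batch size mentioned above is omitted for brevity.

```python
import numpy as np

def dgb_update_left(W, O, R, alpha, lr):
    """DGB(l): left / primal ridge form (Eq. 8 style).

    W : (n, m) layer weights, O : (B, n) layer inputs,
    R : (B, m) back-propagated pseudo-residuals at the layer output.
    Inverts an n x n matrix (cost grows with layer width).
    """
    n = O.shape[1]
    G = np.linalg.solve(O.T @ O + alpha * np.eye(n), O.T @ R)   # (O^T O + aI)^-1 O^T R
    return W - lr * G

def dgb_update_right(W, O, R, alpha, lr):
    """DGB(r): right / dual ridge form (Eq. 9 style); inverts a B x B matrix instead."""
    B = O.shape[0]
    G = O.T @ np.linalg.solve(O @ O.T + alpha * np.eye(B), R)   # O^T (O O^T + aI)^-1 R
    return W - lr * G
```

Note that as α grows both forms approach the ordinary gradient OᵀR up to a 1/α scale, which is consistent with viewing SGD as the heavily regularized limit of DGB.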
Model α Learning rate Performance 97.99% ± 0.11 DGB(r) 0.01 0.1 96.25% ± 0.29 98.03% ± 0.10 DGB(l) 0.01 0.1 96.86% ± 0.13 Table 2: Performance on the MNIST data set measured as accuracy. Both left and right variants achieve increasingly better test set performance with larger α values culminating with 98.59% average accuracy for the DGB(r) model with α = 1.0. This is marginally better than 98.41% average accuracy for SGD but more importantly this shows how increasing the regularization parameter alpha successfully improves test set performance for an image classification problem prone to over-fitting. Overall DGB(l) and DGB(r) show similar performance with the drawback that for relatively small batch sizes DGB(l) is considerably slower. This is because it needs to calculate the inverse of 500x500 matrix as opposed to calculating the inverse of 100x100 matrix, as is the case for DGB(r). When training on a single V-100 Nvidia GPU one SGD iteration took on average 0.0166 sec, one DGB(r) iteration took 0.0188 sec while in contrast DGB(l) took 0.1390 sec. In consequence, for the rest of experiments we will just use DGB(r) and simply label it DGB. Higgs data set was created using Monte Carlo simulations of particle collisions and has 11 mil samples with 28 features. This is a binary classification problem designed to distinguish between a signal process which produces Higgs bosons and a process which does not. For this exercise we kept only 1 mil samples for training and 500k for testing. All 28 features were normalized to be between 0 and 1. We used an architecture composed of two hidden linear layers with 500 ReLU activations each that was trained using a small batch size of 30 samples over 50 epochs. The learning rate was kept constant at 0.01 for SGD and DGB for high α values and was reduced to 0.001 for α ≤ 0.1. This data set poses a considerable challenge with its relatively few features and large sample size. Performance on the training set is relatively poor and doesn't approach 100% accuracy like in the other experiments. The best test set accuracy is obtained for a smaller α value (Table A1). Reuters is a text categorization data set comprised or articles organized into 46 news topics. It was randomly divided into 8982 training and 2246 test samples that were processed using a binary tokenizer for the top 1,000 most common words. This data was then fed into a dense neural network with two hidden layers each with 500 ReLU nodes and trained for 100 epochs with learning rate fixed at 0.01. Table A2 shows the for experiments run using the SGD optimizer and DGB with α values 1.0, 0.1 and 0.01. This is a relatively small data set with a large number of features that is usually more successfully addressed using more advanced approaches like recurrent neural networks or convolutional neural nets. In this case, the are close for all experiments and only marginally better at 78.05% mean accuracy for DGB with α = 0.1. Similar to the other data sets, min-max normalization was used on all 47 features of the Human resource attrition data (HR). This is a binary classification task designed to predict employee attrition. To this end we employed a dense neural network with one hidden layer composed of 100 ReLU nodes. As before, we used a batch size of 30 samples for speed. All models were trained for 500 epochs with a fixed learning rate of 0.001. With only 1029 training samples and 441 test samples the error bars (i.e. 
standard deviation) are too big to make a claim that DGB outperforms SGD in this case (see Table A3). The Air quality data set contains 9358 instances of hourly averaged reads from chemical sensors designed to measure air quality. After removing samples with missing values we divided the remaining data equally into 3788 training and 3788 test samples, and all the features were min-max normalized to be between 0 and 1. Unlike the previous experiments this was a regression problem with the goal of predicting concentrations of benzene, a common pollutant associated with road traffic. For this we employed a neural network with two hidden layers with 100 ReLU activations each, trained for 1000 epochs using a fixed learning rate of 0.1 and a batch size of 30 samples. Just as the previous experiments, DGB with larger α values were closer in performance to SGD (Table A4). In this case, relaxing the regularization parameter led to a gradual increase in test set performance with almost half the root mean squared error for DGB with α = 0.001 when compared to SGD. We showed in the previous section that deep gradient boosting outperforms regular stochastic gradient descent in some cases. However, this comes at a cost because DGB gradient updates involve calculating the inverse of a matrix at each step and for each layer. The computational cost for matrix inversion is roughly O(n 3) and this can be significant even for small batch sizes as the ones used in the previous experiments. A straight forward way of speeding matrix inversion is to keep just the diagonal terms. This is equivalent of making the simplified assumption that for a layer input matrix O ∈ R B×n the rows are independent (i.e. k o ik o jk for i = j) in the case of DGB(r) or the columns are independent (i.e. k o ki o kj for i = j) in the case of DGB(l). For a layer with n input dimensions and a sample batch of size B, plugging in the diagonal approximations of matrices O t O T t and O T t O t into Eq. 8 & 9 the gradient updates become: in the case of DGB(r) andô in the case of DGB(l). In the above notationÔ t is the normalized version of layer input matrix O t with elementsô ij. In the following paragraphs, FDGB(r) will refer to fast DGB(r) from Eq. 11 and FDGB(l) will refer to fast DGB(l) from Eq. 12. Convolutional neural networks (CNN) are a class of neural networks particularly suited to analyzing visual imagery. Typically the input at each CNN layer is a tensor of size (B × c × w × h) where B is the batch size, c is the number of feature maps while w and h are the width and height of the map. Instead of using a dense network with different weight connecting each point in the (c × w × h) map, CNNs work by dividing the feature map into usually overlapping sections by scanning the input with a kernel of a size much smaller than (w×h), usually (3×3). In this way CNNs effectively increase the number of input samples and simultaneously decrease the number of weights that need to be learned. This process can be viewed as a form of regularization and it has proven to greatly outperform regular dense networks that are usually prone to over-fitting on image recognition problems. For a given CNN layer, assume we have an input of size (B × c 0 × w 0 × h 0) and an output of size (B × c 1 × w 1 × h 1) that was generated with a kernel of size (p × q). In this case, the equivalent DGB formulation based on Eq. 8 & 9 would have an input matrix O t of size (B × w 1 × h 1, c 0 × p × q). 
It would be extremely costly to first unfold the input as described and then calculate the matrix inverse of either In consequence, it is advisable to use the fast versions of DGB however, one would still have to unfold the input matrix which is still a costly operation. The key observation for extending FDGB(l) to convolutional neural networks is that for small kernels with a small step size each position in the kernel matrix will "visit" roughly the same pixels in a feature map. Hence, calculating the raw second moment of each feature needs to be done only once per map. After resizing the input tensor O t to (B × w 0 × h 0, c 0) the new normalized inputÔ t becomes:ô It is not unreasonable to assume a small kernel size given that most modern convolutional network architectures use kernels of size for image recognition. It is clear from Eq. 10 and previous paragraphs that DGB's gradient update formulation can also be viewed as input normalization defined as N (X):= X(X T X + αI) −1 or N (X):= (XX T + αI) −1 X followed by a regular SGD update. Hence this process can also be implemented as a separate normalization layer, similar to how other layer normalization work like batch norm (BN) , layer norm (LN) , instance norm (IN) and others. The advantages of these normalization methods are well documented: they accelerate descent by enabling higher learning rates without the risk of divergence which in turn may help avoid local sharp minima , make training more resilient to parameter or input data scale , and they stabilize layer output variance which prevents gradient exposition in very deep networks (a). Interestingly, for formulations that operate across samples like batch norm they also have a regularization effect and virtually eliminate the need for other regularization methods like dropout . This is because training examples are seen in conjunction with other examples in the mini-batch, hence the layer outputs are no longer deterministic. This added noise was found to be beneficial to the generalization of the network. Batch norm, layer norm and instance norm perform a z-score standardization of features/inputs by subtracting the mean and dividing by standard deviation which is typically applied to the output of a layer, before the activation function. This procedure runs the risk of keeping the output of a layer inside the linear regime of activation functions that are symmetric around zero. In order to recover the expressive power of the network, after normalization is performed two new parameters are introduced that scale and shift the output. These parameters are in turn learned during training like the rest of the network weights. Interestingly, the scale and shift parameters help convergence even for neural networks with non-symmetric activation functions (e.g. ReLU). In contrast to these methods, the normalization procedure described here is applied to the layer input, it is not scale invariant and has only one parameter α which is predefined before training, not learned. In addition, "normalization" in our case doesn't refer to the process of making the data look more like a standard normal distribution but instead refers to an algebraic transformation of the data that improves its condition number. In the following we will study the effect of input normalization as defined in the previous paragraph to the simple problem of ordinary least squares solved by gradient descent. 
Consider a multiple linear regression model arg min w (y − Xw) T (y − Xw) where X ∈ R B×n is an input matrix, y ∈ R B is the output vector and w ∈ R n is the vector of parameters. It is well known that the rate of convergence of gradient descent in this case depends on the condition number of the Hessian H = X T X where high values lead to slow convergence. If X = U ΣV T is the singular value decomposition of input matrix X, where U and V are orthogonal matrices and Σ is a diagonal matrix, then the condition number of where σ max and σ min are the largest and respectively smallest singular values of X. Let N (X):= X(X T X + αI) −1 and arg min w (y − N (X)w) T (y − N (X)w) be the ing linear regression problem after applying input normalization. The new Hessian matrix is After plugging in X = U ΣV T and some linear algebra manipulation the new condition number becomes: This shows that input normalization leads to a smaller condition number of the Hessian matrix in the case of multiple linear regression. The same can be obtained when using the alternative input normalization N (X):= (XX T + αI) −1 X (see Prop. A.1 & A.2 in the appendix). In practice, we will define Input Normalization (INN) using the faster diagonal approximations from Eq. 11 & 12. In addition, similar to other normalization layers we will also mean center the input data. For a single layer network, centering further improves conditioning of the Hessian of the loss , it brings the input data near the critical point of most activation functions thus removing the bias towards certain neurons and, it has a regularization effect by shifting the data in a nondeterministic way during training based on the content of the mini-batch. For an input matrix X ∈ R B,n with elements x ij left input normalization is labeled INN(l) and defined as while right input normalization is labeled INN(r) and defined as In the case of INN(l) formulation the batch statistics ij can cause non-deterministic predictions at test time. Instead, running exponential averages are calculated during training for both m 1 and m 2. These new values are then used at test time. We tested the implicit (i.e. FDGB) and explicit (i.e. INN) input normalization models on the CI-FAR10 data set and compared them to Batch Normalization (BN). We then validated the best performing models on the ImageNet object recognition data set (ILSVRC2015) . In both cases we used versions of the VGG architecture first introduced by and then modified them to accommodate the different models as described in Appendix A. The CIFAR10 data set consists of 60000 images organized into 10 classes. There are 50000 training and 10000 test images. According to Table 3 Table 3: Performance on the CIFAR10 data set measured as accuracy. The ImageNet data set is part of the Large Scale Image Recognition Challenge 2015 and consists of 1,281,167 training and 50,000 validation images organized into 1,000 categories. For this experiment we compared the performance of VGG19 and ResNet101 with batch normalization (VGG19 BN(a) and respectively ResNet101 BN) to the equivalent architecture with input normalization layers (VGG19 INN(l) and respectively ResNet101 INN(l)) (See Appendix A for more details). In addition, for VGG19 BN(b) we disabled the learnable affine parameters which reduces BN to a basic standardization operation. 
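Below is one possible PyTorch rendering of the INN(l) layer just defined: inputs are mean-centered and each feature is divided by its raw second moment plus α, with exponential running averages of both statistics used at test time. The exact normalization constant (batch mean versus batch sum of squares) and the momentum value are assumptions of this sketch; like the layer it illustrates, it has no learnable parameters.

```python
import torch
import torch.nn as nn

class INNLeft(nn.Module):
    """Input normalization layer, INN(l)-style sketch based on Eq. 15."""

    def __init__(self, num_features, alpha=1.0, momentum=0.1):
        super().__init__()
        self.alpha, self.momentum = alpha, momentum
        self.register_buffer("m1", torch.zeros(num_features))   # running feature means
        self.register_buffer("m2", torch.ones(num_features))    # running raw second moments

    def forward(self, x):                       # x: (B, num_features)
        if self.training:
            m1 = x.mean(dim=0)
            m2 = (x * x).mean(dim=0)            # raw second moment per feature (assumed batch mean)
            self.m1.mul_(1 - self.momentum).add_(self.momentum * m1)
            self.m2.mul_(1 - self.momentum).add_(self.momentum * m2)
        else:
            m1, m2 = self.m1, self.m2
        return (x - m1) / (m2 + self.alpha)     # no learnable scale or shift
```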
This work introduces Deep Gradient Boosting (DGB), a simple extension of Stochastic Gradient Descent (SGD) that allows for finer control over the intrinsic generalization properties of SGD. We empirically show how DGB can outperform SGD in certain cases among a variety of classification and regression tasks. We then propose a faster approximation of DGB and extend it to convolutional layers (FDGB). Finally, we reinterpret DGB as a layer-wise algebraic manipulation of the input data and implement it as a separate normalization layer (INN). We then test INN on image classification tasks where its performance proves to be on par with batch normalization without the need for additional parameters. A APPENDIX Table A4: Performance on the Air data set measured as root mean squared error. has singular values of the form Let X = U ΣV T be the singular value decomposition of X then: values of the form Let X = U ΣV T be the singular value decomposition of X then: For this experiment we used a version of the VGG11 network introduced by that has 8 convolutional layers followed by a linear layer with 512 ReLU nodes, a dropout layer with probability 0.5 and then a final softmax layer for assigning the classification probabilities. A second version of this architecture (VGG11 BN) has batch normalization applied at the output of each convolutional layer, before the ReLU activation as recommended by We modified this architecture by first removing all the batch normalization and dropout layers. We then either replaced all convolutional and linear layers with ones that implement the fast version of DGB for the FDGB(l) architecture or added INN(l) layers in front of each of the original convolutional and linear layers. Both FDGB(l) and INN(l) models implement input normalization based on the left pseudo-inverse (see Eq. 12 & 16) in order to take advantage of its regularization effect. All weights were initialized according to and were trained using stochastic gradient descent with momentum 0.9 and batch size 128. For the FDGB(l) model the gradients were calculated according to Eq. 12 for linear and 13 for convolutional layers. Training was started with learning rate 0.1 and reduced to 0.01 after 250 epochs and continued for 350 epochs. All experiments were repeated 10 times with different random seeds and performance was reported on the validation set as mean accuracy ± standard deviation. Similar to the CIFAR10 data set we based these experiments on the larger VGG19 architecture of. The VGG19 network has 16 convolutional layers interspersed with max-pool operations, followed by two linear layers with 4096 ReLU nodes and a final softmax layer with 1000 outputs. The original VGG19 model uses dropout regularization after each rectified linear layer. A second version of this architecture (VGG19 BN) has batch normalization applied at the output of each convolutional layer, before the ReLU activation. We tested VGG19 BN with and without affine parameters (VGG19 BN(a) and respectively VGG19 BN(b)). After removing all the dropout and batch normalization layers we created the INN(l) version by adding input normalizations in front of each remaining layer. In order to explore the performance of INN on network architectures with residual connections we employed ResNet101 proposed by He et al. (2016b). The original network uses batch normalization between convolutional layers by default (ResNet101 BN). 
The INN version of this network (ResNet101 INN(l)) was created by first removing all BN layers and then adding INN operations in front of each convolutional and dense layer, with the exception of the down-sample convolutions. Stochastic gradient descent was used to minimize the cross-entropy loss of each network over 120 epochs, using an initial learning rate of 0.05, a momentum of 0.9, a batch size of 128 images, and a weight decay of 0.0001. Every 30 epochs the learning rate was decreased by a factor of 10. All networks were evaluated by computing the top-1 and top-5 validation errors on a held-out data set using 10 crops per image.
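As a sketch of the model-surgery step described in this appendix (dropout and batch-normalization layers already removed, an input normalization placed in front of each remaining convolution), a VGG-style feature stack could be rebuilt as follows. The add_inn_layers helper and the make_inn factory are illustrative names, and the handling of the skipped down-sample convolutions in ResNet101 is not shown.

```python
import torch.nn as nn

def add_inn_layers(features: nn.Sequential, make_inn) -> nn.Sequential:
    """Insert an input-normalization module in front of every convolution.

    `features` is a VGG-style nn.Sequential with BN/dropout already stripped;
    `make_inn(channels)` is any factory returning a channel-wise INN module,
    for example a 2-D variant of the INNLeft sketch above.
    """
    layers = []
    for m in features:
        if isinstance(m, nn.Conv2d):
            layers.append(make_inn(m.in_channels))   # normalize this layer's input
        layers.append(m)
    return nn.Sequential(*layers)
```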
What can we learn about training neural networks if we treat each layer as a gradient boosting problem?
1,367
scitldr
We propose to extend existing deep reinforcement learning (Deep RL) algorithms by allowing them to additionally choose sequences of actions as a part of their policy. This modification forces the network to anticipate the reward of action sequences, which, as we show, improves the exploration leading to better convergence. Our proposal is simple, flexible, and can be easily incorporated into any Deep RL framework. We show the power of our scheme by consistently outperforming the state-of-the-art GA3C algorithm on several popular Atari Games. Basic reinforcement learning has an environment and an agent. The agent interacts with the environment by taking some actions and observing some states and rewards. At each time step t, the agent observes a state s t and performs an action a t based on a policy π(a t |s t ; θ). In return to the action, the environment provides a reward r t and the next state s t+1. This process goes on until the agent reaches a terminal state. The learning goal is to find a policy that gives the best overall reward. The main challenges here are that the agent does not have information about the reward and the next state until the action is performed. Also, a certain action may yield low instant reward, but it may pave the way for a good reward in the future. Deep Reinforcement Learning BID6 has taken the success of deep supervised learning a step further. Prior work on reinforcement learning suffered from myopic handcrafted designs. The introduction of Deep Q-Learning Networks (DQN) was the major advancement in showing that Deep Neural Networks (DNNs) can approximate value and policy functions. By storing the agent's data in an experience replay memory, the data can be batched BID8 BID9 or randomly sampled BID4 BID12 from different time-steps and learning the deep network becomes a standard supervised learning task with several input-output pairs to train the parameters. As a consequence, several video games could be played by directly observing raw image pixels BID1 and demonstrating super-human performance on the ancient board game Go.In order to solve the problem of heavy computational requirements in training DQN, several followups have emerged leading to useful changes in training formulations and DNN architectures. Methods that increase parallelism while decreasing the computational cost and memory footprint were also proposed BID7 BID6, which showed impressive performance. A breakthrough was shown in BID6, where the authors propose a novel lightweight and parallel method called Asynchronous Advantage Actor-Critic (A3C). A3C achieves the stateof-the-art on many gaming tasks. When the proper learning rate is used, A3C learns to play an Atari game from raw screen inputs more quickly and efficiently than previous methods. In a remarkable followup to A3C, BID0 proposed a careful implementation of A3C on GPUs(called GA3C) and showed the A3C can accelerated significantly over GPUs, leading to the best publicly available Deep RL implementation, known till date. Slow Progress with Deep RL: However, even for very simple Atari games, existing methods take several hours to reach good performance. There is still a major fundamental barrier in the current Deep RL algorithms, which is slow progress due to poor exploration. During the early phases, when the network is just initialized, the policy is nearly random. Thus, the initial experience are primarily several random sequences of actions with very low rewards. 
Once we observe sequences which give high rewards, the network starts to observe those actions, associate them with positive rewards, and learn. Unfortunately, finding a good sequence via network exploration can take a significantly long time, especially when the network is far from convergence and the taken actions are near random. The problem becomes more severe if there are only very rare sequences of actions which give high rewards, while most others give only low or zero rewards. The exploration can take a significantly long time to hit on those rare combinations of good moves. In this work, we show that there is an unusual, and surprising, opportunity for improving the convergence of deep reinforcement learning. In particular, we show that instead of learning to map the reward over a basic action space A for each state, we should force the network to anticipate the rewards over an enlarged action space A⁺ = ∪_{k=1}^{K} A^k which contains sequential actions like (a_1, a_2, ..., a_k). Our proposal is a strict generalization of the existing Deep RL framework, where we allow taking a premeditated sequence of actions at a given state s_t, rather than only taking a single action and re-deciding the next action based on the outcome of the first action and so on. Thus the algorithm can pre-decide on a sequence of actions, instead of just the next best action, if the anticipated reward of the sequence is good enough. Our experiments show that by simply making the network anticipate the reward for a sequence of actions, instead of just the next best action, it consistently exhibits significantly better convergence behavior. We even outperform the fastest known implementation, the GPU-accelerated version of A3C (GA3C). The most exciting part is that anticipation can be naturally incorporated into any existing implementation, including Deep Q Networks and A3C. We simply have to extend the action set to also include extra sequences of actions and calculate rewards with them for training, which is quite straightforward. Methods for reinforcement learning can be classified into three broad classes of solutions: Value-based, Policy-based and Actor-Critic. The main idea in Value-based methods is to define a function called the Q-function (Q stands for Quality) which estimates the future reward for a given state-action pair. One popular way to construct and learn a Q-function is called Deep-Q learning BID5. The Q-function is iteratively learned by minimizing the loss function L(θ) = E[(r + γ max_{a'} Q(s', a'; θ) − Q(s, a; θ))²]. Here, s is the current state, a is the action, r is the reward earned for action a, and s' is the next state that we end up in. The recursive definition Q(s, a) = r + γ max_{a'} Q(s', a') comes from the Bellman equation in Dynamic Programming. This is called 1-step Q-Learning, as we only perform one action and observe the reward. If we instead observe a sequence of k actions and the states resulting from those actions, we can define the Q-function as Q(s_t, a_t) = r_t + γ r_{t+1} + ... + γ^{k−1} r_{t+k−1} + γ^k max_{a'} Q(s_{t+k}, a'). In policy-based model-free methods, a function approximator such as a neural network computes the policy π(a_t|s_t; θ), where θ is the set of parameters of the function. θ is updated by maximizing the cumulative reward, as per the Bellman equation, given by J(θ) = E_{π_θ}[Σ_{t≥0} γ^t r_t]. One of the popular approaches in policy-based methods is REINFORCE BID13. An intuitive baseline is the mean of all previous rewards. If the current reward is higher than the mean of all previous rewards, then the current action is 'good'. Otherwise, it is 'bad'. This is encapsulated in the loss function directly.
A baseline b_t that is independent of the current state s_t is not beneficial, because it has no context of the current state. Hence, we would like to redefine it as b_t(s_t). One popular choice is b_t(s_t) = V^π(s_t), where V^π is the Value function. This choice marks the transition from pure Policy-based methods to a blend of Policy-based and Value-based methods. Here, the policy function acts as an actor because it is responsible for taking actions, and the Value function is called the critic because it evaluates the actions taken by the actor. This approach is called the Actor-Critic framework (Sutton & Barto). We still solve for the parameters of the policy function but use a Value function to decide on the 'goodness' of a reward. A3C BID6 is currently the state-of-the-art algorithm on several popular games. It uses an asynchronous framework in which multiple agents access a common policy, called the central policy, and play simultaneously. They communicate their gradients after at most t_max actions. All the communicated gradients from multiple agents are then used to update the central policy. Once the policy parameters are updated, they are communicated back to all the playing agents. The framework uses a shared neural network which gives two outputs: one is the policy distribution and the other is the Value function. The policy π(a_t|s_t; θ) is the output of a softmax (because it is a distribution) and the Value function V(s_t; θ) is the output of a linear layer. The objective function for the policy update of A3C is as follows (note that we maximize the policy objective): log π(a_t|s_t; θ)(R_t − V(s_t; θ)) + β H(π(·|s_t; θ)), where R_t is the observed discounted return. Here, the first part is the typical actor-critic term, except that the Value function now shares the parameters θ. The second part is the entropy over the policy distribution of actions. From information theory, we know that entropy is maximal when all actions are equally likely. Hence, this term favors exploration of new actions by assigning some probability to unlikely actions. The weight β decides how much priority we give to exploration. Please note that the A3C pseudocode in the original paper does not mention entropy, but we include it here as it is discussed in various other references. Since V(s_t; θ) is also a function of θ, we also get value-function gradients from V by minimizing the DQN-type loss function (R_t − V(s_t; θ))². Both gradients are calculated and stored by each agent until it terminates or performs t_max actions. The collection of gradients is then communicated, and the updated central network is made available to all agents. The major concern with A3C is that it relies on sequential training. More generally, all reinforcement learning paradigms are plagued by the fact that we do not have pre-decided training and testing data and have to leverage information while training. That renders GPUs and other forms of parallelism hard to use for implementing RL algorithms, particularly A3C. GA3C BID0 was proposed as a follow-up and an alternative framework for A3C that enables the usage of GPUs. The broad idea of GA3C is to use larger batches of input-output pairs (output in our case refers to reward) to facilitate better usage of GPUs, as in usual supervised learning. Since we need to perform actions and observe rewards, every agent in GA3C maintains two queues called PredictionQueue and TrainingQueue. Every agent queues up policy requests in the PredictionQueue and submits batches of input-reward pairs to the TrainingQueue.
Instead of having a central policy that every agent uses to predict, GA3C has a predictor that takes P redictionQueues from as many agents as possible and sends an inference query to GPU(this is where the batch size increases thereby making use of GPU). Predictor then sends updated policy to all agents that sent their P redictionQueues. On the other hand, there's a trainer component of GA3C which takes the input-reward batches from as many agents as possible and updates model parameters by sending the batches to a GPU.GA3C presents new challenges as it has to deal with trade-offs like size of data trasfer vs number of data transfers to GPU, number of predictors N P vs size of prediction batches etc. While we build our idea on GA3C, we set most of these parameters to their defaults. Our proposal is an unusually straightforward extension, and a strict generalization of the existing deep reinforcement learning algorithms. At high level, by anticipation we extend the basic action set A to an enlarged action space DISPLAYFORM0 A k, which also includes sequences of actions up to length K. As an illustration, let us say A = {L, R} and we allow 2-step anticipation, therefore our new action space is A + = A ∪ A 2 = {L, R, LL, LR, RL, RR}. Each element a + belonging to A + is called a meta-action, which could be a single basic action or a sequence of actions. Typical deep reinforcement learning algorithms have a DNN to output the estimated Q values or policy distributions according to basic action set A. In our algorithm, we instead let the DNN output values for each meta-action in the enlarged action set A +. Overall, we are forcing the network to anticipate the "goodness" of meta-actions a little further, and have a better vision of the possibilities earlier in the exploration phase. From human observations and experiences in both sports and video games, we know the importance of "Combo" actions. Sometimes single actions individually do not have much power, but several of common actions could become very powerful while performed in a sequential order. For example, in the popular game CounterStrike, jump-shoot combo would be a very good action sequence. This kind of observation inspires us to explore the potential of "Combos", i.e. multi-step anticipatory actions in reinforcement learning. Moreover, the advantage of anticipatory actions over the standard ones for improving exploration is analogous to how higher n-grams statistics help in better modeling compared to just unigrams in NLP.Another subtle advantage of anticipating rewards for sequence of actions is better parameter sharing which is linked with multi-task learning and generalization. Parameter Sharing: Back in 1997, BID3 showed the advantage of parameter sharing. In particular, it showed that a single representation for several dependent tasks is better for generalization of neural networks than only learning from one task. With the addition of meta-action (or extra actions sequences), we are forcing the network layers to learn a representation which is not only useful in predicting the best actions but also predicts the suitability of meta-actions, which is a related task. A forced multi-task learning is intrinsically happening here. As illustrated in Figure 1, the black box parameters are a shared representation which is simultaneously learned from the gradients of basic actions as well as meta-actions. 
This additional constraint on the network, to predict more observable behaviors, regularizes the representation, especially in the early stages. Anticipatory Deep Q Network: Although our main proposal is A4C, which improves on the current state-of-the-art A3C algorithm, to illustrate the generality of our idea we start with a simpler algorithm, Anticipatory Deep Q Network (ADQN). DQN is a value-based algorithm whose network approximates Q values for each action. If we see each gradient update as a training sample sent to the network, DQN generates one training sample for each action-reward frame. We believe one frame could provide more information than that. With meta-actions, i.e., the ADQN algorithm, we instead force the network to output Q values for each meta-action in the enlarged action space. For example, in the CartPole game the basic actions are L and R. In ADQN, we let the output values range over A⁺ = {L, R, LL, LR, RL, RR}. For an experience sequence (..., s_i, a_i, r_i, s_{i+1}, a_{i+1}, r_{i+1}, s_{i+2}, ...), we get two updates for state s_i: Q(s_i, a_i) ← r_i + γ max_a Q(s_{i+1}, a) and Q(s_i, (a_i, a_{i+1})) ← r_i + γ r_{i+1} + γ² max_a Q(s_{i+2}, a). In this way, we obtain two gradient updates for each state. This aggressively improves the intermediate representation (parameter sharing), leading to superior convergence. In practice, we could organize them into one single training vector, as illustrated in Figure 1. Figure 1: A toy example for ADQN with an enlarged action set {L, R, LL, LR, RL, RR}. For input s_0, we have 2 gradients, one for action L and the other for action LR. In the previous section, we have shown that anticipation can be used with value-based reinforcement methods like DQN. However, DQN is not the state-of-the-art algorithm, and it converges relatively slowly on more complex tasks like Atari games. Due to the simplicity and generality of our method of anticipation, it is also directly applicable to the Asynchronous Advantage Actor-Critic (A3C) algorithm. As mentioned earlier, A3C uses a single deep neural network with |A| policy nodes and 1 value node. To enforce anticipation, we can just enlarge the number of policy nodes in the output layer without changing the rest of the network architecture. Generally, if we want to support up to K steps of action sequences, we need |A⁺| policy nodes in the output layer, where |A⁺| = Σ_{k=1}^{K} |A|^k. The new action space A⁺ contains both basic single actions and sequences of actions. This improved algorithm is called Anticipatory asynchronous advantage actor-critic (A4C). In the A4C algorithm, the neural network is used for two parts: prediction and training. In the prediction part, A4C lets the neural network output a distribution over actions from A⁺. For each state, we choose a meta-action a⁺ according to the output distribution. If a⁺ contains only one action, this single action will be executed. If a⁺ corresponds to an action sequence (a_1, a_2, ..., a_k), these actions will be executed one by one in order. A4C is a strict generalization of A3C, and it allows for three kinds of gradient updates for a given action-reward frame: dependent updating (DU), independent updating (IU), and switching. A meta-action a⁺ can be viewed as a combination of single actions. On the other hand, several basic actions taken sequentially could be viewed as a meta-action. From here comes our intuition of dependent updating, where each meta-action has its dependent basic actions.
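Before turning to the update rules, here is a small sketch of how the enlarged action set A⁺ and the index-to-sequence mapping used by the |A⁺|-way output layer can be enumerated; the function name is illustrative.

```python
from itertools import product

def build_meta_actions(basic_actions, K):
    """Enumerate the enlarged action set A+ = union of A^k for k = 1..K.

    Returns a list of tuples; index i of the network's |A+|-way policy or Q head
    corresponds to meta_actions[i]. For A = {L, R} and K = 2 this yields
    (L,), (R,), (L, L), (L, R), (R, L), (R, R), i.e. |A+| = 2 + 4 = 6 outputs.
    """
    meta_actions = []
    for k in range(1, K + 1):
        meta_actions.extend(product(basic_actions, repeat=k))
    return meta_actions

# Example: meta = build_meta_actions(["L", "R"], K=2); len(meta) == 6
```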
When we take a meta-action and get rewards, we not only calculate the gradients for this meta-action, but also for its corresponding basic actions. And for a sequence of basic actions, even if they were not taken as a meta-action, we also update the network as it takes the corresponding meta-action. For example, in a 2-step anticipation setting, we get an experience queue of (s 0, a 0, r 0, s 1, a 1, r 1, s 2, ...). No matter (a 0) was taken as a basic action or (a 0, a 1) was taken as a meta-action, we will update both of them for state s 0. In this case, we get 2 times more gradient updates as A3C for the same amount of episodes, ing in aggressive updates which lead to accelerated convergence, especially during the initial phases of the learning. We call this kind of dependent updating version of A4C as DU-A4C. Our pseudocode for DU-A4C is presented in Algorithm 1.Algorithm 1 Anticipatory asynchronous advantage actor-critic with Dependent Updating (DU-A4C) -pseudocode for each actor learner thread // Assume global shared parameter vectors θ and θ v and global shared T = 0 // Assume thread-specific parameter vector θ and θ v // Assume a basic set A = {a i} and the corresponding enlarged action set A + = {a DISPLAYFORM0 Initialize thread step counter t ← 1 repeat Reset gradients: dθ ← 0 and dθ v ← 0 Synchronize thread-specific parameters θ = θ and θ v = θ t start = t Get state s t repeat Choose a + t according to policy π(a + t |s t ; θ) for a i in the basic action sequence (a 1, a 2, ...) corresponding to a + t do Perform a i, receive reward r t and new state s t+1 t ← t + 1 T ← T + 1 end for until terminal s t or t − t start >= t max R = 0 for terminal s t V (s t, θ v) for non-terminal s t for i ∈ {t − 1, ..., t start} do R ← r i + γR for j ∈ {i, ..., min(i + K, t − 1)} do Let a + ij be the meta-action corresponding to the sequence (a i, ..., a j) Accumulate gradients wrt θ: dθ ← dθ + ∇ θ log π(a DISPLAYFORM1 2 /∂θ v end for Perform asynchronous update of θ using dθ and of θ v using dθ v . until T > T max Independent update is a very simple and straightforward updating method that we could just view each meta-action a + as a separate action offered by the environment. The reward of a + is the sum of rewards of taking all the basic actions in a + one by one in order. The next state of a + is the state after taking all the actions in the sequence. While updating, we only use the information of reward, and the next state of a + without regards to the dependencies and relations between meta-actions. The pseudocode is in Algorithm 2 (in Supplementary Materterials).Clearly, IU leads to less aggressive updates compared to DU. Even though independent updating makes no use of extra information from the intrinsic relations of meta-actions, it still has superior performance in experiments. The reason is that there exist some patterns of actions that yield high rewards consistently and anticipatory action space enables the network to explore this kind of action patterns. Our experiment suggests, DU-A4C converges faster over Atari games for the first few hours of training. DU-A4C shows a big gap over the speed of original A3C. However, after training for a longer time, we observe that aggressive updates cause the network to saturate quickly. 
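The dependent-update accumulation of Algorithm 1 can be condensed into a single differentiable loss, roughly as sketched below; the log_probs accessor, the bootstrap convention, and the 0.5 weighting of the value term are assumptions of this sketch rather than details taken from the pseudocode.

```python
import torch

def du_a4c_loss(log_probs, values, rewards, gamma, K, bootstrap=0.0):
    """Dependent-update (DU) loss for one experience segment, following Algorithm 1.

    log_probs : callable (i, j) -> log pi(a+_{ij} | s_i; theta), the log-probability
                assigned to the meta-action covering basic actions a_i .. a_j.
    values    : list of V(s_i; theta_v) tensors for the segment.
    rewards   : list of scalar rewards r_i.
    bootstrap : 0 for a terminal state, otherwise the detached V(s_t).
    """
    T = len(rewards)
    R = bootstrap
    policy_loss, value_loss = 0.0, 0.0
    for i in reversed(range(T)):
        R = rewards[i] + gamma * R
        advantage = R - values[i]
        # j indexes the last action of the meta-action (a_i, ..., a_j); bound as in Algorithm 1
        for j in range(i, min(i + K, T - 1) + 1):
            policy_loss = policy_loss - log_probs(i, j) * advantage.detach()
        value_loss = value_loss + advantage.pow(2)
    return policy_loss + 0.5 * value_loss
```

Because every state contributes one policy term per meta-action that starts at it, these updates are markedly more aggressive than plain A3C, which is the early-saturation effect noted in the preceding paragraph.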
Our experiments suggest that DU-A4C converges faster on Atari games during the first few hours of training; DU-A4C shows a big gap over the speed of the original A3C. However, after training for a longer time, we observe that the aggressive updates cause the network to saturate quickly. This phenomenon is analogous to Stochastic Gradient Descent (SGD) updates, where initial updates are aggressive but over time we should decay the learning rate BID2. Technically, dependent updating makes good use of the information from the anticipatory actions and yields fast convergence. The independent updating method offers a less aggressive way of updating, but it can sustain the growth for longer durations. Thus, we propose a switching method to combine the advantages of these two updating methods. Switching is simple: we first use the dependent updating method to train the network for a while, then switch over to the independent updating method from there on. Since the network is the same for both updating methods, the switching process is trivial to implement. We notice that this approach consistently stabilizes training in many scenarios (explained in Section 4). As mentioned, switching is analogous to decaying the learning rate with epochs in typical neural network training, the difference being that our approach is a hard reduction while learning rate decay is a soft reduction. The tricky part is deciding when to switch. Currently, this is done heuristically: for each game, we typically switch half-way. The reason we choose the halfway point is that we want to utilize DU to converge quickly in the first half and IU to stabilize and continue increasing in the second half. In our experiments, we find that switching performs robustly with regard to different choices of switching point. 4.1 STUDY OF EXPLORATION USING CARTPOLE GAME Figure 2: Results and analysis of the CartPole game. Left: ADQN vs DQN on CartPole-v0; Right: the performed action distributions at different training stages. We divide the total 5000 episodes into 5 stages and plot the distribution at each stage. To understand the dynamics of the anticipatory network, we use the simple, classic control game CartPole. The CartPole game has only 2 basic actions, Left and Right, and its state space is R^4. We train a 2-step Anticipatory DQN (mentioned in Section 3.1) with Dependent Updates (DU) and compare against regular DQN. Owing to the simplicity of CartPole, we do not compare A4C vs A3C here, which we reserve for the Atari games. We notice a significant jump in the score by using the meta-action space {L, R, LL, LR, RL, RR} instead of just {L, R}. In the right plot (Figure 2(b)), we also show the probability (frequency) distributions of the 6 meta-actions in different learning periods. It is clear from the plots that as learning goes on, the probability of basic actions increases and the probability of multi-step actions drops. This trend shows that multi-step actions help the agent to better explore initially, with the anticipated vision of the future, obtaining better-rewarding actions. Once the network has seen enough good actions, it figures out the right policy and seems to select basic actions only. Next, we demonstrate our A4C experiments on 4 popular Atari-2600 games, namely Pong, Qbert, BeamRider, and SpaceInvaders. We use the environments provided by OpenAI Gym for both of these classes of games. Atari-2600 games are the standard benchmarks for reinforcement learning algorithms. We compare our results against the state-of-the-art GPU-based Asynchronous Actor-Critic (GA3C) framework from NVIDIA, whose code is publicly available at https://github.com/NVlabs/GA3C.
In order to have uniform playing fields for both A4C and GA3C, we ran the baseline GA3C code on various games on our machine with a single P-100 GPU. We ran each experiment for 3 times on each game and plotted the average scores. To test the robustness of approach, we experimented with various hyperparameter values like minimum training batch size, max norm of gradient (whether to clip gradients; if so MaxNorm=40 by default), learning rate and even the time instant where switching begins. We noticed that our approach is better than baseline on all the settings. Nevertheless, the plots we present are for the optimal setting (MinTrainingBatchSize=40, GradientClipping=False, LearningRate=0.003). These values are also suggested to be optimal in the GA3C code. Note that we compare using the same setting for all 3 variants of A4C and also for the baseline GA3C. FIG0 shows the comparison of three variants of A4C updates against GA3C for four games. Note that the baseline GA3C plots(in red) are very similar to the ones reported in the original paper. We notice that the Independent Updates(IU) performs significantly better than GA3C on all occasions except Qbert, where it is very similar GA3C. In particular, IU achieves a score of 4300 on BeamRider game which is way better than the best mentioned in GA3C paper. IU crosses 3000 score in just 12.5 hrs while it takes 21 hrs for GA3C to achieve the same score. IU also achieves a score of > 750 on SpaceInvaders game where the best in GA3C paper achieves < 600. At the same time, the Dependent Updates(DU) method (in blue) starts to rise faster than GA3C but doesn't sustain the growth after sometime owing to reasons mentioned in Section 3.2.3. The only case where DU maintains the growth is Pong. The hybrid switching method(Sw) performs remarkably well consistently on all the games, achieving higher scores than the best of GA3C. For example, on Qbert game, the hybrid Sw method achieves a score of 12000 in just 7 hrs. The best mentioned in original GA3C paper achieves similar score in 20 hrs. The other re-runs of Qbert in GA3C paper stall at a score of 8000. Sw outperforms GA3C on other games as well, but it is still behind IU on BeamRider and SpaceInvaders games. In all, we notice that Switching from DU to IU after few hours is the most robust method while IU alone is good on 2 games. We propose a simple yet effective technique of adding anticipatory actions to the state-of-the-art GA3C method for reinforcement learning and achieve significant improvements in convergence and overall scores on several popular Atari-2600 games. We also identify issues that challenge the sustainability of our approach and propose simple workarounds to leverage most of the information from higher-order action space. There is scope for even higher order actions. However, the action space grows exponentially with the order of anticipation. Addressing large action space, therefore, remains a pressing concern for future work. We believe human behavior information will help us select the best higher order actions.
Anticipation improves convergence of deep reinforcement learning.
1,368
scitldr
To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs. Since a change in the parameters might serve as a poor proxy for the change in the function, it is of some concern that primacy is given to parameters but that the correspondence has not been tested. Here, we show that it is simple and computationally feasible to calculate distances between functions in a $L^2$ Hilbert space. We examine how typical networks behave in this space, and compare how parameter $\ell^2$ distances compare to function $L^2$ distances between various points of an optimization trajectory. We find that the two distances are nontrivially related. In particular, the $L^2/\ell^2$ ratio decreases throughout optimization, reaching a steady value around when test error plateaus. We then investigate how the $L^2$ distance could be applied directly to optimization. We first propose that in multitask learning, one can avoid catastrophic forgetting by directly limiting how much the input/output function changes between tasks. Secondly, we propose a new learning rule that constrains the distance a network can travel through $L^2$-space in any one update. This allows new examples to be learned in a way that minimally interferes with what has previously been learned. These applications demonstrate how one can measure and regularize function distances directly, without relying on parameters or local approximations like loss curvature. A neural network's parameters collectively encode a function that maps inputs to outputs. The goal of learning is to converge upon a good input/output function. In analysis, then, a researcher should ideally consider how a network's input/output function changes relative to the space of possible functions. However, since this space is not often considered tractable, most techniques and analyses consider the parameters of neural networks. Most regularization techniques, for example, act directly on the parameters (e.g. weight decay, or the implicit constraints stochastic gradient descent (SGD) places upon movement). These techniques are valuable to the extent that parameter space can be taken as a proxy for function space. Since the two might not always be easily related, and since we ultimately care most about the input/output function, it is important to develop metrics that are directly applicable in function space. In this work we show that it is relatively straightforward to measure the distance between two networks in function space, at least if one chooses the right space. Here we examine L 2 -space, which is a Hilbert space. Distance in L 2 space is simply the expected 2 distance between the outputs of two functions when given the same inputs. This computation relies only on function inference. Using this idea of function space, we first focus on characterizing how networks move in function space during optimization with SGD. Do random initializations track similar trajectories? What happens in the overfitting regime? We are particularly interested in the relationship between trajectories in function space and parameter space. If the two are tightly coupled, then parameter change can be taken as a proxy for function change. This common assumption (e.g. Lipschitz bounds) might not always be the case. Next, we demonstrate two possibilities as to how a function space metric could assist optimization. 
In the first setting we consider multitask learning, and the phenomenon of catastrophic forgetting that makes it difficult. Many well-known methods prevent forgetting by regularizing how much the parameters are allowed to shift due to retraining (usually scaled by a precision matrix calculated on previous tasks). We show that one can instead directly regularize changes in the input/output function of early tasks. Though this requires a "working memory" of earlier examples, this scheme turns out to be quite data-efficient (and more so than actually retraining on examples from old tasks). In the second setting we propose a learning rule for supervised learning that constrains how much a network's function can change in any one update. This rule, which we call Hilbert-constrained gradient descent (HCGD), penalizes each step of SGD to reduce the magnitude of the resulting step in L^2-space. This learning rule thus changes the course of learning to track a shorter path in function space. If SGD generalizes in part because large changes to the function are prohibited, then this rule will have advantages over SGD. Interestingly, HCGD is conceptually related to the natural gradient. As we derive in §3.2.1, the natural gradient can be viewed as resulting from a constraint on the change in a function space measured by the Kullback-Leibler divergence. We propose to examine the trajectories of networks in the space of functions defined by the inner product ⟨f, g⟩ = ∫_X f(x)g(x) dµ(x), which yields the following norm: ‖f‖² = ∫_X |f(x)|₂² dµ(x). Here µ is a measure and corresponds to the probability density of the input distribution X. Note that this norm is over an empirical distribution of data and not over the uniform distribution of all possible inputs. The |·|₂ operator refers to the 2-norm and can apply to vector-valued functions. While we refer to this space as a Hilbert space, we make no use of an inner product and can also speak of this as any normed vector space, e.g. a Banach space. This norm leads to a notion of distance between two functions f and g given by ‖f − g‖ = (∫_X |f(x) − g(x)|₂² dµ(x))^{1/2}. Since µ is a density, ∫_X dµ = 1, and we can write ‖f − g‖ = (E_{x∼X}[|f(x) − g(x)|₂²])^{1/2}. The expectation can be approximated as an empirical expectation over a batch of N examples drawn from the input distribution: ‖f − g‖ ≈ ((1/N) Σ_{i=1}^{N} |f(x_i) − g(x_i)|₂²)^{1/2}. The quality of the empirical distance, of course, will depend on the shape and variance of the distribution of data as well as the form of f and g. In Section 2.3, we empirically investigate the quality of this estimator for reasonable sample sizes N. We wish to compare, at a high level, how networks move through parameter and function space. Our first approach is to compare a low-dimensional embedding of the trajectories through these spaces. In FIG2, we take a convolutional neural network and train three random initializations on a 5000-image subset of CIFAR-10. By saving the parameters of the network at each epoch as well as the output on a single large validation batch, we can later compute the ℓ2 parameter distance and the L^2 function distance between the snapshots of the network at each epoch. FIG2: Visualization of the trajectories of three random initializations of a network through function space, left, and parameter space, right. The network is a convolutional network trained on a 5,000-image subset of CIFAR-10. At each epoch, we compute the L^2 and ℓ2 distances between all previous epochs, forming two distance matrices, and then recompute the 2D embedding from these matrices using multidimensional scaling. Each point on the plots represents the network at a new epoch of training. The black arrows represent the direction of movement.
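As an illustration of the empirical estimator defined in Section 2.1, the distances used for these trajectory plots can be computed in a few lines of PyTorch. This is a minimal sketch under our assumptions (two snapshots of the same architecture and a fixed validation batch x), not the released code.

import torch

@torch.no_grad()
def l2_function_distance(net_a, net_b, x):
    """sqrt( (1/N) * sum_i |f_a(x_i) - f_b(x_i)|_2^2 ) on a batch x of N inputs."""
    diff = net_a(x) - net_b(x)                      # shape (N, output_dim)
    return diff.pow(2).sum(dim=1).mean().sqrt()

def l2_parameter_distance(net_a, net_b):
    """Ordinary l2 distance between the flattened parameter vectors."""
    vec_a = torch.cat([p.flatten() for p in net_a.parameters()])
    vec_b = torch.cat([p.flatten() for p in net_b.parameters()])
    return (vec_a - vec_b).norm()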
The resulting distance matrix is then visualized as a two-dimensional embedding. In parameter space, the networks are initialized at very different points and proceed to diverge yet further from these points. Despite this divergence, each trajectory yields a network that has learned the training data perfectly and generalizes with ∼50% accuracy to a test set. This illustrates the wide range of parameter settings that can be used to represent a given neural network function. The behavior of the same initializations in function space is quite different. First, note that all three initializations begin at approximately the same point in function space. This is an intriguing property of random initializations: rather than encoding entirely random functions, random sets of parameters lead on average to the same function (for related work, see e.g. BID5). The initializations then largely follow an identical path for the initial stage of learning. Different initializations thus learn in similar manners, even if the distance between their parameters diverges. During late-stage optimization, random initializations turn from a shared trajectory and begin to diverge in L^2 space. These differences underlie the general principle that L^2 distances behave differently than ℓ2 distances, and that functional regularization could assist training and reduce overfitting. How well do parameter distances reflect function distances? The answer to this question is relevant for any method that directly considers the behavior of parameters. Certain theoretical analyses, furthermore, desire bounds on function distances but instead find bounds on parameter distances and relate the two with a Lipschitz constant (e.g. BID6). Thus, for theoretical analyses and optimization methods alike, it is important to empirically evaluate how well parameter distances correspond to function distances in typical situations. We can compare the two by plotting a change in parameters ∆θ against the corresponding change in the function, ‖f_θ − f_{θ+∆θ}‖. In FIG0 we display this relation at several relevant distance scales during the optimization of a CNN on CIFAR-10. There are three scales: the distance between individual updates, the distance between epochs, and the distance from initialization. What changes strikingly throughout optimization is the relationship between parameter and function space. There is a qualitative difference in the ratio of parameter distances to function distances that is visible at all three distance scales. Early epochs generally see larger changes in L^2 space for a given change in parameters. Intriguingly, the ratio of the two distances appears to converge to a single value at late optimization, after test error saturates. This is not because the network ceases to move, as noted above. Rather, the loss landscape shifts such that this ratio becomes constant. It is also clear from these plots that there is not a consistent positive correlation between the parameter and function distances between any two points on the optimization trajectory. For example, the parameter distance between successive epochs is negatively correlated with the L^2 distance for most of optimization (FIG0). The distance from initialization shows a clean and positive relationship, but the relationship changes during optimization.
Between successive batches, L 2 distance correlates with parameter distance at late epochs, but less so early in optimization when learning is quickest. Thus, at different stages of optimization, the L 2 / 2 ratio is often quite different. The usage of Batch Normalization (BN) and weight decay in this analysis somewhat affects the trends in the L 2 / 2 ratio. In Appendix A we reproduce these plots for networks trained without BN and without weight decay. The overall message that the L 2 / 2 ratio changes during optimization is unchanged. However, these methods both change the scale of updates, and appear to do so differently throughout optimization, and thus some trends are different. In Appendix B, we also isolate the effect of training data, by reproducing these plots for a CNN trained on MNIST and find similar trends. Overall, the correspondence between parameter and function distances depends strongly on the context. It might be worried that since function space is of infinite dimension, one would require prohibitively many examples to estimate a function distance. However, we find that one can compute a distance between two functions with a relatively small amount of examples. FIG4 shows how the estimated L 2 distance converges with an increasing number examples. In general, we find that only a few hundred examples are necessary to converge to an estimation within a few percent. If, after having been trained on a task, a neural network is retrained on a new task, it often forgets the first task. This phenomenon is termed'catastrophic forgetting'. It is the central difficulty of multitask training as well as applications requiring that learning be done online (especially in non-IID situations). Essentially, new information must be encoded in the network, but the the information pertinent to the previous task must not be overwritten. Most efforts to combat catastrophic forgetting rely on restricting how much parameters can change between tasks. Elastic Weight Consolidation (EWC; BID10), for example, adds a penalty to the loss on a new task B that is the distance from the weights after learning on an earlier FIG4: The variance of the the L 2 estimator is small enough that it can be reasonably estimated from a few hundred examples. In panels A and D, we reproduced L 2 distances seen in the panels of FIG0. As we increase the number of validation examples these distances are computed over, the estimations become more accurate. Panels B and E show the 95% confidence bounds for the estimation; on 95% of batches, the value will lie bewteen these bounds. These bounds can be obtained from the standard deviation of the L 2 distance on single examples. In panel C we show that the standard deviation scales linearly with the L 2 distance when measured between updates, meaning that a fixed batch size will often give similar percentage errors. This is not true for the distance from initialization, in panel F; early optimization has higher variance relative to magnitude, meaning that more examples are needed for the same uncertainty. In the Appendix, we also display the convergence of the L 2 distance estimator between epochs.task A, multiplied by the diagonal of the Fisher information matrix F (calculated on task A): DISPLAYFORM0 This idea is closely related to well-studied approaches to Bayesian online learning, if F is interpreted as a precision matrix BID7, BID15 ). 
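For reference, the EWC-style penalty described above amounts to a Fisher-weighted quadratic pull toward the old parameters. The following is a hedged PyTorch sketch, not the original implementation; the names fisher_diag and old_params, and the default lam, are ours.

import torch

def ewc_penalty(model, old_params, fisher_diag, lam=1.0):
    """0.5 * lam * sum_i F_i * (theta_i - theta*_A,i)^2, with F_i the diagonal Fisher
    and theta*_A the parameters saved at the end of task A."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher_diag[name] * (p - old_params[name]).pow(2)).sum()
    return 0.5 * lam * penalty

# total loss on task B:
#   loss = task_b_loss + ewc_penalty(model, old_params, fisher_diag, lam)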
Other similar approaches include that of BID21, who use a more accurate approximation of the Fisher, and Synaptic Intelligence (SI; BID28), which discounts parameter change via a diagonal matrix in which each entry reflects the summed contribution of that parameter to the loss. Each of these methods discourages catastrophic forgetting by restricting movement in parameter space between tasks, scaled by a (perhaps diagonal) precision matrix calculated on previous tasks. Using a function-space metric, it is not hard to ensure that the network's output function on previous tasks does not change during learning. In this case, the loss for a new task B is modified to be L(θ_B) = L_B(θ_B) + λ ‖f_{θ_B} − f_{θ_A}‖. The regularization term is the L^2 distance between the current function f_{θ_B} and the function after training on task A, f_{θ_A}. Since our function-space metric is defined over a domain of examples, we will store a small set of previously seen examples in a working memory, as well as the outputs on those examples. This memory set will be used to calculate the L^2 distance between the current iteration and the snapshot after training. This is a simple scheme, but novel, and we are not aware of direct precedent in the literature. A working memory approach is employed in related work (BID12; BID20). Note, however, that storing old examples violates the rules of strict online learning. Nevertheless, for large networks it will be more memory-efficient. EWC, for example, requires storing a snapshot of each parameter at the end of the previous task, as well as a diagonal precision matrix with as many entries as parameters. For the 2-hidden-layer network with 400 nodes per layer that was used in the MNIST task in BID10, this is 1,148,820 new parameters, or as many pixels as 1,465 MNIST images. When each layer has as many as 2,000 nodes, as in FIG4 of BID10, the extra stored parameters are comparable to 15,489 MNIST images. The working memory approach that is required to regularize function change from old tasks is thus comparable or cheaper in memory.
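A minimal sketch of the working-memory regularizer just described (our reconstruction, not the released code): memory_x holds the stored inputs from earlier tasks, and memory_out holds the network's outputs on them, recorded at the end of training on those tasks.

import torch

def l2_memory_penalty(model, memory_x, memory_out, lam=1.3):
    """lam * || f_theta - f_theta_A ||, estimated on the memory cache."""
    diff = model(memory_x) - memory_out        # memory_out was saved with no_grad
    return lam * diff.pow(2).sum(dim=1).mean().sqrt()

# During training on a new task B:
#   loss = criterion(model(x_b), y_b) + l2_memory_penalty(model, memory_x, memory_out)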
We compared the performance of our approach on the benchmark task of permuted MNIST. This task requires a single MLP to learn to classify a sequence of MNIST tasks in which the pixels have been randomly permuted differently on each task. We trained an MLP with 2 hidden layers, 400 nodes each, for 10 epochs on each of 8 such permuted datasets. In FIG5, we display how the test accuracy on the first of the 8 tasks degrades with subsequent learning. To build the working memory, we keep 1024 examples from previous tasks, making sure that the number of examples from each task is equal. We also remember the predictions on those examples at the end of training on their originating tasks. To calculate the L^2 distance, we simply re-infer on the examples in working memory and regularize the distance from the current outputs to the remembered outputs. We chose λ = 1.3 as the regularization hyperparameter from a logarithmic grid search. In FIG5, we compare this method to four comparison methods. The "ADAM" method is ADAM with a learning rate of 0.001, which nearly forgets the first task completely by the end of the 8 tasks. The "ADAM+retrain" method is augmented with a working memory of 1024 examples that are stored from previous tasks. Every n iterations (we found n = 10 to be best), a step is taken to decrease the loss on the memory cache. This method serves as a control for the working memory concept. We also include EWC and SI as comparisons, using the hyperparameters used in their publications (λ = 500 for EWC; c = 0.1 for SI). Overall, we found that regularizing the L^2 distance on a working memory cache was more successful than simply retraining on the same cache. It also outperformed EWC, but not SI. Note that these methods store diagonal matrices and the old parameters, and in this circumstance these were larger in memory than the memory cache. Figure 4: Regularizing the L^2 distance from old tasks (calculated over a working memory cache of size 1024) can successfully prevent catastrophic forgetting. Here we display the test performance on the first task as 7 subsequent tasks are learned. Our method outperforms simply retraining on the same cache (ADAM+retrain), which potentially overfits to the cache. Also displayed are ADAM without modifications, EWC, and SI. In this section we propose that the L^2 distance can be used for regularization in a single supervised task. In the space of parameters, SGD is a strongly local update rule, and large jumps are generally prohibited. SGD is thus more likely to find solutions that are close to the initialization, and furthermore to trace a path of limited length. This discourages sampling a large volume of parameter space during optimization. If the mapping between parameter and function space is not already very tight, and locality is important for generalization, then additionally constricting changes in function space should help. On the basis of this logic, we propose a learning rule that directly constrains the path length of the optimization trajectory in L^2 space. If a network would have been trained to adjust the parameters θ to minimize some cost C_0, we will instead minimize at each step t a new cost given by C = C_0 + λ ‖f_{θ_t} − f_{θ_{t+1}}‖ (Equation 1). Like all regularization terms, this can also be viewed as a Lagrangian that satisfies a constraint. Here, this constraint ensures that the change in L^2-space does not exceed some constant value. To evaluate Equation 1, we can approximate the norm with an empirical expectation over X: C ≈ C_0 + λ ((1/N) Σ_{i=1}^{N} |f_{θ_t}(x_i) − f_{θ_{t+1}}(x_i)|₂²)^{1/2}. This cost function imposes a penalty upon the difference between the output of the current network at time t and the proposed network at t + 1. The data x_i may derive from some validation batch but must be drawn from the same distribution X. It would also be possible to use unlabeled data. We can write an update rule to minimize Equation 1 that is a modification of gradient descent. We call the rule Hilbert-constrained gradient descent (HCGD). It minimizes C in Equation 1 via an inner loop of gradient descent. To optimize C via gradient descent, we first replace C_0 with its first-order approximation J^T ∆θ, where J is the Jacobian. Thus we seek to converge to a ∆θ* at each update step, where ∆θ* = argmin_{∆θ} { J^T ∆θ + λ ‖f_{θ_t + ∆θ} − f_{θ_t}‖ }. Minimization towards the proper ∆θ* can be performed in an inner loop by a first-order method. We first propose some ∆θ_0 = −εJ = −ε∇_θ C_0 (for learning rate ε) and then iteratively correct this proposal by gradient descent towards ∆θ*. If only one correction is performed, we simply add the derivative of the Hilbert-constraining term after ∆θ_0 has been proposed. We found empirically that a single correction was often sufficient. In Appendix C, we demonstrate that this algorithm does actually decrease the distance traveled in function space, as expected. This algorithm is shown in Algorithm 1.
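Here is a compact PyTorch sketch of one HCGD step with a single correction, written under our assumptions (a plain SGD optimizer for the proposal and a validation batch x_val for the function-change term); it is an illustration of the procedure just described, not the reference implementation.

import torch

def hcgd_step(model, loss_fn, x_train, y_train, x_val, opt, lam=0.5, eta=0.02):
    # snapshot f_{theta_t} on the validation batch (labels are not needed)
    with torch.no_grad():
        out_old = model(x_val)

    # propose delta_theta_0 = -epsilon * grad C_0 (one ordinary SGD step)
    opt.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    opt.step()

    # one corrective step on the Hilbert-constraining term lam * ||f_new - f_old||
    opt.zero_grad()
    l2_change = (model(x_val) - out_old).pow(2).sum(dim=1).mean().sqrt()
    (lam * l2_change).backward()
    for p in model.parameters():
        if p.grad is not None:
            p.data.add_(p.grad, alpha=-eta)   # inner learning rate eta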
DISPLAYFORM3 Calculate correction 9: DISPLAYFORM4 10: DISPLAYFORM5 return θ t Note that the "proposed update" is presented as an SGD step, but could be a step of another optimizer (e.g. ADAM). In the Appendix, we display an extended version of this algorithm. This version allows for multiple corrective iterations in each step. It also allows for a form of momentum. In standard momentum for SGD, one follows a "velocity" term v which is adjusted at each step with the rule v ← βv + J (e.g. see BID26). For HCGD, we also keep a velocity term but update it with the final Hilbert-constrained update ∆θ rather than J. The velocity is used to propose the initial ∆θ 0 in the next update step. We found that this modification of momentum both quickened optimization and lowered generalization error. The natural gradient turns out to carry a similar interpretation as HCGD, in that the natural gradient also regularizes the change in functions' output distributions. Specifically, the natural gradient can be derived from a penalty upon the change in a network's output distribution as measured by the Kullbeck-Leibler divergence (rather than the L 2 distance).To show this, we start with a similar goal of function regularization and will come upon the natural gradient. Let us seek to regularize the change in a network's output distribution P θ throughout optimization of the parameters θ, choosing the Kullbeck-Leibler (KL) divergence as a measure of similarity between any two distributions. To ensure the output distribution changes little throughout optimization, we define a new cost function DISPLAYFORM0 where C 0 is the original cost function and λ is a hyperparameter that controls the importance of this regularization term. Optimization would be performed with respect to the proposed update θ t+1.Evaluating the KL divergence directly is problematic because it is infeasible to define the output density P θ everywhere. One can obtain a more calculable form by expanding D KL (P θt+1 P θt) around θ t to second order with respect to θ. The Hessian of the KL divergence is the Fisher information metric F. With ∆θ ≡ (θ t+1 − θ t), we can rewrite our regularized cost function as DISPLAYFORM1 To optimize C via gradient descent we first replace C 0 with its first order approximation. DISPLAYFORM2 At each evaluation, J is evaluated before any step is made, and we seek the value of ∆θ that minimizes Equation FORMULA15. By setting the derivative with respect to ∆θ to be zero, we can see that this value is DISPLAYFORM3 When λ = 1 this update is equal to the natural gradient. Thus, the natural gradient emerges as the optimal update when one regularizes the change in the output distribution during learning. In Appendix E, we show how one can approximate the natural gradient with an inner first-order optimization loop, like in HCGD. We note that HCGD is computationally cheaper than the exact natural gradient. It does not require any matrix inversions, nor the calculation of separate per-example gradients. When the validation batch X V is drawn anew for each of n corrective iterations (step 8 in Algorithm 1), HCGD requires an additional two forward passes and one backwards pass for each correction, for a total of 2 + 3n passes each outer step. In addition to being seen as a regularizer of functional change, it in an interesting aside to note that variants of the natural gradient have appeared with many justifications. 
These include data efficiency, minimizing a regret bound during learning, speeding optimization, and the benefits of whitened gradients. Amari originally developed the natural gradient in the light of information geometry and efficiency BID1; ). If some directions in parameter space are more informative of the network's outputs than others, then updates should be scaled by each dimension's informativeness. Equivalently, if not all examples carry equal information about a distribution, then the update step should be modified to make use of highly informative examples. That is, we wish to find a Fisherefficient algorithm (see BID2). The natural gradient uses the Fisher information matrix to scale the update by parameters' informativeness. There is also a connection between the natural gradient (and thus HCGD) and techniques that normalize and whiten gradients. The term F −1 J, after all, simply ensures that steps are made in a parameter space that is whitened by the covariance of the gradients. Whitening the gradients thus has the effect that SGD becomes more similar to the natural gradient. It appears that many approaches to normalize and whiten activations or gradients have been forwarded in the literature BID19; BID25; BID24; BID3 BID27; BID11; BID23; BID22 ). A similar effect is able to be learned with Batch Normalization, as well BID9 ). By normalizing and whitening the gradients, or by proxy, the activations, these various methods ensure that parameter space is a better proxy for function space. We compared HCGD and SGD on feedforward and recurrent architectures. If it is important that SGD limits changes in function space, and parameter and function space are loosely coupled, then HCGD should improve upon SGD. In all tests, we used a tuned learning rate for SGD, and then used the same learning rate for HCGD. We use values of λ = 0.5 and η = 0.02, generally about 10 times less than the principal learning rate. (For the n = 1 version, λ can be folded into the inner learning rate η. Values were chosen so that λη = 0.01.) We chose the batch size for the "validation" batch to be 256. While the examples in each "validation" batch were different than the training batch, they were also drawn from the train set. All models were implemented in PyTorch BID17 ).We tested HCGD as applied to the CIFAR-10 image classification problem. For reproducibility, we trained a Squeezenet v1.1, a convolutional neural network model with batch normalization optimized for parameter efficiency BID8 ). Overall HCGD does not outperform SGD in the final learning stage when trained with the same learning rate as SGD (initial = 0.1), though it does perform better in the early stage while the learning rate is high (FIG1). When we increase the initial learning rate to = 0.3 (red trace), the training accuracy decreases but the test accuracy is still marginally higher than SGD. Given the difference in relative performance between the high and low learning rate stages, it is possible that HCGD requires a different learning rate schedule to achieve the same level of gradient noise. HCGD thus decreases the test error at a given learning rate, but needs to be trained at a higher learning rate to achieve the same level of gradient noise. We next tested the performance of HCGD on a recurrent task. We trained an LSTM on the sequential MNIST task, in which pixels are input one at a time. The order of the pixels was permuted to further complicate the task. We found that HCGD outperformed SGD (Figure 6 . 
We used 1 correction step, as before, but found that using more correction steps yielded even better performance. However, HCGD underperformed ADAM. While not the ideal optimizer for this task, the fact that SGD can be improved indicates that SGD does not move as locally in function space as it should. Parameter space thus a poor proxy for function space in recurrent networks. HCGD first proposes an update by SGD, and then corrects it, but the first update step can also be other optimizers. Since Adam worked well for the sequential MNIST task, we tested if Adam could also be improved by taking a step to penalize the change in function space. We found that this is indeed the case, and show the as well in Figure 6 . To differentiate the SGD-and Adam-based methods, we refer to in the figure as SGD+HC and Adam+HC. This combination of Adam and L 2 functional regularization could help to achieve state-of-the-art performance on recurrent tasks. Neural networks encode functions, and it is important that analyses discuss the empirical relationship between function space and the more direct parameter space. Here, we argued that the L 2 Hilbert space defined over an input distribution is a tractable and useful space for analysis. We found that networks traverse this function space qualitatively differently than they do parameter space. Depending on the situation, a distance of parameters cannot be taken to represent a proportional distance between functions. We proposed two possibilities for how the L 2 distance could be used directly in applications. The first addresses multitask learning. By remembering enough examples in a working memory to accurately Figure 6 : Results of a singlelayer LSTM with 128 hidden units trained on the sequential MNIST task with permuted pixels. Shown are the traces for SGD and Adam (both with learning rate 0.01). We then take variants of the HCGD algorithm in which the first proposed step is taken to be an SGD step (SGD+HC) or an Adam step (Adam+HC). For SGD+HC we also show the effect of introducing more iterations n in the SGD+HC step.estimate an L 2 distance, we can ensure that the function (as defined on old tasks) does not change as a new task is learned. This regularization term is agnostic to the architecture or parameterization of the network. We found that this scheme outperforms simply retraining on the same number of stored examples. For large networks with millions of parameters, this approach may be more appealing than comparable methods like EWC and SI, which require storing large diagonal matrices. We also proposed a learning rule that reduces movement in function space during single-task optimization. Hilbert-constrained gradient descent (HCGD) constrains the change in L 2 space between successive updates. This approach limits the movement of the encoded function in a similar way as gradient descent limits movement of the parameters. It also carries a similar intuition as the forgetting application: to learn from current examples only in ways that will not affect what has already been learned from other examples. HCGD can increase test performance at image classification in recurrent situations, indicating both that the locality of function movement is important to SGD and that it can be improved upon. However, HCGD did not always improve , indicating either that SGD is stable in those regimes or that other principles are more important to generalization. This is by no means the only possibility for using an L 2 norm to improve optimization. 
It may be possible, for example, to use the norm to regularize the confidence of the output function (e.g. BID18). We are particularly interested in exploring if more implicit, architectural methods, like normalization layers, could be designed with the L 2 norm in mind. It interesting to ask if there is support in neuroscience for learning rules that diminish the size of changes when that change would have a large effect on other tasks. One otherwise perplexing finding is that behavioral learning rates in motor tasks are dependent on the direction of an error but independent of the magnitude of that error BID4. This is not expected by most models of gradient descent, but would be expected if the size of the change in the output distribution (i.e. behavior) were regulated to be constant. Regularization upon behavioral change (rather than synaptic change) would predict that neurons central to many actions, like neurons in motor pools of the spinal cord, would learn very slowly after early development, despite the fact that their gradient to the error on any one task (if indeed it is calculated) is likely to be quite large. Given our general resistance to overfitting during learning, and the great variety of roles of neurons, it is likely that some type of regularization of behavioral and perceptual change is at play. Figure A.5: Same as above, but for a network trained without Batch Normalization and also without weight decay. Weight decay has a strong effect. The main effect is that decreases the 2 distance traveled at all three scales (from last update, last epoch, and initialization), especially at late optimization. This explains the left column, and some of the middle and right columns. (It is helpful to look at the "white point" on the color scale, which indicates the point halfway through training. Note that parameter distances continue to change after the white point when WD is not used). An additional and counterintuitive property is that the L 2 distance from the last epoch increases in scale during optimization when WD is not used, but decreases if it is. These comparisons show that WD has a strong effect on the L 2 / 2 ratio, but that this ratio still changes considerable throughout training. This is in line with this paper's motivation to consider L 2 distances directly. Figure B.6: Here we reproduce the of FIG0 and FIG4 for the MNIST task, again using a CNN with batch normalization trained with SGD with momentum. It can be seen first that the majority of function space movement occurs very early in optimization, mostly within the first epoch. The standard deviation of the L 2 estimator, which sets the number of examples needed to accurately estimate a consistent value, is somewhat higher than for CIFAR-10. Finally, at right, it can be seen that the relationship between parameter distance traveled and function distance is similar to that of a CNN on CIFAR-10, include the qualitative change after test error converges (which here is around epoch 1). 2 SPACE Figure C.7: The HCGD algorithm is designed to reduce motion through L2-space. To confirm this, here we plot the cumulative squared distance traveled during optimization for a simple MLP trained on MNIST. This is calculated by the simple cumulative sum of the squared distances between consecutive updates. (The squared distance is nice because Brownian motion will present as a linear increase in its cumulative sum). 
It can be seen that SGD continues to drift in L2-space during the overfitting regime (around epoch 15, which is when test error saturates), while HCGD plateaus. This indicates that the function has converged to a single location; it ceases to change. With SGD, on the other hand, the network continues to cahnge even long after test error saturates. It is interesting to note that HCGD allows the parameters to continue to drift even though the function has generally converged. This version of the algorithm includes momentum. It also allows for multiple corrections. J ← ∇ θ C 0 (X) Calculate gradients 8: DISPLAYFORM0 ∆θ 0 ← −v Proposed update 10: DISPLAYFORM1 12: DISPLAYFORM2 Update buffer 14:for 1 < j < n do Optional additional corrections 15: ∆θ j ← ∆θ j−1 − η(g L 2) DISPLAYFORM3 18: DISPLAYFORM4 19:θ t ← θ t−1 + ∆θ 20:return θ t In order to better compare the natural gradient to the Hilbert-constrained gradient, we propose a natural gradient algorithm of a similar style. Previous work on the natural gradient has aimed to approximate F −1 as best and as cheaply as possible. This is equivalent to minimizing Equation 2 (i.e. J∆θ + λ 2 ∆θ T F ∆θ) with a single iteration of a second-order optimizer. For very large neural networks, however, it is much cheaper to calculate matrix-vector products than to approximately invert a large matrix. It is possible that the natural gradient may be more accessible via an inner gradient descent, which would be performed during each update step as an inner loop. We describe this idea at high level in Algorithm 2. After an update step is proposed by a standard optimizer, the algorithm iteratively corrects this update step towards the natural gradient. To start with a good initial proposed update, it is better to use a fast diagonal approximation of the natural gradient (such as Adagrad or RMSprop) as the main optimizer. Each additional correction requires just one matrix-vector product after the gradients are calculated. Depending on the quality of the proposed update, the number of iterations required is likely to be small, and even a small number of iterations will improve the update. ∆θ i+1 = ∆θ i − η(J + λF ∆θ i)Step towards DISPLAYFORM0 θ ← θ + ∆θ 8:return θ tSince the Fisher matrix F can be calculated from the covariance of gradients, it never needs to be fully stored. Instead, for an array of gradients G of size (# parameters, # examples), we can write DISPLAYFORM1 The choice of G is an important one. It cannot be a vector of aggregated gradients (i.e. J), as that would destroy covariance structure and would in a rank-1 Fisher matrix. Thus, we must calculate the gradients on a per-example basis. To compute G efficiently it is required that a deep learning framework implement forward-mode differentiation, which is currently not supported in popular frameworks. If we choose G to be the array of per-example gradients on the minibatch, F is known as the'empirical Fisher'. As explained in BID13 and in BID16, the proper method is to calculate G from the predictive (output) distribution of the network, P θ (y|x). This can be done as in BID14 by sampling randomly from the output distribution and re-running backpropagation on these fictitious targets, using (by necessity) the activations from the minibatch. Alternatively, as done in BID16, one may also use unlabeled or validation data to calculate G on each batch.
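Since F can be formed from the covariance of per-example gradients, the product F v needed in the inner loop never requires storing F explicitly. The sketch below is an assumption about how this could be implemented, not the authors' code: with G the (n_params, N) matrix of per-example gradients, F v = (1/N) G (G^T v).

import torch

def fisher_vector_product(G, v):
    """G: (n_params, N) per-example gradients; v: (n_params,) vector.
    Returns (1/N) * G @ (G^T @ v) without materializing F."""
    N = G.shape[1]
    return G @ (G.t() @ v) / N

# One corrective step towards the natural gradient (cf. Algorithm 2):
#   delta = delta - eta * (J + lam * fisher_vector_product(G, delta))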
We find movement in function space is not proportional to movement in parameter space during optimization. We propose a new natural-gradient style optimizer to address this.
1,369
scitldr
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem which maximizes the information the representation has about the task, I(T;Y), while ensuring that a minimum level of compression r is achieved (i.e., I(X;T) <= r). For practical reasons the problem is usually solved by maximizing the IB Lagrangian for many values of the Lagrange multiplier, therefore drawing the IB curve (i.e., the curve of maximal I(T;Y) for a given I(X;Y)) and selecting the representation of desired predictability and compression. It is known when Y is a deterministic function of X, the IB curve cannot be explored and other Lagrangians have been proposed to tackle this problem (e.g., the squared IB Lagrangian). In this paper we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) prove that if these Lagrangians are used, there is a one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes, hence, freeing from the burden of solving the optimization problem for many values of the Lagrange multiplier. Let X and Y be two statistically dependent random variables with joint distribution p(x, y). The information bottleneck (IB) investigates the problem of extracting the relevant information from X for the task of predicting Y. For this purpose, the IB defines a bottleneck variable T obeying the Markov chain Y ↔ X ↔ T so that T acts as a representation of X. define the relevant information as the information the representation keeps from Y after the compression of X (i.e., I(T ; Y)), provided a minimum level of compression (i.e, I(X; T) ≤ r). Therefore, we select the representation which yields the value of the IB curve that best fits our requirements. Definition 1 (IB functional). Let X and Y be statistically dependent variables. Let ∆ be the set of random variables T obeying the Markov condition Y ↔ X ↔ T. Then the IB functional is F IB,max (r) = max T ∈∆ {I(T ; Y)} s.t. I(X; T) ≤ r, ∀r ∈ [0, ∞). Definition 2 (IB curve). The IB curve is the set of points defined by the solutions of F IB,max (r) for varying values of r ∈ [0, ∞). Definition 3 (Information plane). The plane is defined by the axes I(T ; Y) and I(X; T). In practice, solving a constrained optimization problem such as the IB functional is difficult. Thus, in order to avoid the non-linear constraints from the IB functional the IB Lagrangian is defined. Definition 4 (IB Lagrangian). Let X and Y be statistically dependent variables. Let ∆ be the set of random variables T obeying the Markov condition Y ↔ X ↔ T. Then we define the IB Lagrangian as L β IB (T) = I(T ; Y) − βI(X; T). Here β ∈ is the Lagrange multiplier which controls the trade-off between the information of Y retained and the compression of X. Note we consider β ∈ because (i) for β ≤ 0 many uncompressed solutions such as T = X maximizes L β IB, and (ii) for β ≥ 1 the IB Lagrangian is non-positive due to the data processing inequality (DPI) (Theorem 2.8.1 from) and trivial solutions like T = const are maximizers with L β IB = 0 . We know the solutions of the IB Lagrangian optimization (if existent) are solutions of the IB functional by the Lagrange's sufficiency theorem (Theorem 5 in Appendix A of). Moreover, since the IB functional is concave (Lemma 5 of) we know they exist (Theorem 6 in Appendix A of). 
Therefore, the problem is usually solved by maximizing the IB Lagrangian with adaptations of the Blahut-Arimoto algorithm , deterministic annealing approaches or a bottom-up greedy agglomerative clustering or its improved sequential counterpart . However, when provided with high-dimensional random variables X such as images, these algorithms do not scale well and deep learning based techniques, where the IB Lagrangian is used as the objective function, prevailed (; ; . Note the IB Lagrangian optimization yields a representation T with a given performance (I(X; T), I(T ; Y)) for a given β. However there is no one-to-one mapping between β and I(X; T). Hence, we cannot directly optimize for a desired compression level r but we need to perform several optimizations for different values of β and select the representation with the desired performance (e.g.,). The Lagrange multiplier selection is important since (i) sometimes even choices of β < 1 lead to trivial representations such that p T |X (t|x) = p T (t), and (ii) there exist some discontinuities on the performance level w.r.t. the values of β . Moreover, recently showed how in deterministic scenarios (such as many classification problems where an input x i belongs to a single particluar class y i) the IB Lagrangian could not explore the IB curve. Particularly, they showed that multiple β yielded the same performance level and that a single value of β could in different performance levels. To solve this issue, they introduced the squared IB Lagrangian, L βsq sq-IB = I(T ; Y) − β sq I(X; T) 2, which is able to explore the IB curve in any scenario by optimizing for different values of β sq. However, even though they realized a one-to-one mapping between β s q existed, they did not find such mapping. Hence, multiple optimizations of the Lagrangian were still required to fing the best traded-off solution. The main contributions of this article are: 1. We introduce a general family of Lagrangians (the convex IB Lagrangians) which are able to explore the IB curve in any scenario for which the squared IB Lagrangian is a particular case of. More importantly, the analysis made for deriving this family of Lagrangians can serve as inspiration for obtaining new Lagrangian families which solve other objective functions with intrinsic trade-off such as the IB Lagrangian. 2. We show that in deterministic scenarios (and other scenarios where the IB curve shape is known) one can use the convex IB Lagrangian to obtain a desired level of performance with a single optimization. That is, there is a one-to-one mapping between the Lagrange multiplier used for the optmization and the level of compression and informativeness obtained, and we know such mapping. Therefore, eliminating the need of multiple optimizations to select a suitable representation. Furthermore, we provide some insight for explaining why there are discontinuities in the performance levels w.r.t. the values of the Lagrange multipliers. In a classification setting, we connect those discontinuities with the intrinsic clusterization of the representations when optimizing the IB bottleneck objective. The structure of the article is the following: in Section 2 we motivate the usage of the IB in supervised learning settings. Then, in Section 3 we outline the important used about the IB curve in deterministic scenarios. Later, in Section 4 we introduce the convex IB Lagrangian and explain some of its properties. 
After that, we support our (proved) claims with some empirical evidence on the MNIST dataset in Section 5. The reader can download the PyTorch implementation at https://gofile.io/?c=G9Dl1L. In this section we will first give an overview of supervised learning in order to later motivate the usage of the information bottleneck in this setting. In supervised learning we are given a dataset of n pairs of input features and task outputs. In this case, X and Y are the random variables of the input features and the task outputs. We assume x i and y i are sampled i.i.d. from the true distribution p XY (x, y) = p Y |X (y|x)p X (x). The usual aim of supervised learning is to use the dataset D n to learn a particular conditional distribution qŶ |X,θ (ŷ|x) of the task outputs given the input features, parametrized by θ, which is a good approximation of p Y |X (y|x). We useŶ andŷ to indicate the predicted task output random variable and its outcome. We call a supervised learning task regression when Y is continuous-valued and classification when it is discrete. Usually supervised learning methods employ intermediate representations of the inputs before making predictions about the outputs; e.g., hidden layers in neural networks (Chapter 5 from) or transformations in a feature space through the kernel trick in kernel machines like SVMs or RVMs (Sections 7.1 and 7.2 from). Let T be a possibly stochastic function of the input features X with a parametrized conditional distribution q T |X,θ (t|x), then, T obeys the Markov condition Y ↔ X ↔ T. The mapping from the representation to the predicted task outputs is defined by the parametrized conditional distribution qŶ |T,θ (ŷ|t). Therefore, in representation-based machine learning methods the full Markov Chain is Y ↔ X ↔ T ↔Ŷ. Hence, the overall estimation of the conditional probability p Y |X (y|x) is given by the marginalization of the representations, In order to achieve the goal of having a good estimation of the conditional probability distribution p Y |X (y|x), we usually define an instantaneous cost function j θ (x, y): X × Y → R. This serves as a heuristic to measure the loss our algorithm (parametrized by θ) obtains when trying to predict the realization of the task output y with the input realization x. Clearly, we are interested in minimizing the expectation of the instantaneous cost function over all the possible input features and task outputs, which we call the cost function. However, since we only have a finite dataset D n we have instead to minimize the empirical cost function. Definition 5 (Cost function and empirical cost function). Let X and Y be the input features and task output random variables and x ∈ X and y ∈ Y their realizations. Let also j θ (x, y) be the instantaneous cost function, θ the parametrization of our learning algorithm, and the given dataset. Then we define: 1. The cost function: 2. The emprical cost function: The discrepancy between the normal and empirical cost functions is called the generalization gap or generalization error (see Section 1 of , for instance) and intuitevely, the smaller this gap is, the better our model generalizes (i.e., the better it will perform to new, unseen samples in terms of our cost function). Definition 6 (Generalization gap). Let J(θ) andĴ(θ, D n) be the cost and the empirical cost functions as defined in Definition 5. 
Then, the generalization gap is defined as and it represents the error incurred when the selected distribution is the one parametrized by θ when the ruleĴ(θ, D n) is used instead of J(θ) as the function to minimize. Ideally, we would want to minimize the cost function. Hence, we usually try to minimize the empirical cost function and the generalization gap simultaneously. The modifications to our learning algorithm which intend to reduce the generalization gap but not hurt the performance on the empirical cost function are known as regularization. Definition 7 (Representation cross-entropy cost function). Let X and Y be two statistically dependent variables with joint distribution p XY (x, y) = p Y |X (y|x)p X (x). Let also T be a random variable obeying the Markov condition Y ↔ X ↔ T and q T |X,θ (t|x) and qŶ |T,θ (ŷ|t) be the encoding and decoding distributions of our model, parametrized by θ. Finally, let C(p(z)||q(z)) = −E p(Z) [log(q(z))] be the cross entropy between two probability distributions p and q. Then, the cross-entropy cost function is where j CE,θ (x, y) = C(q T |X,θ (t|x)||qŶ |T,θ (ŷ|t)) is the instantaneous representation cross-entropy cost function and The cross-entropy is a widely used cost function in classification tasks (e.g., ; ;) which has many interesting properties . Moreover, it is known that minimizing the J CE (θ) maximizes the mutual information I(T ; Y) (see Section 2 of or Section II A. of). Definition 8 (Nuisance). A nuisance is any random variable which affects the observed data X but is not informative to the task we are trying to solve. That is, Ξ is a nuisance for Similarly, we know that minimizing I(X; T) minimizes the generalization gap for restricted classes when using the cross-entropy cost function (Theorem 1 of), and when using I(T ; Y) directly as an objective to maximize (Theorem 4 of). in Proposition 3.1 upper bound the information of the input representations, T, with nuisances that affect the observed data, Ξ, with I(X; T). Therefore minimizing I(X; T) helps generalization by not keeping useless information of Ξ in our representations. Thus, jointly maximizing I(T ; Y) and minimizing I(X; T) is a good choice both in terms of performance in the available dataset and in new, unseen data, which motivates studies on the IB. showed that when Y is a deterministic function of X (i.e., Y = f (X)), the IB curve is piecewise linear. More precisely, it is shaped as stated in Proposition 1. Proposition 1 (The IB curve is piecewise linear in deterministic scenarios). Let X be a random variable and Y = f (X) be a deterministic function of X. Let also T be the bottleneck variable that solves the IB functional. Then the IB curve in the information plane is defined by the following equation: Furthermore, they showed that the IB curve could not be explored by optimizing the IB Lagrangian for multiple β because the curve was not strictly concave. That is, there was not a one-to-one relationship between β and the performance level. Theorem 1 (In deterministic scenarios, the IB curve cannot be explored using the IB Lagrangian). Let X be a random variable and Y = f (X) be a deterministic function of X. Let also T be the bottleneck variable that solves arg max T ∈∆ {L β IB} with ∆ the set of r.v. obeying the Markov condition Y ↔ X ↔ T. Then: is the only solution β ∈ yields. Clearly, a situation like the one depicted in Theorem 1 is not desirable, since we cannot aim for different levels of compression or performance. 
For this reason, we generalize previous efforts and look for families of Lagrangians which are able to explore the IB curve. Inspired by the squared IB Lagrangian, we look at the conditions a function of I(X; T) requires in order to be able to explore the IB curve. In this way, we realize that any monotonically increasing and strictly convex function is able to do so, and we call the family of Lagrangians with these characteristics the convex IB Lagrangians, due to the nature of the introduced function.

Theorem 2 (Convex IB Lagrangians). Let Δ be the set of random variables T obeying the Markov condition Y ↔ X ↔ T. Then, if h is a monotonically increasing and strictly convex function, the IB curve can always be recovered by the solutions of arg max_{T ∈ Δ} {L^{β_h}_{IB,h}(T)}, where

$$ L^{\beta_h}_{\mathrm{IB},h}(T) = I(T; Y) - \beta_h\, h(I(X; T)). $$

That is, for each point (I(X; T), I(T; Y)) on the IB curve there is a β_h for which maximizing L^{β_h}_{IB,h}(T) achieves this solution. Furthermore, β_h is strictly decreasing w.r.t. I(X; T). The proof of this theorem can be found in Appendix A. Furthermore, by exploiting the IB curve duality (Lemma 10 of) we were able to derive other families of Lagrangians which allow for the exploration of the IB curve (Appendix E).

Remark 1. Clearly, if h is the identity function (i.e., h(I(X; T)) = I(X; T)), then we recover the normal IB Lagrangian. However, since the identity function is not strictly convex, it cannot ensure the exploration of the IB curve.

Let B_h denote the domain of Lagrange multipliers β_h for which we can find solutions on the IB curve with the convex IB Lagrangian. The convex IB Lagrangians do not only allow us to explore the IB curve with different β_h; they also allow us to identify the specific β_h that obtains a given point (I(X; T), I(T; Y)), provided we know the IB curve in the information plane. Conversely, the convex IB Lagrangian allows us to find the specific point (I(X; T), I(T; Y)) that is obtained by a given β_h.

Proposition 2 (Bijective mapping between IB curve points and convex IB Lagrange multipliers). Let the IB curve in the information plane be known; i.e., I(T; Y) = f_IB(I(X; T)) is known. Then there is a bijective mapping from Lagrange multipliers β_h ∈ B_h \ {0} of the convex IB Lagrangian to points on the IB curve (I(X; T), f_IB(I(X; T))). Furthermore, these mappings are

$$ \beta_h = \frac{f'_{\mathrm{IB}}(I(X; T))}{h'(I(X; T))} \quad \text{and} \quad I(X; T) = (h')^{-1}\!\left( \frac{f'_{\mathrm{IB}}(I(X; T))}{\beta_h} \right), $$

where h' is the derivative of h and (h')^{-1} is the inverse of h'. This is interesting because in deterministic scenarios we know the shape of the IB curve (Theorem 1) and because the convex IB Lagrangians allow for the exploration of the IB curve (Theorem 2). A proof for Proposition 2 can be found in Appendix B.

A direct consequence of this proposition is that we know the domain of Lagrange multipliers, B_h, which allows for the exploration of the IB curve if the shape of the IB curve is known. Furthermore, if the shape is not known, we can at least bound that range.

Corollary 1 (Domain of the convex IB Lagrange multiplier with known IB curve shape). Let the IB curve in the information plane be I(T; Y) = f_IB(I(X; T)) and let I_max = I(X; Y). Let also I(X; T) = r_max be the minimum mutual information s.t. f_IB(r_max) = I_max (i.e., r_max = min_r {f_IB(r) = I_max}). Then, the range of Lagrange multipliers that allows the exploration of the IB curve with the convex IB Lagrangian is B_h = [β_h,min, β_h,max], with

$$ \beta_{h,\min} = \lim_{r \to r_{\max}^-} \frac{f'_{\mathrm{IB}}(r)}{h'(r)} \quad \text{and} \quad \beta_{h,\max} = \lim_{r \to 0^+} \frac{f'_{\mathrm{IB}}(r)}{h'(r)}, $$

where f'_IB(r) and h'(r) are the derivatives of f_IB(I(X; T)) and h(I(X; T)) w.r.t. I(X; T), evaluated at r, respectively.
Corollary 2 (Domain of the convex IB Lagrange multiplier, bound). Let h'(r) be the derivative of h(I(X; T)) w.r.t. I(X; T) evaluated at r, let 𝒳 be the set of possible realizations of X, and let β_0 and Ω_x be defined as in . Then the range of Lagrange multipliers that allows the exploration of the IB curve is bounded by

$$ B_h \subseteq \left[\, 0,\; \Big( \inf_{\Omega_x \subset \mathcal{X}} \{\beta_0(\Omega_x)\} \cdot \lim_{r \to 0^+} h'(r) \Big)^{-1} \right]. $$

Corollaries 1 and 2 allow us to reduce the range search for β when we want to explore the IB curve. Practically, inf_{Ω_x ⊂ 𝒳} {β_0(Ω_x)} might be difficult to calculate, so an algorithm was derived to approximate it. However, we still recommend using 1 for simplicity.¹ The proofs for both corollaries are found in Appendices C and D.

In order to showcase our claims we use the MNIST dataset. We simply modify the nonlinear-IB method, which is a neural network that minimizes the cross-entropy while also minimizing a differentiable kernel-based estimate of I(X; T). We then use this technique to maximize a lower bound on the convex IB Lagrangians by applying the functions h to the I(X; T) estimate. For a fair comparison, we use the same network architecture as that in: first, a stochastic encoder² q_{T|X,θ}(t|x) built on f_θ,enc, a three-layer fully-connected encoder with 800 ReLU units in the first two layers and 2 linear units in the last layer; second, a deterministic decoder q_{Ŷ|T,θ}(ŷ|t) = f_θ,dec(t), where f_θ,dec is a fully-connected layer of 800 ReLU units followed by an output layer with 10 softmax units. For further details about the experiment setup and additional results for different values of α and η, please refer to Appendix F.

In Figure 1 we show our results for two particularizations of the convex IB Lagrangians: the power IB Lagrangian and the exponential IB Lagrangian. We can clearly see how both Lagrangians are able to explore the IB curve (first column of Figure 1) and how the theoretical performance trend of the Lagrangians matches the experimental one (second and third columns of Figure 1). There are small mismatches between the theoretical and experimental performance. This is because using the nonlinear-IB, as stated by , does not guarantee that we find optimal representations, due to factors like (i) inaccurate estimation of I(X; T), (ii) restrictions on the structure of T, (iii) use of an estimate of the decoder instead of the real one, and (iv) the typical non-convex optimization issues that arise with gradient-based methods. The main difference comes from the discontinuities in performance for increasing β, whose cause is still unknown (cf.). It has been observed, however, that the bottleneck variable performs an intrinsic clusterization³ in classification tasks (see, for instance, Figure 2b). We realized that this clusterization matches the quantized performance levels observed (e.g., compare Figure 2a with the top center graph in Figure 1), with maximum performance when the number of clusters is equal to the cardinality of Y and reduced performance as the number of clusters decreases. We do not have a mathematical proof of the exact relationship between these two phenomena; however, we agree that it is an interesting matter and hope this realization serves as motivation to derive new theory.

Footnotes: (1) Note that considers the dual problem (see Appendix E), so what they refer to as β^{-1} translates to β in this article. (2) The encoder needs to be stochastic to (i) ensure a finite and well-defined mutual information and (ii) make gradient-based optimization methods over the IB Lagrangian useful. (3) The clusters were obtained using the DBSCAN algorithm.

To sum up, in order to achieve a desired level of performance with the convex IB Lagrangian as an objective, one should: 1. In a deterministic or close-to-deterministic setting (see the corresponding definition in): use the adequate β_h for that performance level using Proposition 2.
Then if the perfomance is lower than desired (i.e., we are placed in the wrong performance plateau), gradually reduce the value of β h until reaching the previous performance plateau. 2. In a stochastic setting: Draw the IB curve with multiple values of β h on the range defined by Corollary 2 and select the representations that best fit their interests. = In practice, there are different criterions for choosing the function h. For instance, the exponential IB Lagrangian could be more desirable than the power IB Lagrangian when we want to draw the IB curve since it has a finite range of β h. This is −1, ∞) for the power IB Lagrangian. Furthermore, there is a trade-off between (i) how much the selected h function ressembles the identity (e.g., with α or η close to zero), since it will suffer from similar problems as the original IB Lagrangian; and (ii) how fast it grows (e.g., higher values of α or η), since it will suffer from value convergence; i.e., optimizing for separate values of β h will achieve similar levels of performance (Figure 3). Please, refer to Appendix G for a more thorough explanation of this phenomenon. The information bottleneck is a widely used and studied technique. However, it is known that the IB Lagrangian cannot be used to achieve varying levels of performance in deterministic scenarios. Moreover, in order to achieve a particular level of performance multiple optimizations with different Lagrange multipliers must be done to draw the IB curve and select the best traded-off representation. In this article we introduced a general family of Lagrangians which allow to (i) achieve varying levels of performance in any scenario, and (ii) pinpoint a specific Lagrange multiplier β h to optimize for a specific performance level in known IB curve scenarios (e.g., deterministic). Furthermore, we showed the β h domain when the IB curve is known and a β h domain bound for exploring the IB curve when it is unkown. This way we can reduce and/or avoid multiple optimizations and, hence, reduce the computational effort for finding well traded-off representations. Finally, (iii) we provided some insight to the discontinuities on the performance levels w.r.t. the Lagange multipliers by connecting those with the intrinsic clusterization of the bottleneck variable. Proof. We start the proof by remembering the optimization problem at hand (Definition 1): We can modify the optimization problem by iff h is a monotonically non-decreasing function since otherwise h(I(X; T)) ≤ h(r) would not hold necessarily. Now, let us assume ∃T * ∈ ∆ and β * h s.t. T * maximizes L β * h IB,h (T) over all T ∈ ∆, and I(X; T *) ≤ r. Then, we can operate as follows: Here, the equality from equation comes from the fact that since I(X; T) ≤ r, then ∃ξ ≥ 0 s.t. h(I(X; T)) − h(r) + ξ = 0. Then, the inequality from equation holds since we have expanded the optimization search space. Finally, in equation we use that T * maximizes L β * h IB,h (T) and that I(X; T *) ≤ r. Now, we can exploit that h(r) and ξ do not depend on T and drop them in the maximization in equation. We can then realize we are maximizing over L Therefore, since I(T * ; Y) satisfies both the maximization with T * ∈ ∆ and the constraint. Now, we know if such β * h exists, then the solution of the Lagrangian will be a solution for F IB,max (r). 
Then, if we consider Theorem 6 from the Appendix of and consider the maximization problem instead of the minimization problem, we know if both I(T ; Y) and −h(I(X; T)) are concave functions, then a set of Lagrange multipliers S * h exists with these conditions. We can make this consideration because f is concave if −f is convex and max{f} = min{−f}. We know I(T ; Y) is a concave function of T for T ∈ ∆ (Lemma 5 of) and I(X; T) is convex w.r.t. T given p X (x) is fixed (Theorem 2.7.4 of). Thus, if we want −h(I(X; T)) to be concave we need h to be a convex function. Finally, we will look at the conditions of h so that for every point (I(X; T), I(T ; Y)) in the IB curve, there exists a unique β * h s.t. L β * h IB,h (T) is maximized. That is, the conditions of h s.t. |S * h | = 1. For this purpose we will look at the solutions of the Lagrangian optimization: Now, if we integrate both sides of equation over all T ∈ ∆ we obtain where β is the Lagrange multiplier from the IB Lagrangian and h (I(X; T)) is dh(I(X;T)) dI(X;T). Also, if we want to avoid indeterminations of β h we need h (I(X; T)) not to be 0. Since we already imposed h to be monotonically non-decreasing, we can solve this issue by strengthening this condition. That is, we will require h to be monotonically increasing. We would like β h to be continuous, this way there would be a unique β h for each value of I(X; T). We know β is a non-increasing function of I(X; T) (Lemma 6 of). Hence, if we want β h to be a strictly decreasing function of I(X; T), we will require h to be an strictly increasing function of I(X; T). Therefore, we will require h to be a strictly convex function. Thus, if h is an strictly convex and monotonically increasing function, for each point (I(X; T), I(T ; Y)) in the IB curve s.t. dI(T ; Y)/dI(X; T) > 0 there is a unique β h for which maximizing L β h IB,h (T) achieves this solution. Proof. In Theorem 2 we showed how each point of the IB curve (I(X; T), I(T ; Y)) can be found with a unique β h maximizing L β h IB,h. Therefore since we also proved L β h IB,h is strictly concave w.r.t. T we can find the values of β h that maximize the Lagrangian for fixed I(X; T). First, we look at the solutions of the Lagrangian maximization: Then as before we can integrate at both sides for all T ∈ ∆ and solve for β h: Moreover, since h is a strictly convex function its derivative h is strictly decreasing. Hence, h is an invertible function (since a strictly decreasing function is bijective and a function is invertible iff it is bijective by definition). Now, if we consider β h > 0 to be known and I(X; T) to be the unknown we can solve for I(X; T) and get: Note we require β h not to be 0 so the mapping is defined. Proof., we see that maximizing this Lagrangian is directly maximizing I(T ; Y). We know I(T ; Y) is a concave function of T for T ∈ ∆ (Theorem 2.7.4 from); hence it has a maximum. We also know I(T ; Y) ≤ I(X; Y). Moreover, we know I(X; Y) can be achieved if, for example, Y is a deterministic function of T (since then the Markov Chain X ↔ T ↔ Y is formed). Thus, max T ∈∆ {L 0 For β h = 0 we know maximizing L IB,h (T) can obtain the point in the IB curve (r max, I max) (Lemma 1). Moreover, we know that for every point (I(X; T), f IB (I(X; T))), ∃!β h s.t. max{L β h IB,h (T)} achieves that point (Theorem 2). Thus, ∃!β h,min s.t. lim r→r − max (r, f IB (r)) is achieved. 
From Proposition 2 we know this β h,min is given by Since we know f IB (I(X; T)) is a concave non-decreasing function in (0, r max) (Lemma 5 of) we know it is continuous in this interval. In addition we know β h is strictly decreasing w.r.t. I(X; T) (Theorem 2). Furthermore, by definition of r max and knowing I(T ; Y) ≤ I(X; Y) we know f IB (r) = 0, ∀r > r max. Therefore, we cannot ensure the exploration of the IB curve for β h s.t. 0 < β h < β h,min. Then, since h is a strictly increasing function in (0, r max), h is positive in that interval. Hence, taking into account β h is strictly decreasing we can find a maximum β h when I(X; T) approaches to 0. That is, D PROOF OF COROLLARY 2 Proof. If we use Corollary 1, it is straightforward to see that for all IB curves f IB and functions h. Therefore, we look at a domain bound dependent on the function choice. That is, if we can find β min ≤ f IB (r) and β max ≥ f IB (r) for all IB curves and all values of r, then The region for all possible IB curves regardless of the relationship between X and Y is depicted in Figure 4. The hard limits are imposed by the DPI (Theorem 2.8.1 from) and the fact that the mutual information is non-negative (Corollary 2.90 for discrete and first Corollary of Theorem 8.6.1 for continuous random variables from). Hence, a minimum and maximum values of f IB are given by the minimum and maximum values of the slope of the Pareto frontier. Which means Note 0/(lim r→r − max {h (r)}) = 0 since h is monotonically increasing and, thus, h will never be 0. Figure 4: Graphical representation of the IB curve in the information plane. Dashed lines in orange represent tight bounds confining the region (in light orange) of possible IB curves (delimited by the red line, also known as the Pareto frontier). Black dotted lines are informative values. In blue we show an example of a possible IB curve confining a region (in darker orange) of an IB curve which does not achieve the Pareto frontier. Finally, the yellow star represents the point where the representation keeps the same information about the input and the output. Finally, we can tighten the bound using the from , where, in Theorem 2, they showed the slope of the Pareto frontier could be bounded in the origin by f IB ≤ (inf Ωx⊂X {β 0 (Ω x)}) −1. Finally, we know that in deterministic classification tasks inf Ωx⊂X {β 0 (Ω x)} = 1, which aligns with and what we can observe from Figure 4. Therefore, We can use the same ideas we used for the convex IB Lagrangian to formulate new families of Lagrangians that allow the exploration of the IB curve. For that we will use the duality of the IB curve (Lemma 10 of ). That is: Definition 9 (IB dual functional). Let X and Y be statistically dependent variables. Let also ∆ be the set of random variables T obeying the Markov condition Y ↔ X ↔ T. Then the IB dual functional is Theorem 3 (IB curve duality). Let the IB curve be defined by the solutions of F IB,max (r) for varying and ∀i∃r s.t. (F IB,min (i), i) = (r, F IB,max (r)). From this definition it follows that minimizing the dual IB Lagrangian, L βdual IB,dual (T) = I(X; T) − β dual I(T ; Y), for β dual = β −1 is equivalent to maximizing the IB Lagrangian. In fact, the original Lagrangian for solving the problem was defined this way . We decided to use the maximization version because the domain of useful β is bounded while it is not for β dual. 
Following the same reasoning as we did in the proof of Theorem 2, we can ensure the IB curve can be explored if: Here, h is a monotonically increasing strictly convex function, g is a monotonically increasing strictly concave function, and β g, β g,dual, β h,dual are the Lagrange multipliers of the families of Lagrangians defined above. In a similar manner, one could obtain relationships between the Lagrange multipliers of the IB Lagrangian and the convex IB Lagrangian with these Lagrangian families. Also, one could find a range of values for these Lagrangians to allow for the IB curve exploration and define a bijective mapping between their Lagrange multipliers and the IB curve. However, (i) as mentioned in Section 2.2, I(T ; Y) is particularly interesting to maximize without transformations because of its meaning. Moreover, (ii) like β dual, the domain of useful β g and β h,dual is not upper bounded. These two reasons make these other Lagrangians less preferable. We only include them here for completeness. Nonetheless, we encourage the curiours reader to explore these families of Lagrangians too. In order to generate the empirical support from Section 5 we used the nonlinear IB on the MNIST dataset . This dataset contains 60,000 training samples and 10,000 testing samples of hand-written digits. The samples are 28x28 pixels and are labeled from 0 to 9; i.e., X = R 784 and Y = {0, 1, ..., 9}. As in we trained the neural network with the Adam optimization algorithm with a learning rate of 10 −4 but we introduced a 0.6 decay rate every 10 iterations. After talking with the authors of the nonlinear IB, we decided to estimate the gradients of both I θ (X; T) and the cross entropy with the same mini-batch of 128 samples. Moreover, we did not learn the covariance of the mixture of Gaussians used for the kernel density estimation of I θ (X; T) and we set it to (exp(−1)) 2. We trained for 100 epochs 6. All the weights were initialized according to the method described by using a Gaussian distribution. The reader can find the PyTorch implementation at https://gofile.io/?c=G9Dl1L. Then, we used the DBSCAN algorithm for clustering. Particularly, we used the scikit-learn implementation with = 0.3 and min samples = 50. In Figure 5 we show how the IB curve can be explored with different values of α for the power IB Lagrangian and in Figure 6 for different values of η and the exponential IB Lagrangian. Finally, in Figure 7 we show the clusterization for the same values of α and η as in Figures 5 and 6. In this way the connection between the performance discontinuities and the clusterization is more evident. Furthermore, we can also observe how the exponential IB Lagrangian maintains better the theoretical performance than the power IB Lagrangian (see Appendix G for an explanation of why). When chossing the right h function, it is important to find the right balance between avoiding value convergence and aiming for strong convexity. Practically, this balance is found by looking at how much faster h grows w.r.t. the identity function. In order to explain this issue we are going to use the example of classification on MNIST , where I(X; Y) = H(Y) = log 2, and again the power and exponential IB Lagrangians. If we use Proposition 2 on both Lagrangians we obtain the bijective mapping between their Lagrange multipliers and a certain level of compression in the classification setting: 1. Power IB Lagrangian: 2. Exponential IB Lagrangian: β exp = (η exp(ηI(X; T))) −1 and I(X; T) = − log(ηβ exp)/η. 
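The two mappings listed above translate directly into code. In the sketch below, the exponential-Lagrangian expressions are the ones quoted in the text, while the power-Lagrangian expressions are reconstructed from Proposition 2 with h(r) = r^{1+α} and f'_IB = 1 (the deterministic classification case), so they should be read as our assumption rather than a quoted formula; function names are illustrative.

```python
import numpy as np

# Bijective mapping of Proposition 2 specialized to a deterministic classification
# task, where f'_IB(I(X;T)) = 1 on the rising part of the IB curve (Theorem 1).

def beta_power(I, alpha):
    """beta_h for the power IB Lagrangian, h(r) = r**(1 + alpha) (reconstructed)."""
    return 1.0 / ((1.0 + alpha) * I ** alpha)

def I_power(beta, alpha):
    """Inverse mapping: compression level targeted by a given beta_h."""
    return ((1.0 + alpha) * beta) ** (-1.0 / alpha)

def beta_exp(I, eta):
    """beta_h for the exponential IB Lagrangian, h(r) = exp(eta * r) (as quoted)."""
    return 1.0 / (eta * np.exp(eta * I))

def I_exp(beta, eta):
    return -np.log(eta * beta) / eta

if __name__ == "__main__":
    target = np.log(10)          # e.g., aim at I(X;T) = H(Y) = log 10 for MNIST
    b = beta_exp(target, eta=1.0)
    print(b, I_exp(b, eta=1.0))  # round-trips back to the target compression level
```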
Hence, we can simply plot the curves of I(X; T) vs. β_h for different hyperparameters α and η (see Figure 8). In this way we can observe how increasing the growth of the function too much (e.g., increasing α or η in this case) causes many different values of β_h to converge to very similar values of I(X; T). This is an issue both for drawing the curve (for obvious reasons) and for aiming at a specific performance level. Due to the nature of the estimation of the IB Lagrangian, the theoretical and practical values of β_h that yield a specific I(X; T) may vary slightly (see Figure 1). Then, if we select a function with too high a growth, a small change in β_h can result in a big change in the performance obtained.

Definition 10 (µ-strong convexity). If a function f(r) is twice continuously differentiable and its domain is confined to the real line, then it is µ-strongly convex if f''(r) ≥ µ ≥ 0 for all r.

Experimentally, we observed that when the growth of our function h(r) is small in the domain of interest r > 0, the convex IB Lagrangian does not perform well. Later we realized that this is closely related to the strength of the convexity of our function. In Theorem 2 we imposed that the function h be strictly convex in order to have a unique β_h for each value of I(X; T). Hence, since in practice we are not exactly computing the Lagrangian but an estimate of it (e.g., with the nonlinear IB), we require strong convexity in order to be able to explore the IB curve. We now look at the second derivatives of the power and exponential functions: h''(r) = (1 + α)α r^{α−1} and h''(r) = η² exp(ηr), respectively. Here we see how both functions are inherently 0-strongly convex for r > 0 and α, η > 0. However, values of α < 1 and η < 1 could lead to low µ-strong convexity in certain domains of r. In particular, the case α < 1 is dangerous because the function approaches 0-strong convexity as r increases, so the power IB Lagrangian performs poorly when small values of α are used to aim for high performance levels.
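The µ-strong-convexity discussion above can be checked numerically with the two second derivatives; the particular values of α, η, and the grid of r below are arbitrary examples, not the settings used in the experiments.

```python
import numpy as np

# Second derivatives of the two h choices discussed above:
#   power:        h(r) = r**(1+alpha)   ->  h''(r) = (1+alpha)*alpha*r**(alpha-1)
#   exponential:  h(r) = exp(eta*r)     ->  h''(r) = eta**2 * exp(eta*r)
def h2_power(r, alpha):
    return (1.0 + alpha) * alpha * r ** (alpha - 1.0)

def h2_exp(r, eta):
    return eta ** 2 * np.exp(eta * r)

r = np.linspace(0.5, 8.0, 6)
print("power, alpha=0.5:", h2_power(r, 0.5))  # decays towards 0 as r grows
print("exp,   eta=0.5:  ", h2_exp(r, 0.5))    # grows with r, stays bounded away from 0
```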
We introduce a general family of Lagrangians that allow exploring the IB curve in all scenarios. When these are used and the IB curve is known, one can optimize directly for a specific performance/compression level.
1,370
scitldr
We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning. NLMs exploit the power of both neural networks---as function approximators, and logic programming---as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers. After being trained on small-scale tasks (such as sorting short arrays), NLMs can recover lifted rules, and generalize to large-scale tasks (such as sorting longer arrays). In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world. Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone. Deep learning has achieved great success in various applications such as speech recognition , image classification , machine translation BID21;; ), and game playing (; BID17 . Starting from , however, there has been a debate over the problem of systematicity (such as understanding recursive systems) in connectionist models (; ;).Logic systems can naturally process symbolic rules in language understanding and reasoning. Inductive logic programming (ILP) BID7 BID8 ) has been developed for learning logic rules from examples. Roughly speaking, given a collection of positive and negative examples, ILP systems learn a set of rules (with uncertainty) that entails all of the positive examples but none of the negative examples. Combining both symbols and probabilities, many problems arose from high-level cognitive abilities, such as systematicity, can be naturally resolved. However, due to an exponentially large searching space of the compositional rules, it is difficult for ILP to scale beyond small-sized rule sets (; ;).To make the discussion concrete, let us consider the classic blocks world problem BID10 ). As shown in Figure 1, we are given a set of blocks on the ground. We can move a block x and place it on the top of another block y or the ground, as long as x is moveable and y is placeable. We call this operation Move(x, y). A block is said to be moveable or placeable if there are no other blocks on it. The ground is always placeable, implying that we can place all blocks on the ground. Given an initial configuration of blocks world, our goal is to transform it into a target configuration by taking a sequence of Move operations. Although the blocks world problem may appear simple at first glance, four major challenges exist in building a learning system to automatically accomplish this task:1. The learning system should recover a set of lifted rules (i.e., rules that apply to objects uniformly instead of being tied with specific ones) and generalize to blocks worlds which contain more blocks than those encountered during training. To get an intuition on this, we refer the readers who are not familiar with the blocks world domain to the task of learning to sort arrays (e.g.,In this paper, we propose Neural Logic Machines (NLMs) to address the aforementioned challenges. In a nutshell, NLMs offer a neural-symbolic architecture which realizes Horn clauses in first-order logic (FOL). The key intuition behind NLMs is that logic operations such as logical ANDs and ORs can be efficiently approximated by neural networks, and the wiring among neural modules can realize the logic quantifiers. The rest of the paper is organized as follows. 
We first revisit some useful definitions in symbolic logic systems and define our neural implementation of a rule induction system in Section 2. As a supplementary, we refer interested readers to Appendix A for implementation details. In Section 3 we evaluate the effectiveness of NLM on a broad set of tasks ranging from relational reasoning to decision making. We discuss related works in Section 4, and conclude the paper in Section 5. The NLM is a neural realization of logic machines (under the Closed-World Assumption 3). Given a set of base predicates, grounded on a set of objects (the premises), NLMs sequentially apply first-order rules to draw , such as a property about an object. For example, in the blocks world, based on premises IsGround(u) and Clear(u) of object u, NLMs can infer whether u is moveable. Internally, NLMs use tensors to represent logic predicates. This is done by grounding the predicate as True or False over a fixed set of objects. Based on the tensor representation, rules are implemented as neural operators that can be applied over the premise tensors and generate tensors. Such neural operators are probabilistic, lifted, and able to handle relational data with various orders (i.e., operating on predicates with different arities). We adopt a probabilistic tensor representation for logic predicates. Suppose we have a set of objects U = {u 1, u 2, . . ., u m}. A predicate p(x 1, x 2, . . ., x r), of arity r, can be grounded on the object set U (informally, we call it U-grounding), ing in a tensor p U of shape [m r] [m, m − 1, m − 2, . . ., m − r + 1], where the value of each entry p U (u i1, u i2, . . ., u ir) of the tensor represents whether p is True under the grounding that x 1 = u i1, x 2 = u i2, · · ·, x r = u ir. Here, we restrict that the grounded objects of all x i's are mutually exclusive, i.e., i j = i k for all pairs of Figure 2: An illustration of Neural Logic Machines (NLM). During forward propagation, NLM takes object properties and relations as input, performs sequential logic deduction, and outputs conclusive properties or relations of the objects. Implementation details can be found in Section 2.3.indices j and k. This restriction does not limit the generality of the representation, as the "missing" entries can be represented by the U-grounding of other predicates with a smaller arity. For example, for a binary predicate p, the grounded values of the p U (x, x) can be represented by the U-grounding of a unary predicate p (x) p(x, x).We extend this representation to a collection of predicates of the same arity. Let C (r) be the number of predicates of arity r. We stack the U-grounding tensors of all predicates as a tensor of shape DISPLAYFORM0, where the last dimension corresponds to the predicates. Intuitively, a group of C unary predicates grounded on m objects can be represented by a tensor of shape m, C BID22, describing a group of "properties of objects", while a m, m − 1, DISPLAYFORM1 shaped tensor for C BID23 binary predicates describes a group of "pairwise relations between objects". In practice, we set a maximum arity B for the predicates of interest, called the breadth of the NLM.In addition, NLMs take a probabilistic view of predicates. Each entry in U-grounding tensors takes value from, which can be interpreted as the probability being True. All premises, , and intermediate in NLMs are represented by such probabilistic tensors. 
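As a rough illustration of this tensor representation (not taken from the released implementation), the following PyTorch sketch builds the U-grounding tensors for a group of unary and a group of binary predicates over m objects, with the diagonal entries (x, x) dropped from the binary tensor as described above; the sizes and random values are arbitrary examples.

```python
import torch

# Minimal sketch of the U-grounding tensor representation.
# m objects; C1 unary predicates stored as [m, C1]; C2 binary predicates stored as
# [m, m-1, C2], i.e. with the diagonal pairs (x, x) excluded as in the text.
m, C1, C2 = 4, 3, 2

unary = torch.rand(m, C1)        # entry (i, p): probability that predicate p holds for object i

full = torch.rand(m, m, C2)      # start from all ordered pairs, then drop x1 == x2
mask = ~torch.eye(m, dtype=torch.bool)
binary = full[mask].view(m, m - 1, C2)

print(unary.shape)               # torch.Size([4, 3])
print(binary.shape)              # torch.Size([4, 3, 2])
```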
As a side note, we impose the restriction that all arguments in the predicates can only be variables or objects (i.e., constants) but not function symbols, which follows the setting of Datalog BID2 ). Our goal is to build a neural architecture to learn rules that are both lifted and able to handle relational data with multiple arities. We present different modules of our neural operators by making analogies to a set of essential meta-rules in symbolic logic systems. Specifically, we discuss our neural implementation of boolean logic rules, as lifted rules containing boolean operations (AND, OR, NOT) over a set of predicates; and quantifications, which bridge predicates with different arities by logic quantifiers (∀ and ∃).Next, we combine these neural units to compose NLMs. Figure 2 illustrates the overall multi-layer, multi-group architecture of an NLM. An NLM has layers of depth D (horizontally), and each layer has B + 1 computation units (vertically). These units operate on the tensor representations of predicates whose arities range from [0, B], respectively. NLMs take input tensors of predicates (premises), perform layer-by-layer computations, and output tensors as . As the number of layers increases, higher levels of abstraction can be formed. For example, the output of the first layer may represent Clear(x), while a deeper layer may output more complicated predicate like Moveable(x). Thus, forward propagation in NLMs can be interpreted as a sequence of rule applications. We further show that NLMs can efficiently realize a partial set of Horn clauses. We start from the neural boolean logic rules and the neural quantifiers. Boolean logic. We use the following symbolic meta-rule for boolean logic: DISPLAYFORM0 where expression can be any boolean expressions consisting of predicates over all variables (x 1, . . ., x r) andp(·) is the conclusive predicate. For example, the rule Moveable(x) ← ¬IsGround(x) ∧ Clear(x) can be instantiated from this meta-rule. Denote P = {p 1, . . ., p k} as the set of |P| predicates appeared in expression. By definition, all p i's have the same arity r and can be stacked as a tensor of shape [m r, |P|]. In Eq. 1, for a specific grounding of the conclusive predicatep(x 1 · · · x r), it is conditioned r! × |R| grounding values with the same subset of objects, of arbitrary permutation as the arguments to all input predicates P. For example, consider a specific ternary predicatep(x 1, x 2, x 3). For three different objects a, b, c ∈ U, the groundingp(a, b, c) is conditioned on p j (a, b, c), DISPLAYFORM1, a) (all permutations of the parameters) for all j (all input predicates).Our neural implementation of boolean logic rules is a lifted neural module that uniformly applies to any grounding entries (x 1 · · · x r) in the output tensorp U. It has a Permute(·) operation transforming the tensor representation of P, followed by a multi-layer perceptron (MLP). Given the tensor representation of P, for each p U i (x 1, x 2, . . ., x r), the Permute(·) operation creates r! new tensors as p U i,1,..., p U i,r! by permuting all axes that index objects, with all possible permutations. We stack all to form a [m r, r! × |P|]-shaped tensor. An MLP uniformly applies to all m r object indices: DISPLAYFORM2 where σ is the sigmoid nonlinearity, θ is the trainable network parameters. For all sets of mutually exclusive indexes i 1,..., i r ∈ {1, 2, . . ., m}, the same MLP is applied. Thus, the size of θ is independent of the number of objects m. 
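To make the Permute-then-MLP construction concrete, here is a minimal sketch for the binary (r = 2) case, written against a full [m, m, C] tensor for readability rather than the [m, m−1, C] layout used in the paper; the module name, hidden size, and sigmoid output are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BooleanLogicBinary(nn.Module):
    """Sketch of the boolean-logic unit for arity r = 2: Permute followed by a shared MLP."""
    def __init__(self, in_preds, out_preds, hidden=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_preds, hidden), nn.ReLU(),
            nn.Linear(hidden, out_preds), nn.Sigmoid(),
        )

    def forward(self, p):                                   # p: [m, m, C] grounding tensor
        permuted = torch.cat([p, p.transpose(0, 1)], dim=-1)  # stack p(x, y) with p(y, x)
        return self.mlp(permuted)                           # same MLP applied to every (x, y) entry

m, C = 5, 4
out = BooleanLogicBinary(C, 3)(torch.rand(m, m, C))
print(out.shape)                                            # torch.Size([5, 5, 3]); parameters do not depend on m
```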
This property is analogous to the implicit unification property of Horn clauses: DISPLAYFORM3 Quantification. We introduce two types of meta-rules for quantification, namely expansion and reduction. Let p be a predicate, and we have DISPLAYFORM4 where DISPLAYFORM5. The expansion operation constructs a new predicate q from p, by introducing a new variable x r+1. For example, consider the following rule DISPLAYFORM6 This rule does not fit the meta-rule in Eq. 1 as some predicates on the RHS only take a subset of variables as inputs. However, it can be described by using the expansion and the boolean logic meta-rules jointly. DISPLAYFORM7 The expansion meta-rule (Eq. 3) for a set of C r-ary predicates, represented by a [m r, C]-shaped tensor, introduces a new and distinct variable x r+1. Our neural implementation Expand(·) repeats each predicate (their tensor representation) for (m − r) times, and stacks in a new dimension. Thus the output shape is [m r+1, C].The other meta-rule is for reduction: DISPLAYFORM8 where the ∀ quantifier can also be replaced by ∃. The reduction operation reduces a variable in a predicate via the quantifier. As an example, the rule to deduce the moveability of objects, DISPLAYFORM9 can be expressed using meta-rules as follows: DISPLAYFORM10 The reduction meta-rule (Eq. 4) for a set of C (r + 1)-ary predicates, represented by a [m r+1, C]-shaped tensor, eliminates the variable x r+1 via quantifiers. For ∃ (or ∀), our neural implementation Reduce(·) takes the maximum (or minimum) element along the dimension of x r+1, and stacks the two ing tensors. Therefore, the output shape becomes [m r, 2C]. denotes the number of output predicates of group j at layer i.[·] denotes the shape of the tensor. NLMs realize symbolic logic rules in a multi-layer multi-group architecture, illustrated in Figure 2. An NLM has D layers, and each layer has B + 1 computation units as groups. Between layers, we use intra-group computation (Eq. 1). The predicates at each layer are grouped by their arities, and inside each group, we use inter-group computation (Eq. 3 and 4). DISPLAYFORM0 as the outputs of layer i, where DISPLAYFORM1 is the output corresponding to the r-ary unit at layer i. For convenience, we denote i. Inter-group computation. As shown in FIG0, we connect tensors from the previous layer i − 1 in vertically neighboring groups (i.e. r − 1, r and r + 1), and aligns their shapes by expansion (Eq. 3) or reduction (Eq. 4) to form an intermediate tensor I DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 Nonexistent terms are ignored (e.g. when r + 1 > B or r − 1 < 0). Note that from the previous layer, O (r−1) DISPLAYFORM5, respectively. After the concatenation, the ing tensor I Intra-group computation. The intra-group computation is implemented as the neural boolean logic in Eq. 1. It take the intermediate tensor I DISPLAYFORM6 where σ is the sigmoid nonlinearity and θ Then it uses expansion/reduction to compose the intermediate tensor I DISPLAYFORM7 i−1 predicates. For each object pair (x, y), the output U-grounding tensor of predicates is computed by intra-group computation O BID23 i (x, y) = MLP(Concat(I BID23 i (x, y), I BID23 i (y, x)); θ BID23 i ), and the output shape is [m, m − 1, C BID23 i]. The Concat(·, ·) corresponds to the Permute operation, while the MLP is shared among all pairs of objects (x, y).Remark. It can be verified that NLMs can realize the forward chaining of a partial set of Horn clauses. In NLMs, we consider only finite cases. 
Thus, there should not exist cyclic references of predicates among rules. The extension to support cyclic references is left as a future work. See the proof in Appendix D. Thus, given the training dataset containing pairs of (premises, ), NLMs can induce lifted rules that entail the and generalize w.r.t. the number of objects during testing. The expressive power of NLM depends on multiple factors:1. The depth D of NLM (i.e., number of layers) restricts the maximum number of deduction steps. 2. The breadth B of NLM (i.e., the maximum number of variables in all predicates considered) limits the arity of relations among objects. Practically, most (intermediate) predicates are binary or ternary and we set B depending on the task (typically 2 or 3, see Table 3 in Appendix B.) 3. The number of output predicates used at each layer (C (r) i in FIG0 ). Let C = max i,r C (r) i, and this number is often small in our experiments (e.g., 8 or 16). 4. In Eq. 2, the expressive power of MLP (number of hidden layers and number of hidden neurons) restricts the complexity of the boolean logic to be represented. In our experiments, we usually prefer shallow networks (e.g., 0 or 1 hidden layer) with a small number of neurons (e.g., 8 or 16). This can be viewed as a low-dimension regularization on the logic complexity and encourages the learned rule to be simple. The computational complexity of NLM's forward or backward propagation is O(m B DC 2) where m is the number of objects. The network has O(DC 2) parameters. Assuming B is a small constant, the computational complexity of NLM is quadratic in the number of allowed predicates. In this section, we show that NLM can solve a broad set of tasks, ranging from relational reasoning to decision making. Furthermore, we show that NLM trained using small-sized instances can generalize to large-sized instances. In the experiments, Softmax-Cross-Entropy loss is used for supervised learning tasks, and REINFORCE is used for reinforcement learning tasks. Due to space limitation, interested readers are referred to Appendix A for details of training (including curriculum learning) in the decision making tasks, and Appendix B for more implementation details (such as residual connections ), hyper-parameters, and model selection criterion. We consider two baselines as representatives of the connectionist and symbolicist: Memory Networks (MemNN) BID19 and Differentiable Inductive Logic Programming (∂ILP) , a state-of-the-art ILP framework. We also make comparisons with other models such as Differentiable Neural Computer (DNC) BID6 and graph neural networks whenever eligible. For MemNN, in order to handle an arbitrary number of inputs (properties, relations), we adopt the method from BID6. Specifically, each object is assigned with a unique identifier (a binary integer ranging from 0 to 255), as its "name". The memory of MemNN is now a set of "pre-conditions". For unary predicates, the memory slot contains a tuple (id(x), 0, properties(x)) for each x, and for binary predicates p(x, y), the memory slot contains a tuple (id(x), id(y), relations(x, y)), for each pair of (x, y). Both properties(x) and relations(x, y) are length-k vectors v, where k is the number of input predicates. We number each input predicate with an integer i = 1, 2, · · ·, k. If object x has a property DISPLAYFORM0 We extract the key and value for MemNN's to lookup on the given pre-conditions with 2-layer multi-layer perceptrons (MLP). 
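The following sketch shows one way to flatten the multi-relational input into MemNN memory slots as described above; the 8-bit identifiers and the per-predicate 0/1 vectors follow the text, while the function names and the dictionary-based bookkeeping are our own illustrative choices.

```python
import numpy as np

def object_id(i, bits=8):
    """Assign each object a unique binary identifier, as described for the MemNN baseline."""
    return np.array([(i >> b) & 1 for b in range(bits)], dtype=np.float32)

def build_memory(num_objects, properties, relations):
    """properties[i] and relations[(i, j)] are length-k 0/1 vectors over the k input predicates."""
    slots, zero = [], np.zeros(8, dtype=np.float32)
    for i in range(num_objects):
        slots.append(np.concatenate([object_id(i), zero, properties[i]]))       # (id(x), 0, properties(x))
    for (i, j), v in relations.items():
        slots.append(np.concatenate([object_id(i), object_id(j), v]))           # (id(x), id(y), relations(x, y))
    return np.stack(slots)   # one row per pre-condition; keys and values are then produced by MLPs

mem = build_memory(2, {0: np.ones(3, np.float32), 1: np.zeros(3, np.float32)},
                   {(0, 1): np.array([1, 0, 0], np.float32)})
print(mem.shape)             # (3, 19)
```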
MemNN relies on iterative queries to the memory to perform relational reasoning. Note that MemNN takes a sequential representation of the multi-relational data. For ∂ILP, the grounding of all base predicates is used as the input to the system. The family tree is a benchmark for inductive logic programming, where the machine is given a family tree containing m members. The family tree is represented by the following relations (predicates): Table 1: Comparison among MemNN, ∂ILP and the proposed NLM in family tree and graph reasoning, where m is the size of the testing family trees or graphs. Both ∂ILP and NLM outperform the neural baseline and achieve perfect accuracy (100%) on test set. Note N/A mark means that ∂ILP cannot scale up in 2-OutDegree. IsSon, IsDaughter, IsFather and IsMother. The goal of the task is to reason out other properties of family members or relations between them. Our are summarized in Table 1.For MemNN, we treat the problem of relation prediction as a question answering task. For example, to determine whether member x has a father in the family tree, we input id(x) to MemNN as the question. MemNN then performs multiple queries to the memory and updates its hidden state. The finishing hidden state is used to classify whether HasFather(x). For relations (binary predicates), the corresponding MemNN takes the concatenated embedding of id(x) and id(y) as the question. For ∂ILP, we take the grounded probability of the "target" predicate as the output; for an NLM with D layers, we take the corresponding group of output predicates at the last layer (for property prediction, we use tensor OD to represent unary predicates, while for relation prediction we use tensor O BID23 D to represent binary predicates) and classify the property or relation with a linear layer. All models are trained on instances of size 20 and tested on instances of size 20 and 100 (size is defined as the number of family members). The models are trained with fully supervised learning (labels are available for all objects or pairs of objects). During the testing phase, the accuracy is evaluated (and averaged) on all objects (for properties such as HasFather) or pairs of objects (for relations such as IsUncle). MGUncle is defined as one's maternal great uncle, which is also used by Differentiable Neural Computer (DNC) BID6. We report the performance of MemNN in the format of Micro / Macro accuracy. We also try our best to replicate the setting used by BID6, and as a comparison, in the task of "finding" the MGUncle instead of "classifying", DNC reaches the accuracy of 81.8%. We further extend the Family tree to general graphs and report the reasoning performance in Table 1.We treat each node in the graph as an object (symbol). The (undirected) graph is fed into the model in the form of a "HasEdge" relation between nodes (which is an adjacent matrix). Besides, an extra property color represented by one-hot vectors is defined for every node. A node has the property of AdjacentToRed if it is adjacent to a red node by an outgoing edge. k-Connectivity is a relation between two nodes in the graph, which is true if two nodes are connected by a path with length at most k. A node has property k-OutDegree if its out-degree is exactly k. The N/A of ∂ILP in the 2-OutDegree task comes from its memory restriction , where 3-ary intentional predicate is required. 
As an example, a human-written logic rule for 2-OutDegree All models are trained on instances of size 10 and tested on instances of size 10 and 50 (size is defined as the number of nodes in the graph). between MemNN and the proposed NLM in the blocks world, sorting integers, and finding shortest paths, where m is the number of blocks in the blocks world environment or the size of the arrays/graphs insorting/path environment. Both models are trained on instance size m ≤ 12 and tested on m = 10 or 50. The performance is evaluated by two metrics and separated by "/": the probability of completing the task during the test, and the average Moves used by the agents when they complete the task. There is no for ∂ILP since it fails to scale up. MemNN fails to complete the blocks world within the maximum m × 4 Moves. 3.4 BLOCKS WORLD We also test NLM's capability of decision making in the classic blocks world domain BID10 ) by slightly extending the model to fit the formulation of Markov Decision Process (MDP) in reinforcement learning. Shown in Figure 1, an instance of the blocks world environment contains two worlds: the initial world and the target world, each containing the ground and m blocks. The task is to take actions in the operating world and make its configuration the same as the target world. The agent receives positive rewards only when it accomplishes the task and the sparse reward setting brings significant hardness. Each object (blocks or ground) can be represented by four properties: world_id, object_id, coordinate_x, coordinate_y. The ground has a fixed coordinate. The input is the of the numeral comparison among all pairs of objects (may come from different worlds). For example, in x-coordinate, the comparison produces three relations for each object pair (i, j), i = j: DISPLAYFORM0 The only operation is Move(i, j), which moves object i onto the object j in the operating world if i is movable and j is placeable. If the operation is invalid, it will have no effect; otherwise, the action takes effect and the state represented as coordinates will change accordingly. In our setting, an object i is movable iff it is not the ground and there are no blocks on it, i.e. ∀ j ¬(Up(i, j) ∧ SameX(i, j)). Object i is placeable iff it is the ground or there are no blocks on it. To avoid the ambiguity of the x-coordinates while putting blocks onto the ground, we set the x coordinate of block i to be i when it is placed onto the ground. The action space is (m + 1) × m where m is the number of blocks in the world and +1 comes from the "ground". For both MemNN and NLM, we apply a shared MLP on the output relational predicates of each pair of objects OD (x, y) and compute an action score s(x, y). The probability for Move(x, y) is ∝ exp s(x, y) (by taking a Softmax). The are summarized in TAB3. For more discussion on the confidence bounds of the experiments, please refer to Appendix B.6. We further show NLM's ability to excel at algorithmic tasks, such as Sorting and Path. We view an algorithm as a sequence of primitive actions and cast as a reinforcement learning problem. Sorting. We first consider the problem of sorting integers. Given a length-m array a of integers, the algorithm needs to iterative swap elements to sort the array in ascending order. We treat each slot in the array as an object, and input their index relations (whether i < j) and numeral relations (whether a[i] < a[j]) to NLM or MemNN. The action space is m × (m − 1) indicating the pair of integers to be swapped. 
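A minimal sketch of the policy head just described: a shared MLP scores each ordered pair of objects from the final binary-predicate tensor, and a softmax over all pairs yields the probability of each Move(x, y); the layer sizes, the use of a full [m, m, C] tensor, and the masking of the diagonal are our own simplifications.

```python
import torch
import torch.nn as nn

class PairwisePolicy(nn.Module):
    """Score Move(x, y) from the output binary predicates O_D^(2)(x, y)."""
    def __init__(self, num_preds, hidden=16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(num_preds, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, rel):                          # rel: [m, m, C] output predicate tensor
        m = rel.shape[0]
        s = self.score(rel).squeeze(-1)              # [m, m] action scores s(x, y)
        s = s.masked_fill(torch.eye(m, dtype=torch.bool), float("-inf"))  # forbid Move(x, x)
        return torch.softmax(s.view(-1), dim=0)      # distribution over the m*(m-1) valid ordered pairs

probs = PairwisePolicy(num_preds=8)(torch.rand(6, 6, 8))
print(probs.shape, probs.sum().item())               # torch.Size([36]) and ~1.0 (diagonal entries get zero mass)
```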
TAB3 summarizes the learning performance. As the comparisons between all pairs of elements in the array are given to the agent, sorting the array within the maximum number of swaps is an easy task. A trivial solution is to randomly swap an inversion BID24 in the array at each step. Beyond being able to generalize to arrays of arbitrary length, with different hyper-parameters and random seeds, the learned algorithms can be interpreted as Selection-Sort, Bubble-Sort, etc. We include videos demonstrating some learned algorithms in our website. Path finding. We also test the performance of finding a path (single-source single-target path) in a given graph as a sequential decision-making problem in reinforcement learning environment. Given an undirected graph represented by its adjacency matrix as relations, the algorithm needs to find a path from a start node s (with property IsStart(s) = True) to the target node t (with property IsTarget(t) = True). To restrict the number of deduction steps, we set the maximum distance between s and t to be 5 during the training and set the distance between s and t to be 4 during the testing, which replicates the setting of BID6. TAB3 summarizes the . Path task here can be seen as an extension of bAbI task 19 (path finding) with symbolic representation. As a comparison with graph neural networks, Li et al. FORMULA2 achieved 99% accuracy on the bAbI task 19. Contrastively, we formulate the shortest path task as a more challenging reinforcement learning (decision-making) task rather than a supervised learning (prediction) task as in BID6. Specifically, the agent iteratively chooses the next node next along the path. At the next step, the starting node will become next (at each step, the agent will move to next). As a comparison, in Graves et al. FORMULA2, Differentiable Neural Computer (DNC) finds the shortest path with probability 55.3% in a similar setting. ILP and relational reasoning. Inductive logic programming (ILP) BID7 BID8 ) is a paradigm for learning logic rules derived from a limited set of rule templates from examples. Being a powerful way of reasoning over discrete symbols, it is successfully applied to various language-related problems, and has been integrated into modern learning frameworks (; BID14). introduces a differentiable implementation of ILP which works with connectionist models such as CNNs. Sharing a similar spirit, BID15 introduces an end-to-end differentiable logic proving system for knowledge base (KB) reasoning. A major challenge of these approaches is to scale up to a large number of complex rules. Searching a rule as complex as our ShouldMove example in Appendix E from scratch is beyond the scope of most systems that use weighted symbolic rules generated from templates. As shown in Section 2.4, both computational complexity and parameter size of the NLM grow polynomially w.r.t. the number of allowed predicates (in contrast to the exponential dependence in ∂ILP ), but factorially w.r.t. the breadth (max arity, same as ∂ILP). Therefore, our method can deal with more complex tasks such as the blocks world which requires using a large number of intermediate predicates, while ∂ILP fails to search in such a large space. Our paper also differs from existing approaches on using neural networks to augment symbolic rule induction BID1 BID3. Specifically, we have no rule designed by humans as the input or the knowledge base for the model. NLMs are general neural architectures for learning lifted rules from only input-output pairs. 
Our work is also related to symbolic relational reasoning, which has a wide application in processing discrete data structures such as knowledge graphs and social graphs (; ; ;). Most symbolic relational reasoning approaches (e.g., ; BID15 are developed for KB reasoning, in which the predicates on both sides of a rule is known in the KB. Otherwise, the complexity grows exponentially in the number of used rules for a , which is the case in the blocks world. Moreover, Yang et al. FORMULA2 considers rues of the form query(Y, X) ← R n (Y, Z n) ∧ · · · ∧ R 1 (Z 1, X), which is not for general reasoning. The key of BID15 and is to learn subsymbolic embeddings of entities and predicates for efficient KB completion, which differs from our focus. While NLMs can scale up to complex rules, the number of objects/entities or relations should be bounded as a small value (e.g., < 1000), since all predicates are represented as tensors. This is, to some extent, in contrast with the systems developed for knowledge base reasoning. We leave the scalability of NLMs to large entity sets as future works. Besides, modular networks BID0 BID4 are proposed for the reasoning over subsymbolic data such as images and natural language question answering. BID16 implements a visual reasoning system based on "virtual" objects brought by receptive fields in CNNs. tackles the problem of deriving structured representation from raw pixel-level inputs. combines structured visual representation and theorem proving. Graph neural networks and relational inductive bias. Graph convolution networks (GCNs) (; ; ;) is a family of neural architectures working on graphs. As a representative, Gilmer et al. FORMULA2 proposes a message passing modeling for unifying various graph neural networks and graph convolution networks. GCNs achieved great success in tasks with intrinsic relational structures. However, most of the GCNs operate on pre-defined graphs with only nodes and binary connections. This restricts the expressive power of models in general-purpose reasoning tasks .In contrast, this work removes such restrictions and introduces a neural architecture to capture lifted rules defined on any set of objects. Quantitative support the effectiveness of the proposed model in a broad set of tasks ranging from relational reasoning to modeling general algorithms (as decision-making process). Moreover, being fully differentiable, NLMs can be plugged into existing convolutional or recurrent neural architectures for logic reasoning. Relational decision making. Logic-driven decision making is also related to Relational RL , which models the environment as a collection of objects and their relations. State transition and policies are both defined over objects and their interactions. Examples include OO-MDP , symbolic models for learning in interactive domains BID12, structured task definition by object-oriented instructions , and structured policy learning . General planning methods solve these tasks via planning based on rules (; BID18 Jiménez et al., 2019). The goal of our paper is to introduce a neural architecture which learns lifted rules and handle relational data with multiple orders. We leave its application in other RL and planning tasks as future work. Neural abstraction machines and program induction. Neural Turing Machine (NTM) (; enables general-purpose neural problem solving such as sorting by introducing an external memory that mimics the execution of Turing Machine. 
Neural program induction and synthesis BID9 BID13 ; BID11 ; ; BID20 are recently introduced to solve problems by synthesizing computer programs with neural augmentations. Some works tackle the issue of the systematical generalization by introducing extra supervision . , more complex programs such as language parsing are studied. However, the neural programming and program induction approaches are usually hard to optimize in an end-to-end manner, and often require strong supervisions (such as ground-truth programs). In this paper, we propose a novel neural-symbolic architecture called Neural Logic Machines (NLMs) which can conduct first-order logic deduction. Our model is fully differentiable, and can be trained in an end-to-end fashion. Empirical evaluations show that our method is able to learn the underlying logical rules from small-scale tasks, and generalize to large-scale tasks. The promising open the door for several research directions. First, the maximum depth of the NLMs is a hyperparameter to be specified for individual problems. Future works may investigate how to extend the model, so that it can adaptively select the right depth for the problem at hand. Second, it is interesting to extend NLMs to handle vector inputs with real-valued components. Currently, NLM requires symbolic input that may not be easily available in applications like health care where many inputs (e.g., blood pressure) are real numbers. Third, training NLMs remains nontrivial, and techniques like curriculum learning have to be used. It is important to find an effective yet simpler alternative to optimize NLMs. Last but not least, unlike ILP methods that learn a set of rules in an explainable format, the learned rules of NLMs are implicitly encoded as weights of the neural networks. Extracting human-readable rules from NLMs would be a meaningful future direction. This supplementary material is organized as follows. First, we provide more details for our training method and introduce the curriculum learning used for reinforcement learning tasks in Appendix A. Second, in Appendix B, we provide more implementation details and hyper-parameters of each task in Section 3. Next, we provide deferred discussion of NLM extensions in Appendix C. Besides, we give a proof of how NLMs could realize the the forward chaining of a set of logic rules defined in Horn cluases. See Appendix D for details. In Appendix E, We also provide a list of sample rules for the blocks world problem in order to exhibit the complexity of describing a strategies or policies. Finally, we also provide a minimal implementation of NLM in TensorFlow for reference at the end of the supplementary material (Appendix F). In this section, we provide hyper-parameter details of our training method and introduce the examguided curriculum learning used for reinforcement learning tasks. We also provide details of the data generation method. We optimize both NLM and MemNN with Adam (Kingma & Ba FORMULA2) and use a learning rate of α = 0.005.For all supervised learning tasks (i.e. family tree and general graph tasks), we use Softmax-CrossEntropy as loss function and a training batch size of 4.For reinforcement learning tasks (i.e. the blocks world, sorting and shortest path tasks), we use REINFORCE algorithm for optimization. Each training batch is composed of a single episode of play. Similar to A3C BID6 ), we add policy entropy term in the objective function (proposed by) to help exploration. 
The update for parameters θ of policy π is θ ← θ + α ∇_θ [log π(a_t | s_t; θ) · v_t + β · H(π(· | s_t; θ))], where H is the entropy function, s_t and a_t are the state and action at time t, and v_t is the discounted reward starting from time t. The hyper-parameter β is set according to different environments and learning stages depending on the demand for exploration. In all environments, the agent receives a reward of value 1.0 when it completes the task within a limited number of steps (which is related to the number of objects). To encourage the agent to use as few moves as possible, we give a reward of −0.01 for each move. The reward discount factor γ is 0.99 for all tasks. Table 3: Hyper-parameters for reinforcement learning tasks. The meaning of the hyper-parameters can be found in Section A.1 and Section A.2. For the Path environment, the step limit is set to the actual distance between the starting point and the target point, to encourage the agent to find the shortest path. Training instances are grouped by their complexity (as lessons). For example, in the game of BlocksWorld, we consider the number of blocks in a game instance as its complexity. During training, we present the training instances to the model from lessons with increasing difficulty. We periodically test the model's performance (as exams) on novel instances of the same complexity as the ones in its current lesson. A model that performs well (whose accuracy reaches a certain threshold) passes the exam and advances to a harder lesson (of more complex training instances). The exam-guided curriculum learning exploits the previously gained knowledge to ease the learning of more complex instances. Moreover, a model graduates once its performance on the final exam exceeds a threshold. In our experiments, each lesson contains training instances with the same number of objects. For example, the first lesson in the blocks world contains all possible instances consisting of 2 blocks (in each world). The instances of the second lesson contain 3 blocks in each world, and in the last lesson (11 lessons in total) there are 12 blocks in each world. We report the range of the curriculum in Table 3 for the three RL tasks. Another essential ingredient for the efficient training of NLMs is to record the model's failure cases. Specifically, we keep track of two sets of training instances: positive and negative (meaning the agent achieves the task or not). Each instance presented in the exam is collected into the positive or negative set depending on whether the agent achieves the task or not. All training samples are drawn from the positive set with probability Ω and from the negative set with probability 1 − Ω. This balanced sampling strategy prevents models from getting stuck at sub-optimal solutions. Algorithm 1 illustrates the pseudo-code of the curriculum learning guided by exams and failures. The evaluation process ("exam") randomly samples examples from the 3 most recent lessons. The agent goes through these examples and gets the success rate (the ratio of achieving the task) as its performance, which is used to decide whether the agent passes the exam by comparing to a lesson-dependent threshold. As we want a perfect model, the threshold for passing the last lesson (the "final exam") is 100%. We linearly decrease the threshold by 0.5% for each earlier lesson, to prevent over-fitting (e.g., the threshold of the first lesson in the blocks world is 95%).
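As an illustrative sketch (not the authors' code), the per-episode REINFORCE objective with the entropy bonus described above might be computed as follows; the function and variable names are hypothetical.

```python
def reinforce_episode_loss(log_probs, entropies, rewards, gamma=0.99, beta=0.01):
    """Negative REINFORCE objective with an entropy bonus for one episode.

    log_probs: log pi(a_t | s_t) of the actions actually taken
    entropies: H(pi(. | s_t)) at each step
    rewards:   immediate rewards (e.g., -0.01 per move, 1.0 on completion)
    """
    # Discounted return v_t from each timestep t.
    returns, v = [], 0.0
    for r in reversed(rewards):
        v = r + gamma * v
        returns.append(v)
    returns.reverse()
    # Maximize sum_t [log pi(a_t|s_t) * v_t + beta * H(pi(.|s_t))]; minimize its negation.
    objective = sum(lp * vt + beta * h
                    for lp, vt, h in zip(log_probs, returns, entropies))
    return -objective
```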
After the "exam", the examples are collected into positive and negative pools according to the outcome (success or not). During training, we use balanced sampling for choosing training instances from the positive and negative pools, drawing from the positive pool with probability Ω. The hyper-parameters Ω, the number of epochs, the number of episodes in each training epoch, and the number of episodes in one evaluation are shown in Table 3 for the three RL tasks. This section provides more implementation details for the model and experiments, and summarizes the hyper-parameters used in the experiments for our NLM and the baseline algorithm MemNN. Analogous to the residual links in BID17, we add residual connections to our model. Specifically, for each layer illustrated in Figure 2, the base predicates (inputs) are concatenated to the conclusive predicates (outputs) group-wise. That is, input unary predicates are concatenated to the deduced unary predicates, while input binary predicates are concatenated to the conclusive binary predicates. Table 4 shows the hyper-parameters used by NLM for different tasks. For all MLPs inside NLMs, we use no hidden layer, and the hidden dimension (i.e., the number of intermediate predicates) of each layer is set to 8 across all our experiments. In supervised learning tasks, a model is called "graduated" if its training loss is below a threshold depending on the task (usually 1e-6). In reinforcement learning tasks, an agent is called "graduated" if it can pass the final exam, i.e., get a 100% success rate on the evaluation process of the last lesson. We note that in the randomly generated cases, the number of maternal great uncle (IsMGUncle) relations is relatively small. This makes the learning of this relation hard and results in a graduation ratio of only 20%. If we increase the maximum number of people in training examples to 30, the graduation ratio grows to 50%. Table 4: Hyper-parameters for Neural Logic Machines. The definitions of depth and breadth are illustrated in Figure 2. "Res." refers to the use of residual links. "Grad." refers to the ratio of successful graduation in 10 runs with different random seeds, which partially indicates the difficulty of the task. "Num. Examples/Episodes" means the maximum number of examples/episodes used to train the model in the supervised learning and reinforcement learning cases. The table columns are Tasks, Depth, Breadth, Res., Grad., and Num. Examples/Episodes. In the family tree tasks, we mimic the process of families growing using a timeline. For each newly created person, we randomly sample the gender and parents (which could be none, indicating they are not included in the family tree) of the person. We also maintain lists of singles of each gender, and randomly pick two from each list to be married (each time a person is created). We randomly permute the order of people. In the general graph tasks (including Path), we adopt the generation method from BID6, which samples m nodes on a unit square, and the out-degree k_i of each node is sampled. Then each node connects to its k_i nearest nodes on the unit square. In the undirected graph cases, all generated edges are regarded as undirected edges. In Sorting, we randomly generate permutations to be sorted in ascending order. In Blocks World, we maintain a list of placeable objects (the ground included). Each newly created block is placed on one randomly selected placeable object. Then we randomly shuffle the ids of the blocks.
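The graph generation procedure just described might be sketched as follows; this is a simplified illustration with assumed parameter names, not the authors' code.

```python
import numpy as np

def generate_random_graph(m, max_out_degree, undirected=True):
    """Sample m nodes on the unit square; each node connects to its k_i nearest nodes,
    where k_i is sampled per node. In the undirected case, edges are symmetrized."""
    coords = np.random.rand(m, 2)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    adj = np.zeros((m, m), dtype=bool)
    for i in range(m):
        k_i = np.random.randint(1, max_out_degree + 1)
        neighbors = np.argsort(dist[i])[1:k_i + 1]   # skip the node itself
        adj[i, neighbors] = True
    if undirected:
        adj |= adj.T
    return coords, adj
```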
In the blocks world environment, to better aid the reinforcement learning process, we train the agent on an auxiliary task, which is to predict the validity or effect of the actions. This task is trained by supervised learning using a cross-entropy loss. The overall loss is a summation of the cross-entropy loss (with a weight of 0.1) and the REINFORCE loss. We do not choose the move to be taken directly based on the relational predicates at the last layer of the NLM. Instead, we manually concatenate the object representations from the current and the target configuration, which share the same object ID. Then, for each pair of objects, their relational representation is constructed by the concatenation of their own object representations. An extra fully-connected layer is applied to the relational representation, followed by a Softmax layer over all pairs of objects. We choose an action based on the Softmax score. We cannot directly prove the accuracy of NLM by looking at the induced rules as in traditional ILP systems. Instead, we estimate its accuracy empirically by sampling testing examples. Throughout the experiments section, all accuracy statistics are reported on 1,000 randomly generated examples. To show the confidence of this result, we test a specific trained model on the Blocks World task with 100,000 samples. We observe no failure cases in this test. According to the multiplicative form of the Chernoff bound, we are 99.7% confident that the accuracy is at least 99.98%. Reasoning over noisy input: integration with neural perception. Recall that NLM is fully differentiable. Besides taking logic pre-conditions (binary values) as input, the input properties or relations can be derived from other neural architectures (e.g., CNNs). As a preliminary example, we replace the input properties of nodes with images from the MNIST dataset. A convolutional neural network (CNN) is applied to the input to extract features for subsequent reasoning. The CNN and the NLM can be optimized jointly. This enables reasoning over noisy input. We modify the AdjacentToRed task in general graph reasoning to AdjacentToNumber0. In detail, each node has a visual input from the MNIST dataset indicating its number. We say AdjacentToNumber0(x) if and only if a node x is adjacent to another node with number 0. We use LeNet to extract visual features for recognizing the number of each node. The output of LeNet for each node is a vector of length 10, with sigmoid activation. We follow the train-test split from the original MNIST dataset. The joint model is trained on 100,000 training examples (m = 10) and gets 99.4% accuracy on 1,000 testing examples (m = 50). Note that the LeNet modules are optimized jointly with the reasoning about AdjacentToNumber0. In this section, we show that NLM can realize a partial set of Horn clauses in first-order logic (FOL), up to the limit of the NLM's depth and breadth. In NLMs, we consider only finite cases. Thus, there should not exist cyclic references of predicates among rules. The extension to support cyclic references is left as future work. Throughout the proof, we always assume the depth, breadth, and number of predicates of the NLM are flexible and large enough to realize the demanded rules. Here, we only prove the realization of a definite clause, i.e., a Horn clause with exactly one positive literal and a non-zero number of negative literals in FOL.
It can be written in the implication form as p̂ ← p_1 ∧ p_2 ∧ · · · ∧ p_k (variables as arguments are implicitly universally quantified), where p̂ is called the head predicate and p_1, ..., p_k are called body predicates. We group the variables appearing in the rule into three subsets: (1) variables that only appear in the head predicate, (2) variables that only appear in the body predicates, and (3) variables that appear in both head and body predicates. Consider as an example a chain-like rule: DISPLAYFORM0 We rewrite the rule by classifying the variables: DISPLAYFORM1 That is, we move all variables that only appear in body predicates to the right-hand side, and extract out all variables that only appear in the head predicate. We show how we can compositionally combine the computation units in NLMs to realize this rule, in the following 4 steps: 1. We first align the arity of the body predicates to include all variables that appear in at least one of the body predicates (including variables of set 2 and set 3). This can be done by a sequence of Expand operations (Eq. 3). In this example, we create helper predicates to make the right-hand side of the rule as DISPLAYFORM2 where the expanded p_1(x_1, x_2, x_3) is defined from p_1(x_1, x_2) and the expanded p_2(x_2, x_3, x_1) from p_2(x_2, x_3). 2. We use neural boolean logic (Eq. 1) to realize the boolean formula inside all quantification symbols. Moreover, we use the Permute operation to transpose the tensor representation so that all variables being quantified on the right-hand side appear as the last several variables in the derived predicate p. Overall, we derive another helper predicate, DISPLAYFORM3 3. We use the Reduce operation to add quantifiers to the right-hand side (i.e., to the p predicate). We get: DISPLAYFORM4 4. Finally, we use the Expand operation (Eq. 3) to add the variables that only appear in the head predicate to the derived predicate: DISPLAYFORM5 Note that all variables appearing in the head predicate are implicitly universally quantified. This is consistent with our setting, since all rules in NLMs are lifted. Overall, a symbolic rule written as a Horn clause can be realized by NLMs as a computation flow which starts from multiple expansions, is followed by a neural boolean rule and multiple reductions, and ends with a set of expansions. Next, we show that the forward propagation of NLMs realizes the forward chaining of a set of Horn clauses. Following standard notation, the forward chaining starts from a set of initial facts, which are essentially the grounding of base predicates. The forward chaining process sequentially applies rules over the fact set and concludes new facts. In NLM, we represent facts as the U-grounding of predicates. If we consider a set of rules that does not have recursive references, all rules can be sorted in a topological order R = (r_1, r_2, ..., r_k). We only allow references to r_i from r_j, where i < j. Without loss of generality, we assume that the grounding of r_k is of interest. Given the topologically resolved set of rules R, we build a set of NLMs where each NLM realizes a specific rule r_i. By stacking the NLMs sequentially, we can conclude r_k. As a side note, multiple rules referring to the same head predicate p̂ implicitly indicate the logical disjunction (∨) of the rules. We can rename these head predicates as p̂_1, p̂_2, · · ·, and use an extra NLM to implement the logical disjunction of all the p̂_i's.
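To make these operations concrete, here is a small NumPy sketch (not the authors' implementation) of Expand, Reduce, and Permute on U-grounded predicate tensors, where a predicate of arity r over m objects is stored as a tensor with r axes of size m; the example rule at the end is hypothetical.

```python
import numpy as np

def expand(pred, m):
    """Expand: add a new (last) variable by broadcasting, e.g. p(x) -> p'(x, y) := p(x)."""
    return np.repeat(pred[..., np.newaxis], m, axis=-1)

def reduce_exists(pred):
    """Reduce with an existential quantifier over the last variable: q(x) := exists y. p(x, y)."""
    return pred.max(axis=-1)

def reduce_forall(pred):
    """Reduce with a universal quantifier over the last variable: q(x) := forall y. p(x, y)."""
    return pred.min(axis=-1)

def permute(pred, order):
    """Permute: reorder the variables of a predicate tensor, e.g. p(x, y) -> p(y, x)."""
    return np.transpose(pred, order)

# Example: realize q(x) := exists y. p1(x, y) AND p2(y, x) for m = 4 objects.
m = 4
p1 = (np.random.rand(m, m) > 0.5).astype(float)
p2 = (np.random.rand(m, m) > 0.5).astype(float)
body = np.minimum(p1, permute(p2, (1, 0)))  # align variables, then (soft) AND
q = reduce_exists(body)
```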
This example shows a complex reasoning task in the seemingly simple Blocks World domain, which can be solved by our NLMs but which, in contrast, requires great effort from human experts to create manual rules. Suppose we are interested in knowing whether a block should be moved in order to reach the target configuration. Here, a block should be moved if (1) it is moveable and (2) there is at least one block below it that does not match the target configuration. Call the desired predicate "ShouldMove(x)". Input Relations (specified in the last paragraph of Section 3.4): SameWorldID, SmallerWorldID, LargerWorldID; SameID, SmallerID, LargerID; Left, SameX, Right, Below, SameY, Above. The relations are given on all pairs of objects across both worlds. Here is one way to produce the desired predicate by defining several helper predicates, designed by "human experts". Note that this is only part of the logic rules needed to complete the Blocks World challenge. The learner also needs to figure out where the block should be moved. The proposed NLM can learn policies that solve the Blocks World from the sparse reward signal indicating only whether the agent has finished the game. More importantly, the learned policy generalizes well to larger instances (consisting of more blocks). The following Python code contains a minimal implementation of one Neural Logic Machines layer with breadth equal to 3 in TensorFlow. The neural_logic_layer_breath3 function is the main entry point. The syntax is highlighted and is best viewed in color.
def neural_logic_layer_breath3(input0, input1, input2, input3, M, hidden_dim, residual):
    """A neural logic layer with breath 3."""
We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning.
1,371
scitldr
Sequence-to-sequence (seq2seq) neural models have been actively investigated for abstractive summarization. Nevertheless, existing neural abstractive systems frequently generate factually incorrect summaries and are vulnerable to adversarial information, suggesting a crucial lack of semantic understanding. In this paper, we propose a novel semantic-aware neural abstractive summarization model that learns to generate high quality summaries through semantic interpretation over salient content. A novel evaluation scheme with adversarial samples is introduced to measure how well a model identifies off-topic information, where our model yields significantly better performance than the popular pointer-generator summarizer. Human evaluation also confirms that our system summaries are uniformly more informative and faithful as well as less redundant than the seq2seq model. Automatic text summarization holds the promise of alleviating the information overload problem BID13. Considerable progress has been made over decades, but existing summarization systems are still largely extractive-important sentences or phrases are identified from the original text for inclusion in the output BID22. Extractive summaries thus unavoidably suffer from redundancy and incoherence, leading to the need for abstractive summarization methods. Built on the success of sequence-to-sequence (seq2seq) learning models BID35, there has been a growing interest in utilizing a neural framework for abstractive summarization BID28 BID20 BID41 BID36 BID5.Although current state-of-the-art neural models naturally excel at generating grammatically correct sentences, the model structure and learning objectives have intrinsic difficulty in acquiring semantic interpretation of the input text, which is crucial for summarization. Importantly, the lack of semantic understanding causes existing systems to produce unfaithful generations. BID3 report that about 30% of the summaries generated from a seq2seq model contain fabricated or nonsensical information. Furthermore, current neural summarization systems can be easily fooled by off-topic information. For instance, FIG0 shows one example where irrelevant sentences are added into an article about "David Collenette's resignation". Both the seq2seq attentional model BID20 and the popular pointer-generator model BID31 are particularly susceptible to unfaithful generation, partially because these models tend to rely on sentences at the beginning of the articles for summarization while being ignorant about their content. Therefore, we design a novel adversarial evaluation metric to measure the robustness of each summarizer against small amounts of randomly inserted topic-irrelevant information. The intuition is that if a summarization system truly understands the salient entities and events, it would ignore unrelated content.32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada. Article Snippet: For years Joe DiMaggio was always introduced at Yankee Stadium as "baseball's greatest living player." But with his memory joining those of Babe Ruth, Lou Gehrig, Mickey Mantle and Miller Huggins. Canada's Minister of Defense resigned today, a day after an army official testified that top military officials had altered documents to cover up responsibility for the beating death of a Somali teen-ager at the hands of Canadian peacekeeping troops in 1993. Defense minister David Collenette insisted that his resignation had nothing to do with the Somalia scandal. 
Ted Williams was the first name to come to mind, and he's the greatest living hitter.... Seq2seq: George Vecsey sports of The Times column on New York State's naming of late baseball legend Joe DiMaggio as "baseball's greatest living player," but with his memory joining those of Babe Ruth, Lou Gehrig, Mickey Mantle and Miller dens. Pointer-generator: Joe DiMaggio is first name to come to mind, and Ted Williams is first name to come to mind, and he's greatest living hitter; he will be replaced by human resources minister, Doug Young, and will keep his Parliament seat for governing Liberal Party. Our Model: Former Canadian Defense Min David Collenette resigns day after army official testifies that top military officials altered documents to cover up responsibility for beating death of Somali teen-ager at hands of Canadian peacekeeping troops in 1993. To address the above issues, we propose a novel semantic-aware abstractive summarization model, inspired by the human process of writing summaries-important events and entities are first identified, and then used for summary construction. Concretely, taking an article as input, our model first generates a set of summary-worthy semantic structures consisting of predicates and corresponding arguments (as in semantic parsing), then constructs a fluent summary reflecting the semantic information. Both tasks are learned under an encoder-decoder architecture with new learning objectives. A dual attention mechanism for summary decoding is designed to consider information from both the input article and the generated predicate-argument structures. We further present a novel decoder with a segment-based reranking strategy to produce diverse hypotheses and reduce redundancy under the guidance of generated semantic information. Evaluation against adversarial samples shows that while performance by the seq2seq attentional model and the pointer-generator model is impacted severely by even a small addition of topic-irrelevant information to the input, our model is significantly more robust and consistently produces more on-topic summaries (i.e. higher ROUGE and METEOR scores for standard automatic evaluation). Our model also achieves significantly better ROUGE and METEOR scores than both models on the benchmark dataset CNN/Daily Mail BID11. Specifically, our model's summaries use substantially fewer and shorter extractive fragments than the comparisons and have less redundancy, alleviating another common problem for the seq2seq framework. Human evaluation demonstrates that our model generates more informative and faithful summaries than the seq2seq model. To discourage the generation of fabricated content in neural abstractive models, a pointer-generator summarizer BID31 ) is proposed to directly reuse words from the input article via a copying mechanism BID9. However, as reported in their work BID31 and confirmed by our experiments, this model produces nearly extractive summaries. While maintaining their model's rephrasing ability, here we improve the faithfulness and informativeness of neural summarization models by enforcing the generation of salient semantic structures via multi-task learning and a reranking-based decoder. Our work is in part inspired by prior abstractive summarization work, where the summary generation process consists of a distinct content selection step (i.e., what to say) and a surface realization step (i.e., how to say it) BID40 BID26. 
Our model learns to generate salient semantic roles and a summary in a single end-to-end trained neural network. Our proposed model also leverages the recent successes of multi-task learning (MTL) as applied to neural networks for a wide array of natural language processing tasks BID18 BID33 BID25 BID27 BID12. Most recent work BID23 leverages MTL to jointly improve performance on summarization and entailment generation. Instead of treating the tasks equally, we employ semantic parsing for the sake of facilitating more informative and faithful summary generation. Figure 2: Our semantic-aware summarization model with dual attention over both the input and the generated salient semantic roles. In both the shared and separate decoder models, the semantic roles are generated first, followed by the summary construction. Best viewed in color. In the standard seq2seq model, a sequence of input tokens x = {x_1, ..., x_n} is encoded by a recurrent neural network (RNN) with hidden states h_i at every timestep i. The final hidden state of the encoder is treated as the initial state of the decoder, another RNN (h_n = s_0). The decoder produces the output sequence y = {y_1, ..., y_T}, with hidden states s_t at every timestep t. Our model outputs two sequences: the sequence of semantic role tokens y^s and the sequence of summary tokens y^a. The context vector is calculated as the summation of the encoder hidden states, weighted by the attention distribution a_t at every timestep t: DISPLAYFORM0 where v, W_s, W_h are learnable parameters. The context vector, along with the decoder hidden state, is used to produce the vocabulary distribution via softmax. The loss is defined as the negative log likelihood of generating summary y over the training set D using model parameters θ: DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 The log probability for each training sample is the average log likelihood across decoder timesteps. Encoder. In our models, the encoder is a single-layer bidirectional long short-term memory (LSTM) unit BID7, where the hidden state is a concatenation of the forward and backward LSTM states: DISPLAYFORM4 Decoders. We propose two different decoder architectures, a separate decoder and a shared decoder, to handle semantic information. In the separate decoder model (see Figure 2c), the semantic decoder and the summary decoder are each implemented as their own single-layer LSTM. This setup, inspired by the one-to-many multi-task learning framework of BID18, encourages each decoder to focus more on its respective task. Decoder output and attention over the input are calculated using Eqs. 1-5. We further study a shared decoder model (see Figure 2a) for the purpose of reducing the number of parameters as well as increasing the summary decoder's exposure to semantic information. One single-layer LSTM is employed to sequentially produce the important semantic structures, followed by the summary. Our output thus becomes y = [y^s y^a], and the first timestep of the summary decoder is the last timestep of the semantic decoder. Attention is calculated as in Eqs. 1-5. For both models, the loss becomes the weighted sum of the semantic loss and the summary loss: DISPLAYFORM5 In our experiments, we set α as 0.5 unless otherwise specified. We also investigate multi-head attention BID37 over the input to acquire different language features. To our knowledge, we are the first to apply it to the task of summarization. As shown later in the section, this method is indeed useful for summarization, with different heads learning different features.
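To illustrate the attention computation referenced above (with learnable v, W_s, W_h behind the DISPLAYFORM placeholders), here is a minimal NumPy sketch; the exact equation forms are assumed to follow standard additive attention, and this is not the authors' released code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(encoder_states, decoder_state, W_h, W_s, v):
    """Attention distribution over encoder states and the resulting context vector.

    encoder_states: (n, d_h) matrix of h_i
    decoder_state:  (d_s,) vector s_t
    W_h: (d_a, d_h), W_s: (d_a, d_s), v: (d_a,) learnable parameters
    """
    # e_i = v^T tanh(W_h h_i + W_s s_t)
    scores = np.tanh(encoder_states @ W_h.T + W_s @ decoder_state) @ v
    attn = softmax(scores)             # attention distribution a_t
    context = attn @ encoder_states    # context vector: sum_i a_i h_i
    return attn, context
```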
In fact, the multi-head attention is particularly well-suited for our shared decoder model, as some heads learn to attend to semantic aspects and others learn to attend to summary aspects. Target Semantic Output. We use semantic role labeling (SRL) as the target semantic representation, which identifies predicate-argument structures. To create the target data, articles in the training set are parsed in the style of PropBank BID14 using the DeepSRL parser from BID10. We then choose up to five SRL structures that have the most overlap with the reference summary. Here we first consider matching headwords of predicates and arguments. If no match is found, we consider all content words. Note that the semantic labels are only used for model training. At test time, no external resource beyond the article itself is used for summary decoding. To further leverage semantic information, a dual attention mechanism is used to attend over the generated semantic output in addition to the input article during summary decoding. Although attending over multiple sources of information has been studied for visual QA BID21 and sentence-level summarization BID3, those attentions are computed over different encoder states. In contrast, our dual attention mechanism considers decoding from the semantic output. Although our attention mechanism may appear close to intra- or self-attention BID30, the function of our dual attention is to attend over a specific portion of the previously generated content that represents its own body of information, whereas traditional self-attention is predominantly used to discourage redundancy. The context vector attending over the semantic decoder hidden states is calculated in the same manner as the attention over the encoder hidden states. Prior approaches to encouraging diverse or less redundant decoding include segment-reranking BID32 and hamming loss BID42. Here, we discuss our own segment-reranking beam search decoder, which encourages both beam diversity and semantic coverage. Our reranker is applied only during summary decoding, where we rely on the generated semantic roles for global guidance. The only modification made to the semantic decoder is a de-duplication strategy, where generated predicates seen in previous output are eliminated. Reranking. Regular beam search chooses the hypotheses for the next timestep solely based on conditional likelihood (i.e., p(y^a | x)). In our proposed summary decoder, we also leverage our generated semantic information and curb redundancy during beam selection. Reranking is performed every R timesteps based on the following scorer, where hypotheses with less repetition (r) and covering more content words from the generated semantics (s) are ranked higher: DISPLAYFORM1 where LRS is the longest repeating substring in the current hypothesis. We define a repeating substring as a sequence of three or more tokens that appears in that order more than once in the hypothesis, with the intuition that long repeating fragments (e.g., "the Senate proposed a tax bill; the Senate proposed a tax bill.") should be penalized more heavily than short ones (e.g., "the Senate proposed a tax bill; Senate proposed."). s measures the percentage of generated semantic words reused by the current summary hypothesis, contingent on the predicate of the semantic structure matching. At every other timestep, we rank the hypotheses based on conditional likelihood and a weaker redundancy handler, r', that considers unigram novelty (i.e., the percentage of unique content words): score = log(p(y^a | x)) + α' r'. We use R = 10, α = 0.4, β = 0.1, α' = 0.1 in our experiments. Beam Diversity.
In the standard beam search algorithm, all possible extensions to the beams are considered, resulting in a comparison of B × D hypotheses, where B is the number of beams and D is the vocabulary size. In order to make the best use of the reranking algorithm, we develop two methods to enforce hypothesis diversity during non-reranking timesteps. Beam Expansion. Inspired by BID32, we rank only the K highest scoring extensions from each beam, where K < B. This ensures that at least two unique hypotheses from the previous timestep will carry on to the next timestep. Beam Selection. We further reinforce hypothesis diversity through the two-step method of likelihood selection and dissimilarity selection. In likelihood selection, we accept N hypotheses (where N < B) from our pool of B × K based solely on conditional probabilities, as in traditional beam search. From the remaining hypothesis pool, we select B − N hypotheses on the basis of dissimilarity. We choose the hypotheses with the highest dissimilarity score ∆([h]_N, h), where h is a candidate and [h]_N are the hypotheses chosen during likelihood selection. In our experiments, we use token-level Levenshtein edit distance (Lev) BID15 as the dissimilarity metric. In experiments, we use B = 12, K = 6, N = 6. Datasets. We experiment with two popular large datasets of news articles paired with human-written summaries: the CNN/Daily Mail corpus BID11 (henceforth CNN/DM) and the New York Times corpus BID29 (henceforth NYT). For CNN/DM, we follow the experimental setup from BID31 and obtain a dataset consisting of 287,226 training pairs, 13,368 validation pairs, and 11,490 test pairs. For NYT, we remove samples with articles of less than 100 words or summaries of less than 20 words. We further remove samples with summaries containing information outside the article, e.g., "[AUTHOR]'s movie review on..." where the author's name does not appear in the article. NYT consists of 280,146 training pairs and 15,564 pairs each for validation and test. Training Details and Parameters. For all experiments, a vocabulary of 50k words shared by input and output is used. Model parameters and learning rate are adopted from prior work BID31 for comparison purposes. All models are also trained in stages of increasing maximum token lengths to expedite training. The models trained on the NYT dataset use an additional final training stage, where we optimize only on the summary loss (i.e., α = 1 in Eq. 7). During decoding, unknown tokens are replaced with the highest scoring word in the corresponding attention distribution. Baselines and Comparisons. We include as our extractive baselines TEXTRANK BID19 and LEAD-2, the first 2 sentences of the input article, simulating the average length of the target summaries. We consider as abstractive comparison models vanilla seq2seq with attention (SEQ2SEQ) and the pointer-generator BID31 (POINT-GEN), which is trained from scratch using the released code. Results for variants of our model are also reported. Automatic Evaluation Metrics. For automatic evaluation, we first report the F1 scores of ROUGE-1, 2, and L BID17, and METEOR scores based on exact matching and full matching that considers paraphrases, synonyms, and stemming BID6. We further measure two important aspects of summary quality: extractiveness (how much the summary reuses article content verbatim) and redundancy (how much the summary repeats itself).
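Returning briefly to the beam selection procedure described above, a minimal sketch might look as follows; the names are hypothetical, and the aggregation of pairwise Levenshtein distances into the dissimilarity score ∆ is an assumption.

```python
def levenshtein(a, b):
    """Token-level edit distance between two hypotheses (lists of tokens)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def select_beams(candidates, B=12, N=6):
    """candidates: list of (log_prob, tokens) pairs produced by beam expansion.
    Step 1 (likelihood selection): keep the N most likely hypotheses.
    Step 2 (dissimilarity selection): fill the remaining B - N slots with the
    hypotheses most dissimilar to those already chosen."""
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    chosen, rest = ranked[:N], ranked[N:]
    dissimilarity = lambda c: min(levenshtein(c[1], h[1]) for h in chosen)
    rest.sort(key=dissimilarity, reverse=True)
    return chosen + rest[:B - N]
```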
For the first aspect, we utilize the density metric proposed by BID8 that calculates "the average length of extractive fragments": DISPLAYFORM0 where A represents the article, S represents the summary, and F(A, S) is the set of greedily matched extractive fragments from the article-summary pair. Based on density, we propose a new redundancy metric: DISPLAYFORM1 where F(S) contains the set of fragments at least three tokens long that are repeated within the summary, and #f is the repetition frequency for fragment f. Intuitively, longer fragments and more frequent repetition should be penalized more heavily. Adversarial Evaluation. Our pilot study suggests that the presence of minor irrelevant details in a summary often hurts a reader's understanding severely, but such an error cannot be captured or penalized by the recall-based ROUGE and METEOR metrics. In order to test a model's ability to discern irrelevant information, we design a novel adversarial evaluation scheme, where we purposely mix a limited number of off-topic sentences into a test article. The intuition is that if a summarization system truly understands the salient entities and events, it would ignore unrelated sentences. We randomly select 5,000 articles from the CNN/DM test set with the "news" tag in the URL (mainly covering international or domestic events). For each article, we randomly insert one to four sentences from articles in the test set with the "sports" tag. For NYT, we randomly select 5,000 articles from the test set with government related tags ("U.S.", "Washington", "world") and insert one to four sentences from articles outside the domain ("arts", "sports", "technology"). We ensure that sentences containing a pronoun in the first five words are not separated from the previous sentence, so that discourse chains are unlikely to be broken. Figure 3: ROUGE-L and full METEOR scores on adversarial samples, where n irrelevant sentences are inserted into the original test articles. Our models (shared decoder for NYT, shared+MHA for CNN/DM) are sturdier against irrelevant information than seq2seq and pointer-generator. Table 1: Results for our model variants with a shared or separate decoder, and with or without multi-head attention (MHA). In addition to ROUGE and METEOR, we also display extractive density (Dens.) and redundancy (Red.) (lower scores are preferred). The best performing amongst our models are in bold. All our ROUGE scores have a 95% confidence interval of at most ±0.04. Our models all statistically significantly outperform seq2seq for ROUGE and METEOR (approximate randomization test, p < 0.01). Our shared decoder with MHA model statistically significantly outperforms the pointer-generator in ROUGE-1 and ROUGE-L on CNN/DM. The adversarial evaluation of seq2seq, pointer-generator, and our shared decoder model is shown in Figure 3, where our model consistently yields significantly better ROUGE-L and METEOR scores than the comparisons, and is less affected as more irrelevant sentences are added. Sample summaries for an adversarial sample are shown in FIG0.
We find that our semantic decoder plays an important role in capturing salient predicates and arguments, leading to higher quality summaries. Automatic Evaluation. The main results are displayed in Table 1. On CNN/DM, all of our models significantly outperform seq2seq across all metrics, and our shared decoder model with multi-head attention yields significantly better ROUGE (R-1 and R-L) scores than the pointer-generator on the same dataset (approximate randomization test, p < 0.01). Despite the fact that both ROUGE and METEOR favor recall and thus reward longer summaries, our summaries, which are often shorter than the comparisons, still produce significantly better ROUGE and METEOR scores than seq2seq on the NYT dataset. Furthermore, our system summaries, by all model variations, reuse fewer and shorter phrases from the input (i.e., lower density scores) than the pointer-generator model, signifying a potentially stronger ability to rephrase. Note that the density scores for seq2seq are likely deflated due to its inability to handle out-of-vocabulary words. Our models also produce less redundant summaries than their abstractive comparisons. Human Evaluation. We further conducted a pilot human evaluation on 60 samples selected from the NYT test set. For each article, the summaries by the seq2seq model, our shared decoder model, and the human reference were displayed in a randomized order. Two human judges, who are native or fluent English speakers, were asked to read the article and rank the summaries against each other based on non-redundancy, fluency, faithfulness to input, and informativeness (whether the summary delivers the main points of the article). Ties between system outputs were allowed but discouraged. As shown in Table 2, our model was ranked significantly higher than seq2seq across all metrics. Surprisingly, our model's output was ranked higher than the reference summary in 26% of the samples for informativeness and fluency. We believe this is due to the specific style of the NYT reference summaries and the shorter lengths of our summaries: the informal style of the reference summaries (e.g., frequently dropping subjects and articles) negatively affects their fluency rating, and readers find our shorter summaries to be more concise and to the point.
            NON-RED.    FLUENCY     FAITH.      INFORM.
HUMAN       1.4 ± 0.7   1.5 ± 0.7   1.5 ± 0.8   1.6 ± 0.8
SEQ2SEQ     2.0 ± 0.8   2.4 ± 0.7   2.1 ± 0.8   2.4 ± 0.7
OURS        1.6 ± 0.7   2.1 ± 0.8   1.9 ± 0.7   2.0 ± 0.7
Table 2: Human ranking on non-redundancy, fluency, faithfulness of summaries, and informativeness. The mean (± std. dev.) of the rankings is shown (lower is better). Across all metrics, the difference between our model (shared decoder) and the human summary, as well as between our model and seq2seq, is statistically significant (one-way ANOVA, p < 0.05). Usage of Semantic Roles in Summaries. We examine the utility of the generated semantic roles. Across all models, approximately 44% of the generated predicates are part of the reference summary, indicating the adequacy of our semantic decoder. Furthermore, across all models, approximately 65% of the generated predicates are reused by the generated summary, and approximately 53% of the SRL structures are reused by the system under a strict matching constraint, in which the predicate and head words for all arguments must match in the summary.
When gold-standard semantic roles are used for dual attention in place of our system generations, ROUGE scores increase by about half a point, indicating that improving semantic decoder in future work will further enhance the summaries. Coverage. We also conduct experiments using a coverage mechanism similar to the one used in BID31. We apply our coverage in two places: over the input to handle redundancy, and over the generated semantics to promote its reuse in the summary. However, no significant difference is observed. Our proposed reranker handles both issues in a more explicit way, and does not require the additional training time used to learn coverage parameters. Alternative Semantic Representation. Our summarization model can be trained with other types of semantic information. For example, in addition to using the salient semantic roles from the input article, we also explore using SRL parses of the reference abstracts as training signals, but the higher level of abstraction required for semantic generation hurts performance by two ROUGE points for almost all models, indicating the type of semantic structure matters greatly for the ultimate summarization task. For future work, other semantic representation along with novel model architecture will be explored. For instance, other forms of semantic representation can be considered, such as frame semantics BID1 or Abstract Meaning Representation (AMR) BID2. Although previous work by has shown that seq2seq models are able to successfully generate linearized tree structures, we may also consider generating semantic roles with a hierarchical semantic decoder BID34. We presented a novel semantic-aware neural abstractive summarization model that jointly learns summarization and semantic parsing. A novel dual attention mechanism was designed to better capture the semantic information for summarization. A reranking-based decoder was proposed to promote the content coverage. Our proposed adversarial evaluation demonstrated that our model was more adept at handling irrelevant information compared to popular neural summarization models. Experiments on two large-scale news corpora showed that our model yielded significantly more informative, less redundant, and less extractive summaries. Human evaluation further confirmed that our summaries were more informative and faithful than comparisons.
We propose a semantic-aware neural abstractive summarization model and a novel automatic summarization evaluation scheme that measures how well a model identifies off-topic information from adversarial samples.
1,372
scitldr
The recent Super Characters method, which uses two-dimensional word embedding, achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach. This paper borrows the idea of the Super Characters method and two-dimensional embedding, and proposes a method for generating conversational responses for open domain dialogues. The experimental results on a public dataset show that the proposed SuperChat method generates high quality responses. An interactive demo is ready to show at the workshop, and code will be available on GitHub soon. Dialogue systems are important for enabling machines to communicate with humans through natural language. Given an input sentence, the dialogue system outputs the response sentence in a natural way that reads like human talking. Previous work adopts an encoder-decoder architecture BID9, and also improved architectures with an attention scheme added BID0 BID10 BID11. In architectures with attention, the input sentence is encoded into vectors first, and then the encoded vectors are weighted by the attention score to get the context vector. The concatenation of the context vector and the previous output vector of the decoder is fed into the decoder to predict the next words iteratively. Generally, the encoded vectors, the context vector, and the decoder output vector are all one-dimensional embeddings, i.e., arrays of real-valued numbers. The models used in the decoder and encoder usually adopt RNNs, such as bidirectional GRU BID0 BID3, LSTM BID6, and bidirectional LSTM BID11. However, the time complexity of the encoding part is very expensive. The recent Super Characters method has obtained state-of-the-art results for text classification on benchmark datasets in different languages, including English, Chinese, Japanese, and Korean. The Super Characters method is a two-step method. In the first step, the characters of the input text are drawn onto a blank image. Each character is represented by a two-dimensional embedding, i.e., a matrix of real-valued numbers, and the resulting image is called a Super Characters image. In the second step, Super Characters images are fed into two-dimensional CNN models for classification. Examples of two-dimensional CNN models used in Computer Vision (CV) tasks include VGG BID7, ResNet BID4, and SE-net BID5 in ImageNet BID1. In this paper, we propose the SuperChat method for dialogue generation using the two-dimensional embedding. It has no encoding phase, only a decoding phase. The decoder is fine-tuned from pretrained two-dimensional CNN models from the ImageNet competition. For each iteration of the decoding, the image obtained by two-dimensional embedding of both the input sentence and the partial response sentence is directly fed into the decoder, without any compression into a concatenated vector as done in the previous work. The proposed SuperChat method is motivated by the two-dimensional embedding used in the Super Characters method. If the Super Characters method could keep the same good performance when the number of classes in the text classification problem becomes even larger, e.g., the size of the dialogue vocabulary, then it should be able to address the task of conversational dialogue generation. This can be done by treating the input sentence and the partial response sentence as one combined text input. Figure 1.
SuperChat method illustration. The input Chinese sentence means "Where are you?" in English, and the output given by the proposed SuperChat method is "I am within your heart". Figure 1 illustrates the proposed SuperChat method. The response sentence is generated sequentially by predicting the next response word over multiple iterations. During each iteration, the input sentence and the current partial response sentence are embedded into an image through two-dimensional embedding. The resulting image is called a SuperChat image, and is then fed into a CNN model to predict the next response word. In each SuperChat image, the upper portion corresponds to the input sentence, and the lower portion corresponds to the partial response sentence. At the beginning of the iteration, the partial response sentence is initialized as null. The prediction of the first response word is based on the SuperChat image with only the input sentence embedded, and then the predicted word is added to the current partial response sentence. This iteration continues until an End Of Sentence (EOS) token appears. The final output is then the concatenation of the sequential outputs. The CNN model used in this method is fine-tuned from pretrained ImageNet models to predict the next response word with the generated SuperChat image as input. It can be trained end-to-end using a large dialogue corpus. Thus the problem of predicting the next response word in dialogue generation is converted into an image classification problem. The training data is generated by labeling each SuperChat image as an example of the class indicated by its next response word. EOS is used as the label for a SuperChat image if the response sentence is finished. The cut-length of sentences is highly related to the font size of each character. For a fixed image size, a larger cut-length means a smaller font size for each character, and vice versa. On the one hand, we want to cover long sentences, which means the cut-length should be big, so there will be variety in both the input dialogue and the response dialogue. On the other hand, if we set the cut-length too big, the font size of each character will be small, and there could be a large blank area for short sentences, which is a waste of the space on the image. The cut-length should therefore be configured according to the sentence length distribution. It should also be emphasized that the split of the image into input and response parts does not have to be even. Depending on the statistics of the training data, a larger or smaller portion could be assigned to the response and input text. Also, the font size for each part does not need to be the same. Although the example in Figure 1 uses Chinese sentences, the method can also be applied to other languages, for example Asian languages such as Japanese and Korean, which have the same square-shaped characters as Chinese. For Latin languages, where words may have variable length, the SEW method BID8 could be used to convert them into a squared shape before applying the SuperChat method to generate the dialogue response. Beam search BID2 could also be used. In that case, instead of a hard prediction for the first character, a soft prediction is used to output all possible sentences, and the best one is selected as the final output. The dataset used is Simsimi. This is a Chinese chitchat database. It contains 454,561 dialogue pairs.
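A minimal sketch of this iterative decoding loop might look as follows; the helper names draw_superchat_image and cnn_predict are hypothetical placeholders, not the authors' released code.

```python
def generate_response(input_sentence, cnn_predict, draw_superchat_image,
                      max_len=18, eos="<EOS>"):
    """Iteratively predict the next response character from SuperChat images.

    draw_superchat_image(input_sentence, partial_response) -> image
    cnn_predict(image) -> next character (a class label, possibly EOS)
    """
    partial_response = ""                      # initialized as null
    for _ in range(max_len):
        image = draw_superchat_image(input_sentence, partial_response)
        next_char = cnn_predict(image)         # image classification step
        if next_char == eos:
            break
        partial_response += next_char          # append and iterate
    return partial_response
```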
In total, 5,523 characters are used in the response sentences, of which 4,996 appear with a frequency of less than 1,000. The top five most frequent characters are ",", "我" (I), "你" (you), "的" (of), and "是" (is). Based on the distribution of the sentence length, we set the cut-length for the input sentence at 18, and the response cut-length also at 18. So, altogether we have 36 characters within one SuperChat image, which can be laid out as 6 rows by 6 columns of characters. The input sentence takes the upper 3 rows, and the response sentence takes the lower 3 rows. For simplicity, we removed all the emoticons in the data set. In order to get enough samples for training, only characters with at least 1,000 appearances are selected for the list of characters to predict. After this filtering, the remaining set is composed of the dialogue pairs whose input and response sentences are both shorter than 18 characters and whose characters are all in the list of the 528 frequent characters (including EOS). The resulting set contains 178,192 dialogue pairs, and a total of 989,087 SuperChat images are generated. We set our image size at three channels of 224x224 grey images, in order to use the pretrained models on ImageNet. We also added a margin area on the four edges of the SuperChat image, which means the first character will not start from the pixel location (0, 0), but from (m, m) instead. Here m is the margin we set for the four edges. In this experiment, we set m = 16, which leaves a remaining area of 224 − 16x2 = 192 pixels on each side. If we set the same font size for both the input and response sentences, this results in a font size of 192/6 = 32 pixels. That means each character takes an area of 32x32 pixels. The font used is "simhei". For each character, we split its labeled data into 75% for training and 25% for testing, resulting in 739,289 training samples and 249,798 testing samples. SE-net-154 is used in this experiment. The pretrained model on ImageNet is used as initialization. This model is fine-tuned on the generated SuperChat images just as an image classification task. The fine-tuning is done by simply modifying the last layer to output 528 classes, which is the size of the subset of the response vocabulary. The learning curve on the test data is shown in FIG0. We can see that at the beginning of training the curve climbs quickly, and after it reaches 60% the slope becomes gentle and the accuracy almost saturates at 64%. The x-axis is in units of 500 iterations, so in total there are one million iterations with a batch size of 5, which amounts to about 7 epochs over the training data. In this paper, we propose the SuperChat method for dialogue response generation. It has no encoding, but only decodes the two-dimensional embedding of the input sentence and partial response sentence to predict the next response word iteratively. The pretrained two-dimensional CNN model is fine-tuned with the generated SuperChat images. The experimental results show high quality responses. An interactive demonstration is to be shown at the workshop. TAB0. Sample response sentences generated by the SuperChat method on the Simsimi data set. Response Sentence 1 "你有男朋友吗?" "没有,我只有老公" "Do you have boyfriend?" "No,I only have a husband" 2 "哼!你不想我我就不和你玩" "我很好玩的跟我玩嘛跟我玩嘛" "Hmm!
I won't play with you if you didn't miss me" "I am funny, please play with me, please" 3 "我看到你第一眼,就喜欢你了" "我也喜欢你" "The first time I saw you, I liked you" "I like you too" 4 "是什么意思" "你自己百度去" "What does that mean" "Go Baidu it yourself" 5 "我去睡觉了" "晚安,好梦,明天叫你起床哦" "I am going sleep now" "Good dream, wake you up tomorrow" 6 "你什么时候睡觉" "我等你睡了再睡" " When will you sleep" "I will wait untill after you sleep" 7 "无聊中" "我陪你聊天呗" "I am bored" "Let's chitchat" 8 "我好难过,哭的都好累" "不哭不哭,哭了不好看了" "I am sad, and cried a lot" "Don't cry, not looking good when crying" 9 "问你家哪的" "我是火星人" " Where are you from" "I am from Mars" 10 "对啊你吃什么" "你喜欢吃什么,我就喜欢吃什么" "BTW what do you eat" "I eat whatever you like to eat" 11 "为你付出那种伤心你永远不了解" "我不会骗你。现在不会、以后也不会。" " You won't know my heartbroken for you" "I won't lie to you. Not now, neither future." 12 "那你是美女不" "我是美女!" "So you are a beauty" "I am a beauty!" 13 "睡了吗" "没有啊,我在等你呢。" "Are you asleep" "Nop, I am waiting for you." 14 "你会打鸣不" "喔喔喔!" "Can you do rooster crowing" "Cockadoodledoo!" 15 "你累么" "你累吗?累了早点睡啦。乖乖" "Are you tied" "You tired? Then go sleep. Honey" 16 "我想她了,怎么办" "去找她啊!" "I miss her, what should I do" "Go see her!" 17 "你什么都懂噢" "我不知道,但有些东西我刚好知道" "You seem to know everything" "I don't konw, but it happens I know something" 18 "我昨天发状态找你。你都不给回" "我错了" "I sent message to you. But you didn't reply" "I apologize" 19 "讨厌" "我讨厌一切讨厌我的人" "Dislike you" "I dislike anyone who dislikes me" 20 "猜猜我是什么星座" "双子座" "Guess what is my constellation""Gemini"
Print the input sentence and current response sentence onto an image and use fine-tuned ImageNet CNN model to predict the next response word.
1,373
scitldr
Convolutional neural networks (CNNs) have been generally acknowledged as one of the driving forces for the advancement of computer vision. Despite their promising performances on many tasks, CNNs still face major obstacles on the road to achieving ideal machine intelligence. One is that CNNs are complex and hard to interpret. Another is that standard CNNs require large amounts of annotated data, which is sometimes very hard to obtain, and it is desirable to be able to learn them from few examples. In this work, we address these limitations of CNNs by developing novel, simple, and interpretable models for few-shot learn- ing. Our models are based on the idea of encoding objects in terms of visual concepts, which are interpretable visual cues represented by the feature vectors within CNNs. We first adapt the learning of visual concepts to the few-shot setting, and then uncover two key properties of feature encoding using visual concepts, which we call category sensitivity and spatial pattern. Motivated by these properties, we present two intuitive models for the problem of few-shot learning. Experiments show that our models achieve competitive performances, while being much more flexible and interpretable than alternative state-of-the-art few-shot learning methods. We conclude that using visual concepts helps expose the natural capability of CNNs for few-shot learning. After their debut BID13 BID13 have played an ever increasing role in computer vision, particularly after their triumph BID11 on the ImageNet challenge BID3. Some researchers have even claimed that CNNs have surpassed human-level performance BID8, although other work suggests otherwise. Recent studies also show that CNNs are vulnerable to adversarial attacks BID6. Nevertheless, the successes of CNNs have inspired the computer vision community to develop more sophisticated models BID9 BID21.But despite the impressive achievements of CNNs we only have limited insights into why CNNs are effective. The ever-increasing depth and complicated structures of CNNs makes them very difficult to interpret while the non-linear nature of CNNs makes it very hard to perform theoretical analysis. In addition, CNNs traditionally require large annotated datasets which is problematic for many real world applications. We argue that the ability to learn from a few examples, or few-shot learning, is a characteristic of human intelligence and is strongly desirable for an ideal machine learning system. The goal of this paper is to develop an approach to few-shot learning which builds on the successes of CNNs but which is simple and easy to interpret. We start from the intuition that objects can be represented in terms of spatial patterns of parts which implies that new objects can be learned from a few examples if they are built from parts that are already known, or which can be learned from a few examples. We recall that previous researchers have argued that object parts are represented by the convolutional layers of CNNs BID29 BID14 provided the CNNs are trained for object detection. More specifically, we will build on recent work BID23 which learns a dictionary of Visual Concepts (VCs) from CNNs representing object parts, see FIG0. 
It has been shown that these VCs can be combined to detect semantic parts BID24 and, in work in preparation, can be used to represent objects using VC-Encoding (where objects are represented by binary codes of VCs). (Caption of FIG0: In general, the visualized patches roughly correspond to semantic parts of objects, e.g., the cushion of a sofa (a), the side windows of trains (b), and the wheels of bicycles (c). All VCs are referred to by their indices (e.g., VC 139). We stress that VCs are learned in an unsupervised manner, and terms like "sofa cushion" are inferred by observing the closest image patches and are used to describe them informally.) This suggests that we can use VCs to represent new objects in terms of parts, hence enabling few-shot learning. But it is not obvious that VCs, as described in BID24, can be applied to few-shot learning. Firstly, these VCs were learned independently for each object category (e.g., for cars or for airplanes) using deep network features from CNNs which had already been trained on data which included these categories. Secondly, the VCs were learned using large numbers of examples of the object category, ranging from hundreds to thousands. By contrast, for few-shot learning we have to learn the VCs from a much smaller number of examples (by an order of magnitude or more). Moreover, we can only use deep network features which are trained on datasets which do not include the new object categories which we hope to learn. This means that although we will extract VCs using very similar algorithms to those in BID23, our motivation and problem domain are very different. To summarize, in this paper we use VCs to learn models of new object categories from existing models of other categories, while BID23 uses VCs to help understand CNNs and to perform unsupervised part detection. In Section 3, we will review VCs in detail. Briefly speaking, VCs are extracted by clustering intermediate-level raw features of CNNs, e.g., features produced by the Pool-4 layer of VGG16 BID19. Serving as the cluster centers in feature space, VCs divide intermediate-level deep network features into a discrete dictionary. We show that VCs can be learned in the few-shot learning setting and that they have two desirable properties when used for image encoding, which we call category sensitivity and spatial patterns. More specifically, we develop an approach to few-shot learning which is simple, interpretable, and flexible. We learn a dictionary of VCs as described above, which enables us to represent novel objects by their VC-Encoding. Then we propose two intuitive models: (i) nearest neighbor and (ii) a factorizable likelihood model based on the VC-Encoding. The nearest neighbor model uses a similarity measure to capture the difference between two VC-Encodings. The factorizable likelihood model learns a likelihood function of the VC-Encoding which, by assuming spatial independence, can be learned from a few examples. We emphasize that both these models are very flexible, in the sense that they can be applied directly to any few-shot learning scenario. This differs from other approaches which are trained specifically for scenarios such as 5-way 5-shot (where there are 5 new object categories with 5 examples of each). This flexibility is attractive for real world applications where the numbers of new object categories, and the number of examples of each category, will be variable.
Despite their simplicity, these models achieve results comparable to the state-of-the-art few-shot learning methods (using only the simplest versions of our approach), such as learning a metric and learning to learn. From a deeper perspective, our results show that CNNs have the potential for few-shot learning on novel categories, but to achieve this potential requires studying the internal structures of CNNs to re-express them in simpler and more interpretable terms. Overall, our major contributions are two-fold: We show that VCs can be learned in the few-shot setting using CNNs trained on other object categories. By encoding images using VCs, we observe two desirable properties, i.e., category sensitivity and spatial patterns. Based on these properties, we present two simple, interpretable, and flexible models for few-shot learning. These models yield competitive results compared to the state-of-the-art methods on specific few-shot learning tasks and can also be applied directly, without additional training, to other few-shot scenarios. Our work lies at the intersection of attempts to understand the internal representations of neural networks and research on few-shot learning. Therefore, we review here the previous literature on these two topics. Recently, there have been numerous studies aimed at understanding the behavior of neural networks and, in particular, at uncovering the internal representations within CNNs. Some researchers try to visualize internal representations by sampling BID28, generating BID20 BID15, or by backpropagating BID14 images in order to maximize the activations of the hidden units. A particularly relevant work by BID29 shows that object and object-part detectors emerge in CNNs. Conversely, other works investigate the discriminative power of the hidden features of CNNs by assessing them on specific problems BID18 BID2 BID0 BID27. The overall findings suggest that deep networks have internal representations of object parts. The most relevant work to our paper is the study of VCs, which discovered mid-level visual cues in the internal features of CNNs and showed relationships between these visual cues and semantic parts BID23. This work is described in detail in Section 3. There have been growing attempts to perform few-shot learning, motivated by attempts to mimic human abilities and to avoid some of the limitations of conventional data-demanding learning. An early attempt was made building on probabilistic program induction BID12 and another attempt exploited object parts BID25. More recent efforts at few-shot learning can be broadly categorized into two classes. The first is to design methods to embed the inputs into a feature space friendly to few-shot settings BID10 BID22. Their goal is to find a good similarity measure (e.g., using Siamese networks) that can be applied rapidly to novel categories. The second is meta-learning, which efficiently trains an ordinary model with the budget of few examples BID17 BID5. An alternative approach by BID16 performs few-shot learning by estimating parameters of the prediction layer using regression from previously learned objects. We emphasize that the approach in our paper differs from these works, most of which are tailored for a few specific few-shot learning scenarios (i.e., test and train conditions must match), while our methods are simple and flexible, so that they work both in the normal setting and in almost all few-shot settings.
In BID23, VCs were discovered as internal representations within deep networks which roughly correspond to mid-level semantic visual cues. These VCs play a core role in our work on understanding properties of CNNs and developing our interpretable few-shot learning models. In this section, we review how VCs were defined and learned in BID23. We will describe later in Section 4.1 how we modify VCs for few-shot learning. We first summarize the formalization of VCs, which are illustrated in FIG1. CNNs contain a hierarchy of lattices L_l, where l ∈ {0, 1, ...} stands for the layer of the lattice. In particular, the input image is defined over the lattice L_0, and the lattice on which we derive VCs is specified by DISPLAYFORM0 DISPLAYFORM1 In other words, the responses of all the channels in position p constitute the feature vector f^n_p. Then we have F_k = ∪_n F^n_k from all images of interest. Note that by collecting feature vectors into F_k, all spatial and image identity information is removed. Since layer k is usually pre-selected for different network architectures for VC applications (e.g., BID23 typically studied layer k = 4 for the VGG16 net in their work), the subscript will be omitted in the remainder of the paper for simplicity. Now we describe how VCs are extracted. The approach assumes that the VCs are represented by a population code of the CNN feature vectors. They are extracted using an unsupervised clustering algorithm. Since we first normalize the feature vectors into unit length, instead of using K-means as proposed in BID23, we assume that the feature vectors are generated by a mixture of von Mises-Fisher distributions (vMFM) BID7 and learn this mixture by the EM algorithm BID1. The goal is to maximize the likelihood function DISPLAYFORM2 where M is the total feature vector count (i.e., the number of all feature vectors collected from all images of interest) and V is the predefined VC (cluster) number. V_d(·) is the density function of the vMF distribution. f_m denotes each feature vector we get from the intermediate layer of a CNN (without image identity or spatial information). α, µ, and κ are vMFM parameters and represent the mixing proportion, mean direction, and concentration values respectively. We define the set of VCs by V = {µ_v : v = 1, ..., V} (i.e., by the mean directions of the learned vMFM distribution). Alternatively, since the {µ_v} have the same dimensionality as the {f_m}, we denote a specific VC center by DISPLAYFORM3 To help understand the VCs, we compute the cosine distances from the original feature vectors to the VCs as follows: DISPLAYFORM4 where d^n_{p,v} denotes the distance between feature vector f^n_p and the VC v in the n-th image at position p, and we call them VC distances. We select those feature vectors with the smallest distances to each VC and trace them back to the original input image using π_{k→0}. This yields "visualization patches" of VCs, shown in FIG0. We observe that these patches roughly correspond to the semantic parts of objects, which justifies our assertion that VCs are semantic visual cues. In the previous studies of VCs BID23, the CNNs that were used to generate feature vectors were trained for a large scale object classification task that included the object categories of interest. Moreover, they extracted VCs using hundreds of images within a specific category of object, which resulted in category-specific visual cues that were useful for interpreting CNN behaviors and building novel models for semantic part detection.
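To make the extraction step above concrete, the following is a minimal Python sketch, assuming a torchvision VGG16 truncated at its Pool-4 layer and using spherical k-means (k-means on unit-normalized vectors) as a simple stand-in for the vMFM EM algorithm; the layer index, cluster count, and helper names are illustrative rather than the exact settings of the paper.

import torch
import torchvision
from sklearn.cluster import KMeans

# Pre-trained VGG16; features[:24] ends at the Pool-4 layer in torchvision's classic indexing (assumption).
cnn = torchvision.models.vgg16(pretrained=True).features[:24].eval()

def extract_feature_vectors(images):
    # images: (N, 3, H, W) tensor; returns one feature vector per spatial position, pooled over images.
    with torch.no_grad():
        fmap = cnn(images)                               # (N, C, h, w)
    f = fmap.permute(0, 2, 3, 1).reshape(-1, fmap.shape[1])
    return torch.nn.functional.normalize(f, dim=1)       # unit length, as required before vMF clustering

def learn_visual_concepts(images, num_vcs=200):
    f = extract_feature_vectors(images).numpy()
    km = KMeans(n_clusters=num_vcs, n_init=4).fit(f)     # spherical k-means approximation of the vMFM
    centers = torch.nn.functional.normalize(torch.tensor(km.cluster_centers_), dim=1)
    return centers                                        # each row is one VC (mean direction mu_v)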
In more recent work (in preparation) VCs were used to encode semantic parts and objects using VC-Encoding that could be applied to detection tasks in the presence of occlusion. VC-Encoding is described in the next section. We emphasize that none of this prior work on VCs addressed few-shot learning and, by contrast, only addressed situations where there were many training examples. This section describes the technical ideas of our paper. In Section 4.1, we introduce how we learn VCs in the few-shot setting. In Section 4.2, we introduce VC-Encoding and show its two desirable properties for few-shot classification tasks. Then in Section 4.3 and Section 4.4, we propose two simple, interpretable models for few-shot learning based on VC-Encoding. It is not obvious that VCs can be applied to few-shot learning tasks where only a few examples are available for each novel category. It is not possible to train the CNNs on the objects (as was done in BID23) and also there may not be enough data to get good VC clusters. Hence we modify the way VCs are learned: we learn VCs from a small number of examples of novel object categories using features from CNNs trained on other object categories. This is similar to how metric-learning and meta-learning are trained on large datasets which do not include the novel categories. This ensures that the CNN used for feature extraction has never seen the categories on which we will perform few-shot classification. To extract VCs for the novel categories which only have few examples each, we pool feature vectors from the different categories together and perform the clustering algorithm on all of them. This gives us a little more data and encourages VC sharing between different categories. This improves data efficiency and also makes it easier to apply our VC models to multiple novel categories. By the two modifications described above, we obtain few-shot VCs, i.e., VCs that are suitable for few-shot learning. This is critical for our application and differentiates this work from previous studies of VCs. Surprisingly, we find that we only need a few images (e.g., five images per category) to extract high quality VCs (see visualizations in Figure 3a) which, when used for VC-Encoding, possess similar desirable properties to the traditional VCs and hence are suitable for the few-shot object classification task. We assume that objects can be decomposed into semantic parts. From the perspective of VCs, this means that most {f_p} should be assigned to a single VC. This requires specifying an explicit relationship between the {f_p} and the VCs. A natural choice is to compute the distances d_{p,v} between the f_p and the v-th VC and threshold them to produce binary values b_{p,v} (DISPLAYFORM0), collecting B = {b_{p,v} : v = 1, ..., V} as the VC-Encoding. Note the image index n is omitted here since the operations are identical for all images of interest. We use two criteria to specify a good encoding, coverage and firerate, defined as follows: DISPLAYFORM1 The choice of the encoding threshold T is a trade-off between requiring sufficient coverage and a firing rate that is close to one. In practice, we choose T for each testing trial by a grid-search with step size 0.001, which outputs the smallest threshold ensuring that the average coverage ≥ 0.8 for all few-shot training images.
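A small sketch of how the thresholded VC-Encoding and the grid-searched threshold could be computed, assuming coverage means the fraction of positions with at least one firing VC (the precise definitions are the ones in DISPLAYFORM1); function and variable names are illustrative.

import torch

def vc_distances(features, vcs):
    # features: (P, C) unit-normalized feature vectors; vcs: (V, C) unit-normalized VC centers.
    return 1.0 - features @ vcs.t()               # cosine distances d_{p,v}, shape (P, V)

def encode(distances, threshold):
    return (distances <= threshold).float()       # binary VC-Encoding b_{p,v}

def choose_threshold(distances, min_coverage=0.8, step=0.001):
    # Smallest T whose coverage (assumed: fraction of positions with >= 1 firing VC) reaches min_coverage.
    t = 0.0
    while t <= 2.0:                                # cosine distance lies in [0, 2]
        b = encode(distances, t)
        if (b.sum(dim=1) >= 1).float().mean() >= min_coverage:
            return t
        t += step
    return 2.0

# Usage: d = vc_distances(f, vcs); T = choose_threshold(d); B = encode(d, T)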
This yields the final VC-Encoding B used in our models, with the following desirable properties. Category Sensitivity: Despite the fact that the VCs are learned from a mix of images with different category labels, the first insight is that many VCs tend to fire (b_{·,v} = 1) intensively for one or a few specific object categories. In Figure 3a, we calculate the occurrence distributions of several VCs for 6 object categories out of the 12 in PASCAL3D+ BID26. In each column that represents a specific VC, the occurrence frequencies tend to be high for one or two object categories and low for the others. This suggests that VC identities can provide useful information for object classification. Moreover, the corresponding visualized patches on the top of Figure 3a support our understanding that VCs have this category sensitivity because they capture the semantic parts that are specific to object categories. Spatial Patterns: The spatial pattern of VC firings is also indicative of the object category. Although spatial information is ignored during feature clustering, the learned VCs give binary maps that contain regular spatial patterns for images of the same category with relatively similar viewpoints (as shown in Figure 3b). This is consistent with the more general conjecture that the spatial patterns of semantic parts play a vital role in object recognition, and shows again that the VC-Encoding can capture the spatial patterns of the semantic parts to a certain extent. Next, we build two simple few-shot learning models based on VC-Encoding learned from a few examples. First, we propose a simple template matching method which is similar to traditional nearest neighbor algorithms. The only novelty is that we use a similarity metric between VC-Encodings which is spatially "fuzzy" so that it can tolerate small spatial shifts of the semantic parts in images. Formally, the similarity metric takes the form given in DISPLAYFORM0 (illustrated in FIG4). During testing, we classify an image to the category of the training example with the largest similarity. One motivation for this method is that sequential convolutional operations carried out in a neural network can be considered as embedding input images into a hierarchy of feature spaces. Each convolutional layer can be treated as a different level of decomposition of the inputs, since the convolution along with non-linear activation, which takes the form Y = σ(W · X + B), is composed of matching templates W and using the non-linear function σ to filter out patterns based on the threshold B. In light of this interpretation, and the two properties described earlier, it is reasonable that VC-Encoding will yield an explicit semantic decomposition. Apart from the intuitive nearest neighbor method, we present a second method which models the likelihood of the VC-Encoding. We observe that we can specify a distribution over the VC-Encoding b_{p,v} using a Bernoulli distribution with probability θ_{p,v}. Following Naïve Bayes, we assume all the elements of the VC-Encoding b are independent (making it possible to learn the distribution from a very small number of examples). Hence we can express the likelihood of b as follows: DISPLAYFORM0 For each object category y, we derive a probabilistic distribution θ_y from the training examples. Thus the prediction of the object category given the VC-Encoding b is given by: DISPLAYFORM1 Note that by doing this, we are in fact implementing a discriminative model obtained from a generative distribution.
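A minimal sketch of the factorizable likelihood model, assuming θ_y is estimated as the per-position, per-VC firing frequency over the few shots and using clipping as a crude stand-in for the Gaussian smoothing mentioned in the next paragraph; all names are illustrative.

import torch

def fit_likelihood(encodings_per_class, eps=0.05):
    # encodings_per_class: dict {class y: tensor (K, P, V)} of binary VC-Encodings of the K shots.
    # theta_{y,p,v} is the Bernoulli firing probability, estimated as the mean over the few shots
    # and clipped away from {0, 1} (a simple stand-in for the Gaussian smoothing used in the paper).
    return {y: b.float().mean(dim=0).clamp(eps, 1 - eps) for y, b in encodings_per_class.items()}

def log_likelihood(b, theta):
    # Factorized Bernoulli log-likelihood: sum over positions p and VCs v.
    return (b * theta.log() + (1 - b) * (1 - theta).log()).sum()

def classify(b, thetas):
    # Predict the class whose theta assigns the query encoding the highest likelihood.
    return max(thetas, key=lambda y: log_likelihood(b, thetas[y]).item())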
We smooth each distribution θ_y using a Gaussian filter to guard against unlikely events. Few-shot learning is a very challenging task where humans perform much better than current algorithms. It requires the ability, or efficiency, to learn generalizable knowledge from strictly limited examples, such as a few training images. Nevertheless, Section 4.1 suggests that a few images may be enough for learning object models when represented by VC-Encodings. Indeed, our experiments show that both of our VC-based few-shot learning models are competitive in performance with existing methods designed specifically for few-shot learning such as BID17. In addition, while previous few-shot methods are trained to work in specific few-shot scenarios, such as 5-way classification, our methods can be applied to a large range of few-shot scenarios without additional training. The experimental results show that trained CNNs have the potential to recognize novel objects from a few examples by exploiting VC-Encoding. To assess the capability of our few-shot methods, we first evaluate them on a common few-shot learning benchmark, namely Mini-ImageNet. The Mini-ImageNet dataset was first proposed by BID22 as a benchmark for evaluating few-shot learning methods. It selects 100 categories out of the 1000 categories in ImageNet with 600 examples per category. We use the split proposed by BID17 consisting of 64 training categories, 16 validation categories and 20 testing categories. In accordance with the convention for Mini-ImageNet, we perform numerous trials of few-shot learning during testing. In each trial, we randomly sample 5 unseen categories from a preserved testing set. Each category is composed of 5 training images for the 5-shot setting and 1 training image for the 1-shot setting. During evaluation, we randomly select 15 images for each category following BID17. As Table 1 illustrates, we compare our methods against two baselines in line with the ones in BID17. In addition, we present the performances of state-of-the-art few-shot learning methods including matching network BID22, Meta-Learner BID17 and MAML BID5. (Table 1 caption: Average classification accuracies on Mini-ImageNet with 95% confidence intervals. Evaluations of Baseline-finetune and Baseline-nearest-neighbor are from BID17. Pool3-nearest-neighbor stands for a nearest neighbor method based on raw Pool-3 features from the same VGG-13 as our methods. At the bottom are our factorizable likelihood method and nearest neighbor method based on VCs. Marked in bold at the top are the best published results for each scenario; marked in bold at the bottom are our best results for the corresponding set-up. At the right is an extended setting with variance in the number of categories; in the last column we use the same models as in the middle column and omit those that cannot be directly applied to this setting. We adopt the results for Matching Network from BID17.) Regarding our methods, we train a VGG-13 on the training and validation set. The network is trained with a cross-entropy objective. We preserve 10% of the images in each category to validate our network. We extract 200 VCs from the Pool-3 layer. The reason for choosing Pool-3 features is that a grid position in the Pool-3 lattice L_3 corresponds to a 36 × 36 patch in the original 84 × 84 image, which is a plausible size for a semantic part. For the Gaussian filter used to smooth the factorizable likelihood model, we use a σ of 1.2.
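The evaluation protocol above can be mirrored with a small episode sampler; a sketch under the assumption that test images are grouped by category (names are illustrative):

import random

def sample_episode(images_by_class, n_way=5, k_shot=5, n_query=15):
    # images_by_class: dict {class: list of images} from the held-out test categories.
    # Returns (support, query) sets for one few-shot trial, mirroring the protocol described above.
    classes = random.sample(list(images_by_class), n_way)
    support, query = [], []
    for label, c in enumerate(classes):
        imgs = random.sample(images_by_class[c], k_shot + n_query)
        support += [(img, label) for img in imgs[:k_shot]]
        query += [(img, label) for img in imgs[k_shot:]]
    return support, query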
To directly examine the impact of the VCs, we also include the results of nearest neighbor matching using raw features from the Pool-3 layer (referred to as Pool3-Nearest-Neighbor in Table 1). Moreover, we attempt to evaluate the few-shot learning ability with variance in the number of categories (as shown in the last column of Table 1) and further extend the few-shot learning evaluation with other randomly selected settings (as shown in Table 2), where we need to use exactly the same model as in the middle columns of Table 1, trained once on the training and validation set. The results show that our VC-based methods compare well with current methods which were specifically designed for few-shot learning. If we contrast with meta-learning-based methods, we achieve higher accuracy than Meta-Learner both in the 1-shot and the 5-shot set-ups, while being just slightly behind MAML. Compared with metric-based methods, which are more similar to ours, we marginally outperform the matching network, which is the state-of-the-art method of this category. These results confirm our assumption that low-level visual cues within trained CNNs can naturally perform few-shot learning. We observe that in the 5-shot scenario, our likelihood model is significantly better than our nearest neighbor model. A possible explanation is that the likelihood model combines several training examples into a distribution while nearest neighbor can only use examples individually. For instance, if a front wheel of a car appears in one training example and a rear wheel occurs in another example, the likelihood model can combine these two wheels into a distribution, while nearest neighbor can only match testing examples with either the front wheel or the rear wheel. Finally, using the same model as in the middle columns of Table 1, some previous methods like Meta-Learner LSTM are not applicable to the various extended settings in Table 2 and the last column in Table 1. In fact, these methods can directly deal with only changes in the number of shots but cannot deal easily with changes in the number of categories. But unlike these methods, our few-shot learning methods based on VC-Encoding can be easily extended (with minimal re-training) to any number of shots and any number of categories. (Table 2 caption: Average classification accuracies on Mini-ImageNet with 95% confidence intervals under randomly selected few-shot settings. All models used here and in Table 1 are the same set of models trained only once on the training set. Like the last column of Table 1, we omit those models which cannot be directly applied to various few-shot settings.) (Figure 5 caption: Visualizing the inference procedure of the factorizable likelihood model, using one VC as an example. Step I: transform the input to Pool-4 raw features. Step II: transform raw features into VC-Encoding. Step III: given a set of VC distributions, calculate the pixel-wise likelihood. Step IV: aggregate the pixel-wise likelihood into class-wise likelihood. Step V: choose the class with the highest likelihood. The original image is processed by part of VGG-16 and represented by its VC-Encoding; after calculating the likelihood for each pixel using distributions derived from a few examples, we obtain the pixel-wise likelihood, which is then used to make the final decision. For better visualization, we rescale the variance of the aggregated likelihood to 1. For all the visualizations, we use the same universal color bar; this figure is best viewed in color.)
To delve deeper into our methods, we apply them to PASCAL3D+, a dataset with larger, higher-quality images than Mini-ImageNet. PASCAL3D+ BID26 is a dataset augmenting 12 rigid categories of the PASCAL VOC 2012 (Everingham et al.). It was originally tailored for 3D object detection and pose estimation. We choose PASCAL3D+ as our testbed since it provides high quality images with image sizes comparable to ImageNet. We interpret our few-shot recognition mainly by visualizing every step of the inference. With input images of sufficient sizes, we can obtain large VC-Encoding distribution maps whose visualizations are easy for humans to interpret. The simplicity of our methods makes the inference process of few-shot recognition very transparent. On PASCAL3D+, we first qualitatively analyze this procedure based on VCs. In Figure 5, we visualize every step of our method based on an example VC. Among the closest patches, the corner of the TV Monitor occurs most frequently, so we assume this VC relates to the corner of a TV Monitor. The distributions of this VC suggest it mainly responds to the upper right of TV Monitors, since only the upper right corner of the TV Monitor's distribution map shows high frequency (see the distributions in Figure 5). Using this VC, we convert the original deep network features into VC-Encoding. The VC-Encoding implies this VC fires on the upper right corner of the input (see the VC-Encoding map in Figure 5). After calculating the pixel-wise likelihood using the distributions from a few images, it is clear that except for the TV Monitor, each category has low likelihood in the area of the corner (dark parts of the pixel-wise likelihood maps in Figure 5). Finally, we aggregate the likelihood and make the correct classification decision. (TAB6 caption: Average classification accuracies on PASCAL3D+. At the top is the group of baseline methods including nearest neighbor and Exemplar-SVM based on Pool-4 features from the same VGG-16 used in our methods. In the middle are our factorizable likelihood models using different numbers of VCs. At the bottom are our VC-based nearest neighbor models. Marked in bold are the best within each group for each scenario.) Meanwhile, we quantitatively evaluate our methods on PASCAL3D+. More specifically, we employ PASCAL3D+ as our testing dataset. For training, we use the ImageNet BID3 classification dataset without object categories related to the 12 rigid categories (956 categories left). We train an ordinary VGG-16 as our starting point, which achieves 71.27% top-1 accuracy. For testing, we crop the objects out using annotated bounding boxes provided by BID26 and resize them to 224 × 224. Then we use Pool-4 features produced by the VGG-16 to implement our few-shot methods, instead of the Pool-3 features used in Section 5.1. The main reason for this change is the increased input image size of 224 in PASCAL3D+, which suggests that Pool-4 features will be better for capturing the semantic parts. As a comparison, we propose two baseline models. One (referred to as Baseline-nearest-neighbor in TAB6) is a nearest neighbor method based on raw Pool-4 features using the cosine distance metric. The other (referred to as Baseline-SVM in TAB6) is an Exemplar-SVM trained using hinge loss. Both of these baselines use the same pre-trained VGG-16 as our methods. During evaluation, we run 20 trials of both 5-shot and 1-shot learning over the 12 categories on PASCAL3D+. We also assess our methods using different numbers of VCs to see the impact of the number of VCs.
The results are shown in TAB6. In light of our testing results, we conclude that VC-Encoding is a useful semantic decomposition of images into parts. In general, our methods based on VCs significantly outperform the two baselines. In particular, the difference between our nearest neighbor methods and the baseline nearest neighbor methods is due to the use of VCs (e.g., by first transforming the raw feature vectors into VC distances, and by thresholding to get the VC-Encoding). Thus, we claim that decomposing fuzzy features (i.e., deep network features) into explicit semantic cues (i.e., the VCs) improves both interpretability and performance. We also notice that our methods are not sensitive to the number of VCs, since changes in the number of VCs only cause slight differences in accuracy. In this paper we address the challenge of developing simple, interpretable models for few-shot learning exploiting the internal representations of CNNs. We are motivated by VCs BID23, which enable us to represent objects in terms of VC-Encodings. We show that VCs can be adapted to the few-shot learning setting where the VCs are extracted from a small set of images of novel object categories using features from CNNs trained on other object categories. We observe two properties of VC-Encoding, namely category sensitivity and spatial pattern, which leads us to propose two novel, but closely related, methods for few-shot learning which are simple, interpretable, and flexible. Our methods show comparable performances to the current state-of-the-art methods which are specialized for specific few-shot learning scenarios. We demonstrate the flexibility of our two models by showing that they can be applied to a range of different few-shot scenarios with minimal re-training. In summary, we show that VCs and VC-Encodings enable ordinary CNNs to perform few-shot learning. We emphasize that in this paper we have concentrated on developing the core ideas of our two few-shot learning models and that we have not explored variants of our ideas which could lead to better performance by exploiting standard performance-enhancing tricks, or by specializing to specific few-shot challenges. Future work includes improving the quality of the extracted VCs and extending our approach to few-shot detection. Although they were not tailored for specific few-shot learning scenarios, our VC-based methods have comparable performance to previous methods while being more flexible and interpretable. Since we are proposing a new method for few-shot learning, we carried out ablation studies to understand the influence of different model components on the effectiveness of our approach. Encoding Against Distance. In Section 4.2, we transformed real-valued distances from VCs into binary VC-Encoding by a threshold, which was dynamically determined to satisfy our requirements for coverage and firerate. We claim that this encoding is beneficial to learning with limited budgets, e.g., few shots. First, it enables us to implement the Factorizable Likelihood Model, which relies on binary encoding and shows better results than nearest neighbor methods. Second, by thresholding the distances, we are filtering out the noise of model estimates based on a few training examples. This also helps the nearest neighbor model. In Table 1, we compare our models based on VC-Encoding with a model based on real-valued distances (referred to as VC-Distance in Table 1), which uses the cosine distance as the metric.
The results show that, despite being better than the original CNN features (Pool3-nearest-neighbor in Table 1), the distance-based model is not as effective as the binary-coded models. Table 1: VC models using distances and VC-Encoding on Mini-ImageNet. VC-distance stands for a nearest neighbor model using distances to the VCs and the cosine distance metric. VC-nearest-neighbor and VC-likelihood are our proposed models based on VC-Encoding. Pool3-nearest-neighbor is a nearest neighbor model using original CNN features. Marked in bold are the best results for each few-shot set-up. Clustering Methods. VCs are obtained without supervision by clustering features extracted by CNNs. In our proposed models, we use the clustering method based on a mixture of von Mises-Fisher distributions. By learning this distribution through the EM algorithm, we can obtain cluster centers that remain on the unit sphere. This is mathematically reasonable since we expect the features to be distributed on the unit sphere. In Table 2, we compare the results using the von Mises-Fisher distribution with those using K-Means clustering on the 5-shot 5-category learning task. The results show that for both VC-Likelihood and VC-nearest-neighbor, K-Means is just slightly behind the von Mises-Fisher distribution. Hence, we state that VC-based models are robust to different clustering methods. (Table 2 data — Clustering: Likelihood / Nearest Neighbor; K-Means: 62.51 ± 0.89% / 59.60 ± 1.04%; von Mises-Fisher: 63.07 ± 1.02% / 58.85 ± 1.12%.) Table 2: Different clustering methods for each model we proposed on the 5-shot 5-category setting on Mini-ImageNet. Likelihood refers to the Factorizable Likelihood Model. Marked in bold are the best results for each model. Scale of Semantic Parts. While our VCs are extracted without any supervision, we find that they are related to semantic parts of objects. However, semantic parts can have various scales in images. To keep the simplicity of our model, we choose a single scale of semantic parts. In terms of the CNN we used, we choose to use Pool-3 features of VGG-13. In Table 3, we test different scales of VCs, i.e., features from different layers, under the 5-shot 5-category few-shot learning setting. We see that our proposed scale is the most effective of the different scales. In addition, we notice that while the performance of the baseline method, which uses the original features for nearest neighbor, drops drastically as the scale changes, the performances of our VC-based models, especially our Likelihood model, decay more gradually. This also reflects the robustness of the VC-based models. In this section we illustrate more VCs (in FIG0) and their distributions over object categories (in TAB8).
We enable ordinary CNNs to perform few-shot learning by exploiting visual concepts, which are interpretable visual cues learned within CNNs.
1,374
scitldr
Recently various neural networks have been proposed for irregularly structured data such as graphs and manifolds. To our knowledge, all existing graph networks have discrete depth. Inspired by neural ordinary differential equations (NODE) for data in the Euclidean domain, we extend the idea of continuous-depth models to graph data, and propose the graph ordinary differential equation (GODE). The derivative of the hidden node states is parameterized with a graph neural network, and the output states are the solution to this ordinary differential equation. We demonstrate two end-to-end methods for efficient training of GODE: indirect back-propagation with the adjoint method; and direct back-propagation through the ODE solver, which accurately computes the gradient. We demonstrate that direct backprop outperforms the adjoint method in experiments. We then introduce a family of bijective blocks, which enables $\mathcal{O}(1)$ memory consumption. We demonstrate that GODE can be easily adapted to different existing graph neural networks and improves accuracy. We validate the performance of GODE in both semi-supervised node classification tasks and graph classification tasks. Our GODE model achieves continuity in time, memory efficiency, accurate gradient estimation, and generalizability across different graph networks. Convolutional neural networks (CNNs) have achieved great success in various tasks, such as image classification and segmentation, video processing, and machine translation. However, CNNs are limited to data that can be represented by a grid in the Euclidean domain, such as images (2D grid) and text (1D grid), which hinders their application to irregularly structured datasets. A graph data structure represents objects as nodes and relations between objects as edges. Graphs are widely used to model irregularly structured data, such as social networks, protein interaction networks, citation and knowledge graphs. Early works use traditional methods such as random walk (Lovász et al., 1993), independent component analysis (ICA) (Hyvärinen & Oja) and graph embedding to model graphs; however, their performance is inferior due to their low expressive capacity. Recently a new class of models called graph neural networks (GNNs) was proposed. Inspired by the success of CNNs, researchers generalize convolution operations to graphs to capture the local information. There are mainly two types of methods to perform convolution on a graph: spectral methods and non-spectral methods. Spectral methods typically first compute the graph Laplacian, then perform filtering in the spectral domain. Other methods aim to approximate the filters without computing the graph Laplacian for faster speed. For non-spectral methods, the convolution operation is directly performed in the graph domain, aggregating information only from the neighbors of a node. The recently proposed GraphSAGE learns a convolution kernel in an inductive manner. To our knowledge, all existing GNN models mentioned above have a structure of discrete layers. The discrete structure makes it hard for the GNN to model continuous diffusion processes on graphs. The recently proposed neural ordinary differential equation (NODE) views a neural network as an ordinary differential equation (ODE), whose derivative is parameterized by the network, and the output is the solution to this ODE. We extend NODE from the Euclidean domain to graphs and propose graph ordinary differential equations (GODE), where the message propagation on a graph is modeled as an ODE.
NODEs are typically trained with the adjoint method. NODEs have the advantages of adaptive evaluation, accuracy-speed control by changing the error tolerance, and being free-form continuous invertible models. However, to our knowledge, in benchmark image classification tasks, NODEs are significantly inferior to state-of-the-art discrete-layer models (error rate: 19% for NODE vs 7% for ResNet18 on CIFAR10). In this work, we show this is caused by error in gradient estimation during training of NODE, and propose a memory-efficient framework for accurate gradient estimation. We demonstrate that our framework for free-form ODEs generalizes to various model structures, and achieves high accuracy for both NODE and GODE in benchmark tasks. Our contribution can be summarized as follows: 1. We propose a framework for free-form NODEs to accurately estimate the gradient, which is fundamental to deep-learning models. Our method significantly improves the performance on benchmark classification (reduces the test error from 19% to 5% on CIFAR10). 2. Our framework is memory-efficient for free-form ODEs. When applied to restricted-form invertible blocks, the model achieves constant memory usage. 3. We generalize ODEs to graph data and propose GODE models. 4. We demonstrate improved performance on different graph models and various datasets. There have been efforts to view neural networks as differential equations. One line of work viewed a residual network as a discretization of a differential equation and proposed several new architectures based on numerical methods used in ODE solvers; another proposed a stable architecture based on an analysis of the ODE. More recent work proposed the neural ordinary differential equation (NODE), which treats the neural network as a continuous ODE. NODE was later used in a continuous normalizing flow for generative models. There have been many studies on the training of NODE. The adjoint method has long been widely used in optimal control and geophysical problems, and was recently applied to ODEs. Follow-up work proposed augmented neural ODEs to improve the expressive capacity of NODEs. However, to our knowledge, none of the methods above discusses the inaccurate gradient estimation issue; empirical performances of NODE in benchmark classification tasks are significantly inferior to state-of-the-art discrete-layer models. GNNs can be divided into two categories: spectral methods and non-spectral methods. Spectral GNNs perform filtering in the Fourier domain of a graph, and thus need information about the whole graph to determine the graph Laplacian. In contrast, non-spectral GNNs only consider message aggregation around neighbor nodes, and are therefore localized and generally require less computation. We first briefly introduce several spectral methods. Early work introduced graph convolution in the Fourier domain based on the graph Laplacian; however, the computational burden is heavy because of non-localized filters. Later works incorporated a graph estimation procedure in spectral networks and parameterized spectral filters into a localized version with smooth coefficients; used Chebyshev expansion to approximate the filters without the need to compute the graph Laplacian and its eigenvectors, thereby significantly accelerating the running speed; proposed a localized first-order approximation of graph convolution and achieved superior performance in semi-supervised node classification; and proposed fast localized spectral filtering on graphs.
Non-spectral methods typically define convolution operations on a graph, only considering neighbors of a certain node. MoNet uses a mixture of CNNs to generalize convolution to graphs. GraphSAGE samples a fixed number of neighbors for each node for fast localized inference. Graph attention networks (Veličković et al., 2017) learn different weights for different neighbors of a node. The graph isomorphism network (GIN) has a structure as expressive as the Weisfeiler-Lehman graph isomorphism test. Invertible blocks are a family of neural network blocks whose forward function is a bijective mapping. Therefore, the input to a bijective block can be accurately reconstructed from its outputs. Invertible blocks have been used in normalizing flows, where the model is required to be invertible in order to calculate the log-density of the data distribution. Later on, bijective blocks were used to build invertible networks and to perform back-propagation without storing activations, which achieves a memory-efficient network structure: activations of middle layers can be discarded because each layer's activation can be reconstructed from the next layer with invertible blocks. We first consider discrete-layer models with residual connections, which can be represented as x_{k+1} = x_k + f_k(x_k) (Eq. 1), where x_k is the state in the k-th layer and f_k(·) is any differentiable function whose output has the same shape as its input. When we add more layers with shared weights, and let the stepsize in Eq. 1 go to infinitesimal, the difference equation turns into a neural ordinary differential equation (NODE): dz(t)/dt = f(z(t), t, θ) (Eq. 2). We use z(t) in the continuous case and x_k in the discrete case to represent hidden states; f(·) is the derivative parameterized by a network. Note that a key difference between Eq. 1 and Eq. 2 is the form of f: in the discrete case, different layers (different k values) have their own function f_k, while in the continuous case f is shared across all time t. The forward pass of a model with discrete layers repeatedly applies Eq. 1, x_k = x_{k-1} + f_k(x_{k-1}) for k = 1, ..., K (Eq. 3), where K is the total number of layers. Then an output layer (e.g., a fully-connected layer for classification) is applied on x_K. The forward pass of a NODE is z(T) = z(0) + ∫_0^T f(z(t), t, θ) dt (Eq. 4), where z(0) is the input and T is the integration time, corresponding to the number of layers K in the discrete case. The transformation of the states z is modeled as the solution to the NODE. Then an output layer is applied on z(T). Integration in the forward pass can be performed with any ODE solver, such as the Euler method, Runge-Kutta methods, the VODE solver, and the Dopri (Dormand-Prince) solver. The adjoint method is widely used in optimal process control and functional analysis; we follow the method of the original NODE work. Denote the model parameters as θ, which are independent of time. Define the adjoint as a(t) = ∂L/∂z(t) (Eq. 5), where L is the loss function. Figure 1: Comparison of two methods for back-propagation on NODE. As in figure (a), the ODE solver is discretized at points {t_0, t_1, ..., t_N} during the forward pass; the black dashed curve shows the hidden state solved forward in time, denoted z(t). Figure (b) shows the adjoint method; the red solid line shows the hidden state solved in reverse time, denoted h(t). Ideally z(t) = h(t) and the dashed curve overlaps with the solid curve; however, the reverse-time solution can be numerically unstable, causing z(t) ≠ h(t) and thus error in the gradient. Figure (c) shows the direct back-propagation through the ODE solver: we save the evaluation time points {t_0, t_1, ..., t_N} during the forward pass and, during the backward pass, re-build the computation graph by directly evaluating at the same time points, so that z(t_i) = h(t_i); since the hidden state can be accurately reconstructed, the gradient can be accurately evaluated.
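As a concrete illustration of Eqs. 2 and 4, the following is a minimal sketch of a NODE forward pass using a fixed-step Euler discretization; a practical implementation would use an adaptive solver with error control as described below, and the dimensions and step counts here are arbitrary placeholders.

import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    # Fixed-step Euler discretization of dz/dt = f(z, t); autograd tracks every step,
    # which corresponds to the direct back-propagation discussed below.
    def __init__(self, f, T=1.0, steps=10):
        super().__init__()
        self.f, self.T, self.steps = f, T, steps

    def forward(self, z):
        h = self.T / self.steps
        t = 0.0
        for _ in range(self.steps):
            z = z + h * self.f(z, t)      # Euler update z_{k+1} = z_k + h * f(z_k, t_k)
            t += h
        return z

# Example derivative: any network whose output matches its input shape (t is ignored here).
dim = 16
net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
block = ODEBlock(lambda z, t: net(z), T=1.0, steps=10)
out = block(torch.randn(4, dim))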
Then we have the adjoint dynamics da(t)/dt = -a(t)^T ∂f(z(t), t, θ)/∂z together with dL/dθ = ∫_0^T a(t)^T ∂f(z(t), t, θ)/∂θ dt (Eq. 6), with a detailed proof from an optimization perspective in Appendix F. Then we can perform gradient descent to optimize θ to minimize L. Eq. 6 is a reverse-time integration, which can be solved with any ODE solver. To evaluate it, we need to determine z(t) by solving Eq. 2 in reverse time (directly storing z(t) during the forward pass would require a large memory consumption, because the continuous model is equivalent to an infinite-layer model). To summarize, in the forward pass we solve Eq. 2 forward in time; in the backward pass, we solve Eq. 2 and Eq. 6 in reverse time, with the initial condition determined from Eq. 5 at time T. We give an intuition for why the reverse-time ODE solver causes inaccurate gradients in adjoint methods. The backward pass (Eq. 6) requires determining f(z(t), t, θ) and ∂f/∂z, which requires determining z(t) by solving Eq. 2 in reverse time. As shown in Fig. 1 (a, b), the hidden state solved forward in time (z(t_i)) and the hidden state solved in reverse time (h(t_i)) may not be equal; this can be caused by the instability of the reverse-time ODE, and is represented by the mismatch between z(t) (dashed curve) and h(t) (solid curve). The error h(t) - z(t) will cause error in the gradient dL/dθ. Proposition 1. For an ODE of the form dz(t)/dt = f(z(t), t), denote the Jacobian of f as J_f. If this ODE is stable both in forward time and in reverse time, then Re(λ_i(J_f)) = 0 ∀i, where λ_i(J_f) is the i-th eigenvalue of J_f and Re(λ) is the real part of λ. A detailed proof is in Appendix C. Proposition 1 indicates that if the Jacobian of the original system Eq. 2 has eigenvalues whose real parts are not 0, then either the reverse-time or the forward-time ODE is unstable. When |Re(λ)| is large, either the forward-time or the reverse-time ODE is sensitive to numerical errors. This phenomenon has also been addressed in prior work. This instability affects the accuracy of the solutions to Eq. 2 and Eq. 6, and thus affects the accuracy of the computed gradient. The adjoint method might therefore be sensitive to numerical errors when solving the ODE in reverse time. To resolve this, we propose to directly back-propagate through the ODE solver. As in Fig. 1(a), the ODE solver uses discretization for numerical integration, evaluated at time points {t_0, t_1, ..., t_N}. Fig. 1(c) demonstrates direct back-propagation with accurate hidden states h(t_i), which can be achieved with two methods: the activations z(t_i) can be saved in cache for back-propagation, which requires huge memory; or we can accurately reconstruct z(t_i) by re-building the computation graph directly at the evaluated time points {t_i}. Since the model is evaluated at the same time points t_i as in forward time, it is guaranteed that z(t_i) = h(t_i). Therefore direct back-propagation is accurate, regardless of the stability of Eq. 2. Similar to the continuous case, we can define the adjoint with discrete time, a_i = ∂L/∂z(t_i), and obtain it by the chain rule, a_i = a_{i+1} ∂z(t_{i+1})/∂z(t_i) (Eq. 7), where a_i is the adjoint for the i-th step of the discrete forward-time ODE solution. Eq. 7 can be viewed as a numerical discretization of Eq. 6. We show that Eq. 6 can be derived from an optimization perspective. Detailed derivations of Eq. 6-7 are in Appendices E and F.
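A toy numerical illustration of the intuition behind Proposition 1, assuming a scalar linear ODE dz/dt = lam * z with Re(lam) < 0: forward-time integration is stable, but reconstructing the state by reverse-time integration amplifies small numerical errors, which is exactly what corrupts the adjoint gradient. Values below are arbitrary.

# Forward Euler from 0 to T, then reverse-time Euler back to 0 starting from a slightly
# perturbed z(T) (e.g., solver round-off); the reconstruction error at t = 0 is amplified
# roughly by exp(|lam| * T), so h(0) drifts far from the true z(0) = 1.
lam, T, n = -5.0, 4.0, 400
h = T / n

z = 1.0
for _ in range(n):              # forward-time integration: stable, z decays
    z = z + h * lam * z

h_state = z + 1e-6              # tiny perturbation of the terminal state
for _ in range(n):              # reverse-time integration: unstable, errors grow
    h_state = h_state - h * lam * h_state
print(abs(h_state - 1.0))       # large reconstruction error at t = 0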
Algorithm 1: Accurate gradient estimation in an ODE solver for free-form functions.
Define the model dz/dt = f(z, t), where f is a free-form function. Denote the integration time as T.
Forward(f, T, z_0, tolerance):
    select an initial step size h = h_0 (adaptively, with an adaptive step-size solver)
    time_points = empty list; t = 0; z = z_0
    while t < T:
        state = f.state_dict; accept_step = False
        while not accept_step:
            f.load_state_dict(state)
            with grad disabled: z_new, error_estimate = step(f, z, t, h)
            if error_estimate < tolerance:
                accept_step = True; z = z_new; t = t + h; time_points.append(t)
            else:
                reduce step size h according to error_estimate
        delete z_new, error_estimate and the related local computation graph
    cache.save(time_points)
    return z, cache
Backward(f, T, z_0, tolerance, cache): ...
Details of our method are summarized in Algorithm 1. We discuss its properties below. Summary of the algorithm: During the forward pass, the solver performs a numerical integration, with the stepsize adaptively varying according to the error estimate. The solver outputs the integrated value and the evaluation time points {t_i}; all middle activations are deleted to save memory. During the backward pass, the solver re-builds the computation graph by directly evaluating at the saved time points, without adaptive searching, and performs a numerical version (Eq. 7) of the reverse-time integration (Eq. 6). Support for free-form continuous dynamics: There is no constraint on the form of f; therefore, our algorithm is a generic method. Memory consumption analysis: Suppose f has N_f layers, the number of forward evaluation steps is N_t on average, and K evaluations are needed to adaptively search for an optimal stepsize. A naive solver takes O(N_f × N_t × K) memory, while our method consumes O(N_f × N_t), because all middle activations are deleted during the forward pass and we do not need to search for an optimal stepsize in the backward pass. If we use a step-wise checkpoint method, where we only store z(t_i) for all t_i and compute the gradient ∂z(t_{i+1})/∂z(t_i) for one t_i at a time, then the memory consumption can be reduced to O(N_f + N_t). Since the solver can handle free-form functions, it can also handle restricted-form invertible blocks (see below); in this case, we do not need to store z(t_i), and the memory consumption reduces to O(N_f). More memory-efficient solver with invertible blocks: Restricting the form of f to invertible blocks allows for O(N_f) memory consumption. For invertible blocks, the input x is split into two parts (x_1, x_2) of the same size (e.g., x has shape N × C, where N is the batch size and C is the channel number; we can split x into x_1 and x_2 with shape N × C/2). The forward and inverse of a bijective block can be denoted as y_2 = ψ(x_2, F(x_1)), y_1 = ψ(x_1, G(y_2)) for the forward pass, and x_1 = ψ^{-1}(y_1, G(y_2)), x_2 = ψ^{-1}(y_2, F(x_1)) for the inverse (Eq. 8), where the output of a bijective block is denoted as (y_1, y_2) with the same size as (x_1, x_2). F and G are any differentiable neural networks whose output has the same shape as the input. ψ(α, β) is a differentiable bijective function w.r.t. α when β is given; ψ^{-1}(α, β) is the inverse function of ψ. Theorem 1: If ψ(α, β) is a bijective function w.r.t. α when β is given, then the block defined by Eq. 8 is a bijective mapping. The proof of Theorem 1 is given in Appendix D. Based on this, we can apply different ψ functions for different tasks. Since x can be accurately reconstructed from y, there is no need to store activations, hence the block is memory-efficient. Details of back-propagation without storing activations are in Appendix B.
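A minimal sketch of a coupling block in the spirit of Eq. 8, assuming the additive choice psi(a, b) = a + b (one possible instantiation, not necessarily the psi used in the paper); because the block is exactly invertible, its inputs never need to be cached.

import torch
import torch.nn as nn

class BijectiveBlock(nn.Module):
    # Coupling block with additive psi; F and G map half the channels to half the channels.
    def __init__(self, F, G):
        super().__init__()
        self.F, self.G = F, G

    def forward(self, x1, x2):
        y2 = x2 + self.F(x1)
        y1 = x1 + self.G(y2)
        return y1, y2

    def inverse(self, y1, y2):
        x1 = y1 - self.G(y2)     # psi^{-1}(y1, G(y2))
        x2 = y2 - self.F(x1)     # psi^{-1}(y2, F(x1))
        return x1, x2

# Sanity check: inputs are exactly recovered from the outputs.
half = 8
block = BijectiveBlock(nn.Linear(half, half), nn.Linear(half, half))
x1, x2 = torch.randn(2, half), torch.randn(2, half)
r1, r2 = block.inverse(*block.forward(x1, x2))
assert torch.allclose(r1, x1, atol=1e-6) and torch.allclose(r2, x2, atol=1e-6)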
We first introduce graph neural networks with discrete layers, then extend to the continuous case and introduce graph ordinary differential equations (GODE). As shown in Fig. 2, a graph is represented with nodes (marked with circles) and edges (solid lines). We assign a unique color to each node for ease of visualization. Current GNNs can generally be represented in a message passing scheme: message_{(v,u)} = φ^{(k)}(x_u^{k-1}, x_v^{k-1}, e_{u,v}) (Eq. 9), aggregation_u = ζ_{v∈N(u)}(message_{(v,u)}) (Eq. 10), x_u^k = γ^{(k)}(x_u^{k-1}, aggregation_u) (Eq. 11), where x_u^k represents the state of the u-th node in the graph at the k-th layer and e_{u,v} represents the edge between nodes u and v. N(u) represents the set of neighbor nodes of node u. ζ represents a differentiable, permutation-invariant operation such as mean, max or sum. γ^{(k)} and φ^{(k)} are differentiable functions parameterized by neural networks. For a specific node u, a GNN can be viewed as a 3-stage model, corresponding to Eq. 9-11: Message passing, where neighbor nodes v ∈ N(u) send information to node u, denoted by message_{(v,u)}; the message is generated from the function φ(·), parameterized by a neural network. Message aggregation, where a node u aggregates all messages from its neighbors N(u), denoted as aggregation_u; the aggregation function ζ is typically a permutation-invariant operation such as mean or sum, because graphs are invariant to permutation. Update, where the state of a node is updated according to its original state x_u^{k-1} and the aggregation of messages aggregation_u, denoted as γ(·). We can convert a discrete-time GNN to a continuous-time GNN by replacing f in Eq. 2 with the message passing process defined in Eq. 9 to 11, which we call the graph ordinary differential equation (GODE). A diagram of GODE is shown in Fig. 2. Because GODE is an ODE in nature, it can capture highly non-linear functions, and thus has the potential to outperform its discrete-layer counterparts. We demonstrate that the asymptotic stability of GODE could be related to the over-smoothing phenomenon. It has been demonstrated that graph convolution is a special case of Laplacian smoothing, which can be written as Y = (I − γ D̃^{-1/2} L̃ D̃^{-1/2}) X, where X and Y are the input and output of a graph-conv layer respectively, L̃ is the graph Laplacian of Ã = A + I (A is the adjacency matrix and D̃ is the corresponding degree matrix of Ã), and γ is a positive scaling constant. When modified from a discrete model to a continuous model, the continuous smoothing process is dX(t)/dt = −γ D̃^{-1/2} L̃ D̃^{-1/2} X(t). Since all eigenvalues of the symmetrically normalized Laplacian are real and non-negative, all eigenvalues of the above ODE are real and non-positive. Suppose all eigenvalues of the normalized Laplacian are non-zero. In this case, the ODE has only negative eigenvalues, hence the ODE above is asymptotically stable. Hence, as time t grows sufficiently large, all trajectories become arbitrarily close. In the experiments, this suggests that if the integration time T is large enough, all nodes (from different classes) will have very similar features, and thus the classification accuracy will drop. To evaluate our method on a general NODE, we conducted experiments with a CNN-NODE on two benchmark image classification tasks (CIFAR10 and CIFAR100). We also evaluated our method on benchmark graph datasets, including two bioinformatics graph classification datasets (MUTAG and PROTEINS), two social network graph classification datasets (IMDB-BINARY, REDDIT-BINARY), and three citation networks (Cora, CiteSeer and PubMed). For graph classification tasks, different from previous experimental settings, we input the raw dataset into our models without pre-processing.
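A simplified sketch of what the derivative f in a GODE could look like, following the message-passing structure of Eqs. 9-11 with mean aggregation and omitting edge features and the exact normalization used by specific GNNs; the Euler loop mirrors the integration of Eq. 2 over the node states.

import torch
import torch.nn as nn

class GraphODEFunc(nn.Module):
    # dz/dt for node states: neighbor messages -> permutation-invariant aggregation -> update.
    def __init__(self, dim):
        super().__init__()
        self.phi = nn.Linear(2 * dim, dim)    # message function (Eq. 9)
        self.gamma = nn.Linear(2 * dim, dim)  # update function (Eq. 11)

    def forward(self, z, edge_index):
        src, dst = edge_index                  # edges src -> dst, shape (2, E)
        msg = self.phi(torch.cat([z[src], z[dst]], dim=1))
        agg = torch.zeros_like(z).index_add_(0, dst, msg)
        deg = torch.zeros(z.size(0), 1).index_add_(0, dst, torch.ones(dst.size(0), 1)).clamp(min=1)
        agg = agg / deg                        # mean aggregation (Eq. 10)
        return self.gamma(torch.cat([z, agg], dim=1))

def gode_forward(func, z0, edge_index, T=1.0, steps=4):
    # Integrate the message-passing derivative; here a few Euler steps stand in for a real solver.
    z, h = z0, T / steps
    for _ in range(steps):
        z = z + h * func(z, edge_index)
    return z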
For node classification tasks, we performed transductive inference and strictly followed the standard train-validation-test split, where less than 6% of the nodes are used as training examples. Details of the datasets are summarized in Appendix A. For image classification tasks, we directly modify a ResNet18 into its corresponding NODE model. For each block, the derivative is dz(t)/dt = f(z(t), t), where f is a sequence of conv-bn-relu-conv-bn-relu layers. f is the same as the residual branch in ResNet, and it can be replaced with any free-form function. For tasks on graph datasets, GODE can be applied to any graph neural network by simply replacing f in Eq. 2 with corresponding structures (free-form functions), or replacing F, G in Eq. 8 with other structures (invertible blocks). To demonstrate that GODE is easily generalized to existing structures, we used several different GNN architectures, including the graph convolutional network (GCN), the graph attention network (GAT) (Veličković et al., 2017), the graph network approximated with Chebyshev expansion (ChebNet), and the graph isomorphism network (GIN). For a fair comparison, we trained GNNs with different depths of layers (1-3 middle layers, besides an initial layer to transform data into the specified channels, and a final layer to generate predictions), and reported the best result among all depths for each model structure. On the same task, different models use the same structural hyper-parameters, such as the channel number. For graph classification tasks, we set the channel number of hidden layers as 32 for all models; for ChebNet, we set the number of hops as 16. For node classification tasks, we set the channel number as 16 for GCN and ChebNet, and set the number of hops as 3 for ChebNet; for GAT, we used 8 heads with 8 channels each. For every GNN structure, we experimented with different numbers of hidden layers, and calculated the mean and variance of accuracy over 10 runs. We compared the adjoint method and direct back-propagation on the same network, and demonstrated that direct back-propagation yields higher accuracy. For CNN-NODE on classification tasks, we directly modify a ResNet18 into NODE18, and report results in Table 1; for graph networks, we train a GODE model with a GCN to parameterize the derivative, and report results in Table 2. Empirical performance: Direct back-propagation consistently outperformed the adjoint method on both tasks. This validates our analysis of the instability of the adjoint method, which is intuitively caused by the instability of the reverse-time ODE. On image classification tasks, compared to the adjoint method, our training method reduces the error rate of NODE18 from 19% (37%) to 5% (23%) on CIFAR10 (CIFAR100). Furthermore, NODE18 has the same number of parameters as ResNet18, but outperforms deeper networks such as ResNet101 on both datasets. Our method also consistently outperforms the adjoint method on several benchmark graph datasets, as shown in Table 2. Robustness to ODE solvers: We implemented adaptive ODE solvers of different orders, as shown in Table 1. HeunEuler, RK23, and RK45 are of order 1, 2, and 4 respectively, i.e., for each step forward in time, f is evaluated 1, 2, or 4 times respectively. During inference, using different solvers is equivalent to changing the model depth (without re-training the network): for discrete-layer models, this generally causes a huge error; for continuous models, we observe only around a 1% increase in error rate.
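For illustration, the conv-bn-relu-conv-bn-relu derivative described above could be built as follows (a sketch; the channel count is a placeholder and the time variable t is ignored, as the derivative here does not condition on t):

import torch.nn as nn

def make_ode_func(channels):
    # Shape-preserving derivative f, the same form as a ResNet residual branch.
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
    )

Each residual update x + f(x) in the discrete ResNet is then replaced by integrating dz/dt = f(z) over [0, T], for example with the ODEBlock sketched earlier.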
Support for free-form functions Our method supports NODE and GODE models with free-form functions; for example, f in NODE18 in Table. 1 is a free-form function. We demonstrate that bijective blocks defined as Eq. 8 can be easily generalized: F and G are general neural networks, which can be adapted to different tasks; ψ(α, β) can be any differentiable bijective mapping w.r.t. α when β is given. We demonstrate two examples of ψ: Results for different ψ are reported in Table 3. Note that we experimented with different depths and reported the best accuracy for each model, and performed a paired t-test on from GODE and their discrete-layer counterparts. Most GODE models outperformed their corresponding discretelayer models significantly, validating the effectiveness of GODE; different ψ functions behaved similarly on our node classification tasks, indicating the continuous-time model is more important than coupling function ψ. We also validate the lower memory cost, with details in appendix B. Results for different models on graph classification tasks are summarized in Table 4. We experimented with different structures, including GCN, ChebNet and GIN; for corresponding GODE models (marked with ODE), we tested both free-form (marked with "free") and invertible block (marked with "INV"). We performed paired t-test comparing GODE and its discrete-layer counterparts. For most experiments, GODE models performed significantly better. This indicates the continuous process model might be important for graph models. For a NODE and GODE model, during inference, we test the influence of integration time. Results are summarized in Table. 5. When integration time is short, the network does not gather sufficient information from neighbors; when integration time is too long, the model is sensitive to over-smooth issue, as discussed in Sec. 4.2. We observe accuracy drop in both cases. We propose GODE, which enables us to model continuous diffusion process on graphs. We propose a memory-efficient direct back-propagation method to accurately determine the gradient for general free-form NODEs, and validate its superior performance on both image classification tasks and graph data. Furthermore, we related the over-smoothing of GNN to asymptotic stability of ODE. Our paper tackles the fundamental problem of gradient estimation for NODE; to our knowledge, it's the first paper to improve accuracy on benchmark tasks to comparable with state-of-the-art discrete layer models. It's an important step to apply NODE from theory to practice. A DATASETS We perform experiments on various datasets, including citation networks (Cora, CiteSeer, PubMed), social networks (COLLAB, IMDB-BINARY, REDDIT-BINARY), and bioinformatics datasets (MUTAG, PROTEINS). Details of each dataset are summarized in Table 1. We explain the structure and conduct experiments for the invertible block here. Structure of invertible blocks Structure of invertible blocks are shown in Fig. 1. We follow the work of with two important modifications: We generalize to a family of bijective blocks with different ψ in Eq. 8 in the main paper, while restrict the form of ψ to be sum. We propose a parameter state checkpoint method, which enables bijective blocks to be called more than once, while still generating accurate inversion. The algorithm is summarized in Algo. 2. We write the pseudo code for forward and backward function as in PyTorch. 
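The following sketch illustrates the kind of bijective coupling block discussed above. It is not the authors' Algorithm 2 pseudocode (which is not reproduced in this text); it assumes a RevNet-style two-stream structure y1 = ψ(x1, F(x2)), y2 = ψ(x2, G(y1)), and since the paper's two concrete ψ examples are elided here, an additive ψ and an affine ψ are shown purely as illustrations. F, G, and the ψ forms are placeholders of ours.

```python
import torch
import torch.nn as nn

# Two illustrative couplings; each is bijective in its first argument when the second is fixed.
def psi_add(alpha, beta):        return alpha + beta
def psi_add_inv(y, beta):        return y - beta
def psi_affine(alpha, beta):     return alpha * torch.exp(0.1 * torch.tanh(beta)) + beta
def psi_affine_inv(y, beta):     return (y - beta) * torch.exp(-0.1 * torch.tanh(beta))

class BijectiveBlock(nn.Module):
    """Assumed RevNet-style coupling: y1 = psi(x1, F(x2)), y2 = psi(x2, G(y1))."""
    def __init__(self, channels, psi, psi_inv):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(channels, channels), nn.Tanh(), nn.Linear(channels, channels))
        self.G = nn.Sequential(nn.Linear(channels, channels), nn.Tanh(), nn.Linear(channels, channels))
        self.psi, self.psi_inv = psi, psi_inv

    def forward(self, x1, x2):
        y1 = self.psi(x1, self.F(x2))
        y2 = self.psi(x2, self.G(y1))
        return y1, y2

    def inverse(self, y1, y2):     # exact reconstruction of the inputs from the outputs
        x2 = self.psi_inv(y2, self.G(y1))
        x1 = self.psi_inv(y1, self.F(x2))
        return x1, x2

# Inversion sanity check for both couplings.
x1, x2 = torch.randn(4, 16), torch.randn(4, 16)
for psi, psi_inv in [(psi_add, psi_add_inv), (psi_affine, psi_affine_inv)]:
    block = BijectiveBlock(16, psi, psi_inv)
    r1, r2 = block.inverse(*block.forward(x1, x2))
    assert torch.allclose(r1, x1, atol=1e-5) and torch.allclose(r2, x2, atol=1e-5)
```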
Note that we use "inversion" to represent reconstructing input from the output, and use "backward" to denote calculation of the gradient. To reduce memory consumption, in the forward function, we only keep the outputs y 1, y 2 and delete all other variables and computation graphs. In the backward function, we first "inverse" the block to calculate x 1, x 2 from y 1, y 2, then perform a local forward and calculate the gradient x1,x2]. In this section we demonstrate that our bijective block is memory efficient. We trained a GODE model with bijective blocks, and compared the memory consumption using our memory-efficient function as in Algo. 2 and a memory-inefficient method as in conventional backpropagation. Results were measured with a batchsize of 100 on MUTAG dataset. Depth Memory-efficient Conventional 10 2.2G 5.3G 20 2.6G 10.5G Table 2: Memory consumption of bijective blocks. "Conventional" represents storing activation of all layers in cache, "memory-efficient" represents our method in Algo. 2. Results are summarized in Table. 2. We measured the memory consumption with different depths, which is the number of ODE blocks. When depth increases from 10 to 20, the memory by conventional methods increases from 5.3G to 10.5G, while our memory-efficient version only increases from 2.2G to 2.6G. In theory, our bijective block takes O memory, because we only need to store the outputs in cache, while deleting activations of middle layers. For memory-efficient network, the slightly increased memory consumption is because states of F, G need to be cached; but this step takes up minimal memory compared to input data. Algorithm 2: Function for memory-efficient bijective blocks delete computation graphs generated by F and G return cache, y1, y2 Backward(cache, y1, y2, F, G, ψ, Proposition 1 For an ODE in the form dz(t) dt = f (z(t), t), denote the Jacobian of f as J f. If this ODE is stable both in forward-time and reverse-time, then Re(λ i (J f)) = 0 ∀i, where λ i (J f) is the ith eigenvalue of J f, and Re(λ) is the real part of λ. Proof Denote s = T − t, where T is the end time. Notice that the reverse-time in t is equivalent to forward-time in s. Therefore, we have forward-time ODE: and reverse-time ODE: Therefore, we have λ(J f) = −λ(J g). For both forward-time and reverse-time ODE to be stable, the eigenvalues of J need to have non-positive real part. Therefore The only solution is D PROOF FOR THEOREM 1 Theorem 1 For bijective block whose forward and reverse mappings are defined as If ψ(α, β) is a bijective function w.r.t α when β is given, then the block is a bijective mapping. Proof To prove the forward mapping is bijective, it is equivalent to prove the mapping is both injective and surjective. Injective We need to prove, if F orward(x 1, x 2) = F orward(x 3, x 4), then x 1 = x 3, x 2 = x 4. The assumption above is equivalent to Since ψ(α, β) is bijective w.r.t α when β is given, from Eq., we have x 1 = x 3. Similarly, condition on x 1 = x 3 and Eq., using bijective property of ψ, we have x 2 = x 4. Therefore, the mapping is injective. Given y 1, y 2, we construct Then for the forward function, given bijective property of ψ, apply F orward and Reverse defined in the proposition statement, z 1 = ψ(x 1, G(y 2)) = ψ ψ −1 y 1, G(y 2), G(y 2) = y 1 Therefore we construct x 1, x 2 s.t. F orward(x 1, x 2) = [y 1, y 2]. Therefore the mapping is surjective. Therefore is bijective. We use a figure to demonstrate the computation graph, and derive the gradient from the computation graph. 
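The memory-saving idea behind Algorithm 2 can be sketched as a custom autograd function (our own simplified re-implementation, not the authors' code): the forward pass stores only the block outputs; the backward pass first inverts the block to recover the inputs, then reruns a local forward under autograd to obtain the gradients. It assumes the `BijectiveBlock` interface sketched above, and parameter gradients are accumulated manually for brevity.

```python
import torch

class MemoryEfficientCoupling(torch.autograd.Function):
    """Stores only (y1, y2); reconstructs (x1, x2) by inversion inside backward."""

    @staticmethod
    def forward(ctx, x1, x2, block):
        ctx.block = block
        with torch.no_grad():                       # no activations of F, G are kept in cache
            y1, y2 = block(x1, x2)
        ctx.save_for_backward(y1, y2)
        return y1, y2

    @staticmethod
    def backward(ctx, grad_y1, grad_y2):
        y1, y2 = ctx.saved_tensors
        block = ctx.block
        with torch.no_grad():                       # "inversion": recover the inputs from the outputs
            x1, x2 = block.inverse(y1, y2)
        x1 = x1.detach().requires_grad_(True)
        x2 = x2.detach().requires_grad_(True)
        with torch.enable_grad():                   # local forward to rebuild the graph
            z1, z2 = block(x1, x2)
        grads = torch.autograd.grad(
            outputs=(z1, z2), inputs=(x1, x2) + tuple(block.parameters()),
            grad_outputs=(grad_y1, grad_y2))
        grad_x1, grad_x2, *param_grads = grads
        for p, g in zip(block.parameters(), param_grads):
            p.grad = g if p.grad is None else p.grad + g   # manual accumulation of parameter grads
        return grad_x1, grad_x2, None

# Usage, with `block` a BijectiveBlock as sketched earlier:
# y1, y2 = MemoryEfficientCoupling.apply(x1, x2, block)
```

As in the measurements above, the cache holds only the block outputs, so memory grows very slowly with the number of stacked blocks, at the price of one extra local forward pass per block during backward.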
The loss is L, forward pass is denoted with black arrows, gradient back-propagation is shown with red arrows. We use p to denote each path from θ to L, corresponding to all paths in red that goes from L to θ. In this section we derive the gradient of parameters in an neural-ODE model from an optimization perspective. Then we extend from continuous cases to discrete cases. Notations With the same notations as in the main paper, we use z(t) to denote hidden states z at time t. Denote parameters as θ, and input as x, target as y, and predicted output asŷ. Denote the loss as J(ŷ, y). Denote the integration time as 0 to T. Problem setup The continuous model is defined to follow an ODE: dz(t) dt = f (z(t), t, θ), s.t. z = x We assume f is differentiable, since f is represented by a neural network in our case. The forward pass is defined as:ŷ The loss function is defined as: J(ŷ, y) = J(z(T), y) We formulate the training process as an optimization problem: For simplicity, Eq. 15 only considers one ODE block. In the case of multiple blocks, z(T) is the input to the next ODE block. As long as we can derive dLoss dθ and dLoss dz when dLoss dz(T) is given, the same analysis here can be applied to the case with a chain of ODE blocks. We use the Lagrangian Multiplier Method to solve the problem defined in Eq. 15. For simplicity, only consider one example (can be easily extended to multiple examples cases), the Lagrangian is Karush-Kuhn-Tucker (KKT) conditions are necessary conditions for an solution to be optimal. In the following sections we start from the KKT condition and derive our . Derivative w.r.t. λ At optimal point, we have δL δλ = 0. Note that λ is a function of t, we derive the derivative from calculus of variation. Consider a cotninuous and differentiable perturbation λ(t) on λ(t), and a scalar, L now becomes a function of, It's easy to check the conditions for Leibniz integral rule, and we can switch integral and differentiation, thus: At optimal λ(t), dL d | =0 = 0 for all continuous differentiable λ(t). Therefore, dz(t) dt − f (z(t), t, θ) = 0, ∀t ∈ (0, T) From continuous to discrete case To derive corresponding in discrete cases, we need to replace all integration with finite sum. In discrete cases, the ODE condition turns into: from Eq. 31, we can get: Re-arranging terms we have: which is the discrete version of Eq. 29. Which also corresponds to our analysis in Eq. 10 and 11.
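For readers following the derivation above despite the missing display equations, the Lagrangian and the conditions it yields can be written out as below. This is our reconstruction of the standard continuous-time adjoint derivation, so the numbering and sign conventions may differ from the original; setting a(t) = -λ(t) recovers the familiar adjoint state with a(T) = ∂J/∂z(T) and dJ/dθ = ∫₀ᵀ a(t)ᵀ ∂f/∂θ dt.

```latex
% Reconstructed sketch of the Lagrangian-multiplier derivation (standard continuous adjoint);
% the signs follow the Lagrangian written in the first line.
\begin{align}
\mathcal{L}(\theta, z, \lambda)
  &= J\big(z(T), y\big)
   + \int_{0}^{T} \lambda(t)^{\top}
     \Big(\frac{dz(t)}{dt} - f\big(z(t), t, \theta\big)\Big)\, dt \\
% Stationarity w.r.t. lambda recovers the ODE constraint:
\frac{dz(t)}{dt} &= f\big(z(t), t, \theta\big), \qquad z(0) = x \\
% Stationarity w.r.t. z (after integration by parts) gives the co-state equation:
\frac{d\lambda(t)^{\top}}{dt}
  &= -\,\lambda(t)^{\top}\,\frac{\partial f\big(z(t), t, \theta\big)}{\partial z},
  \qquad \lambda(T)^{\top} = -\frac{\partial J}{\partial z(T)} \\
% Gradient w.r.t. the parameters:
\frac{d\mathcal{L}}{d\theta}
  &= -\int_{0}^{T} \lambda(t)^{\top}\,
     \frac{\partial f\big(z(t), t, \theta\big)}{\partial \theta}\, dt
\end{align}
```

Replacing the integrals with finite sums over solver steps, as described above, gives the discrete counterparts of these conditions, which is exactly what direct back-propagation through the solver computes.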
Apply an ordinary differential equation model to graph-structured data
1,375
scitldr
Unsupervised image-to-image translation is a recently proposed task of translating an image to a different style or domain given only unpaired image examples at training time. In this paper, we formulate a new task of unsupervised video-to-video translation, which poses its own unique challenges. Translating video implies learning not only the appearance of objects and scenes but also realistic motion and transitions between consecutive frames. We investigate the performance of per-frame video-to-video translation using existing image-to-image translation networks, and propose a spatio-temporal 3D translator as an alternative solution to this problem. We evaluate our 3D method on multiple synthetic datasets, such as moving colorized digits, as well as the realistic segmentation-to-video GTA dataset and a new CT-to-MRI volumetric images translation dataset. Our show that frame-wise translation produces realistic on a single frame level but underperforms significantly on the scale of the whole video compared to our three-dimensional translation approach, which is better able to learn the complex structure of video and motion and continuity of object appearance. Recent work on unsupervised image-to-image translation BID10 BID12 has shown astonishing on tasks like style transfer, aerial photo to map translation, day-to-night photo translation, unsupervised semantic image segmentation and others. Such methods learn from unpaired examples, avoiding tedious data alignment by humans. In this paper, we propose a new task of unsupervised video-to-video translation, i.e. learning a mapping from one video domain to another while preserving high-level semantic information of the original video using large numbers of unpaired videos from both domains. Many computer vision tasks can be formulated as video-to-video translation, e.g., semantic segmentation, video colorization or quality enhancement, or translating between MRI and CT volumetric data (illustrated in FIG0). Moreover, motion-centered tasks such as action recognition and tracking can greatly benefit from the development of robust unsupervised video-to-video translation methods that can be used out-of-the-box for domain adaptation. Since a video can be viewed as a sequence of images, one natural approach is to use an image-toimage translation method on each frame, e.g., applying a state-of-art method such as CycleGAN, CoGAN BID10 or UNIT BID12. Unfortunately, these methods cannot preserve continuity and consistency of a video when applied frame-wise. For example, colorization of an object may have multiple correct solutions for a single input frame, since some objects such as cars can have different colors. Therefore, there is no guarantee that an object would preserve its color if translation is performed on the frame level frame. In this paper, we propose to translate an entire video as a three-dimensional tensor to preserve its cross-frame consistency and spatio-temporal structure. We employ multiple datasets and metrics to evaluate the performance of our proposed video-to-video translation model. Our synthetic datasets include videos of moving digits of different colors and volumetric images of digits imitating medical scans. We also perform more realistic segmentation-to-RGB and colorization experiments on the GTA dataset BID14, and propose a new MRI-to-CT dataset for medical volumetric image translation, which to our knowledge is the first open medical dataset for unsupervised volumeto-volume translation. 
We propose the task of unsupervised video-to-video translation. Left: Results of MR-to-CT translation. Right: moving MNIST digits colorization. Rows show per-frame CycleGAN (2D) and our spatio-temporal extension (3D). Since CycleGAN takes into account information only from the current image, it produces reasonable on the image level but fails to preserve the shape and color of an object throughout the video. Best viewed in color. Figure 2: Results of GTA video colorization show that per-frame translation of videos does not preserve constant colours of objects within the whole sequence. We provide more and videos in the supplementary video: https://bit.ly/2R5aGgo. Best viewed in color. Our extensive experiments show that the proposed 3D convolutional model provides more accurate and stable video-to-video translation compared to framewise translation with various settings. We also investigate how the structure of individual batches affects the training of framewise translation models, and find that structure of a batch is very important for stable translation contrary to an established practice of shuffling training data to avoid overfitting in deep models BID3.To summarize, we make the following main contributions: 1) a new unsupervised video-to-video translation task together with both realistic and synthetic proof-of-concept datasets; 2) a spatiotemporal video translation model based on a 3D convnet that outperforms per-frame methods in Figure 3: Our model consists of two generator networks (F and G) that learn to translate input volumetric images from one domain to another, and two discriminator networks (D A and D B) that aim to distinguish between real and fake inputs. Additional cycle consistency property requires that the of translation to the other domain and back is equal to the input video, DISPLAYFORM0 all experiments, according to human and automatic metrics, and 3) an additional analysis of how performance of per-frame methods depends on the structure of training batches. In recent years, there has been increasing interest in unsupervised image-to-image translation. Aside from producing interesting graphics effects, it enables task-independent domain adaptation and unsupervised learning of per-pixel labels. Many recent translation models BID10 BID12 use the adversarial formulation initially proposed by BID2 as an objective for training generative probabilistic models. The main intuition behind an adversarial approach to domain translation is that the learned cross-domain mapping F: X → Y should translate source samples to fake target samples that are indistinguishable from actual target samples, in the sense that no discriminator from a fixed hypothesis space H should be capable of distinguishing them. Many recent advances in domain translation are due to the introduction of the cycle-consistency idea. Models with a cycle consistency loss aim to learn two mappings F (x) and G(y) such that not only are the translated source samples F (x) indistinguishable from the target, and G(y) are indistinguishable from source, but they are also inverses of each other, i.e. F (G(y)) = y, G(F (x)) = x. We also employ this idea and explore how well it generalizes to video-to-video translation. The cycle-consistency constraints might not restrict semantic changes as explored by BID4. There has been work on combining cycle-consistency with variational autoencoders that share latent space for source and target BID11, which ed in more visually pleasing . 
Both adversarial BID5 and non-adversarial BID1 supervised imageto-image translation models archive much better visual fidelity since samples in source and target datasets are paired, but their use is limited since we rarely have access to such aligned datasets for different domains. Adversarial video generation has also gained much traction over last years with frame-level models based on long short-term memory BID15 as well as spatiotemporal convolutions BID17, especially since adversarial terms seem to better avoid frame blurring BID13 than Euclidian distance. However, none of these works consider learning a conditional generative video model from unpaired data, i.e. cross domain video translation. Here, we propose this problem and a solution based on jointly translating the entire volume of video. We introduce a neural approach for video-to-video translation based on a conditional GAN BID2 ) that treats inputs and outputs as three-dimensional tensors. The network takes a volumetric image (e.g. a video) from domain A and produces a corresponding volume of the same shape in the domain B. The generator module aims to generate realistic volumes, while the discriminator aims to discriminate between the real and generated samples. Similarly to the CycleGAN method, we introduce two generator-discriminator pairs and add a cycle consistency loss ensuring that samples mapped from A to B and back to A are close to the originals. We implement the two generators, F and G, as 3D convolutional networks BID7 ) that follow the architecture described in BID8. The networks consist of three convolutional layers with 3D convolutional filters of shape 3 × 3 × 3, nine resudual blocks and two additional convolutional layers with stride The overall objective of the model consists of the adversarial loss L GAN and the cycle consistency loss L cyc. The adversarial L GAN loss forces both the generator networks to produce realistic videos and the discriminators to distinguish between real and fake samples from the domain in a min-max fashion, whereas L cyc ensures that each sample x ∼ p A translated into domain B and back is equal to the original and vice versa, i.e. G(F (x)) ≈ x (see Fig. 3).The adversarial loss L GAN is a log-likelihood of correct classification between real and synthesized volumes: DISPLAYFORM0 where the generator G is learned in a way that minimizes L GAN, while the discriminator D B aims to maximize it. The cycle consistency loss is the L 1 loss between the input volume and of the reverse translation: DISPLAYFORM1 The total objective can be written as follows: DISPLAYFORM2 Because we employ the cycle loss and the PatchGAN architecture also employed by CycleGAN, we refer to our model as 3D CycleGAN. More generally, we can consider other generator and discriminator implementations within the overall 3D convolutional framework for video-to-video translation. We used CycleGAN trained on randomly selected images (referred to as random CycleGAN) as a baseline method. We also considered two alternative training strategies for training frame-level CycleGAN baselines: CycleGAN trained on consecutive image sequences (sequential CycleGAN) and sequential CycleGAN with additional total variation loss (see Eq. 2) that penalizes radical change in the generated image sequence (const-loss CycleGAN). We compared the performance of these baselines with our approach that operates on three-dimensional inputs (3D CycleGAN).Random CycleGAN. 
The first strategy for training a CycleGAN is taking as an input 2D images selected randomly from image sequences available for training, which is the standard approach in deep learning. Data shuffling is known to reduce overfitting and speeds up the learning process BID3.Sequential CycleGAN. Since the order of frames is essential in sequential image data, we investigated the case when images are given to a 2D CycleGAN sequentially during the training phase (see Figure 4 : We compare three ways of forming batches used during training of a CycleGAN : (a) random frames from multiple videos, (b) sequential frames from a single video or (c) single 3D tensor consisting of consecutive frames from a single video. Contrary to the conventional wisdom, our experiments suggest that additional randomness in the batch structure induced in case (a) hurts the performance and convergence of the ing translation model. Fig. 4). In contrast to our expectation and the conventional wisdom, sequential CycleGAN often outperformed random CycleGAN in terms of image continuity and frame-wise translation quality. Const-loss CycleGAN. Since CycleGAN performs translation on the frame-level, it is not able to use information from previous frames and, as a , produces some undesired motion artifacts, such as too rapid change of object shape in consecutive frames, change of color or disappearing objects. To alleviate this problem, we tried to force the frame-level model to generate more consistent image sequences by directly adding a total variation penalty term to the loss function, as follows: DISPLAYFORM0 5 EXPERIMENTS Intuitively, translation models that operate on individual frames can not preserve continuity along the time dimension, which may in radical and inconsistent changes in shape, color and texture. In order to show this empirically, we used the GTA segmentation dataset for unsupervised segmentation-to-video translation. Since this task is very challenging even for still images, we created three more datasets that give more insight into pros and cons of different models. MRCT dataset. First, we evaluated the performance of our method on an MR (magnetic resonance) to CT (computed tomography) volumetric image translation task. We collected 225 volumetric MR images from LGG-1p19qDeletion dataset BID0 and 234 CT volumetric images from Head-Neck-PET-CT dataset BID16. Both datasets are licensed with Creative Commons Attribution 3.0 Unported License. Since images from these datasets represent different parts of the body, we chose parts of volume where body regions represented in the images overlap: from superciliary arches to lower jaw. Images of both modalities were manually cropped and resized to 30 × 256 × 256 shape. The final dataset is available for download on the website [TDB].Volumetric MNIST. Volumetric MNIST dataset was created using MNIST handwritten digits database BID9. From each image from MNIST we created a volumetric image imitating 3d scan domain using erosion transformation of two types, we called them "spherical" and "sandglass" domains (see FIG2). The task is to capture and translate the global intensity pattern over time while preserving image content (digit). The ing image volumes of shape 30 × 84 × 84 were used to train the models to transform digits of spherical type to sandglass type and vice versa. Colorization of moving MNIST digits. 
To test models' ability to preserve local information about the color of object, inspired by the Moving MNIST dataset introduced in BID15, we generated a dataset with moving digits of different colors. We used this dataset to train the models to translate from the original moving white digits to the same moving digits in color. GTA segmentation dataset. The Playing for Benchmarks dataset BID15 is a large collection of GTA gameplay recordings with a rich set of available annotations, and currently it is one of the default datasets for evaluation of image-to-image translation methods. Using the daylight driving and walking subsets 1 of this dataset we generated 1216 short clips of shape 30 × 192 × 108 and corresponding ground truth segmentation videos. Since this paper is focused on translation of Figure 5: Results of unsupervised GTA video-to-segmentation translation with different models. We observed that frame-level methods diverged significantly more frequently than 3D model (top). No information about ground truth pairs was used during training. Frame-wise translation (2D) produces plausible images, but does not preserve temporal consistency. Forming batches from consecutive frames during training (2D sequence) helps in reducing spatio-temporal artifacts and improves convergence. Additional penalty term on consecutive generated frames (const loss) further reduces motion artifacts at the expense of diversity of generated images. Our proposed 3D convolutional model (3D) produces outputs that are coherent in time, but have fewer details because network has to approximate a higher dimensional mapping (video-to-video instead of frame-to-frame) using same number of learnable weights. with global decay-then-rise and rise-then-decay intensity patterns respectively, during models were presented with pairs of videos containing different digits. Our experiments show that frame-level approaches are not able to learn this spatio-temporal pattern and hence cannot perform correct translation whereas our 3D method performs almost perfectly. Both sequence and sequence+const approaches were able to capture temporal pattern but did not learn shape correspondence.dense image sequences with a lot of inter-frame relations, we used the walking subset of this dataset for evaluation as it has slower camera movements and therefore higher effective frame rate. We performed experiments on all datasets mentioned above with four approaches: random, sequential, sequential+const and 3D CycleGAN. For each dataset we used the same 70% of data for training and 30% for performance evaluation. For datasets that contain ground truth pairs (GTA, volumetric MNIST, colored 3D MNIST) samples from both domains were split into train and test sets independently, i.e. often no corresponding pairs of images were present in the training data. We set the number of parameters to be the same for all models (∼ 90M). We also trained the models with number of learnable parameters proportional to the input size (∼ 200M for 3D model). Visual fidelity of the video generated by larger models did not improve, whereas segmentation quality increased significantly. We also report the performance of a large (∼ 200M) random 2D model for reference. For MRCT and GTA segmentation-to-RGB tasks, we ran human evaluation on amazon mechanical turk since these is no "gold" translation to compare with TAB3. 
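Stepping back to the objectives defined in Section 3, whose display equations did not survive extraction, the sketch below spells out how such losses are typically computed from the textual descriptions above: a log-likelihood adversarial term, an L1 cycle term, and the const-loss total-variation penalty on consecutive generated frames. The generator/discriminator names, the assumption that discriminators output probabilities, and the weighting constants `lam_cyc` and `lam_tv` are placeholders of ours, not the paper's exact values.

```python
import torch
import torch.nn.functional as F

def gan_log_likelihood(disc, real_b, fake_b):
    """Log-likelihood of correctly classifying real vs. synthesized volumes (the L_GAN term).
    disc(.) is assumed to output probabilities (e.g. after a sigmoid);
    the discriminator maximises this quantity while the generator minimises it."""
    return (torch.log(disc(real_b) + 1e-8).mean()
            + torch.log(1.0 - disc(fake_b) + 1e-8).mean())

def cycle_l1(gen_ab, gen_ba, real_a, real_b):
    """L1 distance between inputs and the round-trip translations (the L_cyc term)."""
    return (F.l1_loss(gen_ba(gen_ab(real_a)), real_a)
            + F.l1_loss(gen_ab(gen_ba(real_b)), real_b))

def temporal_tv(fake_video):
    """Const-loss variant: penalise radical change between consecutive generated frames.
    fake_video layout: [batch, channels, time, height, width], as used by 3D convolutions."""
    return (fake_video[:, :, 1:] - fake_video[:, :, :-1]).abs().mean()

def total_objective(gen_ab, gen_ba, disc_a, disc_b, real_a, real_b,
                    lam_cyc=10.0, lam_tv=1.0):        # weighting constants are placeholders
    fake_b, fake_a = gen_ab(real_a), gen_ba(real_b)
    return (gan_log_likelihood(disc_b, real_b, fake_b)
            + gan_log_likelihood(disc_a, real_a, fake_a)
            + lam_cyc * cycle_l1(gen_ab, gen_ba, real_a, real_b)
            + lam_tv * (temporal_tv(fake_b) + temporal_tv(fake_a)))
```

For the frame-wise baselines the same terms apply per frame; for the 3D CycleGAN the generators and discriminators are 3D convolutional networks operating on whole clips, so the cycle and adversarial terms are computed over the full spatio-temporal tensor.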
In a series of randomized trials, participants were presented with multiple real samples from each domain and then were asked to choose the more realistic one of the outputs of two different models for same input. We also estimated the probability of choosing a video generated by each model over a real one, but only report these numbers for the MRCT domain pair since for segmentation-to-RGB they were below significance level. To help evaluate significance of differences in probabilities, we report a bootstrap estimate of the standard deviation of reported probabilities. For some domain pairs we actually have definitive ground truth answers. For rgb-to-segmentation translation we evaluated segmentation pixel accuracy and L 2 distance between stochastic matrices of pixel class transitions between frames for generated and ground truth segmentation videos with different label denoising levels TAB4. For volumetric MNIST dataset we also computed L 2 error TAB1 ). Since there is no single correct colorization in the colorization of moving digits task, we evaluated average standard deviation of non- colors within each generated image. Models are expected to colorize digits without changing their shapes, therefore we also evaluate L 2 error between original and translated shapes (FIG2) and ground truth videos (last row). Volumetric MNIST. Our experiments on volumetric MNIST show that standard CycleGAN is not able to capture the global motion patterns in the image sequence (see FIG2). Since videos from domain B have the same frames as videos from domain B but in different order, a model with random batches cannot this temporal pattern and outputs a slightly dimmed input. In contrast, 3D CycleGAN was able to learn the transformation almost perfectly. In contrast, sequential models learned the global phase pattern properly, but were unable to generate correct shapes. DISPLAYFORM0 The experiment on colorization of MNIST videos showed that the "random" model is able to colorize individual frames but cannot preserve the color throughout the whole sequence. The choice of batch selection, however, is important: the sequential and const-loss models learned to preserve the same color throughout the sequence even though they did not have access to previous frames. However, we should mention that all models that succeeded in this task collapsed to colorizing digits with a single (blue-green) color even though the training data had 20 different colors. The GTA segmentation-to-video translation with the 3D model produced smoother and more consistent videos compared to the framewise methods which produced undesirable artifacts such as shadow flickering and rapid deformation or disappearance of objects and road marking. Both sequence methods often moved static objects like road marking with camera. One of the drawbacks of the 3D model is that it does not tolerate rapid changes in the input and hence cannot deal with low frame-rate videos. The additional constraint on total variation ed in better visual fidelity and smoothness, but leaned towards rendering all objects of same semantic class using same texture, which reduces the variability of the outputs and fine details, but at the same time reduces the amount of spatio-temporal artifacts. 
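One of the automatic metrics used above, the L2 distance between stochastic matrices of pixel-class transitions between frames, can be computed roughly as follows; this is our reading of the metric's description, not the authors' evaluation script.

```python
import numpy as np

def class_transition_matrix(seg_video, num_classes):
    """seg_video: integer label maps of shape [time, height, width].
    Returns a row-stochastic matrix M where M[i, j] estimates
    P(pixel has class j at frame t+1 | class i at frame t)."""
    counts = np.zeros((num_classes, num_classes), dtype=np.float64)
    for prev, nxt in zip(seg_video[:-1], seg_video[1:]):
        np.add.at(counts, (prev.ravel(), nxt.ravel()), 1.0)
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.clip(row_sums, 1.0, None)     # rows with no pixels stay all-zero

def transition_l2_distance(generated_seg, ground_truth_seg, num_classes):
    m_gen = class_transition_matrix(generated_seg, num_classes)
    m_gt = class_transition_matrix(ground_truth_seg, num_classes)
    return float(np.linalg.norm(m_gen - m_gt))
```

Intuitively, flickering labels in a frame-wise translation inflate the off-diagonal transition probabilities, so the distance to the ground-truth transition matrix grows even when per-frame pixel accuracy is high.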
The qualitative results of GTA video colorization confirm that the spatio-temporal model produces more consistent and stable colorization (see FIG0). The experiments on MRI-to-CT translation showed that all per-frame translation methods produce image volumes that do not capture the real anatomy (e.g. the shape of the skull, nasal path and eyes vary significantly within neighboring frames), whereas the proposed 3D method gives continuous and generally more realistic results for both CT-MRI and GTA segmentation-to-video tasks TAB3. The CT-to-MRI task is harder since it requires "hallucinating" a lot of fine details, and on this task the 3D model outperformed the random model by a significant margin (bold numbers). On the simpler MRI-to-CT task, the random and 3D models performed similarly within the limits of statistical error. In contrast to common practice, the sequential batch approach produced more realistic and continuous results than the random batch choice. Presumably this is because images within a sequence are more similar than randomly selected images, and hence the magnitude of the sum of gradients might be higher, resulting in faster convergence. Of course, the order of frames within a sequential batch does not matter, since all gradients are summed during the backward pass, but the similarity between images within a batch is important. We proposed a new computer vision task of unsupervised video-to-video translation as well as datasets, metrics and multiple baselines: multiple approaches to framewise translation using image-to-image CycleGAN and its spatio-temporal extension, 3D CycleGAN. The results of exhaustive experiments show that per-frame approaches cannot capture essential properties of videos, such as global motion patterns and the shape and texture consistency of translated objects. However, contrary to previous practice, sequential batch selection helps to reduce motion artifacts.
Proposed new task, datasets and baselines; 3D Conv CycleGAN preserves object properties across frames; batch structure in frame-level methods matters.
1,376
scitldr
Capsule networks are constrained by the parameter-expensive nature of their layers, and the general lack of provable equivariance guarantees. We present a variation of capsule networks that aims to remedy this. We identify that learning all pair-wise part-whole relationships between capsules of successive layers is inefficient. Further, we also realise that the choice of prediction networks and the routing mechanism are both key to equivariance. Based on these, we propose an alternative framework for capsule networks that learns to projectively encode the manifold of pose-variations, termed the space-of-variation (SOV), for every capsule-type of each layer. This is done using a trainable, equivariant function defined over a grid of group-transformations. Thus, the prediction-phase of routing involves projection into the SOV of a deeper capsule using the corresponding function. As a specific instantiation of this idea, and also in order to reap the benefits of increased parameter-sharing, we use type-homogeneous group-equivariant convolutions of shallower capsules in this phase. We also introduce an equivariant routing mechanism based on degree-centrality. We show that this particular instance of our general model is equivariant, and hence preserves the compositional representation of an input under transformations. We conduct several experiments on standard object-classification datasets that showcase the increased transformation-robustness, as well as general performance, of our model to several capsule baselines. The hierarchical component-structure of visual objects motivates their description as instances of class-dependent spatial grammars. The production-rules of such grammars specify this structure by laying out valid type-combinations for components of an object, their inter-geometry, as well as the behaviour of these with respect to transformations on the input. A system that aims to truly understand a visual scene must accurately learn such grammars for all constituent objects -in effect, learning their aggregational structures. One means of doing so is to have the internal representation of a model serve as a component-parsing of an input across several semantic resolutions. Further, in order to mimic latent compositionalities in objects, such a representation must be reflective of detected strengths of possible spatial relationships. A natural structure for such a representation is a parse-tree whose nodes denote components, and whose weighted parent-child edges denote the strengths of detected aggregational relationships. Capsule networks , are a family of deep neural networks that aim to build such distributed, spatially-aware representations in a multi-class setting. Each layer of a capsule network represents and detects instances of a set of components (of a visual scene) at a particular semantic resolution. It does this by using vector-valued activations, termed'capsules'. Each capsule is meant to be interpreted as being representative of a set of generalised pose-coordinates for a visual object. Each layer consists of capsules of several types that may be instantiated at all spatial locations depending on the nature of the image. Thus, given an image, a capsule network provides a description of its components at various'levels' of semantics. 
In order that this distributed representation across layers be an accurate component-parsing of a visual scene, and capture meaningful and inherent spatial relationships, deeper capsules are constructed from shallower capsules using a mechanism that combines backpropagation-based learning, and consensus-based heuristics. Briefly, the mechanism of creating deeper capsules from a set of shallower capsules is as follows. Each deeper capsule of a particular type receives a set of predictions for its pose from a local pool of shallower capsules. This happens by using a set of trainable neural networks that the shallower capsules are given as input into. These networks can be interpreted as aiming to capture possible part-whole relationships between the corresponding deeper and shallower capsules. The predictions thus obtained are then combined in a manner that ensures that the reflects agreement among them. This is so that capsules are activated only when their component-capsules are in the right spatial relationship to form an instance of the object-type it represents. The agreement-based aggregation described just now is termed'routing'. Multiple routing algorithms exist, for example dynamic routing , EM-routing , SVD-based routing , and routing based on a clustering-like objective function . Based on their explicit learning of compositional structures, capsule networks can be seen as an alternative (to CNNs) for better learning of compositional representations. Indeed, CNN-based models do not have an inherent mechanism to explicitly learn or use spatial relationships in a visual scene. Further, the common use of layers that enforce local transformation-invariance, such as pooling, further limit their ability to accurately detect compositional structures by allowing for relaxations in otherwise strict spatial relations . Thus, despite some manner of hierarchical learning -as seen in their layers capturing simpler to more complex features as a function of depth -CNNs do not form the ideal representational model we seek. It is our belief that capsule-based models may serve us better in this regard. This much said, research in capsule networks is still in its infancy, and several issues have to be overcome before capsule networks can become universally applicable like CNNs. We focus on two of these that we consider as fundamental to building better capsule network models. First, most capsule-network models, in their current form, do not scale well to deep architectures. A significant factor is the fact that all pair-wise relationships between capsules of two layers (upto a local pool) are explicitly modelled by a unique neural network. Thus, for a'convolutional capsule' layer -the number of trainable neural networks depends on the product of the spatial extent of the windowing and the product of the number of capsule-types of each the two layers. We argue that this design is not only expensive, but also inefficient. Given two successive capsule-layers, not all pairs of capsule-types have significant relationships. This is due to them either representing object-components that are part of different classes, or being just incompatible in compositional structures. The consequences of this inefficiency go beyond poor scalability. For example, due to the large number of prediction-networks in this design, only simple functions -often just matrices -are used to model part-whole relationships. 
While building deep capsule networks, such a linear inductive bias can be inaccurate in layers where complex objects are represented. Thus, for the purpose of building deeper architectures, as well as more expressive layers, this inefficiency in the prediction phase must be handled. The second issue with capsule networks is more theoretical, but nonetheless has implications in practice. This is the lack, in general, of theoretical guarantees on equivariance. Most capsule networks only use intuitive heuristics to learn transformation-robust spatial relations among components. This is acceptable, but not ideal. A capsule network model that can detect compositionalities in a provablyinvariant manner are more useful, and more in line with the basic motivations for capsules. Both of the above issues are remedied in the following description of our model. First, instead of learning pair-wise relationships among capsules, we learn to projectively encode a description of each capsule-type for every layer. This we do by associating each capsule-type with a vector-valued function, given by a trainable neural network. This network assumes the role of the prediction mechanism in capsule networks. We interpret the role of this network as a means of encoding the manifold of legal pose-variations for its associated capsule-type. It is expected that, given proper training, shallower capsules that have no relationship with a particular capsule-type will project themselves to a vector of low activation (for example, 2-norm), when input to the corresponding network. As an aside, it is this mechanism that gives the name to our model. We term this manifold the'space-of-variation' of a capsule-type. Since, we attempt to learn such spaces at each layer, we name our model'space-of-variation' networks (SOVNET). In this design, the number of trainable networks for a given layer depend on the number of capsule-types of that layer. As mentioned earlier, the choice of prediction networks and routing algorithm is important to having guarantees on learning transformation-invariant compositional relationships. Thus, in order to ensure equivariance, which we show is sufficient for the above, we use group-equivariant convolutions (GCNN) in the prediction phase. Thus, shallower capsules of a fixed type are input to a GCNN associated with a deeper capsule-type to obtain predictions for it. Apart from ensuring equivariance to transformations, GCNNs also allow for greater parameter-sharing (across a set of transformations), ing in greater awareness of local object-structures. We argue that this could potentially improve the quality of predictions when compared to isolated predictions made by convolutional capsule layers, such as those of . The last contribution of this paper is an equivariant degree-centrality based routing algorithm. The main idea of this method is to treat each prediction for a capsule as a vertex of a graph, whose weighted edges are given by a similarity measure on the predictions themselves. Our method uses the softmaxed values of the degree scores of the affinity matrix of this graph as a set of weights for aggregating predictions. The key idea being that predictions that agree with a majority of other predictions for the same capsule get a larger weight -following the principle of routing-by-agreement. While this method is only heuristic in the sense of optimality, it is provably equivariant and preserves the capsule-decomposition of an input. 
We summarise the contributions of this paper in the following: 1. A general framework for a scalable capsule-network model. A particular instantiation of this model that uses equivariant convolutions, and an equivariant, degree-centrality-based routing algorithm. 3. A graph-based framework for studying the representation of a capsule network, and the proof of the sufficiency of equivariance for the (qualified) preservation of this representation under transformations of the input. 4. A set of proof-of-concept, evaluative experiments on affinely transformed variations of MNIST, FASHIONMNIST, and CIFAR10, as well as separate experiments on KMNIST and SVHN that showcase the superior adapatability of SOVNET architectures to train and test-time geometric perturbations of the data, as well as their general performance. We begin with essential definitions for a layer of SOVNET, and the properties we wish to guarantee. Given a group (G, •), we formally describe the l th layer of a SOVNET architecture as the set of We model each capsule-type as a function over a group of transformations so as to allow for formal guarantees on transformation-equivariance. Thus, we also model images as function from a group to a representation-space. The main assumption being that the translation-group is a subgroup of the group in question. This is similar in approach to . We wish for each capsule-type, both pose and activation-wise, to display equivariance. We present a formal definition of this notion. Consider a group (G, •) and vector spaces V, W. Let T and T be two group-representations for elements of G over V and W, respectively. Φ: V → W is said to be equivariant with respect to T and This definition translates to a preservation on transformations in the input-space to the output-space -something that allows no loss of information in compositional structures. As in , we restrict the notion of equivariance in our model by using the operator L g in place of the group-representation. The operator ⊗ describes the change in representation space, and is dependent on the nature of the deep learning model. In the case of capsule networks (and SOVNET), this change is given by routing among capsules as described in subsection 2.1. We define the capsule-types of a particular layer as an output of an agreement-based aggregation of predictions made by the preceding layer. A recursive application of this definition is enough to define a SOVNET architecture, given an initial set of capsules. A means of obtaining this initial set is given in section 3. We provide a general framework for the summation-based family of routing procedures in Algorithm 1. Algorithm 1 A general summation-based routing algorithm for SOVNET. The weighted-sum family of routing algorithms builds deeper capsules using a weighted sum of predictions made for them by shallower capsules. To ensure that the predictions are combined in a meaningful manner, different methods can be used to obtain the weights. The role of the function GetW eights is to represent any such mechanism. The activation of a capsule, representative of the probability of existence of the object it represents, is determined by the extent of the consensus among its predictions. This is based on the routing-by-agreement principle of capsule networks. The Agreement function represents any means of evaluating such consensus. We instantiate the above algorithm to a specific model, as given in Algorithm 2. 
In this model, the Ψ l j are group-equivariant convolutional filters, and the operator · is the corresponding groupequivariant correlation operator. The weights c l+1 ij (g) are, in this routing method, the softmaxed degree-scores of the affinities among predictions for the same deeper capsule. Further, like in dynamic routing , we also assume that the activation of a capsule is given by its 2-norm. To ensure that this value is in, we use the'squash' function of dynamic routing. Thus, we do not mention it explicitly. Note that we have used the subscript notation to also denote that a variable is part of a vector, for example S l+1 ijp (g) denotes the p th element of the This new routing algorithm is meant to serve as an alternative to existing iterative routing strategies such as dynamic routing. An important strength of our method being that there is no hyperparameter, like that of the number of iterations in dynamic routing or EM routing. The SOVNET layer we introduced in Algorithm 2 is group-equivariant with respect to the group action L g, where g ∈ G -the set of transformations over which the group-convolution is defined. For notational convenience, we define ⊗ to be an operator that encapsulates the degree-routing procedure with prediction networks Ψ l+1 j. Thus, the j th capsule-type of the l + 1 th layer is functionally depicted. The formal statement of this is given below; the proof is presented in the appendix. Theorem 2.1. The SOVNET layer defined in Algorithm 2, and denoted by the operator ⊗ as given above, satisfies, where g belongs to the underlying group of the equivariant convolution. Proof. The proof is given in the appendix. Algorithm 2 The degree-centrality based routing algorithm for SOVNET. Equivariance is widely considered a desirable inductive bias for a variety of reasons. First, equivariance mirrors natural label-invariance under transformations. Second, it lends predictability to the output of a network under (fixed) transformations of the input. These, of course, lead to a greater robustness in handling transformations of the data. We aim at adding to this list by showing that equivariance guarantees the preservation of detected compositionalities in a SOVNET architecture. This is of course quite unsurprising, and has been a significant undercurrent of the capsule-network idea. Our work completes this intuition with a formal . We begin by first defining the notion of a capsule-decomposition graph. This graph is formed from the activations and the routing weights of a SOVNET. Specifically, given an input to a SOVNET model, each capsule of every type is a vertex in this graph. We construct an edge between capsules that are connected by routing, with the direction from the shallower capsule to the deeper capsule. Each of these edges are weighted by the corresponding routing coefficient. Capsules not related to each other by routing are not connected by an edge. This graph is a direct formalisation of the various detected compositionalities with their strengths. What should the ideal behaviour of this graph be under the change-of-viewpoint of an input? The answer to this lies in the expected behaviour of natural compositionalities. Thus, while the pose of objects, and their components, is changed under transformations of the input, the relative geometry is constant. Thus, it is desirable that the capsule-decomposition graphs of a particular input (and its transformed variations) be isomorphic to each other. 
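As a concrete illustration of the degree-centrality routing in Algorithm 2 described above, the sketch below processes one spatial/group position: each shallower capsule type contributes one prediction per deeper capsule type, the pairwise agreements between predictions form an affinity matrix, the softmaxed degree scores of that matrix weight the predictions, and the weighted sum is squashed. Plain linear prediction maps stand in here for the p4 group-equivariant convolutions of the actual model, and since the exact affinity measure is not specified in this text, a dot product is assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(v, dim=-1, eps=1e-8):
    """Dynamic-routing squash: keeps direction, maps the 2-norm into [0, 1)."""
    norm_sq = (v ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * v / torch.sqrt(norm_sq + eps)

class DegreeRoutingLayer(nn.Module):
    """One prediction network per deeper capsule type (a linear map standing in for the
    group-equivariant convolution), followed by degree-centrality routing."""
    def __init__(self, in_types, in_dim, out_types, out_dim):
        super().__init__()
        self.predict = nn.ModuleList(nn.Linear(in_dim, out_dim) for _ in range(out_types))

    def forward(self, caps_in):                      # caps_in: [batch, in_types, in_dim]
        out = []
        for psi_j in self.predict:
            preds = psi_j(caps_in)                   # predictions S_ij: [batch, in_types, out_dim]
            affinity = torch.einsum('bid,bjd->bij', preds, preds)  # assumed dot-product affinity
            degree = affinity.sum(dim=-1)            # degree score of each prediction
            weights = F.softmax(degree, dim=-1)      # routing coefficients c_ij
            out.append(squash((weights.unsqueeze(-1) * preds).sum(dim=1)))
        return torch.stack(out, dim=1)               # [batch, out_types, out_dim]

# Toy usage.
layer = DegreeRoutingLayer(in_types=8, in_dim=16, out_types=10, out_dim=16)
deeper = layer(torch.randn(4, 8, 16))
activations = deeper.norm(dim=-1)                    # capsule activation = 2-norm
```

Because the weights are computed in one shot from the affinity matrix, there is no iteration count to tune, in contrast to dynamic or EM routing.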
We show that a SOVNET model that is equivariant with respect to a set of transformations satisfies the above property for that set. A more formal description of the capsule-decomposition graph, and the statement for the above theorem are given below. Consider an L-layer SOVNET model, whose routing procedure belongs to the family of methods given by Algorithm 1. Let us consider a fixed input x: G → R c. We define the capsule-decomposition graph of such a model, for this input x, as G(x) = (V (x), E(x)). Here, V (x) and E(x) denote the vertex-set and the edge-set, respectively. denotes the pool of grid-positions at layer l that route to the deeper capsule of type j of layer l + 1 at g 2. A more formal definition is given the appendix. We also use the notation L hf Theorem 2.2. Consider an L-layer SOVNET whose activations are routed according to a procedure belonging to the family given by Algorithm 1. Further, assume that this routing procedure is equivariant with respect to the group G. Then, given an input x and ∀g ∈ G, are isomorphic. Proof. The proof is given in the appendix. Based on above theorem, and the fact that degree-centrality based routing is equivariant, the above applies to SOVNET models that use Algorithm 2. This section presents a description of the experiments we performed. We conducted two sets of experiments; the first to compare SOVNET architectures to other capsule network baselines with respect to transformation robustness on classification, and the second to compare SOVNET to certain capsule as well as convolutional baselines based on classification performance. Before we present the details of these experiments, we briefly describe some details of the SOVNET architecture we used. We only present an outline -the complete details, both architecture-wise and about the training, can be found in the anonymised github repository https://github.com/sairaamVenkatraman/ SOVNET. The first detail of the architecture pertains to the construction of the first layer of capsules. While many approaches are possible, we used the following methodology that is similar in spirit to other capsule network models. The first layer of the SOVNET architectures we constructed use a modified residual block that uses the SELU activation, along with group-equivariant convolutions. This is so as to allow a meaningful set of equivariant feature maps to be used for the creation of the first set of capsules. Intuition and some literature, for example , suggest that the construction of primary capsules plays a significant role in the performance of the capsule network. Thus, it is necessary to build a sufficiently expressive layer that yields the first set of meaningful capsule-activations. To this end, each capsule-type in the primary capsule layer is associated with a group-convolution layer followed by a modified residual block. The convolutional feature-maps from the preceding layer passes through each of these sub-networks to yield the primary capsules. No routing is performed in this layer. We now describe the SOVNET blocks. Since the design of SOVNET significantly reduces the number of prediction networks, and thereby the number of trainable parameters, we are able to build architectures whose each layer uses more expressive prediction mechanisms than a simple matrix. Specifically, each hidden layer of the SOVNET architectures we consider uses a (group-equivariant) modified residual block as the prediction mechanism. 
We use a SOVNET architecture that uses 5 hidden layers for MNIST, FashionMNIST, KMNIST, and SVHN, and a model that uses 6 hidden layers for CIFAR-10. Unlike DeepCaps -another capsule network whose predictions use (regular) convolution, each of the hidden layers of our SOVNET models use degree-routing. The hidden layers of DeepCaps (excepting the last), in contrast, are not strictly capsule-based -being just convolutions whose outputs are reshaped to a capsule-form. The output capsule-layer of SOVNET is designed similar to the hidden capsule-layers, with the difference that the prediction-mechanism is a group-convolutional implementation of a fullyconnected layer. In order to make a prediction for the class of an input, the maximum across the rotational (and reflectional) positions of the two-norm of the capsule-activations of this layer are taken for each class-type. This is an equivariant operation, as it corresponds to the subgroup-pooling of. The predictions that this layer yields is the type of the capsule with the maximum 2-norm. In order to guarantee the robustness to translations and rotations, we used the p4-convolutions for the prediction mechanism in all the networks used in the first set of experiments. For the second set, we used the p4m-convolution , that is equivariant to rotations, translations and reflections -for greater ability to learn from augmentations. The architectures, however are identical but for this difference. As in , we used a margin loss and a regularising reconstruction loss to train the networks. The positive and negative margins for half of the training epochs were set to 0.9 and 0.1, respectively. Further, the negative margin-loss was weighted by 0.5, as in . These values were used for the first half of the training epochs. In order to facilitate better predictions, these values were changed to 0.95, 0.05, and 0.8, respectively for the second half of the training. We adopt this from. The reconstruction loss was computed by masking the incorrect classes, and by feeding the'true' class-capsule to a series of transposed convolutions to reconstruct the image. The mean square loss was computed for the reconstruction and original image. The main idea being that this loss guides the capsule network to build meaningful capsules. This loss was weighed by 0.0005 as in . We used the Adam optimiser and an exponential learning rate scheduler that reduced the learning rate by a factor of 0.9 each epoch. With this outline of the architecture and details of the training, we now describe the first set of experiments we conducted on SOVNET. The preservation of detected compositionalities under transformations in SOVNET leads us to the expectation that SOVNET models, when properly trained, will display greater robustness to changes in viewpoint of the input. Apart from handling test-time transformations, as is the commonly held notion of transformation robustness, a robust model must also effectively learn from train-time perturbations of the data. Based on these ideas, we designed a set of experiments that compare SOVNET architectures to other capsule networks on their ability to handle train and test-time affine transformations of the data. Specifically, we perform experiments on MNIST , FashionMNIST , and CIFAR-10 . For each of these datasets, we created 5 variations of the train and test-splits by randomly transforming data according to the extents of the transformations given in Table 1. 
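Returning to the training objective described earlier in this section, the margin-loss schedule (margins 0.9/0.1 with the negative term weighted by 0.5 for the first half of training, then 0.95/0.05/0.8) can be written out as below. The functional form follows the standard capsule-network margin loss that the text indicates is being used; the masked reconstruction term, weighted by 0.0005, is only indicated in a comment.

```python
import torch

def margin_loss(class_norms, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """class_norms: [batch, num_classes] capsule 2-norms; targets: [batch] class indices."""
    one_hot = torch.zeros_like(class_norms).scatter_(1, targets.unsqueeze(1), 1.0)
    pos = one_hot * torch.clamp(m_pos - class_norms, min=0.0) ** 2
    neg = lam * (1.0 - one_hot) * torch.clamp(class_norms - m_neg, min=0.0) ** 2
    return (pos + neg).sum(dim=1).mean()

def margins_for_epoch(epoch, total_epochs):
    """First half of training: (0.9, 0.1, 0.5); second half: (0.95, 0.05, 0.8)."""
    return (0.9, 0.1, 0.5) if epoch < total_epochs // 2 else (0.95, 0.05, 0.8)

# Usage inside the training loop:
# m_pos, m_neg, lam = margins_for_epoch(epoch, total_epochs)
# loss = margin_loss(norms, labels, m_pos, m_neg, lam) + 0.0005 * reconstruction_mse
```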
We train a given model on each transformed version of the training-split, and test each model on each of the versions of the test-split. Thus we obtain, for a single model, 25 accuracies per dataset -each corresponding to a pair of train and test-splits. There is a single modification to these transformations for the case of CIFAR-10. In order to compare SOVNET against the closest competitor DeepCaps, we use their strategy of first resizing CIFAR-10 images to 64×64, followed by translations and rotations. We tested SOVNET against four capsule network baselines, namely Capsnet , EMcaps , DeepCaps, and GCaps . The of these experiments are given in Tables 2 to 4. In the majority of the cases, SOVNET obtains the highest accuracy -showing that it is more robust to transformations of the data. Note that we had to conduct these experiments as such a robustness study was not done in the original papers for the baselines. We used, and modified, code from the following github sources for the implementation of the baselines: for CAPSNET; for EMCAPS; and for DeepCaps, and for GCaps. We also tested against a group-equivariant convolution network (GCNN). The second set of experiments we conducted, tested SOVNET against several capsule as well as convolutional baselines. We trained and tested SOVNET on KMNIST and SVHN . With fairly standard augmentation -mild translations (and resizing for SVHN to 64×64) -the SOVNET architecture with p4m-convolutions was able to achieve on-par, or above, comparative performance. The of this experiment are in Table 5. In order to compare the performance of SOVNET architectures against more sophisticated CNN-baselines, we also trained ResNet-18, ResNet-34 on the most extreme transformation -translation by up to ± 2 pixels, and rotation by up to ± 180°. The of these experiments are presented in the appendix. A number of insights can be drawn from an observation of the accuracies obtained from the experiments. First, the most obvious, is that SOVNET is significantly more robust to train and test-time geometric transformations of the input. Indeed, SOVNET learns to use even extreme transformations of the training data and generalises better to test-time transformations in a majority of the cases. However, in certain splits, some baselines perform better than SOVNET. These cases are briefly discussed below. On the CIFAR-10 experiments, DeepCaps performs significantly better than SOVNET on the untransformed case -generalising to test-time transformations better. However, SOVNET learns from train-time transformations better than DeepCaps -outperforming it in a large majority of the other cases. We hypothesize that the first observation is due to the increased (almost double) number of parameters of DeepCaps that allows it to learn features that generalise better to transformations. Further, as p4-convolutions (the prediction-mechanisms used) are equivariant only to rotations in multiples of 90°, its performance is significantly lower for test-time transformations of 30°and 60°for the untransformed case. However, the equivariance of SOVNET allows it to learn better from train-time geometric transforms than DeepCaps, explaining the second observation. The second case is that GCaps outperforms SOVNET on generalising to extreme transformations on (mainly) MNIST, and once on FashionMNIST, under mild train-time conditions. However, it is unable to sustain this under more extreme train-time perturbations. 
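The transformed train/test splits can be produced with standard torchvision affine transforms. The snippet below uses the most extreme setting quoted above (translations of up to ±2 pixels and rotations of up to ±180°) as an example; the remaining levels from Table 1 are not reproduced here, and dataset and path details are placeholders.

```python
import torchvision.transforms as T
from torchvision import datasets

image_size = 28                               # MNIST / FashionMNIST; CIFAR-10 is resized to 64 in the paper
max_translate_px, max_rotate_deg = 2, 180     # the "most extreme" level quoted above

perturb = T.Compose([
    T.RandomAffine(degrees=max_rotate_deg,
                   translate=(max_translate_px / image_size,) * 2),  # translate is a fraction of size
    T.ToTensor(),
])

train_split = datasets.MNIST(root='./data', train=True, download=True, transform=perturb)
test_split = datasets.MNIST(root='./data', train=False, download=True, transform=perturb)
# Repeating this for each (train level, test level) pair yields the 5x5 grid of 25 accuracies per dataset.
```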
We infer that this is caused largely by the explicit geometric parameterisation of capsules in G-Caps. While under mild-tomoderate train-time conditions, and on simple datasets, this approach could yield better , this parameterisation, especially with very simple prediction-mechanisms, can prove detrimental. Thus, the convolutional nature of the prediction-mechanisms, which can capture more complex features, and also the greater depth of SOVNET allows it to learn better from more complex training scenarios. This makes the case for deeper models with more expressive and equivariant prediction-mechanisms. A related point of interest is that G-Caps performs very poorly on the CIFAR-10 dataset -achieving the least accuracy on most cases on this dataset -despite provable guarantees on equivariance. We argue that this is significantly due to the nature of the capsules of this model itself. In GCaps, each capsule is explicitly modelled as an element of a Lie group. Thus, capsules capture exclusively geometric information, and use only this information for routing. In contrast, other capsule models have no such parameterisation. In the case of CIFAR-10, where non-geometric features such as texture are important, we see that purely spatio-geometric based routing is not effective. This observation allows us to make a more general hypothesis that could deal with the fundamentals of capsule networks. We propose a trade-off in capsule networks, based on the notion of equivariance. To appreciate this, some is necessary on both equivariance and capsule networks. As the body of literature concerning equivariance is quite vast, we only mention a relevant selection of papers. Equivariance can be seen as a desirable, if not fundamental, inductive bias for neural networks used in computer vision. Indeed, the fact that AlexNet automatically learns representation that are equivariant to flips, rotation and scaling shows the importance of equivariance as well as its natural necessity . Thus, a neural network model that can formally guarantee this property is essential. An early work in this regard is the group-equivariant convolution proposed in . There, the authors proposed a generalisation of the 2-D spatial convolution operation to act on a general group of symmetry transforms -increasing the parameter-sharing and, thereby, improving performance. Since then, several other models exhibiting equivariance to certain groups of transformations have been proposed, for example (b), where a spherical correlation operator that exhibits rotationequivariance was introduced; , where a network equivariant to rotation and scale, but invariant to translations was presented, and , where a model equivariant to translations and 3D right-angled rotations was developed. A general theory of equivariant CNNs was developed in (a). In their paper, they show that convolutions with equivariant kernels are the most general class of equivariant maps between feature spaces. A fundamental issue with group-equivariant convolutional networks is the fact that the grid the convolution works with increases exponentially with the type of the transformations considered. This was pointed out in ; capsules were proposed as an efficient alternative. In a general capsule network model, each capsule is supposed to represent the pose-coordinates of an object-component. Thus, to increase the scope of equivariance, only a linear increase in the dimension of each capsule is necessary. 
This was however not formalised in most capsule architectures, which focused on other aspects such as routing , , ; general architecture, (Deliège et al., 2018), , , ,; or application. It was only in group-equivariant capsules that this idea of efficient equivariance was formalised. Indeed, in that paper, equivariance changed from preserving the action of a group on a vector space to preserving the group-transformation on an element. While such models scale well to larger transformation groups in the sense of preserving equivariance guarantees, we argue that they cannot efficiently handle compositionalities that involve more than spatial geometry. The direct use of capsules as geometric pose-coordinates could lead to exponential representational inefficiencies in the number of capsules. This is the tradeoff we referred to. We do not attempt a formalisation of this, and instead make the observation given next. While SOVNET (using GCNNs) lacks in transformational efficiency, the use of convolutions allows it to capture non-geometric structures well. Further, SOVNET still retains the advantage of learning compositional structures better than CNN models due to the use of routing, placing it in a favourable position between two extremes. We presented a scalable, equivariant model for capsule networks that uses group-equivariant convolutions and degree-centrality routing. We proved that the model preserves detected compositionalities under transformations. We presented the of experiments on affine variations of various classification datasets, and showed that our model performs better than several capsule network baselines. A second set of experiments showed that our model performs comparably to convolutional baselines on two other datasets. We also discussed a possible tradeoff between efficiency in the transformational sense and efficiency in the representation of non-geometric compositional relations. As future work, we aim at understanding the role of the routing algorithm in the optimality of the capsule-decomposition graph, and various other properties of interest based on it. We also note that SOVNET allows other equivariant prediction mechanisms -each of which could in a wider application of SOVNET to different domains. A tuple (G, •), where G is a non-empty set and • defines a binary operation on G, is said to form a group if the following properties are satisfied: Existence of the identity element: Existence of an inverse: A.2 GROUP ACTION AND GROUP REPRESENTATION Given a group (G, •) and a vector space V, a group action is a function f: G × V → V satisfying the following properties. A group representation is a group action by invertible linear maps. More formally, a group representation of a group (G, •) with respect to a vector space V is a homomorphism from G to GL(V) -the set of linear, invertible maps from V to V. Consider a one-layer GCNN-convolutional prediction network Ψ l+1 j for a SOVNET layer l + 1, and for the d l+1 -dimensional j th capsule-type. Intuitively, P ool l+1 j (g) is defined by the extent of the support of the g-transformed filter Ψ l+1 j. More formally, For a general L-layer GCNN prediction-network, P ool l+1 j (g) is defined by recursively applying the above definition through all the layers of the prediction network. The 2-norm of a vector x = (x 0, ..., x n−1) T ∈ R n, and denoted by x 2, is defined as We present proofs for the theorems mentioned in the main body. Theorem B.1. 
The SOVNET layer defined in Algorithm 2, and denoted by the operator ⊗ as given above, satisfies, where g belongs to the underlying group of the equivariant convolution. Proof. For the theorem to be true, we must show that each step of Algorithm 2 is equivariant. We do this step-wise. The predictions S l+1 ij made in the first step are group-equivariant. This follows from the fact that . We now show that the DegreeScore procedure is equivariant. We see that Degree From the equivariance of, Moreover, the 2-norm of an equivariant map is also equivariant -from the equivariance of the post-composition of non-linearities over equivariant maps . Also, the division of two (non-zero) equivariant maps is also equivariant. Thus, obtaining the degree-scores is equivariant. Again, the softmax function preserves the equivariance as it is a point-wise non-linearity. The proof is concluded by pointing out that the product and sum of equivariant maps is also equivariant. Theorem B.2. Consider an L-layer SOVNET whose activations are routed according to a procedure belonging to the family given by Algorithm 1. Further, assume that this routing procedure is equivariant with respect to the group G. Then, given an input x and ∀g ∈ G, G(x) and G([L g x]) are isomorphic. Proof. Consider a fixed L-layer SOVNET that is equivariant to transformations from a group G, and an input x: G → R c. Let G(x) be the capsule-decomposition graph corresponding to x. Then G(L h x) denotes the the capsule-decomposition graph of the transformed input L h x. We show that the mapf. This is from the definition of the vertex set of a capsule-decomposition graph and the fact that the map g → h −1 • g is a bijection. We now show that (f First, let us assume (f However, due to the assumed equivariance of the model,f The converse of this is proved in the same way by considering, and applying the above to E(L h x) and E(L h −1 L h x). We performed two experiments to verify that the capsule decomposition-graphs of the transformed and untransformed images are isomorphic. For the first of these, we trained a p4-convolution based SOVNET architecture on untransformed images of MNIST and FashionMNIST. We then considered four variations of the two test-datasetsuntransformed, and three versions rotated exactly by multiples of 90 degrees: 90, 180, and 270. Our experiment verifies that the mapping defined in the proof of Theorem 2.2 is indeed an isomorphism. To this end, we considered the capsule-activations as well as the degree-scores, obtained across all the capsule-layers, for each image of all the variations of the test split of the corresponding dataset. We then mapped the activations and the degree-scores for the untransformed images by the aforesaid mapping for each of the transformations. This corresponds to'rotating' the activations and degree-scores by each transformation. We then computed the squared error of these with each of the activations and degree-scores obtained from the correspondingly transformed image, respectively. A successful verification would in zero error (up to machine precision). The in Table 6 show that this happens. The second of our experiments is an empirical verification that the test-accuracies remain unchanged under transformations for which SOVNET exhibits equivariance. We use the same trained architecture as above, and verify that the accuracy remains unchanged under exact transformations of the images. The are presented in Table 7. 
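A minimal sketch of the isomorphism check described in the preceding paragraphs; it assumes a p4 feature layout with an explicit rotation axis and a hypothetical capsule_activations accessor on the trained model, so it illustrates the procedure rather than reproducing the authors' code.

```python
import torch

def rotate_p4_features(feats, k):
    """'Rotate' p4 feature maps by k * 90 degrees: rotate spatially and cyclically
    shift the rotation channels.  feats is assumed to have shape
    (batch, capsule_types, capsule_dim, 4, H, W); the exact layout and the sign of
    the channel shift depend on the implementation."""
    rotated = torch.rot90(feats, k, dims=(-2, -1))
    return torch.roll(rotated, shifts=k, dims=3)

def equivariance_error(model, images, k):
    """Squared error between the transformed activations of the original images and
    the activations of the exactly rotated images; this should be ~0 (up to machine
    precision) for a p4-equivariant SOVNET."""
    with torch.no_grad():
        acts_orig = model.capsule_activations(images)  # hypothetical accessor
        acts_rot = model.capsule_activations(torch.rot90(images, k, dims=(-2, -1)))
    return ((rotate_p4_features(acts_orig, k) - acts_rot) ** 2).sum().item()
```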
The accuracies presented in Table 7 are reported only for the purpose of verifying the isomorphism of the capsule-decomposition graphs; as expected, they are identical across the exact rotations of the test sets (for example, 77.19% for each of the four rotation settings on one of the datasets). C.2 ON TESTING ON UNSEEN TRANSFORMS: AFFNIST. We trained a SOVNET architecture on MNIST images that are padded to size 40x40, the size of AFFNIST images. We augment these images by translation, as is the standard approach. Note that the changed size of the images necessitates a different architecture. The results of this experiment are given in Table 8. We see that our SOVNET architecture obtains the highest accuracy when compared to other recent capsule network models (97.01%, against 79.0% to 93.1% for the baselines listed in Table 8). We also trained the above SOVNET architecture on MNIST with translations in the range of [-6,6] pixels and rotations from [-30,30] degrees. While this increases the extent of train-time augmentation, there are several test-time transformations that remain unseen. With this scheme, we achieve a state-of-the-art accuracy of 99.20%. This improves over the best accuracy known to us, 98.3%, obtained in prior work. We also considered an implementation of the CapsNet model. Unlike the original, which uses one prediction-network per connection between capsules, this model uses one prediction-network per class-capsule. The results of this model on augmented versions of MNIST and FashionMNIST are presented in Table 9, alongside the corresponding accuracies of CapsNet. We have also trained a SOVNET architecture on CIFAR-100. Our model achieved an accuracy of 71.55%, an almost 4 percentage-point improvement over a recent capsule network model, STAR-CAPS, which achieved 67.66%. In order to compare SOVNET with more sophisticated CNN models, we performed a limited set of experiments on MNIST and FashionMNIST. We trained ResNet-18 and ResNet-34 on the train splits of MNIST and FashionMNIST transformed by random translations of up to ±2 pixels and random rotations of up to ±180°. The models were tested on various transformed versions of the test splits. The results of these experiments are given in Table 10. As can be seen in the table, SOVNET is comparable to the two much deeper CNN models. More testing on more complex datasets, as well as deeper SOVNET models, must however be done to obtain a better understanding of the relative performance of these two kinds of models. Consider Algorithm 1, which is given below for convenience. The role of the GetWeights and Agreement procedures is to evaluate the relative importance of predictions for a deeper capsule, and the extent of consensus among them, respectively. The second of these is interpreted as a measure of the activation of the corresponding deeper capsule. A formalisation of these concepts into a general framework, even for summation-based routing, that covers all possible notions of relative importance and consensus is not within the scope of this paper. Indeed, to the best of our knowledge, such a formalisation has not been successfully completed. Thus, instead of a formal description of a general routing procedure, we provide examples to better understand the role of these two functions. We first explain GetWeights, and then Agreement. Algorithm: A general weighted-summation routing algorithm for SOVNET. The first example of GetWeights we provide is the proposed degree-centrality based routing. The algorithm is given below, again. In this case, GetWeights is instantiated by the DegreeScore procedure, which assigns weights to predictions based on their normalised degree-centrality scores.
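The sketch below gives one plausible reading of the DegreeScore procedure for a single deeper capsule-type, using dot-product agreement, normalisation, and a softmax as outlined in the surrounding text; the tensor layout and the exact agreement measure are assumptions, and the group and spatial axes are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def degree_score_routing(predictions):
    """Schematic degree-centrality routing for one deeper capsule-type.

    predictions: tensor of shape (num_shallower_types, batch, capsule_dim) holding
    the votes for a fixed deeper capsule (spatial/group axes omitted)."""
    votes = predictions.permute(1, 0, 2)                   # (batch, n, dim)
    agreement = torch.bmm(votes, votes.transpose(1, 2))    # pairwise dot products
    degree = agreement.sum(dim=2) - agreement.diagonal(dim1=1, dim2=2)  # exclude self
    # Normalise the degree scores and convert them to routing weights.
    weights = F.softmax(degree / (degree.norm(dim=1, keepdim=True) + 1e-8), dim=1)
    # Deeper capsule activation: weighted sum of the votes.
    return (weights.unsqueeze(-1) * votes).sum(dim=1)      # (batch, dim)
```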
Thus, a prediction that agrees with a significant number of its peers obtains a higher importance than one that does not. This scheme follows the principle of routing-by-agreement, which aims to activate a deeper capsule only when its predicting shallower, component capsules are in an acceptable spatial configuration. The above form of the summation-based routing procedure generalises to several existing routing algorithms. As an example, we present the original dynamic routing algorithm. It differs from our proposed algorithm in that it is an "attention-based", rather than an "agreement-based", routing algorithm. That is, the relative importance of a prediction with respect to a fixed deeper capsule is not a direct measure of the extent of its consensus with its peers, but rather a measure of the relative attention it offers to the deeper capsule. Thus, the weight associated with a prediction for a fixed deeper capsule by a fixed shallower capsule depends on the other deeper capsules. In order to accommodate such methods into a general procedure, we modify our formalism by having GetWeights take all the predictions as parameters and return all the routing weights. This modified general procedure is given in Algorithm 5. Consider the dynamic routing algorithm, given in Algorithm 6, modified to our notation and to the use of group-equivariant convolutions. The procedure DynamicRouting is the instantiation of GetWeights. Note that the weights c_ij(g) depend on the routing weights for the other deeper capsules. Due to the formulation of capsules in our paper, as in the original capsule-network formulation, we use the 2-norm of a capsule to denote its activation. Thus, our degree-centrality based procedure, and also dynamic routing, do not use a separate value for this. However, examples of algorithms that use a separate activation value exist; for example, spectral routing computes the activation score from the sigmoid of the first singular value of the matrix of stacked predictions. Our theoretical results and algorithms admit a generalisation to other groups, as long as an appropriate group-convolution is defined. Equivariance and the preservation of detected compositionality hold under the condition that the group-convolution is equivariant. As an example, consider the discrete translation group $\mathbb{Z}^2$ and the regular correlation operation defined for an input with $d$ channels by $(f \star \Psi)(x) = \sum_{t \in \mathbb{Z}^2} \sum_{k=0}^{d-1} f_k(t)\,\Psi_k(x - t)$. The translation-equivariance of this operation is proved in prior work. The general $n$-dimensional correlation defined on $\mathbb{Z}^n$ is given by $(f \star \Psi)(x) = \sum_{t \in \mathbb{Z}^n} \sum_{k=0}^{d-1} f_k(t)\,\Psi_k(x - t)$. This operation is equivariant to translations in $n$ dimensions. The proof is given below. Theorem E.1. The $n$-dimensional correlation operator is equivariant with respect to $\mathbb{Z}^n$ and the group representation $L$, where $[L_y f](x) = f(x - y)$. Proof. Consider $x, y, t \in \mathbb{Z}^n$ and $f, \Psi: \mathbb{Z}^n \to \mathbb{R}^d$. Then $[(L_y f) \star \Psi](x) = \sum_{t \in \mathbb{Z}^n} \sum_{k=0}^{d-1} f_k(t - y)\,\Psi_k(x - t) = \sum_{t' \in \mathbb{Z}^n} \sum_{k=0}^{d-1} f_k(t')\,\Psi_k((x - y) - t') = (f \star \Psi)(x - y) = [L_y (f \star \Psi)](x)$, with the substitution $t' = t - y$; this is the required equivariance. Our degree-centrality based algorithm, with its use of discrete convolutions, can be used in its current form with the above convolution. The proof of equivariance and the preservation of compositionality holds from a direct application of the above to Theorem 2.1 and Theorem 2.2, using $\mathbb{Z}^n$ as the underlying group. For continuous groups such as SO(n), the degree-centrality based algorithm must use equivariant convolutions defined over the group to remain equivariant. We consider the specific case of SO below.
The correlation of two functions f, Ψ: SO → R d is given by: This correlation is equivariant to transformations in SO, with respect to the group representation L R defined by [L r f (Q)] = f (R −1 Q), as proved in (b). It is to be noted that due to approximations introduced by the sampling of continuous functions in implementations, exact equivariance is not preserved. However, our routing algorithm can still be used with such convolutions and does not contribute to any reduction of equivariance by itself. This is due to the equivariance of the dot-product and the post-composition operators. The equivariance of the post-composition operator was proved in . We formally prove the equivariance of dot-product for the SO group. Theorem E.2. The dot-product between two equivariant functions f, g: SO → R d is equivariant with respect to the group representation L. That is, The proof for the preservation of compositionality also holds by considering the infinite graph G(x). The definition for this is the same as before. The proof follows by using the same mapping between vertices, and from the equivariance of the routing procedure.
A new scalable, group-equivariant model for capsule networks that preserves compositionality under transformations and is empirically more robust to transformations than older capsule network models.
1,377
scitldr
Recently deep neural networks have shown their capacity to memorize training data, even with noisy labels, which hurts generalization performance. To mitigate this issue, we propose a simple but effective method that is robust to noisy labels, even with severe noise. Our objective involves a variance regularization term that implicitly penalizes the Jacobian norm of the neural network on the whole training set (including the noisy-labeled data), which encourages generalization and prevents overfitting to the corrupted labels. Experiments on noisy benchmarks demonstrate that our approach achieves state-of-the-art performance with a high tolerance to severe noise. Recently deep neural networks (DNNs) have achieved remarkable performance on many tasks, such as speech recognition, image classification, and object detection. However, DNNs usually need a large-scale training dataset to generalize well. Such large-scale datasets can be collected by crowd-sourcing, web crawling and machine generation at a relatively low cost, but the labeling may contain errors. Recent studies reveal that mislabeled examples hurt generalization. Even worse, DNNs can memorize training data with completely randomly flipped labels, which indicates that DNNs are prone to overfit noisy training data. Therefore, it is crucial to develop algorithms that are robust to various amounts of label noise and still obtain good generalization. To address the degraded generalization of training with noisy labels, one direct approach is to reweight training examples, which is related to curriculum learning. The general idea is to assign higher weights to examples with a high chance of being correct. However, there are two major limitations of existing methods. First, imagine an ideal weighting mechanism: it will only focus on the selected clean examples, and for the incorrectly labeled samples the weights should be near zero. If a dataset suffers from 80% noise corruption, an ideal weighting mechanism assigns nonzero weights to only 20% of the examples and abandons the information in the remaining 80%. This leads to an insufficient usage of the training data. Second, previous methods usually need some prior knowledge of the noise ratio or the availability of an additional clean, unbiased validation dataset, but it is usually impractical to get this extra information in real applications. Another approach is correction-based, estimating the noise corruption matrix and correcting the labels. However, it is often difficult to estimate the underlying noise corruption matrix when the number of classes is large. Further, there may not be an underlying ground-truth corruption process but rather an open set of noisy labels in the real world. Although many complex approaches have been proposed to deal with label noise, we find that a simple yet effective baseline can achieve surprisingly good performance compared to the strong competing methods. In this paper, we first analyze the conditions for good generalization: a model with a simpler hypothesis and smoother decision boundaries can generalize better. Then we propose a new algorithm which can satisfy these conditions and take advantage of the whole dataset, including the noisy examples, to improve generalization.
Our main contributions are:• We build a connection between the generalization of models trained with noisy labels and the smoothness of solutions, which is related to the subspace dimensionality.• We propose a novel approach for training with noisy labels, which greatly mitigates overfitting. Our method is simple yet effective and can be applied to any neural network architecture. Additional knowledge on the clean validation dataset is not required.• A thorough empirical evaluation on various datasets (CIFAR-10, CIFAR-100) is conducted and demonstrates a significant improvement over the competing strong baselines. In this section, we briefly introduce some notations and settings in learning with noisy labels. The target is to learn a robust K-class classifier f from a training dataset of images with noisy supervision. Let D = {(x 1,ỹ 1),..., (x N,ỹ N)} denote a training dataset, where x n ∈ X is the n-th image in sample space X (e.g., R d) with its corresponding noisy labelỹ n ∈ {1, 2, ..., K}. The label noise is often assumed to be class-conditional noise in previous work; , where the label y is flipped toỹ ∈ Y with some probability p(ỹ|y). It means that p(ỹ|x, y) = p(ỹ|y),, the corruption of labels is independent of the input x. This kind of assumption is an abstract approximation to the real-world corruption process. For example, non-expert labelers may fail to distinguish some specific species. The probability p(ỹ|y) is represented by a noise transition matrix DISPLAYFORM0 In this section, we present a new robust training algorithm to deal with noisy labels. We argue that a model with lower complexity is more robust to label noise and generalizes well. The dimensionality of the learned subspace and the smoothness of decision boundaries can both indicate how complex the model is. Therefore, we propose a method to regularize the predictive variance to achieve low subspace dimensionality and smoothness, respectively. In order to alleviate over-fitting to the label noise, we propose a regularizer that is not dependent on the labels. We induce the smoothness of decision boundaries along the data manifold, which is shown to improve the generalization and robustness. If an example x is incorrectly labeled withỹ, it has a high probability to lie near the decision boundary or in the wrong cluster not belonging to y. Therefore, the prediction variance can be high on the noisy examples. We propose to regularize the variance term. The mapping function is smoothed and thus also the decision boundaries. Concretely, the variance is estimated by the difference of predictions under perturbations ξ and ξ including the input noise like Gaussian noise and stochastic data augmentation, as well as the network noise like dropout: DISPLAYFORM0 We can show that R V (θ) is an unbiased estimation of the predictive variance if the perturbations are treated as a part of the model uncertainty. Relation to the generalization of DNNs. We show that this regularization helps to learn a lowdimensional feature space that captures the underlying data distribution. The variance term implicitly estimates the Jacobian norm,, the Frobenius norm of the Jacobian of the network output w.r.t. the inputs: J(x) F. A simplified version is to assume ξ is sampled from a Gaussian distribution, i.e., ξ, ξ ∼ N (0, σ 2 I) and the perturbation is small and additive,,x = x + ξ where σ is near zero. 
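In practice, the estimator above amounts to comparing two stochastic forward passes over the same batch. A minimal PyTorch-style sketch follows; the choice of augmentation plus dropout as the perturbation source matches the description above, while comparing softmax outputs (rather than logits or features) is an assumption.

```python
import torch
import torch.nn.functional as F

def variance_regularizer(model, x, augment):
    """Monte-Carlo estimate of the predictive-variance term R_V(theta): the squared
    difference between two perturbed predictions of the same inputs, where the
    randomness comes from stochastic augmentation and dropout.  `augment` is any
    stochastic transform (e.g., random crop/flip or additive Gaussian noise)."""
    model.train()  # keep dropout active so that the two passes differ
    p1 = F.softmax(model(augment(x)), dim=1)
    p2 = F.softmax(model(augment(x)), dim=1)
    return ((p1 - p2) ** 2).sum(dim=1).mean()

def total_loss(model, x, y, augment, lam):
    """Label-dependent loss (here cross-entropy) plus the label-free variance term."""
    ce = F.cross_entropy(model(augment(x)), y)
    return ce + lam * variance_regularizer(model, x, augment)
```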
DISPLAYFORM1 By first-order Taylor expansion, and let J(x) = ∂f ∂x DISPLAYFORM2 and omitting the high-order terms, we have DISPLAYFORM3 If we further take expectation over N samples of x i, we get DISPLAYFORM4 It can be proved that this is an unbiased estimator. For perturbations of natural images, similar analysis applies. It was shown in Sokolić et al.; that the Jacobian norm is related to the generalization performance both theoretically and empirically. Perturbations on the data manifold can be approximated by stochastic data augmentation. Similar objectives have been explored in semi-supervised learning Laine & Aila FORMULA1. It restricts the solution to be of some specific form, which is equivalent to imposing some prior knowledge of the model structure. The regularizer serves as an inductive bias on the structure of the feature space. By reducing the variance of predictions, the neural network is encouraged to learn a low-dimensional feature space where the training examples are far from the decision boundaries and tend to cluster together. This alleviates the possibility of the model to increase its complexity to fit the noisy labels. Therefore, the learning objective is simply DISPLAYFORM5 where the first term is any loss function including the cross-entropy loss or previously proposed noise-robust losses. In Section 4, we show empirically that the objective can learn a model with low subspace dimensionality and low hypothesis complexity. In this section, we present both quantitative and qualitative to demonstrate the effectiveness of our method. Our method is independent of both the architecture and the dataset. We first provide on the widely adopted benchmarks, CIFAR-10 and CIFAR-100. Results on ImageNet and WebVision will be provided in Sec. 5.5 and Sec., where performance relative to the standard clean settings can be observed. We fix the hyper-parameter λ = 300 in all the experiments for CIFAR-10 and λ = 3000 for CIFAR-100.In all the experiments, our method achieves significantly better resistance to label noise from moderate to severe levels. In particular, our approach attains a 13.31% error rate on CIFAR-10 with a noise fraction of 80%, down from the previous best 32.08%. Using the same network architecture WRN-28-10 as ours and 1000 clean validation images, learning to reweight Ren et al. FORMULA1 achieves 38.66% test error on CIFAR-100 with 40% noise while our method achieves a better 25.73% even without any knowledge on the clean validation images. FIG5 plot the test accuracy against the number of epochs on the two datasets. We provide a simple baseline -CCE, standing for categorical cross-entropy loss that treats all the noisy training examples as clean and trains a WRN-28-10. We can see that the baseline tends to over-fit the label noise at the later stage of training while our method does not suffer from the incorrect training signal. We propose a simple but effective algorithm for robust deep learning with noisy labels. Our method builds upon a variance regularizer that prevents the model from overfitting to the corrupted labels. Extensive experiments given in the paper show that the generalization performance of DNNs trained with corrupted labels can be improved significantly using our method, which can serve as a strong baseline for deep learning with noisy labels. Learning with noisy labels has been broadly studied in previous work, both theoretically Natarajan et al. 
FORMULA1, that is to minimize D f θ (w(x, y)p(x, y), q(x, y)) where D is some distance measure implicitly learned by f θ and w(x, y) is the density ratio, the learned weights for each example (x, y).Using additional clean validation dataset. proposed a regularization term to encourage the model to select reliable examples. proposed Golden Loss Correction to use a set of trusted clean data to mitigate the effects of label noise. They estimate the corruption matrix using the trained network with noisy labels and then re-train the network corrected by the corruption matrix. also used a small clean validation dataset to determine the weights of training examples. The success of these methods is based on the assumption that clean data is from the same distribution as the corrupted data as well as the test data. However, more realistic scenario are ones where p(x) varies between the clean data and the noisy data, e.g., imbalanced datasets. 2) There is class mismatch: p(y|x) differs. Similar problems exist in semi-supervised learning. All these methods require a clean validation dataset to work well while the proposed method does not require it. We also plot label precision against number of epochs in Figure 4. Here we treat the 1 − η ratio of the training examples with minimal training losses as the clean examples. The label precision is computed as the portion of true clean examples among them. The ideal algorithm without over-fitting will have 100% label precision. The higher the label precision is, the better robustness the model achieves. Figure 4 demonstrates that our method obtains a higher label precision. A more realistic and more challenging noise type than the uniform noise is to corrupt between the semantically similar classes. For CIFAR-10, the class-dependent asymmetric noise is simulated by mapping TRUCK → AUTOMOBILE, BIRD → AIRPLANE, DEER → For CIFAR-100, class dependent noise is simulated by flipping each class into the next class with probability η. The last class is flipped to the first class circularly,, the transition matrix has 1 − η on the diagonal and η off the diagonal: DISPLAYFORM0 DISPLAYFORM1 Results are presented in TAB2. We compare to a range of competing loss-correction methods whose are taken from and our baseline trained with only CCE. We use the same hyper-parameter λ = 300 among all the experiments for CIFAR-10 and λ = 3000 for CIFAR-100. Note that Forward T is the forward correction using the ground-truth noise transition matrix, whose are almost perfect. Our method does not use any ground-truth knowledge of the noise corruption process. We can see that our method is robust to all the settings and is less influenced by the variations of noise types. The test accuracy along the training process on CIFAR-100 is also plotted in Figure 2. We assess the sensitivity of our algorithm with respect to the hyper-parameter λ and the are plotted in Figure 5. We can see that the performance of our method remains stable across a wide range of hyper-parameter choices. We visualize the embeddings of our algorithm on test data. FIG2 shows the representations h(x) ∈ R 128 projected to 2 dimension using t-. Figure 6: t-SNE 2D embeddings of the test dataset on CIFAR-10 trained with 60% uniform label noise. Each color represents a class. Our method learns a more separable feature space.
The paper proposed a simple yet effective baseline for learning with noisy labels.
1,378
scitldr
Recent research suggests that neural machine translation achieves parity with professional human translation on the WMT Chinese--English news translation task. We empirically test this claim with alternative evaluation protocols, contrasting the evaluation of single sentences and entire documents. In a pairwise ranking experiment, human raters assessing adequacy and fluency show a stronger preference for human over machine translation when evaluating documents as compared to isolated sentences. Our findings emphasise the need to shift towards document-level evaluation as machine translation improves to the degree that errors which are hard or impossible to spot at the sentence level become decisive in discriminating the quality of different translation outputs. Neural machine translation (BID10; BID0) has become the de-facto standard in machine translation, outperforming earlier phrase-based approaches in many data settings and shared translation tasks (BID7; BID9). Some recent results suggest that neural machine translation "approaches the accuracy achieved by average bilingual human translators [on some test sets]", or even that its "translation quality is at human parity when compared to professional human translators". Claims of human parity in machine translation are certainly extraordinary, and require extraordinary evidence. Hassan et al. have released their data publicly to allow external validation of their claims. Their claims are further strengthened by the fact that they follow best practices in the human evaluation of machine translation, using evaluation protocols and tools that are also used at the yearly Conference on Machine Translation (WMT) BID2, and take great care in guarding against some confounds such as test set selection and rater inconsistency. However, the implications of a statistical tie between two machine translation systems in a shared translation task are less severe than those of a statistical tie between a machine translation system and a professional human translator, so we consider the results worthy of further scrutiny. We perform an independent evaluation of the professional translation and the best machine translation system that were found to be of equal quality by Hassan et al. Our main interest lies in the evaluation protocol, and we empirically investigate whether the lack of document-level context could explain the inability of human raters to find a quality difference between human and machine translations. We test the following hypothesis: A professional translator who is asked to rank the quality of two candidate translations on the document level will prefer a professional human translation over a machine translation. Note that our hypothesis is slightly different from the one tested by Hassan et al., which could be phrased as follows: A bilingual crowd worker who is asked to directly assess the quality of candidate translations on the sentence level will prefer a professional human translation over a machine translation. As such, our evaluation is not a direct replication of theirs, and a failure to reproduce their findings does not imply an error on either our or their part. Rather, we hope to indirectly assess the accuracy of different evaluation protocols. Our underlying assumption is that professional human translation is still superior to neural machine translation, but that the sensitivity of human raters to these quality differences depends on the evaluation protocol.
Machine translation is typically evaluated by comparing system outputs to source texts, reference translations, other system outputs, or a combination thereof (for examples, see BID1 . The scientific community concentrates on two aspects: adequacy, typically assessed by bilinguals; and target language fluency, typically assessed by monolinguals. Evaluation protocols have been subject to controversy for decades (e. g., BID12, and we identify three aspects with particular relevance to assessing human parity: granularity of measurement (ordinal vs. interval scales), raters (experts vs. crowd workers), and experimental unit (sentence vs. document). Granularity of show that ranking (Which of these translations is better?) leads to better inter-rater agreement than absolute judgement on 5-point Likert scales (How good is this translation?) but gives no insight about how much a candidate translation differs from a (presumably perfect) reference. To this end, suggest the use of continuous scales for direct assessment of translation quality. Implemented as a slider between 0 (Not at all) and 100 (Perfectly), their method yields scores on a 100-point interval scale in practice (b BID2, with each raters' rating being standardised to increase homogeneity. use source-based direct assessment to avoid bias towards reference translations. In the shared task evaluation by , raters are shown the source and a candidate text, and asked: How accurately does the above candidate text convey the semantics of the source text? In doing so, they have translations produced by humans and machines rated independently, and parity is assumed if the mean score of the former does not significantly differ from the mean score of the latter. Raters To optimise cost, machine translation quality is typically assessed by means of crowdsourcing. Combined ratings of bilingual crowd workers have been shown to be more reliable than automatic metrics and "very similar" to ratings produced by "experts" 2 BID4). BID2 compare crowdsourced to "expert" ratings on machine translations from WMT 2012, concluding that, with proper quality control, "machine translation systems can indeed be evaluated by the crowd alone." However, it is unclear whether this finding carries over to translations produced by NMT systems where, due to increased fluency, errors are more difficult to identify (a), and concurrent work by BID11 highlights the importance of expert translators for MT evaluation. Experimental Unit Machine translation evaluation is predominantly performed on single sentences, presented to raters in random order (e. g., BID2). There are two main reasons for this. The first is cost: if raters assess entire documents, obtaining the same number of data points in an evaluation campaign multiplies the cost by the average number of sentences per document. The second is experimental validity. When comparing systems that produce sentences without considering documentlevel context, the perceived suprasentential cohesion of a system output is likely due to randomness and thus a confounding factor. While incorporating document-level context into machine translation systems is an active field of research , state-of-the-art systems still operate at the level of single sentences BID8 BID13 ). In contrast, human translators can and do take document-level context into account BID6. The same holds for raters in evaluation campaigns. 
In the discussion of their results, Hassan et al. note that their raters "[did] not necessarily fully understand each randomly sampled sentence sufficiently" because it was provided with no context. In such setups, raters cannot reward textual cohesion and coherence. We conduct a quality evaluation experiment with a 2 × 2 mixed factorial design, testing the effect of source text availability (adequacy, fluency) and experimental unit (sentence, document) on ratings by professional translators. We elicit judgements by means of pairwise ranking. Raters choose the better (with ties allowed) of two translations for each item: one produced by a professional translator (HUMAN), the other by machine translation (MT). Since our evaluation includes that of human translation, it is reference-free. We evaluate in two conditions: adequacy, where raters see source texts and translations (Which translation expresses the meaning of the source text more adequately?); and fluency, where raters only see translations (Which text is better English?). Raters: We recruit professional translators, only considering individuals with at least three years of professional experience and positive client reviews. Experimental Unit: To test the effect of context on perceived translation quality, raters evaluate entire documents as well as single sentences in random order (i.e., context is a within-subjects factor). They are shown both translations (HUMAN and MT) for each unit; the source text is only shown in the adequacy condition. Quality Control: To hedge against random ratings, we convert 5 documents and 16 sentences per set into spam items BID5: we render one of the two options nonsensical by shuffling its words randomly, except for 10% at the beginning and end. Statistical Analysis: We test for statistically significant preference of HUMAN over MT, or vice versa, by means of two-sided Sign Tests. Let a be the number of ratings in favour of MT, b the number of ratings in favour of HUMAN, and t the number of ties. We report the number of successes x and the number of trials n for each test, such that x = b and n = a + b. We use the experimental protocol described in the previous section for a quality assessment of Chinese to English translations of news articles. To this end, we randomly sampled 55 documents and 2 × 120 sentences from the WMT 2017 test set. We only considered the 123 articles (documents) which are native Chinese, containing 8.13 sentences on average. Human and machine translations (REFERENCE-HT as HUMAN, and COMBO-6 as MT) were obtained from the data released by Hassan et al. The sampled documents and sentences were rated by professional translators we recruited from ProZ: 4 native in Chinese, English, or both to rate adequacy, and 4 native in English to rate fluency. On average, translators had 13.7 years of experience and 8.8 positive client reviews on ProZ, and received US$ 188.75 for rating 55 documents and 120 sentences. The averages reported above include an additional translator we recruited when one rater showed poor performance on document-level spam items in the fluency condition; that rater's judgements are excluded from analysis. We also exclude sentence-level results from 4 raters because there was overlap with the documents they annotated, which means that we cannot rule out that the sentence-level decisions were informed by access to the full document. To allow for external validation and further experimentation, we make all experimental data publicly available.
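As a sketch of the statistical analysis just described, the following uses SciPy's exact binomial test to implement the two-sided sign test; the illustrative counts are the adequacy/document-level figures reported in the next section, and this is not the authors' analysis script.

```python
from scipy.stats import binomtest

def sign_test(a, b):
    """Two-sided sign test: a = ratings favouring MT, b = ratings favouring HUMAN
    (ties are dropped), so x = b successes out of n = a + b trials under p = 0.5."""
    return binomtest(k=b, n=a + b, p=0.5, alternative="two-sided").pvalue

# Example: x = 104 preferences for HUMAN out of n = 178 non-tied document-level
# adequacy ratings.
p_value = sign_test(a=178 - 104, b=104)
```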
In the adequacy condition, MT and HUMAN are not statistically significantly different on the sentence level (x = 86, n = 189, p = .244). This is consistent with the results Hassan et al. FORMULA0 obtained with an alternative evaluation protocol (crowdsourcing and direct assessment; see Section 2.1). However, when evaluating entire documents, raters show a statistically significant preference for HUMAN (x = 104, n = 178, p < .05). While the number of ties is similar in sentence- and document-level evaluation, preference for MT drops from 50 to 37% in the latter (FIG0). In the fluency condition, raters prefer HUMAN on both the sentence (x = 106, n = 172, p < .01) and document level (x = 99, n = 143, p < .001). In contrast to adequacy, fluency ratings in favour of HUMAN are similar in sentence- and document-level evaluation, but raters find more ties with document-level context, as preference for MT drops from 32 to 22% (FIG0). We note that these large effect sizes lead to statistical significance despite the modest sample size. Inter-annotator agreement (Cohen's κ) ranges from 0.13 to 0.32 (see Appendix for full results and discussion). Our results emphasise the need for suprasentential context in the human evaluation of machine translation. Starting with Hassan et al.'s finding of no statistically significant difference in translation quality between HUMAN and MT for their Chinese-English test set, we set out to test this with an alternative evaluation protocol which we expected to strengthen the ability of raters to judge translation quality. We employed professional translators instead of crowd workers, and pairwise ranking instead of direct assessment, but in a sentence-level evaluation of adequacy, raters still found it hard to discriminate between HUMAN and MT: they did not show a statistically significant preference for either of them. Conversely, we observe a tendency to rate HUMAN more favourably on the document level than on the sentence level, even within single raters. Adequacy raters show a statistically significant preference for HUMAN when evaluating entire documents. We hypothesise that document-level evaluation unveils errors such as mistranslation of an ambiguous word, or errors related to textual cohesion and coherence, which remain hard or impossible to spot in a sentence-level evaluation. For a subset of articles, we elicited both sentence-level and document-level judgements, and inspected articles for which sentence-level judgements were mixed, but where HUMAN was strongly preferred in document-level evaluation. In these articles, we do indeed observe the hypothesised phenomena. We find an example of lexical coherence in a 6-sentence article about a new app "微信挪车", which HUMAN consistently translates into "WeChat Move the Car". In MT, we find three different translations in the same article: "Twitter Move Car", "WeChat mobile", and "WeChat Move". Other observations include the use of more appropriate discourse connectives in HUMAN, a more detailed investigation of which we leave to future work. To our surprise, fluency raters show a stronger preference for HUMAN than adequacy raters (FIG0). The main strength of neural machine translation in comparison to previous statistical approaches was found to be increased fluency, while adequacy improvements were less clear, and we expected a similar pattern in our evaluation. Does this indicate that adequacy is in fact a strength of MT, not fluency? We are wary of jumping to this conclusion.
An alternative interpretation is that MT, which tends to be more literal than HUMAN, is judged more favourably by raters in the bilingual condition, where the majority of raters are native speakers of the source language, because of L1 interference. We note that the availability of document-level context still has a strong impact in the fluency condition (Section 3). In response to recent claims of parity between human and machine translation, we have empirically tested the impact of sentence- and document-level context on the human assessment of machine translation. Raters showed a markedly stronger preference for human translations when evaluating at the level of documents, as compared to an evaluation of single, isolated sentences. We believe that our findings have several implications for machine translation research. Most importantly, if we accept our interpretation that human translation is indeed of higher quality in the dataset we tested, this points to a failure of current best practices in machine translation evaluation. As machine translation quality improves, translations will become harder to discriminate in terms of quality, and it may be time to shift towards document-level evaluation, which gives raters more context to understand the original text and its translation, and also exposes translation errors related to discourse phenomena which remain invisible in a sentence-level evaluation. Our evaluation protocol was designed with the aim of providing maximal validity, which is why we chose to use professional translators and pairwise ranking. For future work, it would be of high practical relevance to test whether we can also elicit accurate quality judgements on the document level via crowdsourcing and direct assessment, or via alternative evaluation protocols. The data released by Hassan et al. could serve as a test bed to this end. One reason why document-level evaluation widens the quality gap between machine translation and human translation is that the machine translation system we tested still operates on the sentence level, ignoring wider context. It will be interesting to explore to what extent existing and future techniques for document-level machine translation can narrow this gap. We expect that this will require further efforts in creating document-level training data, designing appropriate models, and supporting research with discourse-aware automatic metrics. TAB1 shows detailed results, including those of individual raters, for all four experimental conditions. Raters choose between three labels for each item: MT is better than HUMAN (a), HUMAN is better than MT (b), or tie (t). TAB3 lists inter-rater agreement. Besides percent agreement (same label), we calculate Cohen's kappa coefficient κ = (P(A) − P(E)) / (1 − P(E)), where P(A) is the proportion of times that two raters agree, and P(E) the likelihood of agreement by chance. We calculate Cohen's kappa, and specifically P(E), as in WMT (Section 3.3), on the basis of all pairwise ratings across all raters. In pairwise rankings of machine translation outputs, κ coefficients typically centre around 0.3. We observe lower inter-rater agreement in three out of four conditions, and attribute this to two reasons. Firstly, the quality of the machine translations produced by Hassan et al. FORMULA0 is high, making it difficult to discriminate them from professional translation, particularly at the sentence level.
Secondly, we do not provide guidelines detailing error severity and thus assume that raters have differing interpretations of what constitutes a "better" or "worse" translation. Confusion matrices in TAB4 indicate that raters handle ties very differently: in document-level adequacy, for example, rater E assigns no ties at all, while rater F rates 15 out of 50 items as ties (Table 4g). The assignment of ties is more uniform in documents assessed for fluency TAB1, leading to higher κ in this condition TAB3.Despite low inter-annotator agreement, the quality control we apply shows that raters assess items carefully: they only miss 1 out of 40 and 5 out of 128 spam items in the document-and sentence-level conditions overall, respectively, a very low number compared to crowdsourced work BID5. All of these misses are ties (i. e., not marking spam items as "better", but rather equally bad as their counterpart), and 5 out of 9 raters (A, B1, B2, D, F) do not miss a single spam item. A common procedure in situations where interrater agreement is low is to aggregate ratings of different annotators BID2. As shown in TAB2
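For reference, a small self-contained sketch of how the kappa statistic defined above can be computed from the pairwise labels (a, b, t); estimating P(E) from pooled marginal label frequencies is one simple choice and not necessarily the exact WMT variant used here.

```python
def cohens_kappa(pairs):
    """Cohen's kappa, (P(A) - P(E)) / (1 - P(E)), from pairwise label assignments.

    `pairs` is a list of (label_rater1, label_rater2) tuples over {'a', 'b', 't'}."""
    n = len(pairs)
    p_a = sum(l1 == l2 for l1, l2 in pairs) / n
    pooled = [label for pair in pairs for label in pair]
    freqs = {lab: pooled.count(lab) / len(pooled) for lab in set(pooled)}
    p_e = sum(f ** 2 for f in freqs.values())
    return (p_a - p_e) / (1.0 - p_e)

# Toy example: two raters agreeing on 3 of 4 items.
print(cohens_kappa([("a", "a"), ("b", "b"), ("t", "t"), ("a", "b")]))
```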
Raters prefer adequacy in human over machine translation when evaluating entire documents, but not when evaluating single sentences.
1,379
scitldr
Imitation learning aims to inversely learn a policy from expert demonstrations, which has been extensively studied in the literature for both single-agent setting with Markov decision process (MDP) model, and multi-agent setting with Markov game (MG) model. However, existing approaches for general multi-agent Markov games are not applicable to multi-agent extensive Markov games, where agents make asynchronous decisions following a certain order, rather than simultaneous decisions. We propose a novel framework for asynchronous multi-agent generative adversarial imitation learning (AMAGAIL) under general extensive Markov game settings, and the learned expert policies are proven to guarantee subgame perfect equilibrium (SPE), a more general and stronger equilibrium than Nash equilibrium (NE). The experiment demonstrate that compared to state-of-the-art baselines, our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios (i.e., extensive Markov games). Imitation learning (IL) also known as learning from demonstrations allows agents to imitate expert demonstrations to make optimal decisions without direct interactions with the environment. Especially, inverse reinforcement learning (IRL) recovers a reward function of an expert from collected demonstrations, where it assumes that the demonstrator follows an (near-)optimal policy that maximizes the underlying reward. However, IRL is an ill-posed problem, because a number of reward functions match the demonstrated data (Ziebart et al. (2008; ; ;), where various principles, including maximum entropy, maximum causal entropy, and relative entropy principles, are employed to solve this ambiguity (Ziebart et al. (2008; ; ; ;). Going beyond imitation learning with single agents discussed above, recent works including , , have investigated a more general and challenging scenario with demonstration data from multiple interacting agents. Such interactions are modeled by extending Markov decision processes on individual agents to multi-agent Markov games (MGs) (Littman & Szepesvári ). However, these works only work for synchronous MGs, with all agents making simultaneous decisions in each turn, and do not work for general MGs, allowing agents to make asynchronous decisions in different turns, which is common in many real world scenarios. For example, in multiplayer games , such as Go game, and many card games, players take turns to play, thus influence each other's decision. The order in which agents make decisions has a significant impact on the game equilibrium. In this paper, we propose a novel framework, asynchronous multi-agent generative adversarial imitation learning (AMAGAIL): A group of experts provide demonstration data when playing a Markov game (MG) with an asynchronous decision-making process, and AMAGAIL inversely learns each expert's decision-making policy. We introduce a player function governed by the environment to capture the participation order and dependency of agents when making decisions. The participation order could be deterministic (i.e., agents take turns to act) or stochastic (i.e., agents need to take actions by chance). A player function of an agent is a probability function: given the perfectly known agent participation history, i.e., at each previous round in the history, we know which agent(s) participated, it provides the probability of the agent participating in the next round. 
With the general MG model, our framework generalizes MAGAIL from the synchronous Markov games to (asynchronous) Markov games, and the learned expert policies are proven to guarantee subgame perfect equilibrium (SPE) , a stronger equilibrium than the Nash equilibrium (NE) (guaranteed in). The experiment demonstrate that compared to GAIL and MAGAIL , our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios. Markov games (MGs) are the cases of N interacting agents, with each agent making a sequence of decisions with strategies only depending on the current state. A Markov game 1 is denoted as a tuple (N, S, A, Y, ζ, P, η, r, γ) with a set of states S and N sets of actions. At each time step t with a state s t ∈ S, if the indicator variable I i,t = 1, an agent i is allowed to take an action; otherwise, I i,t = 0, the agent i does not take an action. As a , the participation vector I t = [I 1,t, · · ·, I N,t] indicates active vs inactive agents at step t. The set of all possible participation vectors is denoted as I, namely, I t ∈ I. Moreover, h t−1 = [I 0, · · ·, I t−1] represent the participation history from step 0 to t − 1. The player function Y (governed by the environment) describes the probability of an agent i being allowed to make an action at a step t, given the participation history h t−1, namely, Y (i|h t−1). ζ defines the participation probability of an agent at the initial time step Note that, the player function can be naturally extended to a higher-order form when the condition includes both previous participation history and previous state-action history; thus, it can be adapted to non-Markov processes. The initial states are determined by a distribution η: S →. Let φ denotes no participation, determined by player function Y, the transition process to the next state follows a transition function: P: S × A 1 ∪ {φ} × · · · × A N ∪ {φ} → P(S). Agent i obtains a (bounded) reward given by a function r i: S ×A i → R 2. Agent i aims to maximize its own total expected return R i = ∞ t=0 γ t r i,t, where γ ∈ is the discount factor. Actions are chosen through a stationary and stochastic policy π i: S × A i →. In this paper, bold variables without subscript i denote the concatenation of variables for all the agents, e.g., all actions as a, the joint policy defined as π(a|s) = N i=1 π i (a i |s), r as all rewards. Subscript −i denotes all agents except for i, then (a i, a −i) represents the action of all N agents (a 1, · · ·, a N). We use expectation with respect to a policy π to denote an expectation with respect to the trajectories it generates. For, denotes the following sample process as s 0 ∼ η, I 0 ∼ ζ, I t ∼ Y, a ∼ π(·|s t), s t+1 ∼ P (s t+1 |s t, a), for ∀i ∈ [N]. Clearly, when the player function Y (i|h t−1) = 1 for all agents i's at any time step t, a general Markov game boils down to a synchronous Markov game , where all agents take actions at all steps. To distinguish our work from MAGAIL and be consistent with the literature and , we refer the game setting discussed in MAGAIL as synchronous Markov games (SMGs), and that of our work as Markov games (MGs). In synchronous Markov games (SMGs), all agents make simultaneous decisions at any time step t, with the same goal of maximizing its own total expected return. Thus, agents' optimal policies are interrelated and mutually influenced. 
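To make the sampling process described above concrete, here is a small schematic rollout under a player function Y; all callables (eta, zeta, Y, policies, P) stand for the quantities defined in this section and are placeholders rather than an actual API.

```python
import numpy as np

def rollout(eta, zeta, Y, policies, P, horizon, rng=np.random.default_rng(0)):
    """Sample one trajectory of the asynchronous Markov game.

    eta() samples an initial state; zeta(i) and Y(i, history) give agent i's
    participation probability at t = 0 and t > 0; policies[i](state) samples an
    action; P(state, joint_action) samples the next state."""
    state, history, trajectory = eta(), [], []
    n_agents = len(policies)
    for t in range(horizon):
        probs = [zeta(i) if t == 0 else Y(i, history) for i in range(n_agents)]
        participation = [rng.random() < p for p in probs]
        # Inactive agents contribute the no-op symbol phi, represented here by None.
        joint_action = [policies[i](state) if participation[i] else None
                        for i in range(n_agents)]
        trajectory.append((state, tuple(participation), tuple(joint_action)))
        history.append(participation)
        state = P(state, joint_action)
    return trajectory
```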
Nash equilibrium (NE) has been employed as a solution concept to resolve the dependency across agents: no agent can achieve a higher expected reward by unilaterally changing its own policy. However, in Markov games (MGs) that allow asynchronous decisions, there exist situations where agents encounter states (subgames) resulting from other agents' "trembling-hand" actions. Since NE does not account for these "trembling-hand" states and subgames, agents trapped in such situations cannot make optimal decisions based on their policies under NE. To address this problem, Selten first proposed the subgame perfect equilibrium (SPE). SPE ensures an NE for every possible subgame of the original game. It has been shown that in a finite or infinite extensive-form game, with either discrete or continuous time, best-response strategies converge to an SPE rather than an NE. In synchronous Markov games, MAGAIL was proposed to learn experts' policies constrained by a Nash equilibrium. Since there may exist multiple Nash equilibria, a maximum causal entropy regularizer is employed to resolve the ambiguity; the optimal policies can then be found by solving a multi-agent reinforcement learning problem, where β is a weight on the entropy regularization term. In practice, the reward function is unknown. MAGAIL applies multi-agent IRL (MAIRL) to recover experts' reward functions, with ψ as a convex regularizer. Moreover, MAGAIL solves MARL ∘ MAIRL_ψ(π_E) to inversely learn each expert's policy via generative adversarial imitation learning: D_{w_i} is a discriminator for agent i that classifies expert vs. policy trajectories, and π_θ represents the learned parameterized expert policies, which generate trajectories that maximize the scores from the discriminators. Extending multi-agent imitation learning to general Markov games is challenging because of the asynchronous decision making and the dynamic state (subgame) participation. In this section, we tackle this problem using the subgame perfect equilibrium (SPE) solution concept. In a Markov game (MG), a Nash equilibrium needs to be guaranteed at each state s ∈ S; namely, we apply the subgame perfect equilibrium (SPE) solution concept instead. Formally, a set of agent policies is an SPE if at each state s ∈ S (viewed as the root node of a subgame), no agent can achieve a higher reward by unilaterally changing its policy at the root node or any descendant node of the root node. Therefore, our constrained optimization problem follows (cf. Theorem 3.7.2). For an agent i with a probability of taking action a_i at state s_t given a history h_{t−1}, its Q-function is defined with Pr(I_t|h_{t−1}) = Π_{i: I_{i,t}=1} Y(i|h_{t−1}) · Π_{j: I_{j,t}=0} (1 − Y(j|h_{t−1})), the probability of the participation vector I_t given history h_{t−1}. The resulting constraints guarantee an SPE. Consistent with MAGAIL, the objective has a global minimum of zero under SPE, and π forms an SPE if and only if f_r(π, v) reaches zero while being a feasible solution. We use AMA-RL(r) to denote the set of policies that form an SPE under reward function r and maximize the γ-discounted causal entropy of policies, where q_i is defined as above. Our objective is to define a suitable inverse operator AMA-IRL in analogy to MAIRL. The key idea of MAIRL is to choose a reward that creates a margin between the set of experts and every other set of policies; however, the constraints in the SPE optimization make this challenging.
To that end, we derive an equivalent Lagrangian formulation of the SPE-constrained problem to define a margin between the expected rewards of two sets of policies that captures their "difference". The SPE constraints state that no agent i can obtain a higher expected reward via 1-step temporal difference (TD) learning. We replace the 1-step constraints with (t+1)-step constraints, with the solution remaining the same as AMA-RL; the general idea is consistent with MAGAIL, and the detailed derivation of the updated (t+1)-step constraints is in Appx A.1. Using the (t+1)-step formulation, we construct the Lagrangian dual of the primal problem. Since for any policy π, f_r(π, v̂) = 0 given v̂_i defined as in Theorem 1 in Appx A.1 (proved in Lemma 1 in Appx A.2), we only need to focus on the remaining constraints to obtain the dual problem, where T_i^t is the set of all length-t trajectories of the form {s^(j), a_i^(j)}, one Lagrange multiplier is introduced per agent, trajectory, and history, and v̂_i is defined as in Theorem 1 in Appx A.1. Theorem 2 shows that a specific λ recovers the difference between the sums of expected rewards of a set of policies that are not all optimal and a set that are all optimal. Theorem 2: For any two policies π* and π, let λ_π(τ_i; h_{t−1}) be the probability of generating the sequence τ_i using policy π_i and π*_{−i} given h_{t−1}, where Pr(h_{t−1}) = Pr(I_0) Π_{k=1}^{t−1} Pr(I_k|h_{k−1}) is the probability of history h_{t−1}. Then the dual function is L_r^{(t+1)}(π*, λ*_π), and each multiplier can be interpreted as the probability of generating a trajectory of agent i ∈ [N], τ_i ∈ T_i^t, and h_{t−1} ∈ H. Theorem 2 (proved in Appx A.3) provides a way to establish the AMA-IRL objective function with regularizer ψ, in which the discounted causal entropy of policy π_i is computed while the other agents follow π_{E−i}, and β is a hyper-parameter controlling the strength of the entropy regularization term, as in GAIL. We first define the asynchronous occupancy measure in Markov games: given a policy π_i ∈ Π, its asynchronous occupancy measure ρ^p_{π_i} can be interpreted as the distribution of state-action pairs that agent i encounters under both participating and non-participating situations. Notably, when ζ(i) = 1 and Y(i|h_{t−1}) = 1 for all t ∈ {1, ..., ∞} and h_{t−1} ∈ H, the asynchronous occupancy measure in an MG reduces to the occupancy measure defined in MAGAIL and GAIL, i.e., ρ^p_{π_i} = ρ_{π_i}. With an additively separable regularizer ψ, for each agent i, π_{E_i} is the unique optimal response to the other experts π_{E−i}. Assuming further that AMA-RL(r) has a unique solution for all r ∈ AMA-IRL_ψ(π_E), we obtain the following theorem (see proof of Theorem 3 in Appendix A.4), where π_{i,E−i} denotes π_i for agent i and π_{E−i} for the other agents. In practice, we can only estimate ρ^p_{π_E} and ρ^p_π, following MAGAIL. In this section, we propose practical algorithms for asynchronous multi-agent imitation learning and introduce three representative scenarios with different player function structures. The ψ_i selected in Proposition 1 (in Appx A.5) yields the corresponding generative adversarial model, in which each agent i has a generator π_{θ_i} and a discriminator D_{w_i}. When the generator acts, the produced behavior receives a score from the discriminator; the generator attempts to maximize this score and fool the discriminator.
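As a concrete illustration of the per-agent generator-discriminator setup just described, the following PyTorch-style sketch shows one simplified GAIL-style update; expert_batch, policy_batch, the discriminator module D_i, and its optimizer are assumed interfaces, and this is not the paper's implementation.

import torch
import torch.nn.functional as F

def discriminator_step(D_i, opt_i, expert_batch, policy_batch):
    """Train agent i's discriminator to score expert state-action pairs high and generated ones low."""
    expert_logits = D_i(expert_batch["states"], expert_batch["actions"])
    policy_logits = D_i(policy_batch["states"], policy_batch["actions"])
    loss = F.binary_cross_entropy_with_logits(expert_logits, torch.ones_like(expert_logits)) \
         + F.binary_cross_entropy_with_logits(policy_logits, torch.zeros_like(policy_logits))
    opt_i.zero_grad()
    loss.backward()
    opt_i.step()

def generator_reward(D_i, states, actions):
    """Reward signal for agent i's generator (policy): it is higher when the discriminator is fooled.
    This reward would then be fed to an RL update (e.g., an actor-critic step) for agent i."""
    with torch.no_grad():
        return -F.logsigmoid(-D_i(states, actions))  # equals -log(1 - sigmoid(logits))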
(Figure 1: three agents all have stochastic player functions (yellow boxes); each agent thus has a certain probability of acting under the player function given the participation history h_{t−1}. In this example, only agents #2 and #3 happen to act, and agent #1 does not.) We optimize the corresponding objective. In practice, the input of AMAGAIL is Z, the demonstration data from N expert agents in the same environment, where the demonstrations Z = {(s_t, a_t)}_{t=0}^{T} are collected by the sampling process described above, starting from s_0 ∼ η. The assumptions include knowledge of N, γ, S, and A; the transition function P, initial state distribution η, agent distribution ζ, and player function Y are all treated as black boxes, and no additional expert interactions with the environment are allowed during training. In the RL process of finding each agent's policy π_{θ_i}, we follow MAGAIL and apply Multi-agent Actor-Critic with Kronecker factors (MACK), using the advantage function with the baseline V_ν for variance reduction. The summarized algorithm is presented in Algorithm 1 in Appx B. In MGs, the order in which agents make decisions is determined by the player function Y. Below, we discuss three representative structures of the player function Y: synchronous participation, deterministic participation, and stochastic participation. Synchronous participation. When Y(i|h_{t−1}) = 1 holds for all agents i ∈ [N] at every step t (as shown in Figure 1a), agents act simultaneously, and a general Markov game boils down to a simple synchronous Markov game. Deterministic participation. When the player function Y(i|h_{t−1}) is deterministic for all agents i ∈ [N], it can only output 1 or 0 at each step t. Many board games, e.g., Go and chess, have deterministic player functions, where agents take turns to play. Figure 1b shows an example of a deterministic participation structure. Stochastic participation. When the player function is stochastic, namely Y(i|h_{t−1}) ∈ (0, 1) for some agent i ∈ [N] at a certain time step t, agent i acts by chance. As illustrated in Figure 1c, all three agents have stochastic player functions at step t; agent #1 does not take an action at step t, while agents #2 and #3 happen to take actions. We evaluate AMAGAIL with both stochastic and deterministic player function structures under cooperative and competitive games, respectively. We compare our AMAGAIL with two baselines: Behavior Cloning (BC) by OpenAI and decentralized multi-agent generative adversarial imitation learning (MAGAIL). The results are averaged over 5 random seeds (refer to Appx C for implementation details). We use the particle environment as a basic setting and customize it into four games to allow different asynchronous player function structures. Deterministic Cooperative Navigation: three agents (#1, #2, and #3) need to cooperate to get close to three randomly placed landmarks through physical actions. They get high rewards if they are close to the landmarks and are penalized for any collision with each other. (Figure 2: average true reward from cooperative tasks; the performance of experts and random policies is normalized to one and zero, respectively, and an inverse log scale is used for better comparison.) Ideally, each agent should cover a single distinct landmark.
In this process, the agents must follow a deterministic participation order to take actions: in the first round all three agents act, in the second round only agents #1 and #2 act, in the third round only agent #1 acts, and these rounds repeat until the game is completed. Stochastic Cooperative Navigation: this game is the same as deterministic cooperative navigation except that all three agents have a stochastic player function; each agent has a 50% chance to act at each round t. Deterministic Cooperative Reaching: this game has three agents whose goal is to cooperatively reach a single landmark with minimum collision. In this game, agents follow a deterministic player function, the same as in the deterministic cooperative navigation game, to make actions. Stochastic Predator-Prey: three slower cooperating agents (referred to as adversaries) chase a faster agent in an environment with two landmarks; the faster agent acts first, then each adversary, with a stochastic player function giving a 50% chance to act, tries to catch the faster agent. The adversaries and the agent need to avoid two randomly placed landmarks. The adversaries collect rewards when touching the agent, while the agent is penalized. Note that an agent that does not participate in a round of a game does not get a reward. In these four game environments, agents are first trained with multi-agent ACKTR, so the true reward functions are available, which enables us to evaluate the quality of recovered policies. When generating demonstrations from well-trained expert agents, a "null" (no-participation) placeholder action is recorded for each no-participation round in the trajectory. The quality of a recovered policy is evaluated by calculating the agents' average true reward over a set of generated trajectories. We compare our AMAGAIL with two baselines: behavior cloning (BC) and decentralized multi-agent generative adversarial imitation learning (MAGAIL). Behavior cloning (BC) uses maximum likelihood estimation for each agent independently to approximate its policy. Decentralized multi-agent generative adversarial imitation learning (MAGAIL) assigns each agent a unique discriminator working as the agent's reward signal and a unique generator as the agent's policy; it follows the maximum entropy principle to match the occupancy measures of the recovered policies to the demonstration data. We compare AMAGAIL with the baselines under three particle environment games, namely the deterministic cooperative navigation, stochastic cooperative navigation, and deterministic cooperative reaching games. Figure 2 shows the normalized rewards when learning policies with BC, MAGAIL, and AMAGAIL, respectively. With only a small amount of expert demonstrations, i.e., fewer than 400, the normalized rewards of BC and AMAGAIL increase as more demonstrations are used. Once a sufficient amount of demonstrations is used, i.e., more than 400, AMAGAIL achieves higher rewards than BC and MAGAIL. This makes sense since at certain time steps there exist non-participating agents (based on the player functions), whereas BC and MAGAIL treat no-participation as an action the agent can choose, while in reality it is governed by the environment. On the other hand, with the introduced player function Y, AMAGAIL characterizes such no-participation events correctly, and thus more accurately learns the expert policies.
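As a concrete illustration of the participation patterns and the "null" placeholder recording just described, here is a small hedged sketch; the exact round pattern, interfaces, and placeholder token are assumptions based on the game descriptions above, not the released code.

import random

def navigation_player_prob(agent_id, round_idx):
    """Deterministic 3-round cycle assumed from the description: all agents act, then #1 and #2, then only #1."""
    active_per_round = [{0, 1, 2}, {0, 1}, {0}]  # 0-indexed agents
    return 1.0 if agent_id in active_per_round[round_idx % 3] else 0.0

def stochastic_player_prob(agent_id, round_idx, p=0.5):
    """Each agent acts with 50% probability at every round."""
    return p

def record_step(state, policies, player_prob, round_idx, null_action="null"):
    """Record one demonstration step, writing the 'null' placeholder for inactive agents."""
    actions = [policies[i].sample(state) if random.random() < player_prob(i, round_idx) else null_action
               for i in range(len(policies))]
    return (state, actions)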
The normalized rewards of BC are roughly unchanged in Figure 2(a) and (c), and in Figure 2(b) after 400 demonstrations, which may seem contradictory to previously reported results and can be explained as follows. In Figure 2(b) (stochastic cooperative navigation), the performance of BC is low when using fewer demonstrations, but it increases rapidly as more demonstrations are used and finally converges to its "best" performance of around 0.65 with 300 demonstrations. In Figure 2(a) (resp. Figure 2(c)), deterministic cooperative navigation (resp. reaching) is easier to learn compared with the stochastic cooperative navigation game shown in Figure 2(b), since there is no randomness in the player function; the performance with only 200 demonstrations is already stabilized at 0.7 (resp. 0.94). In the stochastic cooperative navigation game (Figure 2(b)), AMAGAIL performs consistently better than MAGAIL and BC. However, in the deterministic cooperative navigation game (Figure 2(a)), with 200 demonstrations, AMAGAIL does not perform as well as MAGAIL. This is due to the game setting: two players actively searching for landmarks are sufficient to gain a high reward in this game. The last agent, player #3, learned to be "lazy", without any motivation to promote the total shared reward among all agents. In this case, it is hard for AMAGAIL to learn a good policy for player #3 with a small amount of demonstration data, because player #3 is absent in 2/3 of the rounds under the pre-defined deterministic participation function; hence, AMAGAIL does not have enough state-action pairs to learn player #3's policy. This improves when there are sufficient data, e.g., more than 400 demonstrations. We therefore adjust the game setting from 3 landmarks to 1 landmark, so that all agents need to act actively to reach the landmark; this is captured in the deterministic cooperative reaching game. In this scenario, an inactive player will lower the overall reward. As shown in Figure 2(c), AMAGAIL outperforms BC and MAGAIL consistently, even with a small amount of demonstration data. 5.2 PERFORMANCE WITH MIXED GAME MODE Now, we further evaluate the performance of AMAGAIL under a mixed game mode with both cooperative and adversarial players, i.e., the stochastic predator-prey game. Since there are two competing sides in this game, we cannot directly compare each method's performance via expected reward. Therefore, we follow an established evaluation paradigm and compare with the baselines by letting agents trained by BC play against adversaries trained by the other methods, and vice versa. (Table 1: average agent rewards in stochastic predator-prey, comparing behavior cloning (BC), MAGAIL, and AMAGAIL; best results are marked in bold. Higher vs. lower rewards are preferred when BC plays the agent vs. the adversaries, respectively. The recoverable row, Behavior Cloning rewards: −5.0 ± 10.8, −9.0 ± 13.1, −14.0 ± 19.4, −3.6 ± 8.5, −2.1 ± 6.9.) From Table 1, AMAGAIL consistently performs better than MAGAIL and BC. Imitation learning (IL) aims to learn a policy from expert demonstrations, which has been extensively studied in the literature for single-agent scenarios. Behavioral cloning (BC) uses the observed demonstrations to directly learn a policy. Apprenticeship learning and inverse reinforcement learning (IRL) (e.g., Ziebart et al., 2008) seek to recover the underlying reward from expert trajectories in order to further learn a good policy via reinforcement learning.
The assumption is that the expert trajectories generated by the optimal policy maximize the unknown reward. Generative adversarial imitation learning (GAIL) and conditional GAIL (cGAIL) incorporate maximum causal entropy IRL and generative adversarial networks to simultaneously learn non-linear policy and reward functions. A few recent studies on multi-agent imitation learning, such as MAGAIL and MAAIRL, model the interactions among agents as synchronous Markov games, where all agents make simultaneous actions at each step t. These works fail to characterize a more general and practical interaction scenario, i.e., Markov games including turn-based games, where agents make asynchronous decisions over steps. In this paper, we make the first attempt to propose an asynchronous multi-agent generative adversarial imitation learning (AMAGAIL) framework, which models the asynchronous decision-making process as a Markov game and develops a player function to capture the participation dynamics of agents. Experimental results demonstrate that our proposed AMAGAIL can accurately learn the experts' policies from their asynchronous trajectory data, compared to state-of-the-art baselines. Beyond capturing the dynamics of participation vs. no-participation (only two participation choices), our proposed player function Y (and the AMAGAIL framework) can also capture a more general case, where Y determines how the agent participates in a particular round, i.e., which action set A_i the agent may choose from. A.1 TEMPORAL DIFFERENCE LEARNING Theorem 1. For a given policy π and reward r, let v̂_i(s^(t); π, r, h_{t−1}) be the unique solution to the Bellman equation, and define q̂_i(s^(t), a_i^(t); π, r, h_{t−1}) as the discounted expected return for the i-th agent conditioned on visiting the trajectory {s^(j), a^(j)}_{j=0}^{t−1}, s^(t) in the first t−1 steps and choosing action a_i^(t) at the t-th step, while the other agents use policy π_{−i}. Then π is a subgame perfect equilibrium if and only if v̂_i(s; π, r, h_{t−1}) ≥ q̂_i(s, a_i; π, r, h_{t−1}) for all i, s, a_i, and h_{t−1}. Theorem 1 illustrates that if we replace the 1-step constraints with (t+1)-step constraints, we still get the same solution as AMA-RL(r) in terms of a subgame perfect equilibrium. A.2 EXISTENCE AND EQUIVALENCE OF V AND SUBGAME PERFECT EQUILIBRIUM Lemma 1. By the definitions of v̂_i(s^(t); π, r, h_{t−1}) in Theorem 1 and q̂_i(s^(t), a_i; π, r, h_{t−1}) in eq. 7, for any π we have f_r(π, v̂) = 0. Furthermore, π is a subgame perfect equilibrium under r if and only if v̂_i(s; π, r, h_{t−1}) ≥ q̂_i(s, a_i; π, r, h_{t−1}) for all i ∈ [N], s ∈ S, a_i ∈ A_i, and h_{t−1} ∈ H. Proof. We have v̂_i(s; π, r, h_{t−1}) = E_{π_i}[q̂_i(s, a_i; π, r, h_{t−1})], which uses the fact that a_i and a_{−i} are independent at s; therefore f_r(π, v̂) = 0 follows directly. If π is a subgame perfect equilibrium but one or more of the constraints does not hold, then some agent i can receive a strictly higher expected reward for the remaining states, which contradicts the subgame perfect equilibrium assumption. If the constraints hold, i.e., for all i and (s, a_i), v̂_i(s; π, r, h_{t−1}) ≥ q̂_i(s, a_i; π, r, h_{t−1}), then value iteration over v̂_i(s; π, r, h_{t−1}) converges. If one could find another policy π'_i such that v̂_i(s; π, r, h_{t−1}) < E_{π'_i}[q̂_i(s, a_i; π, r, h_{t−1})], then at least one violation would exist in the constraints, since π'_i is a convex combination over actions a_i. Therefore, for any policy π'_i and action a_i of any agent i, E_{π_i}[q̂_i(s, a_i; π, r, h_{t−1})] ≥ E_{π'_i}[q̂_i(s, a_i; π, r, h_{t−1})] always holds, so π_i is the optimal reply to π_{−i}, and repeating this argument for all agents shows that π constitutes a subgame perfect equilibrium.
Notably, assuming f_r(π, v) = 0 for some v, if v satisfies the assumptions then v = v̂. Proof. We use Q*, q̂*, and v̂* to denote the Q, q̂, and v̂ quantities defined for policy π*. For agent i, τ_i, and h_{t−1} we have λ*_π(τ_i; h_{t−1}) · Q*_i(τ_i; π*, r, h_{t−1}) = Pr(τ_i; h_{t−1}) · Q*_i(τ_i; π*, r, h_{t−1}). Therefore, the proof of AMA-RL ∘ AMA-IRL can be derived in a similar fashion to GAIL and MAGAIL. Theorem 3 and Proposition 1 discuss the differences from the single-agent scenario. On the one hand, in Theorem 3 we assume that AMA-RL(r) has a unique solution, which is always true in the single-agent case due to convexity of the space of optimal policies. On the other hand, in Proposition 1 we remove the entropy regularizer because here the causal entropy of π_i may depend on the policies of the other agents, so the entropy regularizers on the two sides are not the same quantity. Specifically, the entropy on the left-hand side conditions on π_{E−i} and the entropy on the right-hand side conditions on π_{−i} (a distinction that would disappear in the single-agent case). B APPENDIX B. ALGORITHM. Generate state-action pairs of batch size B from π_u through the process s_0 ∼ η, I_0 ∼ ζ, I_t ∼ Y, a ∼ π_u(·|s_t), s_{t+1} ∼ P(s_{t+1}|s_t, a); φ is recorded as a placeholder action when an agent does not participate in a round; denote the generated state-action pair set as X. Sample state-action pairs from Z with batch size B; denote the demonstrated state-action pair set as X_E. For each agent i = 1, ..., N: filter out state-action pairs (s, φ) from X and X_E; update ν_i to decrease the objective E_{X,Y}[(V_{ν_i}(s) − V*(s))^2]; update θ_i by policy gradient with the set step sizes: E_{X,Y}[∇_{θ_i} log π_{θ_i}(a_i|s) A_i(s, a)]. C APPENDIX C. EXPERIMENT DETAILS C.1 HYPERPARAMETERS For the particle environment, we follow the setting of MAGAIL and use two-layer multilayer perceptrons with 128 cells in each layer for the policy generator network and the value network.
This paper extends multi-agent generative adversarial imitation learning to extensive-form Markov games.
1,380
scitldr
Self-training is one of the earliest and simplest semi-supervised methods. The key idea is to augment the original labeled dataset with unlabeled data paired with the model's predictions. Self-training has mostly been studied for classification problems. However, in complex sequence generation tasks such as machine translation, it is still not clear how self-training works due to the compositionality of the target space. In this work, we first show that it is not only possible but recommended to apply self-training in sequence generation. Through careful examination of the performance gains, we find that the noise added to the hidden states (e.g., dropout) is critical to the success of self-training, as it acts like a regularizer that forces the model to yield similar predictions for similar inputs from unlabeled data. To further encourage this mechanism, we propose to inject noise into the input space, resulting in a "noisy" version of self-training. Empirical study on standard benchmarks across machine translation and text summarization tasks under different resource settings shows that noisy self-training is able to effectively utilize unlabeled data and improve the baseline performance by a large margin. Deep neural networks often require large amounts of labeled data to achieve good performance. However, acquiring labels is a costly process, which motivates research on methods that can effectively utilize unlabeled data to improve performance. Towards this goal, semi-supervised learning methods that take advantage of both labeled and unlabeled data are a natural starting point. In the context of sequence generation problems, semi-supervised approaches have been shown to work well in some cases. For example, back-translation makes use of monolingual data on the target side to improve machine translation systems, and latent variable models are employed to incorporate unlabeled source data to facilitate sentence compression or code generation. In this work, we revisit a much older and simpler semi-supervised method, self-training, where a base model trained with labeled data acts as a "teacher" to label the unannotated data, which is then used to augment the original small training set. Then, a "student" model is trained with this new training set to yield the final model. Originally designed for classification problems, common wisdom suggests that this method may be effective only when a good fraction of the predictions on unlabeled samples are correct; otherwise mistakes are going to be reinforced. In the field of natural language processing, some early works successfully applied self-training to word sense disambiguation and parsing. However, self-training has not been studied extensively when the target output is natural language. This is partially because in language generation applications (e.g., machine translation) hypotheses are often very far from the ground-truth target, especially in low-resource settings. It is natural to ask whether self-training can be useful at all in this case. (Algorithm 1, classic self-training: train a base model f_θ on the labeled data L; then repeat: apply f_θ to the unlabeled instances U; select a subset S ⊂ {(x, f_θ(x)) | x ∈ U}; train a new model f_θ on S ∪ L; until convergence or the maximum number of iterations is reached.) In previous studies, gains were reported and, to the best of our knowledge, it is still unclear what makes self-training work; one prior study did not update the decoder parameters when using pseudo parallel data, noting that "synthetic target parts may negatively influence the decoder model of NMT".
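A minimal sketch of the classic self-training loop in Algorithm 1 above, specialized to the separate pseudo-training / fine-tuning scheme adopted later in the paper; train, fine_tune, and decode are assumed helper functions rather than parts of any particular toolkit.

def self_train(L, U, num_iterations, train, fine_tune, decode):
    """L: labeled pairs [(x, y)]; U: unlabeled inputs [x]."""
    model = train(L)                              # base ("teacher") model on the parallel data
    for _ in range(num_iterations):
        S = [(x, decode(model, x)) for x in U]    # pseudo-label all unlabeled data (e.g., with beam search)
        model = train(S)                          # pseudo-training: fit a fresh model on the pseudo pairs
        model = fine_tune(model, L)               # fine-tune on the real parallel data
    return model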
In this paper, we aim to answer two questions: (1) How does self-training perform in sequence generation tasks like machine translation and text summarization? Are "bad" pseudo targets indeed catastrophic for self-training? (2) If self-training helps improve the baseline, what contributes to its success? What are the important ingredients that make it work? Towards this end, we first evaluate self-training on a small-scale machine translation task and empirically observe significant performance gains over the supervised baseline (§3.2); we then perform a comprehensive ablation analysis to understand the key factors that contribute to its success (§3.3). We find that the decoding method used to generate pseudo targets accounts for part of the improvement, but, more importantly, the perturbation of hidden states (dropout) turns out to be a crucial ingredient that prevents self-training from falling into the same local optimum as the base model, and this is responsible for most of the gains. To understand the role of such noise in self-training, we use a toy experiment to analyze how noise effectively propagates labels to nearby inputs, sometimes helping correct incorrect predictions (§4.1). Motivated by this analysis, we propose to inject additional noise by also perturbing the input. Comprehensive experiments on machine translation and text summarization tasks demonstrate the effectiveness of noisy self-training. Formally, in conditional sequence generation tasks like machine translation, we have a parallel dataset L = {x_i, y_i}_{i=1}^{l} and a large unlabeled dataset U = {x_j}_{j=l+1}^{l+u}, where |U| > |L| in most cases. As shown in Algorithm 1, classic self-training starts from a base model trained with parallel data L, iteratively applies the current model to obtain predictions on the unlabeled instances U, and then incorporates a subset of the pseudo parallel data S to update the current model. There are two key factors: (1) Selection of the subset S. S is usually selected based on some confidence score (e.g., log probability), but it is also possible for S to be the whole pseudo parallel dataset. (2) Combination of real and pseudo parallel data. A new model is often trained on the two datasets jointly, as in back-translation, but this introduces an additional hyper-parameter to weigh the importance of the parallel data relative to the pseudo data. Another way is to treat them separately: first we train the model on the pseudo parallel data S, and then fine-tune it on the real data L. In our preliminary experiments, we find that the separate training strategy with the whole pseudo parallel dataset (i.e., S = {(x, f_θ(x)) | x ∈ U}) produces better or equal performance for neural sequence generation while being simpler. Therefore, in the remainder of this paper we use this simpler setting. We include a quantitative comparison of joint training, separate training, and pseudo-parallel data filtering in Appendix B, where separate training is able to match (or surpass) the performance of joint training. In self-training, the unsupervised loss L_U from unlabeled instances is defined as L_U = −E_{x∼p(x)} E_{y∼p_{θ*}(y|x)} [log p_θ(y|x)] (Eq. 1), where p(x) is the empirical data distribution approximated with samples from S, p_θ(y|x) is the conditional distribution defined by the model, and θ* is the parameter from the last iteration (initially set to the parameters of the supervised baseline), kept fixed within the current iteration. Eq. 1 reveals the connection between self-training and entropy regularization. (Table 1: test tokenized BLEU on WMT100K, PT / FT. ST (scratch): 16.8 / 17.9; ST (baseline): 16.5 / 17.5. Self-training results are from the first iteration; "scratch" denotes that the system is initialized randomly and trained from scratch, while "baseline" means it is initialized with the baseline model.) In the context of classification, self-training can be understood from the view of entropy regularization, which favors a low-density separation between classes, a commonly assumed prior for semi-supervised learning. To examine the effectiveness of self-training on neural sequence generation, we start by analyzing a machine translation task; we then perform an ablation analysis to understand the factors contributing to the performance gains. We work with the standard WMT 2014 English-German dataset consisting of about 3.9 million training sentence pairs after filtering long and imbalanced pairs. Sentences are encoded using 40K byte-pair codes. As a preliminary experiment, we randomly sample 100K sentences from the training set to train the model and use the remaining English sentences as the unlabeled monolingual data. For convenience, we refer to this dataset as WMT100K. Such a synthetic setting allows us to have high-quality unlabeled data to verify the performance of self-training. We train with the Base Transformer architecture and a dropout rate of 0.3. Full training and optimization parameters can be found in Appendix A.1. All experiments throughout this paper, including the transformer implementation, are based on the fairseq toolkit, and all results are in terms of case-sensitive tokenized BLEU. We use beam search decoding (beam size 5) to create the pseudo targets and to report BLEU on the test set. In Figure 1, we use green bars to show the results of applying self-training for three iterations. We include both pseudo-training (PT), the first step of self-training where we train a new model (from scratch) using only the pseudo parallel data generated by the current model, and fine-tuning (FT), the fine-tuned system trained on real parallel data starting from the pretrained model of the PT step. Note that in the fine-tuning step the system is re-initialized from scratch. Surprisingly, we find that the pseudo-training step at the first iteration is able to improve BLEU even though the model is only trained on its own predictions, and fine-tuning further boosts the performance. The test BLEU keeps improving over the first three iterations until it converges, outperforming the initial baseline by 3 BLEU points. This behaviour is unexpected because no new information seems to be injected during this iterative process: the target sentences of the monolingual data are the base model's predictions, so translation errors are likely to remain, if not be magnified. This is different from back-translation, where new knowledge may originate from an additional backward translation model and real monolingual targets may help the decoder generate more fluent sentences. One straightforward hypothesis is that the added pseudo-parallel data might implicitly change the training trajectory towards a (somehow) better local optimum, given that we train a new model from scratch at each iteration. To rule out this hypothesis, we perform an ablation experiment and initialize θ from the last iteration (i.e., θ*). Formally, based on Eq. 1, the optimum should remain unchanged (Eq. 2), because the conditional log likelihood is maximized when p_θ(y|x) matches the underlying data distribution p_{θ*}(y|x). Therefore, the parameter θ should not (at least not significantly) change if we initialize it with θ* from the last iteration.
Table 1 shows the comparison of these two initialization schemes at the first iteration. Surprisingly, continuing training from the baseline model also yields an improvement of 1.9 BLEU points, comparable to initializing from random. While stochastic optimization introduces randomness into the training process, it is startling that continuing training gives such a non-trivial improvement. Next, we investigate the underlying reasons for this. To understand why continuing training contradicts Eq. 2 and improves translation performance, we examine possible discrepancies between our assumptions and the actual implementation, and formulate two new hypotheses. H1. Decoding strategy. According to this hypothesis, the gains come from the use of beam search for decoding unlabeled data. Since our focus is a sequence generation task, we decode y with beam search to approximate the expectation in E_{y∼p_{θ*}(y|x)} log p_θ(y|x), yielding a biased estimate, whereas sampling decoding would result in an unbiased Monte Carlo estimator. The results in Table 2 demonstrate that the performance drops by 0.5 BLEU when we change the decoding strategy to sampling, which implies that beam search does contribute somewhat to the performance gains. This phenomenon makes sense intuitively, since beam search tends to generate higher-quality pseudo targets than sampling, and the subsequent cross-entropy training might benefit from implicitly learning the decoding process. However, the decoding strategy hypothesis does not fully explain the gains, as we still observe a gain of 1.4 BLEU points over the baseline from sampling decoding with dropout. H2. Dropout. Eq. 1 and Eq. 2 implicitly ignore a (seemingly) small difference between the model used to produce the pseudo targets and the model used for training: at test/decoding time the model does not use dropout, while at training time dropout noise is injected into the model's hidden states. At training time, the model is forced to produce the same (pseudo) targets given the same set of inputs and the same parameters but various noisy versions of the hidden states. The conjecture is that this additional expectation over dropout noise renders Eq. 2 false. To verify this, we remove dropout in the pseudo-training step. The results in Table 2 indicate that without dropout the performance of beam search decoding drops by 1.2 BLEU, just 0.7 BLEU higher than the baseline. Moreover, the pseudo-training performance of sampling without dropout is almost the same as the baseline, which finally agrees with our intuitions from Eq. 2. In summary, Table 2 suggests that beam-search decoding contributes only partially to the performance gains, while the implicit perturbation, dropout, accounts for most of them. However, it is still mysterious why such perturbation results in such large performance gains. If dropout is meant to avoid overfitting and fit the target distribution better in the pseudo-training step, why does it bring advantages over the baseline, given that the target distribution comes from the baseline model itself? This is the subject of the investigation in the next section. One hypothesis as to why noise (perturbation) is beneficial for self-training is that it enforces local smoothness for this task; that is, semantically similar inputs are mapped to the same or similar targets. Since the assumption that similar inputs should ideally produce similar targets largely holds for most tasks in practice, the smoothing effect of the pseudo-training step may provide favorable regularization for the subsequent fine-tuning step.
Unlike standard regularization in supervised training, which is local to the real parallel data, self-training smooths the data space covered by the additional and much larger monolingual data. To verify this hypothesis more easily, we work with the toy task of summing two integers in the range 0 to 99. We concatenate the two integers and view them as a sequence of digits; the sum is also predicted at the digit level, so this is still a sequence-to-sequence task. There are 10000 possible data points in the entire space, and we randomly sample 250 instances for training, 100 for validation, 5000 for test, and 4000 as the unlabeled data. Test errors are computed as the absolute difference between the predicted integer and the ground-truth integer. We use an LSTM model to tackle this task. We perform self-training for one iteration on this toy sum dataset and initialize the model with the base model to rule out differences due to initialization. Setup details are in Appendix A.1. For any integer pair (x_1, x_2), we measure local smoothness as the standard deviation of the predictions in a 3 × 3 neighborhood of (x_1, x_2). These values are averaged over all 10000 points to obtain the overall smoothness. We compare smoothness between the baseline and ST pseudo-training in Table 3. To demonstrate the effect of smoothing on the fine-tuning step, we also report test errors after fine-tuning. We observe that ST pseudo-training attains better smoothness, which helps reduce test errors in the subsequent fine-tuning step. One natural question is whether we could further improve performance by encouraging even lower smoothness values, although there is a clear trade-off, as a totally smooth model that outputs a constant value is also a bad predictor. One way to decrease smoothness is to increase the dropout probability in the pseudo-training step, but a large dropout (like 0.5) makes the model too unstable and slow to converge. Therefore, we consider a simple model-agnostic perturbation process, namely perturbing the input, which we refer to as noisy self-training (noisy ST). If we perturb the input during the pseudo-training step, then Eq. 1 is modified to L_U = −E_{x∼p(x)} E_{y∼p_{θ*}(y|x)} [log p_θ(y|g(x))] (Eq. 3), where g(x) is a perturbation function. Note that we apply both input perturbation and dropout in the pseudo-training step for noisy ST throughout the paper, but include an ablation analysis in §4.3. We first validate noisy ST on the toy sum task. We shuffle the two integers in the input as the perturbation function. Such a perturbation is suitable for this task since it would also help the model learn the commutative law. To check that, we additionally measure the symmetry of the output space. Specifically, for any point (x_1, x_2), we compute |f(x_1, x_2) − f(x_2, x_1)| and average it over all the points. Both smoothness and symmetry values are reported in Table 3. While we do not explicitly perturb the input at nearby integers, the shuffling perturbation greatly improves the smoothness metric as well. Furthermore, predictions are more symmetric and test errors are reduced. To illustrate the effect of smoothness, in Figure 2 we show two examples of error heat maps. When a point with a large error is surrounded by points with small errors, the labels might propagate due to smoothing and its error is likely to become smaller, resulting in a "self-correcting" behaviour, as demonstrated in the left example of Figure 2.
However, the predictions of some points might also become worse due to the opposite phenomenon, as shown in the right example of Figure 2. Therefore, the smoothing effect by itself does not guarantee a performance gain in the pseudo-training step, but fine-tuning benefits from it and seems to consistently improve the baseline on all datasets we experiment with. Next, we apply noisy self-training to the more realistic WMT100K translation task. We try two different perturbation functions: (1) Synthetic noise, as used in unsupervised MT, where the input tokens are randomly dropped, masked, and shuffled. We use the default noising parameters from unsupervised MT but study the influence of the noise level in §5.4. (2) Paraphrase. We translate the source English sentences to German and translate them back to obtain a paraphrase as the perturbation. Figure 1 shows the results over three iterations. Noisy ST (NST) greatly outperforms the supervised baseline by over 6 BLEU points and normal ST by 3 BLEU points, while synthetic noise does not exhibit much difference from paraphrasing. Since synthetic noise is much simpler and more general, in the remaining experiments we use synthetic noise unless otherwise specified. Next, we report an ablation analysis of noisy ST when removing dropout in the pseudo-training step in Table 2. Noisy ST without dropout improves the baseline by 2.3 BLEU points and is comparable to normal ST with dropout. When combined, noisy ST with dropout produces another 1.4 BLEU improvement, indicating that the two perturbations are complementary. Our experiments below are designed to examine whether noisy self-training is generally useful across different sequence generation tasks and resource settings. To this end, we conduct experiments on two machine translation datasets and one text summarization dataset to test its effectiveness under both high-resource and low-resource settings. (Table 4: results on two machine translation datasets; for WMT100K, the remaining 3.8M English and German sentences from the training data are used as unlabeled data for noisy ST and BT, respectively.) We run noisy self-training for three iterations or until performance converges. The model is trained from scratch in the pseudo-training step at each iteration, since we found this strategy to work slightly better empirically. Full model and training details for all the experiments can be found in Appendix A.1. In some settings, we also include back-translation as a reference point, since this is probably the most successful semi-supervised learning method for machine translation. However, we want to emphasize that BT is not directly comparable to ST, since they use different resources (ST utilizes unlabeled data on the source side while BT leverages target monolingual data) and different use cases. For example, BT is not very effective when we translate English to extremely low-resource languages, where there is almost no in-domain target monolingual data available. We follow common practice to implement BT, using unrestricted sampling to translate the target data back to the source; we then train on the real and pseudo parallel data jointly and tune the upsampling ratio of the real parallel data. We test the proposed noisy self-training on a high-resource translation benchmark, WMT14 English-German, and a low-resource translation benchmark, FloRes English-Nepali. • WMT14 English-German: in addition to WMT100K, we also report results with all 3.9M training examples.
For WMT100K we use the Base Transformer architecture and the remaining parallel data as the monolingual data. For the full setting, we use the Big Transformer architecture and randomly sample 20M English sentences from the News Crawl corpus for noisy ST. • FloRes English-Nepali: we evaluate noisy self-training on a low-resource machine translation dataset, FloRes (Guzmán et al., 2019), from English (en) to Nepali (ne), where we have 560K training pairs and a very weak supervised system that attains a BLEU score below 5 points. For this dataset we have 3.6M Nepali monolingual instances in total (for BT) but 68M English Wikipedia sentences. We randomly sample 5M English sentences for noisy ST. We use the same transformer architecture as in Guzmán et al. (2019). The overall results are shown in Table 4. For almost all cases on both datasets, noisy ST outperforms the baselines by a large margin (1 to 5 BLEU points), and we see that noisy ST still improves the baseline even when the baseline is very weak. Effect of domain mismatch. The test sets of the FloRes benchmark were built with a mix of original and translationese text: some sentences originate from English sources and some from Nepali sources. Intuitively, English monolingual data should be more in-domain for English-origin sentences and Nepali monolingual data should help more for Nepali-origin sentences. To demonstrate this possible domain-mismatch effect, in Table 4 we report BLEU on the two different test sets separately. As expected, ST is very effective when the source sentences originate from English. Comparison to back-translation. Table 4 shows that noisy ST is able to beat BT on WMT100K and on the en-origin test set of FloRes. In contrast, BT is more effective on the ne-origin test set according to BLEU, which is not surprising, as the ne-origin test is likely to benefit more from Nepali than from English monolingual data. We further evaluate noisy self-training on the Gigaword summarization dataset, which has 3.8M training sentences. We encode the data with 30K byte-pair codes and use the Base Transformer architecture. Similar to the setting of WMT100K, for Gigaword we create two settings where we sample 100K or 640K training examples and use the rest as unlabeled data to compare with BT. We also consider the setting where all 3.8M parallel samples are used, and we mine in-domain monolingual data by revisiting the original preprocessing procedure and using the ∼4M samples that were discarded because they had low-quality targets. We report ROUGE scores in Table 5. Noisy ST consistently outperforms the baseline in all settings, sometimes by a large margin (100K and 640K). It outperforms BT with 100K parallel data but underperforms it with 640K parallel data. We conjecture that BT is still effective in this case because the task is still somewhat symmetric, as Gigaword mostly contains short sentences and their compressed summaries. Notably, noisy ST in the full setting approaches the performance of state-of-the-art systems that use much larger datasets for pretraining. In this section, we focus on the WMT English-German dataset to examine the effect of three factors on noisy self-training: the size of the parallel dataset, the size of the monolingual dataset, and the noise level. All noisy ST results are reported after the fine-tuning step. Parallel data size. We fix the monolingual data size at 20M sentences from the News Crawl dataset and vary the parallel data size as shown in Figure 3(a).
We use a small LSTM model for 10K, the Base Transformer for 100K/640K, and the Big Transformer for 3.9M parallel examples. Noisy ST is repeated for three iterations. We see that in all cases noisy ST is able to improve upon the baseline, while the performance gain is larger for intermediate sizes of the parallel dataset, as expected. Monolingual data size. We fix the parallel data size to 100K samples and use the remaining 3.8M English sentences from the parallel data as monolingual data. We sample from this set 100K, 500K, 1.5M, and 3.8M sentences. We also include another point that uses 20M monolingual sentences from a subset of the News Crawl dataset. We report performance at the first iteration of noisy ST. (Table 6: results on WMT100K, reported as PT / FT BLEU. Parallel baseline: – / 15.6; noisy ST, 100K mono + fake target: 10.2 / 16.6; noisy ST, 3.8M mono + fake target: 16.6 / 19.3; noisy ST, 100K parallel + real target: 6.7 / 11.3; noisy ST, 100K parallel + fake target: 10.4 / 16.0. All results are from a single iteration. "Parallel + real/fake target" denotes the noise process applied to parallel data but using the real/fake target in the pseudo-training step; "mono + fake target" is the normal noisy self-training process described in previous sections.) Figure 3(b) illustrates that performance keeps improving as the monolingual data size increases, albeit with diminishing returns. Noise level. We have shown that noisy ST outperforms ST, but intuitively larger noise cannot always be better, since at some point it may destroy all the information present in the input. We adopt the WMT100K setting with 100K parallel data and 3.8M monolingual data, and set the word blanking probability in the synthetic noise to 0.2 (the default), 0.4, 0.6, and 0.8. We also include the baseline ST without any synthetic noise. Figure 3(c) demonstrates that performance is quite sensitive to the noise level and that intermediate values work best. It is still unclear how to select the noise level a priori, besides the usual hyper-parameter search to maximize BLEU on the validation set. In this section, we examine whether the proposed noisy self-training process would help the supervised baseline alone, without the help of any monolingual data. Similar to the training process on monolingual data, we first train the model on the noisy source data (pseudo-training) and then fine-tune it on clean parallel data. Different from using monolingual data, there are two variations here in the pseudo-training step: we can either train with the fake target predicted by the model, as on monolingual data, or train with the real target paired with the noisy source. We denote them as "parallel + fake target" and "parallel + real target" respectively, and report the performance on WMT100K in Table 6. When applying the same noise process to parallel data using the fake target, the smoothing effect is not significant, since it is restricted to the limited parallel data space, producing a marginal improvement over the baseline (+0.4 BLEU). As a comparison, 100K monolingual sentences produce +1.0 BLEU, and the effect is enhanced when we increase the monolingual data to 3.8M, which leads to +3.7 BLEU. Interestingly, pairing the noisy source with the real target results in much worse performance than the baseline, which implies that the use of the fake target predicted by the model (i.e., distillation) instead of the real target is important for the success of noisy self-training, at least when the parallel data size is small.
Intuitively, the distilled fake target is simpler and relatively easy for the model to fit, but the real target paired with the noisy source makes learning even harder than training with the real target and real source, which might lead to a bad starting point for fine-tuning. This issue would be particularly severe when the parallel data size is small, in which case the model would have difficulty fitting the real target even with a clean source. 6 RELATED WORK Self-training belongs to a broader class of "pseudo-label" semi-supervised learning approaches. These approaches all learn from pseudo labels assigned to unlabeled data, with different methods for how to assign such labels. For instance, co-training learns models on two independent feature sets of the same data and assigns confident labels to unlabeled data from one of the models; co-training reduces modeling bias by taking into account confidence scores from two models. In the same spirit, democratic co-training or tri-training trains multiple models with different configurations on the same feature set, and a subset of the models act as teachers for the others. Another line of more recent work perturbs the input or feature space of the student's inputs as a data augmentation technique. Self-training with dropout, or noisy self-training, can be viewed as an instantiation of this. These approaches have been very successful on classification tasks, provided that a reasonable fraction of the predictions on unlabeled data (at least the high-confidence ones) are correct, but their effect on language generation tasks is largely unknown and poorly understood, because the pseudo language targets are often very different from the ground-truth labels. Recent work on sequence generation employs auxiliary decoders when processing unlabeled data, overall showing rather limited gains. In this paper we revisit self-training for neural sequence generation and show that it can be an effective method to improve generalization, particularly when labeled data is scarce. Through a comprehensive ablation analysis and synthetic experiments, we identify that the noise injected during self-training plays a critical role in its success due to its smoothing effect. To encourage this behaviour, we explicitly perturb the input to obtain a new variant of self-training, dubbed noisy self-training. Experiments on machine translation and text summarization demonstrate the effectiveness of this approach in both low- and high-resource settings. For all experiments, we optimize with Adam using β1 = 0.9, β2 = 0.98, ε = 1e−8. All implementations are based on fairseq, and we basically use the same learning rate schedule and label smoothing as in the fairseq examples to train the transformers. Except for the toy sum dataset, which we run on a single GPU with a batch size of 32 examples, all other experiments are run on 8 GPUs with an effective batch size of 33K tokens. All experiments are validated with the loss on the validation set. For self-training or noisy self-training, the pseudo-training step takes 300K synchronous updates while the fine-tuning step takes 100K steps. We use the downloading and preprocessing scripts in fairseq to obtain the WMT 2014 English-German dataset, which hold out a small fraction of the original training data as the validation set. The model architecture for the toy sum dataset is a single-layer LSTM with word embedding size 32, hidden state size 32, and dropout rate 0.3.
The model architecture of the WMT10K baseline in Figure 3(a) is a single-layer LSTM with word embedding size 256, hidden state size 256, and dropout rate 0.3. We provide more details and evidence to show that our baseline model on the WMT100K dataset is trained properly. In all experiments on the WMT100K dataset, including the baseline and self-training ones, we use the Adam optimizer with learning rate 0.0005, which is the default in fairseq. We do not use early stopping during training but select the best model in terms of the validation loss. We train with 30K update steps for the baseline model and (300K pseudo-training + 100K fine-tuning) update steps for self-training. In both cases we verified that the models are trained sufficiently to fully converge by observing the increase of the validation loss. Figure 4 shows the validation curve of the baseline model. Note that the model starts to overfit, and we select the model checkpoint at the lowest point. We also varied the learning rate hyper-parameter over 0.0002, 0.0005, and 0.001, which produced BLEU scores of 15.0, 15.6 (reported in the paper), and 15.5, respectively; our baseline model in the previous sections obtained the best performance. In the paper we perform self-training with separate pseudo-training and fine-tuning steps and always use all monolingual data. However, there are other variants, such as joint training or iteratively adding confident examples. Here we compare these variants on the WMT100K dataset; noisy self-training uses paraphrasing as the perturbation function. For joint training, we tune the upsampling ratio of parallel data just as in back-translation. We perform noisy self-training for 3 iterations, and for the filtering experiments we iteratively use the most confident 2.5M, 3M, and 3.8M monolingual sentences, respectively, in these 3 iterations. Table 7 shows that the filtering process helps joint training, which still underperforms the separate-training methods by over 1.5 BLEU points. Within separate training, filtering produces results comparable to using all data. Since separate training with all data is the simplest method and produces the best performance, we stick to this version in the paper. We additionally show the error heat maps of the entire data space on the toy sum dataset for the first two iterations. Here the model at the pseudo-training step is initialized with the model from the last iteration to clearly examine how the decodings change due to the injected noise. As shown in Figure 5, at each iteration the pseudo-training step smooths the space, and the fine-tuning step benefits from it and greatly reduces the errors.
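Relating to the synthetic noise perturbation used for noisy self-training above (random token dropping, blanking/masking, and local shuffling, with the word-blanking probability swept in §5.4), here is a rough, self-contained sketch. The parameter names and the exact noising recipe follow the general unsupervised-MT style of noising rather than the paper's released code, so treat them as illustrative assumptions.

import random

def synthetic_noise(tokens, p_drop=0.1, p_blank=0.2, max_shuffle_dist=3, blank="<blank>"):
    """Perturb a token sequence: drop tokens, replace tokens with a blank symbol,
    and locally shuffle word order within a bounded distance."""
    kept = [t for t in tokens if random.random() > p_drop]                 # random word dropping
    blanked = [blank if random.random() < p_blank else t for t in kept]    # random word blanking
    # local shuffle: sort by (index + small uniform jitter) so tokens move at most a few positions
    keys = [i + random.uniform(0, max_shuffle_dist) for i in range(len(blanked))]
    return [t for _, t in sorted(zip(keys, blanked))]

# example usage: g(x) applied to the source before pseudo-training, as in Eq. 3
noisy_source = synthetic_noise("we train with the base transformer".split())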
We revisit self-training as a semi-supervised learning method for neural sequence generation problems, and show that self-training can be quite successful with injected noise.
1,381
scitldr
We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them. Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism. The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule. We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution. Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation. Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets. Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train. Recent work in machine learning has examined a variety of novel ways to augment neural networks with fast memory stores. However, the basic problem of how to most efficiently use memory remains an open question. For instance, the slot-based external memory in models like Differentiable Neural Computers (DNCs BID10) often collapses reading and writing into single slots, even though the neural network controller can in principle learn more distributed strategies. As as , information is not shared across memory slots, and additional slots have to be recruited for new inputs, even if they are redundant with existing memories. Similarly, Matching Networks BID25 BID4 and the Neural Episodic Controller BID21 directly store embeddings of data. They therefore require the volume of memory to increase with the number of samples stored. In contrast, the Neural Statistician BID7 summarises a dataset by averaging over their embeddings. The ing "statistics" are conveniently small, but a large amount of information may be dropped by the averaging process, which is at odds with the desire to have large memories that can capture details of past experience. Historically developed associative memory architectures provide insight into how to design efficient memory structures that store data in overlapping representations. For example, the Hopfield Net BID14 pioneered the idea of storing patterns in low-energy states in a dynamic system. This type of model is robust, but its capacity is limited by the number of recurrent connections, which is in turn constrained by the dimensionality of the input patterns. The Boltzmann Machine BID1 lifts this constraint by introducing latent variables, but at the cost of requiring slow reading and writing mechanisms (i.e. via Gibbs sampling). This issue is resolved by Kanerva's sparse distributed memory model BID15, which affords fast reads and writes and dissociates capacity from the dimensionality of input by introducing addressing into a distributed memory store whose size is independent of the dimension of the data 1.In this paper, we present a conditional generative memory model inspired by Kanerva's sparse distributed memory. We generalise Kanerva's original model through learnable addresses and reparametrised latent variables BID23 BID17 BID5. We solve the challenging problem of learning an effective memory writing operation by exploiting the analytic tractability of our memory model -we derive a Bayesian memory update rule that optimally trades-off preserving old content and storing new content. 
The ing hierarchical generative model has a memory dependent prior that quickly adapts to new data, providing top-down knowledge in addition to bottom-up perception from the encoder to form the latent code representing data. As a generative model, our proposal provides a novel way of enriching the often over-simplified priors in VAE-like models BID22 ) through a adaptive memory. As a memory system, our proposal offers an effective way to learn online distributed writing which provides effective compression and storage of complex data. Our memory architecture can be viewed as an extension of the variational autoencoder (VAE) BID23 BID17, where the prior is derived from an adaptive memory store. A VAE has an observable variable x and a latent variable z. Its generative model is specified by a prior distribution p θ (z) and the conditional distribution p θ (x|z). The intractable posterior p θ (z|x) is approximated by a parameterised inference model q φ (z|x). Throughout this paper, we use θ to represent the generative model's parameters, and φ to represent the inference model's parameters. All parameterised distributions are implemented as multivariate Gaussian distributions with diagonal covariance matrices, whose means and variances are outputs from neural networks as in BID23 BID17.We assume a dataset with independently and identically distributed (iid) samples D = {x 1, . . ., x n, . . ., x N}. The objective of training a VAE is to maximise its log-likelihood DISPLAYFORM0. This can be achieved by jointly optimising θ and φ for a variational lower-bound of the likelihood (omitting the expectation over all x for simplicity): DISPLAYFORM1 where the first term can be interpreted as the negative reconstruction loss for reconstructing x using its approximated posterior sample from q φ (z|x), and the second term as a regulariser that encourages the approximated posterior to be near the prior of z. To introduce our model, we use the concept of an exchangeable episode: X = {x 1, . . ., x t, . . ., x T} ⊂ D is a subset of the entire dataset whose order does not matter. The objective of training is the expected conditional log-likelihood BID5, DISPLAYFORM0 The equality utilises the conditional independence of x t given the memory M, which is equivalent to the assumption of an exchangeable episode X BID2. We factorise the joint distribution of p(X, M) into the marginal distribution p(X) and the posterior p(M |X), so that computing p(M |X) can be naturally interpreted as writing X into the memory. We propose this scenario as a general and principled way of formulating memory-based generative models, since J is directly related to the mutual information DISPLAYFORM1 As the entropy of the data H(X) is a constant, maximising J is equivalent to maximising I(X; M), the mutual information between the memory and the episode to store. We write the collection of latent variables corresponding to the observed episode X as Y = {y 1, . . ., y t, . . ., y T} and Z = {z 1, . . ., z t, . . ., z T}. As illustrated in FIG0, the joint distribution of the generative model can be factorised as DISPLAYFORM0 The first equality uses the conditional independence of z t, y t, x t given M, shown by the "plates" in FIG0. The memory M is a K × C random matrix with the matrix variate Gaussian distribution BID11: where R is a K × C matrix as the mean of M, U is a K × K matrix that provides the covariance between rows of M, and V is a C × C matrix providing covariances between columns of M. 
This distribution is equivalent to the multivariate Gaussian distribution of vectorised M: DISPLAYFORM1 DISPLAYFORM2, where vec (·) is the vectorisation operator and ⊗ denotes the Kronecker product. We assume independence between the columns but not the rows of M, by fixing V to be the identity matrix I C and allow the full degree of freedom for U. Since our experiments suggest the covariance between rows is useful for coordinating memory access, this setting balances simplicity and performance FIG0 ).Accompanying M are the addresses A, a K × S real-value matrix that is randomly initialised and is optimised through back-propagation. To avoid degeneracy, rows of A are normalised to have L2-norms of 1. The addressing variable y t is used to compute the weights controlling memory access. As in VAEs, the prior p θ (y t) is an isotropic Gaussian distribution N. A learned projection b t = f (y t) then transforms y t into a S × 1 key vector. The K × 1 vector w t, as weights across the rows of M, is computed via the product: DISPLAYFORM3 The projection f is implemented as a multi-layer perception (MLP), which transforms the distribution of y t, as well as w t, to potentially non-Gaussian distributions that may better suit addressing. The code z t is a learned representation that generates samples of x t through the parametrised conditional distribution p θ (x t |z t). This distribution is tied for all t ∈ {1 . . . T}. Importantly, instead of the isotropic Gaussian prior, z t has a memory dependent prior: DISPLAYFORM4 whose mean is a linear combination of memory rows, with the noise covariance matrix fixed as an identity matrix by setting σ 2 = 1. This prior in a much richer marginal distribution, because of its dependence on memory and the addressing variable DISPLAYFORM5 In our hierarchical model, M is a global latent variable for an episode that captures statistics of the entire episode BID4 BID7, while the local latent variables y t and z t capture local statistics for data x t within an episode. To generate an episode of length T, we first sample M once, then sample y t, z t, and x t sequentially for each of the T samples.3.2 THE READING INFERENCE MODEL As illustrated in FIG0, the approximated posterior distribution is factorised using the conditional independence: DISPLAYFORM6 where q φ (y t |x t) is a parameterised approximate posterior distribution. The posterior distribution q φ (z t |x t, y t, M) refines the (conditional) prior distribution p θ (z t |y t, M) with additional evidence from x t. This parameterised posterior takes the concatenation of x t and the mean of p θ (z t |y t, M) (eq. 6) as input. The constant variance of p θ (z t |y t, M) is omitted. Similar to the generative model, q φ (y t |x t) is shared for all t ∈ {1 . . . T}. A central difficulty in updating memory is the trade-off between preserving old information and writing new information. It is well known that this trade-off can be balanced optimally through Bayes' rule. From the generative model perspective (eq. 2), it is natural to interpret memory writing as inference -computing the posterior distribution of memory p(M |X). This section considers both batch inference -directly computing p(M |X) and on-line inference -sequentially accumulating evidence from x 1,..., x T.Following FIG0, the approximated posterior distribution of memory can be written as DISPLAYFORM0 The last line uses one sample of y t, x t to approximate the intractable integral. 
The posterior of the addressing variable q φ (y t |x t) is the same as in section 3.2, and the posterior of code q φ (z t |x t) is a parameterised distribution. We use the short-hand DISPLAYFORM1 when Y, Z are sampled as described here. We abuse notation in this section and use Z = (z 1 ;, . . ., ; z T) as a T ×C matrix with all the observations in an episode, and W = (w 1 ; . . . ; w T) as a T × K matrix with all corresponding weights for addressing. Given the linear Gaussian model (eq. 6), the posterior of memory p θ (M |Y, Z) is analytically tractable, and its parameters R and U can be updated as follows: DISPLAYFORM2 where ∆ is the prediction error before updating the memory, Σ c is a T × K matrix providing the cross-covariance between Z and M, Σ ξ is a T × T diagonal matrix whose diagonal elements are the noise variance σ 2 and Σ z is a T × T matrix that encodes the covariance for z 1,..., z T. This update rule is derived from applying Bayes' rule to the linear Gaussian model (Appendix E). The prior parameters of p(M), R 0 and U 0 are trained through back-propagation. Therefore, the prior of M can learn the general structure of the entire dataset, while the posterior is left to adapt to features presented in a subset of data observed within a given episode. The main cost of the update rule comes from inverting Σ z, which has a complexity of O(T 3). One may reduce the per-step cost via on-line updating, by performing the update rule using one sample at a time -when X = x t, Σ z is a scalar which can be inverted trivially. According to Bayes' rule, updating using the entire episode at once is equivalent to performing the one-sample/on-line update iteratively for all observations in the episode. Similarly, one can perform intermediate updates using mini-batch with size between 1 and T.Another major cost in the update rule is the storage and multiplication of the memory's row-covariance matrix U, with the complexity of O(K 2). Although restricting this covariance to diagonal can reduce this cost to O(K), our experiments suggested this covariance is useful for coordinating memory accessing FIG0. Moreover, the cost of O(K 2) is usually small, since parameters of the model are dominated by the encoder and decoder. Nevertheless, a future direction is to investigating low-rank approximation of U that better balance cost and performance.3.4 TRAINING To train this model, we optimise a variational lower-bound of the conditional likelihood J (eq. 2), which can be derived in a fashion similar to standard VAEs: DISPLAYFORM3 To maximise this lower bound, we sample y t, z t from q φ (y t, z t |x t, M) to approximate the inner expectation. For computational efficiency, we use a mean-field approximation for the memoryusing the mean R in the place of memory samples (since directly sampling M requires expensive Cholesky decomposition of the non-diagonal matrix U). Alternatively, we can further exploit the analytical tractability of the Gaussian distribution to obtain distribution-based reading and writing operations (Appendix F).Inside the bracket, the first term is the usual VAE reconstruction error. The first KL-divergence penalises complex addresses, and the second term penalises deviation of the code z t from the memory-based prior. 
In this way, the memory learns useful representations that do not rely on complex addresses, and the bottom-up evidence only corrects top-down memory reading when necessary.3.5 ITERATIVE SAMPLING An important feature of Kanerva's sparse distributed memory is its iterative reading mechanism, by which output from the model is fed back as input for several iterations. Kanerva proved that the dynamics of iterative reading will decrease errors when the initial error is within a generous range, converging to a stored memory BID15. A similar iterative process is also available in our model, by repeatedly feeding-back the reconstructionx t. This Gibbs-like sampling follows the loop in FIG0. While we cannot prove convergence, in our experiments iterative reading reliably improves denoising and sampling. To understand this process, notice that knowledge about memory is helpful in reading, which suggests using q φ (y t |x t, M) instead of q φ (y t |x t) for addressing (section 3.2). Unfortunately, training a parameterised model with the whole matrix M as input can be prohibitively costly. Nevertheless, it is well-known in the coding literature that such intractable posteriors that usually arise in non-tree graphs (as in FIG0) can be approximated efficiently by loopy belief-propagation, as has been used in algorithms like Turbo coding BID8. Similarly, we believe iterative reading works in our model because q φ (y t |x t) models the local coupling between x t and y t well enough, so iterative sampling with the rest of the model is likely to converge to the true posterior q φ (y t |x t, M). Future research will seek to better understand this process. Details of our model implementation are described in Appendix C. We use straightforward encoder and decoder models in order to focus on evaluating the improvements provided by an adaptive memory. In particular, we use the same model architecture for all experiments with both Omniglot and CIFAR dataset, changing only the the number of filters in the convolutional layers, memory size, and code size. We always use the on-line version of the update rule (section 3.3). The Adam optimiser was used for all training and required minimal tuning for our model BID16. In all experiments, we report the value of variational lower bound (eq. 12) L divided by the length of episode T, so the per-sample value can be compared with the likelihood from existing models. We first used the Omniglot dataset to test our model. This dataset contains images of hand-written characters with 1623 different classes and 20 examples in each class BID18. This large variation creates challenges for models trying to capture the entire complex distribution. We use a 64 × 100 memory M, and a smaller 64 × 50 address matrix A. For simplicity, we always randomly sample 32 images from the entire training set to form an "episode", and ignore the class labels. This represents a worst case scenario since the images in an episode will tend to have relatively little redundant information for compression. We use a mini-batch size of 16, and optimise the variational lower-bound (eq. 12) using Adam with learning rate 1 × 10 −4.We also tested our model with the CIFAR dataset, in which each 32 × 32 × 3 real-valued colour image contains much more information than a binary omniglot pattern. Again, we discard all the label information and test our model in the unsupervised setting. 
To accommodate the increased complexity of CIFAR, we use convolutional coders with 32 features at each layer, use a code size of 200, and a 128 × 200 memory with 128 × 50 address matrix. All other settings are identical to experiments with Omniglot. We first use the 28 × 28 binary Omniglot from BID6 and follow the same split of 24,345 training and 8,070 test examples. We first compare the training process of our model with a baseline VAE model using the exact same encoder and decoder. Note that there is only a modest increase of parameters in the Kanerva Machine compared the VAE since the encoder and decoder dominates the model parameters. Figure 2: The negative variational lower bound (left), reconstruction loss (central), and KL-Divergence (right) during learning. The dip in the KL-divergence suggests that our model has learned to use the memory. Fig. 2 shows learning curves for our model along with those for the VAE trained on the Omniglot dataset. We plot 4 randomly initialised instances for each model. The training is stable and insensitive to initialisation. Fig. 2 (left) shows that our model reached a significantly lower negative variational lower-bound versus the VAE. Fig. 2 (central) and (right) further shows that the Kanerva Machine achieved better reconstruction and KL-divergence. In particular, the KL-divergence of our model "dips" sharply from about the 2000th step, implying our model learned to use the memory to induce a more informative prior. FIG0 confirms this: the KL-divergence for z t has collapsed to near zero, showing that the top-down prior from memory q φ (z t |y t, M) provides most of the information for the code. This rich prior is achieved at the cost of an additional KL-divergence for y t FIG0, right) which is still much lower than the KL-divergence for z t in a VAE. Similar training curves are observed for CIFAR training FIG0. BID9 also observed such KL-divergence dips with a memory model. They report that the reduction in KL-divergence, rather than the reduction in reconstruction loss, was particularly important for improving sample quality, which we also observed in our experiments with Omniglot and CIFAR.At the end of training, our VAE reached a negative log-likelihood (NLL) of ≤ 112.7 (the lower-bound of likelihood), which is worse than the state-of-the-art unconditioned generation that is achieved by rolling out 80 steps of a DRAW model (NLL of 95.5, BID22, but comparable to with IWAE training (NLL of 103.4, BID6 . In contrast, with the same encoder and decoders, the Kanerva Machine achieve conditional NLL of 68.3. It is not fair to directly compare our with unconditional generative models since our model has the advantage of its memory contents. Nevertheless, the dramatic improvement of NLL demonstrates the power of incorporating an adaptive memory into generative models. Fig. 3 (left) shows examples of reconstruction at the end of training; as a signature of our model, the weights were well distributed over the memory, illustrating that patterns written into the memory were superimposed on others.iterations Figure 3: Left: reconstruction of inputs and the weights used in reconstruction, where each bin represents the weight over one memory slot. Weights are widely distributed across memory slots. Right: denoising through iterative reading. In each panel: the first column shows the original pattern, the second column (in boxes) shows the corrupted pattern, and the following columns show the reconstruction after 1, 2 and 3 iterations. 
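The iterative reading used for the denoising results above (Section 3.5) can be sketched as a simple feedback loop. The callables and shapes below are placeholders standing in for the trained encoder, key MLP, decoder, learned addresses A and memory mean R, so this is an illustrative sketch rather than the actual implementation.

    from typing import Callable
    import numpy as np

    def iterative_read(
        x: np.ndarray,                                   # possibly corrupted observation
        encode_y: Callable[[np.ndarray], np.ndarray],    # mean of q(y | x)
        key: Callable[[np.ndarray], np.ndarray],         # MLP f mapping y to an S-dim key b
        decode: Callable[[np.ndarray], np.ndarray],      # decoder of p(x | z)
        A: np.ndarray,                                   # (K, S) learned addresses
        R: np.ndarray,                                   # (K, C) mean of the memory M
        iterations: int = 3,
    ) -> np.ndarray:
        """Feed the reconstruction back as the next query, as in Section 3.5."""
        for _ in range(iterations):
            y = encode_y(x)            # bottom-up addressing from the current estimate
            w = A @ key(y)             # (K,) weights over memory rows
            z = R.T @ w                # mean of the memory-based prior p(z | y, M)
            x = decode(z)              # top-down reconstruction becomes the next input
        return x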
We generalise "one-shot" generation from a single image BID22, or a few sample images from a limited set of classes BID7 BID4, to a batch of images with many classes and samples. To better illustrate how samples are shaped by the conditioning data, in this section we use the same trained models, but test them using episodes with samples from only 2, 4 or 12 classes (omniglot characters) 2. FIG1 compares samples from the VAE and the Kanerva Machine. While initial samples from our model (left most columns) are visually about as good as those from the VAE, the sample quality improved in consecutive iterations and the final samples clearly reflects the statistics of the conditioning patterns. Most samples did not change much after the 6th iteration, suggesting the iterative sampling had converged. Similar conditional samples from CIFAR are shown in Fig. 5. Notice that this approach, however, does not apply to VAEs, since VAEs do not have the structure we discussed in section 3.5. This is illustrated in FIG3 by feeding back output from VAEs as input to the next iteration, which shows the sample quality did not improve after iterations. Figure 5: Comparison of samples from CIFAR. The 24 conditioning images (top-right) are randomly sampled from the entire CIFAR dataset, so they contains a mix of many classes. Samples from the matched VAE are blurred and lack meaningful local structure. On the other hand, samples from the Kanerva Machine have clear local structures, despite using the same encoder and decoder as the VAE. The 5 columns show samples after 0, 2, 4, 6, and 8 iterations. To further examine generalisation, we input images corrupted by randomly positioned 12 × 12 blocks, and tested whether our model can recover the original image through iterative reading. Our model was not trained on this task, but Fig. 3 (right) shows that, over several iterations, input images can be recovered. Due to high ambiguity, some cases (e.g., the second and last) ended up producing incorrect but still reasonable patterns. The structure of our model affords interpretability of internal representations in memory. Since representations of data x are obtained from a linear combination of memory slots (eq. 6), we expect linear interpolations between address weights to be meaningful. We examined interpolations by computing 2 weight vectors from two random input images, and then linearly interpolating between these two vectors. These vectors were then used to read z t from memory (eq. 6), which is then decoded to produce the interpolated images. FIG2 in Appendix A shows that interpolating between these access weights indeed produces meaningful and smoothly changing images. Figure 6: Left: the training curves of DNC and Kanerva machine both shows 6 instances with the best hyperparameter configuration for each model found via grid search. DNCs were more sensitive to random initilisation, slower, and plateaued with larger error. Right: the test variational lower-bounds of a DNC (dashed lines) and a Kanerva Machine as a function of different episode sizes and different sample classes. This section compares our model with the Differentiable Neural Computer (DNC, BID10, and a variant of it, the Least Recently Used Architecture (LRUA, BID24 . We test these using the same episode storage and retrieval task as in previous experiments with Omniglot data. For a fair comparison, we fit the DNC models into the same framework, as detailed in Appendix D. Fig. 
6 (left) illustrates the process of training the DNC and the Kanerva Machine. The LRUA did not passed the loss level of 150, so we did not include it in the figure. The DNC reached a test loss close to 100, but was very sensitive to hyper-parameters and random initialisation: only 2 out of 6 instances with the best hyper-parameter configuration (batch size = 16, learning rate= 3 × 10 −4) found by grid search reached this level. On the other hand, the Kanerva Machine was robust to these hyper-parameters, and worked well with batch sizes between 8 and 64, and learning rates between 3 × 10 −5 and 3 × 10 −4. The Kanerva Machine trained fastest with batch size 16 and learning rate 1 × 10 −4 and eventually converged below 70 test loss with all tested configurations. Therefore, the Kanerva Machine is significantly easier to train, thanks to principled reading and writing operations that do not depend on any model parameter. We next analysed the capacity of our model versus the DNC by examining the lower bound of then likelihood when storing and then retrieving patterns from increasingly large episodes. As above, these models are still trained with episodes containing 32 samples, but are tested on much larger episodes. We tested our model with episodes containing different numbers of classes and thus varying amounts of redundancy. Fig. 6 (right) shows both models are able to exploit this redundancy, since episodes with fewer classes (but the same number of images) have lower reconstruction losses. Overall, the Kanerva Machine generalises well to larger episodes, and maintained a clear advantage over the DNC (as measured by the variational lower-bound). In this paper, we present the Kanerva Machine, a novel memory model that combines slow-learning neural networks and a fast-adapting linear Gaussian model as memory. While our architecture is inspired by Kanerva's seminal model, we have removed the assumption of a uniform data distribution by training a generative model that flexibly learns the observed data distribution. By implementing memory as a generative model, we can retrieve unseen patterns from the memory through sampling. This phenomenon is consistent with the observation of constructive memory neuroscience experiments BID12.Probabilistic interpretations of Kanerva's model have been developed in previous works: explored a conditional probability interpretation of Kanerva's sparse distributed memory, and generalised binary data to discrete data with more than two values. BID0 provides an approximate Bayesian interpretation based on importance sampling. To our knowledge, our model is the first to generalise Kanerva's memory model to continuous, non-uniform data while maintaining an analytic form of Bayesian inference. Moreover, we demonstrate its potential in modern machine learning through integration with deep neural networks. Other models have combined memory mechanisms with neural networks in a generative setting. For example, BID19 used attention to retrieve information from a set of trainable parameters in a memory matrix. Notably, the memory in this model is not updated following learning. As a , the memory does not quickly adapt to new data as in our model, and so is not suited to the kind of episode-based learning explored here. BID5 used discrete (categorical) random variables to address an external memory, and train the addressing mechanism, together with the rest of the generative model, though a variational objective. 
However, the memory in their model is populated by storing images in the form of raw pixels. Although this provides a mechanism for fast adaptation, the cost of storing raw pixels may be overwhelming for large data sets. Our model learns to to store information in a compressed form by taking advantage of statistical regularity in the images via the encoder at the perceptual level, the learned addresses, and Bayes' rule for memory updates. Central to an effective memory model is the efficient updating of memory. While various approaches to learning such updating mechanisms have been examined recently BID10 BID7 BID24, we designed our model to employ an exact Bayes' update-rule without compromising the flexibility and expressive power of neural networks. The compelling performance of our model and its scalable architecture suggests combining classical statistical models and neural networks may be a promising direction for novel memory models in machine learning. This section reviews Kanerva's sparse distributed memory BID15. For consistency with the rest of this paper, many of the notations are different from Kanerva's description. In contrast to many recent models, Kanerva's memory model is characterised by its distributed reading and writing operations. The model has two main components: a fixed table of addresses A pointing to a modifiable memory M. Both A and M have the same size of K × D, where K is the number of addresses that and D is the input dimensionality. Kanerva assumes all the inputs are uniform random vectors y ∈ {−1, 1} D. Therefore, the fixed addresses A i are uniformly randomly sampled from {−1, 1} D to reflect the input statistics. An input y is compared with each address A k in A through the Hamming distance. For binary vectors a, b ∈ {−1, 1} D, the Hamming distance can be written as h(a, b) = 1 2 (D − a · b) where · represents inner product between two vectors. An address k is selected when the hamming distance between x and A k is smaller than a threshold τ, so the selection can be summarised by the binary weight vector: DISPLAYFORM0 During writing, a pattern x is stored into M by adding M k ← M k + w k x. For reading, the memory contents pointed to by all the selected addresses are summed together to pass a threshold at 0 to produce a read out:x DISPLAYFORM1 This reading process can be iterated several times by repeatedly feeding-back the outputx as input. It has been shown analytically by Kanerva that when both K and D are large enough, a small portion of the addresses will always be selected, thus the operations are sparse and distributed. Although an address' content may be over-written many times, the stored vectors can be retrieved correctly. Moreover, Kanerva proved that even a significantly corrupted query can be discovered from the memory through iterative reading. However, the application of Kanerva's model is restricted by the assumption of a uniform and binary data distribution, on which Kanerva's analyses and bounds of performance rely BID15. Unfortunately, this assumption is rarely true in practice, since real-world data typically lie on low-dimensional manifolds, and binary representation of data is less efficient in high-level neural network implementations that are heavily optimised for floating-point numbers. C MODEL DETAILS FIG4 shows the architecture of our model compared with a standard VAE. For all experiments, we use a convolutional encoder to convert input images into 2C embedding vectors e(x t), where C is the code size (dimension of z t). 
The convolutional encoder has 3 consecutive blocks, where each block is a convolutional layer with 4 × 4 filter with stride 2, which reduces the input dimension, followed by a basic ResNet block without bottleneck BID13. All the convolutional layers have the same number of filters, which is either 16 or 32 depending on the dataset. The output from the blocks is flattened and linearly projected to a 2C dimensional vector. The convolutional decoder mirrors this structure with transposed convolutional layers. All the "MLP" boxes in FIG4 are 2-layer multi-layer perceptron with ReLU non-linearity in between. We found that adding noise to the input into q φ (y t |x t) helped stabilise training, possibly by restricting the information in the addresses. The exact magnitude of the added noise matters little, and we use Gaussian noise with zero mean and standard deviation of 0.2 for all experiments. We use Bernoulli likelihood function for Omniglot dataset, and Gaussian likelihood function for CIFAR. To avoid Gaussian likelihood collapsing, we added uniform noise U(0, 1 256) to CIFAR images during training. For a fair comparison, we wrap the differentiable neural computer (DNC) with the same interface as the Kanerva memory so that it can simply replace the memory M in FIG4. More specifically, the DNC receives the addressing variable y t with the same size and sampled the same ways as described in the main text in reading and writing stages. During writing it also receives z t sampled from q φ (z t |x t) as input, by concatenating y t and z t together as input into the memory controller. Since DNCs do not have separated reading and writing stages, we separated this two process in our experiments: during writing, we discard the read-out from the DNC, and only keep its state as the memory; during reading, we discard the state at each step so it cannot be used for storing new information. In addition, we use a 2-layer MLP with 200 hidden neurons and ReLU nonlinearity as the controller instead of the commonly used LSTM to avoid the recurrent state being used as memory and interference with DNC's external memory. Another issue with off-the-shelf DNC BID10 BID24 is that controllers may generate output bypassing the memory, which can be particularly confusing in our auto-encoding setting by simply ignoring the memory and functioning as a skip connection. We avoid this situation by removing this controller output and ensure that the DNC only reads-out from its memory. Further, to focus on the memory performance, we remove Figure 10: Covariance between memory rows is important. The two curves shows the test loss (negative variational lower bound) as a function of iterations. Four models using full K × K covariance matrix U are shown by red curves and four models using diagonal covariance matrix are shown in blue. All other settings for these 8 models are the same (as described in section 4). These 8 models are trained on machines with similar setup. The models using full covariance matrices were slightly slower per-iteration, but the test loss decreased far more quickly.the bottom-up stream in our model that compensates for the memory. This means directly sampling z t from p θ (z t |y t, M), instead of p θ (z t |x t, y t, M), for the decoder p θ (x t |z t), forcing the model to reconstruct solely using read-outs from the memory. The negative variational lower bound, reconstruction loss, and total KL-divergence during CIFAR training. 
Although the difference between the lower bound objective is smaller than that during Omniglot training, the general patterns of these curves are similar to those in Fig. 2. The relatively small difference in KL-divergence significantly influences sample quality. Notice at the time of our submission, the training is continuing and the advantage of the Kanerva Machine over the VAE is increasing. Eq. 6 defines a linear Gaussian model. Using notations in the main paper, can write the joint distribution p(vec (Z), vec(M)) = N (vec (Z), vec(M); µ j, Σ j ), where DISPLAYFORM0 DISPLAYFORM1 We can then use the conditional formula for the Gaussian to derive the posterior distribution p(vec (M) |vec (Z)) = N (vec (M); µ p, Σ p ), using the property Kronecker product: DISPLAYFORM2 From properties of matrix variate Gaussian distribution, the above two equations can be re-arranged to the update rule in eq. 9 to 11.F DISTRIBUTION-BASED READING AND WRITING While the model we described in this paper works well using samples from q φ (z t |x t) for writing to the memory (section 3.3) and the mean-field approximation during reading (section 3.4), here we describe an alternative that fully exploits the analytic tractability of the Gaussian distribution. To simplify notation, we use ψ = {R, U, V} for all parameters of the memory.
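A minimal numpy sketch of the Bayesian memory update from Section 3.3 (eqs. 9-11) is given below, written as the standard linear-Gaussian posterior update that the derivation in Appendix E describes: Δ is the prediction error, Σ_c = WU the cross-covariance, and Σ_z = WUWᵀ + σ²I the covariance of Z. Variable names and the exact arrangement are ours, so treat it as one consistent reading rather than the reference implementation.

    import numpy as np

    def memory_posterior_update(R, U, W, Z, sigma2=1.0):
        """Write an episode into memory by conditioning the linear Gaussian model.

        R: (K, C) memory mean, U: (K, K) row covariance of M,
        W: (T, K) addressing weights, Z: (T, C) codes sampled from q(z | x).
        """
        T = W.shape[0]
        delta = Z - W @ R                            # prediction error before the update
        sigma_c = W @ U                              # (T, K) cross-covariance between Z and M
        sigma_z = W @ U @ W.T + sigma2 * np.eye(T)   # (T, T) covariance of Z (noise on diagonal)
        R_new = R + sigma_c.T @ np.linalg.solve(sigma_z, delta)
        U_new = U - sigma_c.T @ np.linalg.solve(sigma_z, sigma_c)
        return R_new, U_new

With T = 1 the linear solve reduces to a scalar division, matching the on-line variant of the update discussed in Section 3.3.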
A generative memory model that combines slow-learning neural networks and a fast-adapting linear Gaussian model as memory.
1,382
scitldr
Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task. Despite the success of deep neural networks in machine learning, they are often found to be highly overparametrized making them computationally expensive with excessive memory requirements. Pruning such large networks with minimal loss in performance is appealing for real-time applications, especially on resource-limited devices. In addition, compressed neural networks utilize the model capacity efficiently, and this interpretation can be used to derive better generalization bounds for neural networks BID0 ).In network pruning, given a large reference neural network, the goal is to learn a much smaller subnetwork that mimics the performance of the reference network. The majority of existing methods in the literature attempt to find a subset of weights from the pretrained reference network either based on a saliency criterion BID29; BID22; BID11 ) or utilizing sparsity enforcing penalties BID3; BID2 ). Unfortunately, since pruning is included as a part of an iterative optimization procedure, all these methods require many expensive prune -retrain cycles and heuristic design choices with additional hyperparameters, making them non-trivial to extend to new architectures and tasks. In this work, we introduce a saliency criterion that identifies connections in the network that are important to the given task in a data-dependent way before training. Specifically, we discover important connections based on their influence on the loss function at a variance scaling initialization, which we call connection sensitivity. Given the desired sparsity level, redundant connections are pruned once prior to training (i.e., single-shot), and then the sparse pruned network is trained in the standard way. Our approach has several attractive properties:• Simplicity. Since the network is pruned once prior to training, there is no need for pretraining and complex pruning schedules. Our method has no additional hyperparameters and once pruned, training of the sparse network is performed in the standard way.• Versatility. Since our saliency criterion chooses structurally important connections, it is robust to architecture variations. Therefore our method can be applied to various architectures including convolutional, residual and recurrent networks with no modifications.• Interpretability. 
Our method determines important connections with a mini-batch of data at single-shot. By varying this mini-batch used for pruning, our method enables us to verify that the retained connections are indeed essential for the given task. We evaluate our method on MNIST, CIFAR-10, and Tiny-ImageNet classification datasets with widely varying architectures. Despite being the simplest, our method obtains extremely sparse networks with virtually the same accuracy as the existing baselines across all tested architectures. Furthermore, we investigate the relevance of the retained connections as well as the effect of the network initialization and the dataset on the saliency score. Classical methods. Essentially, early works in network pruning can be categorized into two groups BID34 ): 1) those that utilize sparsity enforcing penalties; and 2) methods that prune the network based on some saliency criterion. The methods from the former category BID3; BID38; BID15 ) augment the loss function with some sparsity enforcing penalty terms (e.g., L 0 or L 1 norm), so that back-propagation effectively penalizes the magnitude of the weights during training. Then weights below a certain threshold may be removed. On the other hand, classical saliency criteria include the sensitivity of the loss with respect to the neurons BID29 ) or the weights BID17 ) and Hessian of the loss with respect to the weights BID22; BID12 ). Since these criteria are heavily dependent on the scale of the weights and are designed to be incorporated within the learning process, these methods are prohibitively slow requiring many iterations of pruning and learning steps. Our approach identifies redundant weights from an architectural point of view and prunes them once at the beginning before training. Modern advances. In recent years, the increased space and time complexities as well as the risk of overfitting in deep neural networks prompted a surge of further investigation in network pruning. While Hessian based approaches employ the diagonal approximation due to its computational simplicity, impressive (i.e., extreme sparsity without loss in accuracy) are achieved using magnitude of the weights as the criterion BID11 ). This made them the de facto standard method for network pruning and led to various implementations BID9 BID2 ). The magnitude criterion is also extended to recurrent neural networks BID30 ), yet with heavily tuned hyperparameter setting. Unlike our approach, the main drawbacks of magnitude based approaches are the reliance on pretraining and the expensive pruneretrain cycles. Furthermore, since pruning and learning steps are intertwined, they often require highly heuristic design choices which make them non-trivial to be extended to new architectures and different tasks. Meanwhile, Bayesian methods are also applied to network pruning BID37; BID27 ) where the former extends the soft weight sharing in BID32 to obtain a sparse and compressed network, and the latter uses variational inference to learn the dropout rate which can then be used to prune the network. Unlike the above methods, our approach is simple and easily adaptable to any given architecture or task without modifying the pruning procedure. Network compression in general. 
Apart from weight pruning, there are approaches focused on structured simplification such as pruning filters (BID24; BID28), structured sparsity with regularizers (BID39), low-rank approximation (BID16), matrix and tensor factorization (BID31), and sparsification using expander graphs (BID33) or Erdős-Rényi random graphs (BID26). In addition, there is a large body of work on compressing the representation of weights. A non-exhaustive list includes quantization (BID7), reduced precision (BID10) and binary weights (BID14). In this work, we focus on weight pruning that is free from structural constraints and amenable to further compression schemes. The main hypothesis behind the neural network pruning literature is that neural networks are usually overparametrized, and comparable performance can be obtained by a much smaller network (BID34) while improving generalization (BID0). To this end, the objective is to learn a sparse network while maintaining the accuracy of the standard reference network. Let us first formulate neural network pruning as an optimization problem. Given a dataset D = {(x_i, y_i)}_{i=1}^n and a desired sparsity level κ (i.e., the number of non-zero weights), neural network pruning can be written as the following constrained optimization problem:

min_w L(w; D) = min_w (1/n) Σ_{i=1}^n ℓ(w; (x_i, y_i)),   (1)
s.t.  w ∈ R^m,  ‖w‖_0 ≤ κ.

Here, ℓ(·) is the standard loss function (e.g., cross-entropy loss), w is the set of parameters of the neural network, m is the total number of parameters, and ‖·‖_0 is the standard L0 norm. The conventional approach to optimizing the above problem is to add sparsity enforcing penalty terms (BID3; BID38; BID15). Recently, Carreira-Perpiñán & Idelbayev attempt to minimize the above constrained optimization problem using a stochastic version of projected gradient descent (where the projection is accomplished by pruning). However, these methods often turn out to be inferior to saliency based methods in terms of the resulting sparsity and require heavily tuned hyperparameter settings to obtain comparable results. On the other hand, saliency based methods treat the above problem as selectively removing redundant parameters (or connections) in the neural network. In order to do so, one has to come up with a good criterion to identify such redundant connections. Popular criteria include the magnitude of the weights, i.e., weights below a certain threshold are redundant (BID11; BID9), and the Hessian of the loss with respect to the weights, i.e., the higher the value of the Hessian, the higher the importance of the parameters (BID22; BID12), defined as follows:

s_j = |w_j| for magnitude based criteria, or s_j = (w_j^2 H_jj) / 2 for Hessian based criteria.   (2)

Here, for connection j, s_j is the saliency score, w_j is the weight, and H_jj is the value of the Hessian matrix, where the Hessian H = ∂²L/∂w² ∈ R^{m×m}. Considering Hessian based methods, the Hessian matrix is neither diagonal nor positive definite in general, can be approximated at best, and is intractable to compute for large networks. Despite being popular, both of these criteria depend on the scale of the weights and in turn require pretraining, and they are very sensitive to the architectural choices. For instance, different normalization layers affect the scale of the weights in different ways, and this would non-trivially affect the saliency score. Furthermore, pruning and optimization steps are alternated many times throughout training, resulting in highly expensive prune-retrain cycles. Such an exorbitant requirement hinders the use of pruning methods in large-scale applications and raises questions about the credibility of the existing pruning criteria.
In this work, we design a criterion which directly measures the connection importance in a data-dependent manner. This alleviates the dependency on the weights and enables us to prune the network once at the beginning, after which training can be performed on the sparse pruned network. Therefore, our method eliminates the need for the expensive prune-retrain cycles, and in theory it can be an order of magnitude faster than standard neural network training as it can be implemented using software libraries that support sparse matrix computations. Given a neural network and a dataset, our goal is to design a method that can selectively prune redundant connections for the given task in a data-dependent way even before training. To this end, we first introduce a criterion to identify important connections and then discuss its benefits. Since we intend to measure the importance (or sensitivity) of each connection independently of its weight, we introduce auxiliary indicator variables c ∈ {0, 1}^m representing the connectivity of the parameters w. Now, given the sparsity level κ, Equation 1 can be correspondingly modified as:

min_{c,w} L(c ⊙ w; D) = min_{c,w} (1/n) Σ_{i=1}^n ℓ(c ⊙ w; (x_i, y_i)),   (3)
s.t.  w ∈ R^m,  c ∈ {0, 1}^m,  ‖c‖_0 ≤ κ,

where ⊙ denotes the Hadamard product. Compared to Equation 1, we have doubled the number of learnable parameters in the network, and directly optimizing the above problem is even more difficult. However, the idea here is that since we have separated the weight of the connection (w) from whether the connection is present or not (c), we may be able to determine the importance of each connection by measuring its effect on the loss function. For instance, the value of c_j indicates whether connection j is active (c_j = 1) in the network or pruned (c_j = 0). Therefore, to measure the effect of connection j on the loss, one can try to measure the difference in loss when c_j = 1 and c_j = 0, keeping everything else constant. Precisely, the effect of removing connection j can be measured by

ΔL_j(w; D) = L(1 ⊙ w; D) − L((1 − e_j) ⊙ w; D),   (4)

where e_j is the indicator vector of element j (i.e., zeros everywhere except at index j, where it is one) and 1 is the vector of ones of dimension m. Note that computing ΔL_j for each j ∈ {1 . . . m} is prohibitively expensive as it requires m + 1 (usually in the order of millions) forward passes over the dataset. In fact, since c is binary, L is not differentiable with respect to c, and it is easy to see that ΔL_j attempts to measure the influence of connection j on the loss function in this discrete setting. Therefore, by relaxing the binary constraint on the indicator variables c, ΔL_j can be approximated by the derivative of L with respect to c_j, which we denote g_j(w; D). Hence, the effect of connection j on the loss can be written as:

ΔL_j(w; D) ≈ g_j(w; D) = ∂L(c ⊙ w; D) / ∂c_j |_{c=1}.   (5)

In fact, ∂L/∂c_j is an infinitesimal version of ΔL_j that measures the rate of change of L with respect to an infinitesimal change in c_j from 1 → 1 − δ. This can be computed efficiently in one forward-backward pass using automatic differentiation, for all j at once. Notice that this formulation can be viewed as perturbing the weight w_j by a multiplicative factor δ and measuring the change in loss. This approximation is similar in spirit to BID19, where they measure the influence of a datapoint on the loss function. Here we measure the influence of connections.
Furthermore, ∂L/∂c_j is not to be confused with the gradient with respect to the weights (∂L/∂w_j), where the change in loss is measured with respect to an additive change in weight w_j. Notably, our interest is to discover important (or sensitive) connections in the architecture, so that we can prune unimportant ones in single-shot, disentangling the pruning process from the iterative optimization cycles. To this end, we take the magnitude of the derivatives g_j as the saliency criterion. Note that if the magnitude of the derivative is high (regardless of the sign), it essentially means that the connection c_j has a considerable effect on the loss (either positive or negative), and it has to be preserved to allow learning on w_j. Based on this hypothesis, we define connection sensitivity as the normalized magnitude of the derivatives:

s_j = |g_j(w; D)| / Σ_{k=1}^m |g_k(w; D)|.   (6)

Once the sensitivity is computed, only the top-κ connections are retained, where κ denotes the desired number of non-zero weights. Precisely, the indicator variables c are set as follows:

c_j = 1[s_j − s̃_κ ≥ 0],  ∀ j ∈ {1 . . . m},   (7)

where s̃_κ is the κ-th largest element in the vector s and 1[·] is the indicator function. Here, for exactly κ connections to be retained, ties can be broken arbitrarily. These steps correspond to the final lines of Algorithm 1: 5: c_j ← 1[s_j − s̃_κ ≥ 0], ∀j (Pruning: choose top-κ connections); 6: w* ← arg min_{w ∈ R^m} L(c ⊙ w; D) (Regular training); 7: w* ← c ⊙ w*. We would like to clarify that the above criterion (Equation 6) is different from the criteria used in early works by BID29 for connection sensitivity. The fundamental idea behind them is to identify elements (e.g. weights or neurons) that least degrade the performance when removed. This means that their saliency criteria (i.e. −∂L/∂w or −∂L/∂α; α refers to the connectivity of neurons) in fact depend on the loss value before pruning, which in turn requires the network to be pre-trained and iterative optimization cycles to ensure minimal loss in performance. They also suffer from the same drawbacks as the magnitude and Hessian based methods discussed in Section 3. In contrast, our saliency criterion (Equation 6) is designed to measure the sensitivity as to how much influence elements have on the loss function, regardless of whether it is positive or negative. This criterion alleviates the dependency on the value of the loss, eliminating the need for pre-training. These fundamental differences enable the network to be pruned in single-shot prior to training, which we discuss further in the next section. Note that the saliency measure defined in Equation 6 depends on the value of the weights w used to evaluate the derivative, as well as on the dataset D and the loss function L. In this section, we discuss the effect of each of them and show that the measure can be used to prune the network in single-shot with initial weights w. Firstly, in order to minimize the impact of the weights on the derivatives ∂L/∂c_j, we need to choose these weights carefully. For instance, if the weights are too large, the activations after the non-linear function (e.g., sigmoid) will be saturated, which would result in uninformative gradients. Therefore, the weights should be within a sensible range. In particular, there is a body of work on neural network initialization (BID8) that ensures the gradients are in a reasonable range, and our saliency measure can be used to prune neural networks at any such initialization. Furthermore, we are interested in making our saliency measure robust to architecture variations. Note that initializing neural networks is a random process, typically done using a normal distribution.
However, if the initial weights have a fixed variance, the signal passing through each layer is no longer guaranteed to have the same variance, as noted by BID23. This would make the gradients, and in turn our saliency measure, dependent on the architectural characteristics. Thus, we advocate the use of variance scaling methods (e.g., BID6) to initialize the weights, such that the variance remains the same throughout the network. By ensuring this, we empirically show that our saliency measure computed at initialization is robust to variations in the architecture. Next, since the dataset and the loss function define the task at hand, by relying on both of them our saliency criterion in fact discovers the connections in the network that are important to the given task. However, the practitioner needs to make a choice on whether to use the whole training set, a mini-batch, or the validation set to compute the connection saliency. Moreover, in case there are memory limitations (e.g., a large model or dataset), one can accumulate the saliency measure over multiple batches or take an exponential moving average. In our experiments, we show that using only one mini-batch of a reasonable number of training examples can lead to effective pruning. Finally, in contrast to the previous approaches, our criterion for finding redundant connections is simple and directly based on the sensitivity of the connections. This allows us to effectively identify and prune redundant connections in a single step even before training. Then, training can be performed on the resulting pruned (sparse) network. We name our method SNIP for Single-shot Network Pruning, and the complete algorithm is given in Algorithm 1. Figure 1: Test errors of LeNets pruned at varying sparsity levels κ̄, where κ̄ = 0 refers to the reference network trained without pruning. Our approach performs as well as the reference network across varying sparsity levels on both models. We evaluate our method, SNIP, on MNIST, CIFAR-10 and Tiny-ImageNet classification tasks with a variety of network architectures. Our results show that SNIP yields extremely sparse models with minimal or no loss in accuracy across all tested architectures, while being much simpler than other state-of-the-art alternatives. We also provide clear evidence that our method prunes genuinely explainable connections rather than performing blind pruning. Experiment setup. For brevity, we define the sparsity level to be κ̄ = (m − κ)/m · 100 (%), where m is the total number of parameters and κ is the desired number of non-zero weights. For a given sparsity level κ̄, the sensitivity scores are computed using a batch of 100 and 128 examples for MNIST and CIFAR experiments, respectively. After pruning, the pruned network is trained in the standard way. Specifically, we train the models using SGD with momentum 0.9, a batch size of 100 for MNIST and 128 for CIFAR experiments, and a weight decay rate of 0.0005, unless stated otherwise. The initial learning rate is set to 0.1 and decayed by 0.1 at every 25k or 30k iterations for MNIST and CIFAR, respectively. Our algorithm requires no other hyperparameters or complex learning/pruning schedules as in most pruning algorithms. We set aside 10% of the training data as a validation set and use only the remaining 90% for training. For CIFAR experiments, we use the standard data augmentation (i.e., random horizontal flip and translation up to 4 pixels) for both the reference and sparse models. The code can be found here: https://github.com/namhoonlee/snip-public.
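A compact PyTorch-style sketch of the sensitivity computation (Equations 5-7) is given below; it is not the released implementation linked above. It uses the identity ∂L/∂c_j = w_j · ∂L/∂w_j at c = 1 instead of materialising the mask variables, treats every weight tensor with more than one dimension as prunable, and leaves tie-breaking to the sort order, so it should be read as an illustrative sketch under those assumptions.

    import torch

    def snip_masks(model, loss_fn, inputs, targets, sparsity):
        """Single-shot pruning masks from one mini-batch (illustrative sketch)."""
        params = [p for p in model.parameters() if p.requires_grad and p.dim() > 1]
        loss = loss_fn(model(inputs), targets)
        grads = torch.autograd.grad(loss, params)
        # g_j = dL/dc_j evaluated at c = 1, computed as w_j * dL/dw_j
        g = torch.cat([(p * dp).abs().flatten() for p, dp in zip(params, grads)])
        s = g / g.sum()                                    # connection sensitivity, Eq. (6)
        kappa = max(int((1.0 - sparsity) * s.numel()), 1)  # number of weights to keep
        threshold = torch.topk(s, kappa).values[-1]        # kappa-th largest sensitivity
        masks, offset = [], 0
        for p in params:
            n = p.numel()
            masks.append((s[offset:offset + n] >= threshold).view_as(p).float())
            offset += n
        return masks                                       # c in Eq. (7); apply as c * w

In practice the masks would then be multiplied into the corresponding weight tensors (or registered as buffers) before standard training of the sparse network.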
We first test our approach on two standard networks for pruning, LeNet-300-100 and LeNet-5-Caffe. LeNet-300-100 consists of three fully-connected (fc) layers with 267k parameters and LeNet-5-Caffe consists of two convolutional (conv) layers and two fc layers with 431k parameters. We prune the LeNets at different sparsity levels κ̄ and report the performance in terms of error on the MNIST image classification task. We run the experiment 20 times for each κ̄ by changing the random seeds for dataset and network initialization. The results are reported in Figure 1. The pruned sparse LeNet-300-100 achieves performance similar to the reference (κ̄ = 0), with only a negligible loss at κ̄ = 90. For LeNet-5-Caffe, the performance degradation is nearly invisible. Note that our saliency measure does not require the network to be pre-trained and is computed at random initialization. Despite such simplicity, our approach prunes LeNets quickly (single-shot) and effectively (minimal accuracy loss) at varying sparsity levels. What happens if we increase the target sparsity to an extreme level? For example, would a model with only 1% of the total parameters still be trainable and perform well? We test our approach at extreme sparsity levels (e.g., up to 99% sparsity) and compare with various pruning algorithms as follows: LWC (BID11), DNS (BID9), Carreira-Perpiñán & Idelbayev, SWS (BID37), SVD (BID27), OBD (BID22), and L-OBS (BID5). The results are summarized in Table 1. (Table 1: Pruning results on LeNets and comparisons to other approaches. Here, "many" refers to an arbitrary number often on the order of the total learning steps, and "soft" refers to soft pruning in Bayesian based methods. Our approach is capable of pruning up to 98% for LeNet-300-100 and 99% for LeNet-5-Caffe with marginal increases in error from the reference network. Notably, our approach is considerably simpler than other approaches, with no requirements such as pretraining, additional hyperparameters, an augmented training objective or architecture dependent constraints.) We achieve errors that are comparable to the reference model, degrading approximately 0.7% and 0.3% while pruning 98% and 99% of the parameters in LeNet-300-100 and LeNet-5-Caffe respectively. For slightly relaxed sparsities (i.e., 95% for LeNet-300-100 and 98% for LeNet-5-Caffe), the sparse models pruned by SNIP record better performances than the dense reference network. Considering 99% sparsity, our method efficiently finds, even before training, the 1% of connections that are sufficient to learn as well as the reference network. Moreover, SNIP is competitive to other methods, yet it is unparalleled in terms of algorithm simplicity. To be more specific, we enumerate some key points and non-trivial aspects of other algorithms and highlight the benefit of our approach. First of all, the aforementioned methods require networks to be fully (or at least partly) trained before pruning. These approaches typically perform many pruning operations even if the network is well pretrained, and require additional hyperparameters (e.g., the pruning frequency in BID9, the annealing schedule in Carreira-Perpiñán & Idelbayev). Some methods augment the training objective to handle pruning together with training, increasing the complexity of the algorithm (e.g., the augmented Lagrangian in Carreira-Perpiñán & Idelbayev, variational inference in BID27).
Furthermore, there are approaches designed to include architecture dependent constraints (e.g., the layer-wise pruning schemes in BID5). Compared to the above approaches, ours seems to cost almost nothing; it requires no pretraining or additional hyperparameters, and is applied only once at initialization. This means that one can easily plug in SNIP as a preprocessor before training neural networks. Since SNIP prunes the network at the beginning, we could potentially expedite the training phase by training only the surviving parameters (e.g., reduced expected FLOPs in BID25). Notice that this is not possible for the aforementioned approaches as they obtain the maximum sparsity at the end of the process. In this section we show that our approach is generally applicable to more complex modern network architectures including deep convolutional, residual and recurrent ones. Specifically, our method is applied to the following models:
• AlexNet-s and AlexNet-b: Models similar to BID20 in terms of the number of layers and size of kernels. We set the size of fc layers to 512 (AlexNet-s) and to 1024 (AlexNet-b) to adapt for CIFAR-10 and use strides of 2 for all conv layers instead of using pooling layers.
• VGG-C, VGG-D and VGG-like: Models similar to the original VGG models described in BID36. VGG-like (BID40) is a popular variant adapted for CIFAR-10 which has one less fc layer. For all VGG models, we set the size of fc layers to 512, remove dropout layers to avoid any effect on sparsification and use batch normalization instead.
• WRNs: Same models as in BID41.
• LSTM-s, LSTM-b, GRU-s, GRU-b: One-layer RNN networks with either LSTM (BID42) or GRU (BID4) cells. We develop two unit sizes for each cell type, 128 and 256 for {·}-s and {·}-b, respectively. The model is adapted for the sequential MNIST classification task, similar to BID21. Instead of processing pixel-by-pixel, however, we perform row-by-row processing (i.e., the RNN cell receives each row at a time).
The results are summarized in Table 2. (Table 2: Pruning results of the proposed approach on various modern architectures (before → after). AlexNets, VGGs and WRNs are evaluated on CIFAR-10, and LSTMs and GRUs are evaluated on the sequential MNIST classification task. The approach is generally applicable regardless of architecture type and model, and results in a significant reduction in the number of parameters with minimal or no loss in performance.) Overall, our approach prunes a substantial amount of parameters in a variety of network models with minimal or no loss in accuracy (< 1%). Our pruning procedure does not need to be modified for specific architectural variations (e.g., recurrent connections), indicating that it is indeed versatile and scalable. Note that prior art that uses a saliency criterion based on the weights (i.e., magnitude or Hessian based) would require considerable adjustments in their pruning schedules as per changes in the model. We note a few challenges in directly comparing against others: different network specifications, learning policies, datasets and tasks. Nonetheless, we provide a few comparison points that we found in the literature. On CIFAR-10, SVD prunes 97.9% of the connections in VGG-like with no loss in accuracy (ours: 97% sparsity), while SWS obtained 93.4% sparsity but with a non-negligible loss in accuracy of 2%. There are a couple of works attempting to prune RNNs (e.g., GRU in BID30 and LSTM in BID35).
Even though these methods are specifically designed for RNNs, none of them is able to obtain extreme sparsity without substantial loss in accuracy, reflecting the challenges of pruning RNNs. To the best of our knowledge, we are the first to demonstrate results on convolutional, residual and recurrent networks at extreme sparsities without requiring additional hyperparameters or modifying the pruning procedure. So far we have shown that our approach can prune a variety of deep neural network architectures to extreme sparsities without losing much accuracy. However, it is not clear yet which connections are actually being pruned away or whether we are pruning the right (i.e., unimportant) ones. What if we could actually peer into our approach and carry out this inspection? Consider the first layer in LeNet-300-100, parameterized by w^{l=1} ∈ R^{784×300}. This is a layer fully connected to the input, where input images are of size 28 × 28 = 784. In order to understand which connections are retained, we can visualize the binary connectivity mask for this layer, c^{l=1}, by averaging across columns and then reshaping the vector into a 2D matrix (i.e., c^{l=1} ∈ {0, 1}^{784×300} → R^{784} → R^{28×28}). Recall that our method computes c using a mini-batch of examples. In this experiment, we curate the mini-batch from examples of the same class and see which weights are retained for that mini-batch of data. We repeat this experiment for all classes (i.e., digits for MNIST and fashion items for Fashion-MNIST) with varying sparsity levels κ̄. The results are displayed in FIG1 (see Appendix A for more results). (FIG1: Visualizations of pruned parameters of the first layer in LeNet-300-100; the parameters are reshaped to be visualized as an image. Each column represents the visualizations for a particular class obtained using a batch of 100 examples with varying levels of sparsity κ̄, from 10 (top) to 90 (bottom). Bright pixels indicate that the parameters connected to these regions had high importance scores (s) and survived pruning. As the sparsity increases, the parameters connected to the discriminative part of the image for classification survive and the irrelevant parts get pruned.) The results are significant; important connections seem to reconstruct either the complete image (MNIST) or silhouettes (Fashion-MNIST) of the input class. When we use a batch of examples of the digit 0 (i.e., the first column of the MNIST results), for example, the parameters connected to the foreground of the digit 0 survive pruning while the majority of the background is removed. Also, one can easily determine the identity of items from the Fashion-MNIST results. This clearly indicates that our method indeed prunes the unimportant connections for performing the classification task, receiving signals only from the most discriminative part of the input. This stands in stark contrast to other pruning methods, for which carrying out such an inspection is not straightforward. Recall that our connection saliency measure depends on the network weights w as well as the given data D (Section 4.2). We study the effect of each of these in this section. Effect of data: Our connection saliency measure depends on a mini-batch of training examples D_b (see Algorithm 1). To study the effect of data, we vary the batch size used to compute the saliency (|D_b|) and check which connections are being pruned as well as how much performance change this results in on the corresponding sparse network. We test with LeNet-300-100 to visualize the remaining parameters, and set the sparsity level κ̄ = 90.
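The visualization described above reduces to a column-average and a reshape; a minimal sketch (the mask layout convention and helper name are ours):

```python
import torch
import matplotlib.pyplot as plt

def visualize_first_layer_mask(mask_fc1: torch.Tensor, path: str = "mask.png"):
    """Show which input pixels keep surviving connections in LeNet-300-100's first layer.

    `mask_fc1` is the binary mask of the 784x300 fully-connected layer with the 784
    input pixels along the first dimension (transpose first if stored the other way).
    """
    heat = mask_fc1.float().mean(dim=1)        # average over the 300 output units -> (784,)
    heat = heat.reshape(28, 28).cpu().numpy()  # back to image space
    plt.imshow(heat, cmap="gray")
    plt.axis("off")
    plt.savefig(path, bbox_inches="tight")
```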
Note that the batch size used for training remains the same as 100 for all cases. The results are displayed in Figure 3. Effect of initialization: Our approach prunes a network at a stochastic initialization, as discussed. We study the effect of the following initialization methods: 1) RN (random normal), 2) TN (truncated random normal), 3) VS-X (a variance scaling method using BID6), and 4) VS-H (a variance scaling method, BID13). We test on LeNets and RNNs on MNIST and run 20 sets of experiments by varying the seed for initialization. We set the sparsity level κ̄ = 90, and train with the Adam optimizer (BID18) with a learning rate of 0.001 and no weight decay. Note that for training, VS-X initialization is used in all cases. The results are reported in Figure 3. For all models, VS-H achieves the best performance. The differences between initializers are marginal on LeNets; however, variance scaling methods indeed turn out to be essential for complex RNN models. This effect is especially significant for GRU, where without variance scaling initialization the pruned networks are unable to achieve good accuracies, even with different optimizers. Overall, initializing with a variance scaling method seems crucial to making our saliency measure reliable and model-agnostic. (Figure 3: The effect of different batch sizes: (top row) surviving parameters in the first layer of LeNet-300-100 after pruning, visualized as images; (bottom row) the performance in errors of the pruned networks. For |D_b| = 1, the sampled example was an 8; our pruning precisely retains the valid connections. As |D_b| increases, the surviving parameters get close to the average of all examples in the training set (last column), and the error decreases.) To further explore the use cases of SNIP, we run the experiment introduced in BID43 and check whether the sparse network obtained by SNIP memorizes the dataset. Specifically, we train LeNet-5-Caffe for both the reference model and the pruned model (with κ̄ = 99) on MNIST with either true or randomly shuffled labels. To compute the connection sensitivity, the true labels are always used. The results are plotted in FIG2. Given true labels, both the reference (red) and pruned (blue) models quickly reach almost zero training loss. However, the reference model provided with random labels (green) also reaches very low training loss, even with an explicit L2 regularizer (purple), indicating that neural networks have enough capacity to memorize completely random data. In contrast, the model pruned by SNIP (orange) fails to fit the random labels (high training error). The potential explanation is that the pruned network does not have sufficient capacity to fit the random labels, but it is able to classify MNIST with true labels, reinforcing the significance of our saliency criterion. It is possible that a similar experiment can be done with other pruning methods (BID27); however, being simple, SNIP makes such exploration much easier. We provide a further analysis on the effect of varying κ̄ in Appendix B. In this work, we have presented a new approach, SNIP, that is simple, versatile and interpretable; it prunes irrelevant connections for a given task in single-shot prior to training and is applicable to a variety of neural network models without modifications. While SNIP results in extremely sparse models, we find that our connection sensitivity measure itself is noteworthy in that it diagnoses important connections in the network from a purely untrained network.
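The random-label memorization check described above can be reproduced with a small label-shuffling utility; the following is our own sketch, not the paper's code:

```python
import torch
from torch.utils.data import DataLoader, Dataset

def shuffled_label_loader(dataset, batch_size: int = 100, seed: int = 0) -> DataLoader:
    """Wrap a labelled dataset with a random permutation of its labels.

    Training the dense reference network on this loader should still drive the loss
    toward zero (memorization), while a 99%-sparse SNIP-pruned network should fail to fit it.
    """
    g = torch.Generator().manual_seed(seed)
    targets = torch.as_tensor([y for _, y in dataset])
    shuffled = targets[torch.randperm(len(targets), generator=g)]

    class _Shuffled(Dataset):
        def __len__(self):
            return len(dataset)
        def __getitem__(self, i):
            x, _ = dataset[i]
            return x, shuffled[i].item()

    return DataLoader(_Shuffled(), batch_size=batch_size, shuffle=True)
```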
We believe that this opens up new possibilities beyond pruning in the topics of understanding neural network architectures, multi-task transfer learning and structural regularization, to name a few. In addition to these potential directions, we intend to explore the generalization capabilities of sparse networks. Notably, compared to the case of using SNIP (FIG1), the results are different: Firstly, the results on the original (Fashion-)MNIST (i.e., (a) and (c) above) are not the same as the ones using SNIP (i.e., (a) and (b) in FIG1). Moreover, the pruning patterns are inconsistent across different sparsity levels, either intra-class or inter-class. Furthermore, using ∂L/∂w results in different pruning patterns between the original and inverted data in some cases (e.g., the 2nd columns of (c) and its inverted-data counterpart). (Figure 7: The effect of varying sparsity levels (κ̄). The lower κ̄ becomes, the lower the recorded training loss, meaning that a network with more parameters is more vulnerable to fitting random labels. Recall, however, that all pruned models are able to learn to perform the classification task without losing much accuracy (see Figure 1). This potentially indicates that the pruned network does not have sufficient capacity to fit the random labels, but it is capable of performing the classification.) C. Tiny-ImageNet — Table 4: Pruning results of SNIP on Tiny-ImageNet (before → after). Tiny-ImageNet is a subset of the full ImageNet: there are 200 classes in total, each class has 500 and 50 images for training and validation respectively, and each image has a spatial resolution of 64×64. Compared to CIFAR-10, the resolution is doubled, and to deal with this, the stride of the first convolution in all architectures is doubled, following the standard practice for this dataset. In general, the Tiny-ImageNet classification task is considered much more complex than MNIST or CIFAR-10. Even on Tiny-ImageNet, however, SNIP is still able to prune a large amount of parameters with minimal loss in performance. AlexNet models lose more accuracy than VGGs, which may be attributed to the fact that the first convolution stride for AlexNet is set to 4 (by its design of no pooling), which is too large and could lead to a high loss of information when pruned.
We present a new approach, SNIP, that is simple, versatile and interpretable; it prunes irrelevant connections for a given task at single-shot prior to training and is applicable to a variety of neural network models without modifications.
Transfer learning uses trained weights from a source model as the initial weights for the training of a target dataset. A well chosen source with a large number of labeled data leads to significant improvement in accuracy. We demonstrate a technique that automatically labels large unlabeled datasets so that they can train source models for transfer learning. We experimentally evaluate this method, using a baseline dataset of human-annotated ImageNet1K labels, against five variations of this technique. We show that the performance of these automatically trained models comes within 17% of baseline on average. In many domains, the task performance of deep learning techniques is heavily dependent on the number of labeled examples, which are difficult and expensive to acquire. This demand for large labeled datasets has inspired alternative techniques, such as weak supervision or automated labeling, whose algorithms create plausible labels to be used to guide supervised training on other tasks. In this work, we develop a content-aware model-selection technique for transfer learning. We take an unlabeled data point (here, an unlabeled image), and compute its distance to the average response of a number of specialized deep learning models, such as those trained for "animal", "person", or "sport". We then create a "pseudolabel" for the point, consisting of a short ordered sequence of the most appropriate model names, like "animal-plant-building". We use these synthetic labels to augment the ground truth labels. We validate the technique by applying it to the ImageNet1K dataset, as well as on a number of other large, unlabeled datasets. There are several well-established approaches that attempt to automatically assign labels to unlabeled images. For example, some use clusters of features to predict labels BID2, or augment image data with linguistic constraints from sources such as WordNet BID0 BID3. These approaches augment tasks by pretraining models using larger unlabeled datasets. Pretraining approaches have also improved results when attempting a target task for which there is a limited amount of accurately labeled training data BID4, by using weakly labeled data, such as social media hashtags, which are much more plentiful. However, effectiveness only appears to grow as the log of the image count. Further approaches use generative models such as GANs, as in BID5, to explore and refine category boundaries between clusters of data, which exploit the rich statistical structures of both real and generated examples, sometimes augmented with labels or linguistic constraints. All of these automatic approaches use the structures present in large unlabeled datasets to extend the expressivity of known labels, and to augment the raw size of training sets. We present our technique using a specific case study involving images, and with source datasets created by vertically partitioning ImageNet22K (BID1) along its distinct subtrees: animal, plant, weapon, tools, music, fungus, sport, person, food, fruit, garment, building, nature, furniture, vehicle, and fabric. We represent each such dataset by a single average feature vector. In this study, this vector is generated from the second-to-last layer of a reference VGG16 (BID6) model trained on ImageNet1K, as shown in FIG0, with the average taken over all the images in the dataset.
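As an illustration, the per-dataset representative can be computed by averaging penultimate-layer VGG16 activations over a dataset. The sketch below assumes a recent torchvision and treats the fc7 output as the "second-to-last layer"; the function name is ours, not the authors':

```python
import torch
import torch.nn as nn
import torchvision.models as models

@torch.no_grad()
def dataset_representative(loader, device: str = "cpu") -> torch.Tensor:
    """Average penultimate-layer (fc7) VGG16 activations over all images in a dataset."""
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).to(device).eval()
    # Drop the final classification layer so the forward pass ends at the 4096-d fc7 output.
    vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:-1])
    total, count = torch.zeros(4096, device=device), 0
    for images, _ in loader:
        feats = vgg(images.to(device))   # (batch, 4096)
        total += feats.sum(dim=0)
        count += feats.shape[0]
    return total / count                 # single representative vector for the dataset
```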
To label a new image, we first calculate its own feature vector, then compute the distance between it and each of the representatives of the datasets; in this study, we use the Kullback-Leibler (KL) divergence, after appropriate normalization. These 16 distances are then used to determine the synthetic label. To label an image, we encode its relative positions with respect to some subset of the known labeled datasets. To better explain the technique, we will use ImageNet1K as a source of images and ground truth labels. ImageNet1K has 1000 labels; the number of images per label is almost uniform, but the labels broadly fall under a few broad categories such as animals, vehicles, food, musical instruments, garments, furniture, buildings, etc. About 446 out of 1000 labels (∼45%) belong to animals, while the other top categories are vehicles (5.2%), food (2.2%), musical instruments (2.1%) and garments (2.1%). The distribution of the number of labels for different categories has a long tail. Our first labeling methods choose labels to be the names of the N source datasets which were the closest to the image. We concatenate those names together, in order of closeness. Thus, the Nearest-3 method generates labels like "tree-animal-fungus". With 16 source datasets, the Nearest-3 approach yields 16x15x14 = 3360 possible pseudolabels; in our study with the 1.3M images from ImageNet1K, each label had a mean of 381 images and a standard deviation of 1503. In a similar fashion, the Nearest-2 and Nearest-1 pseudolabels were also computed, with 240 and 16 possible labels, respectively. (We found no benefits above Nearest-3.) FIG1 shows the distribution of images for the Nearest-3 method. The high peaks on the right are unsurprisingly related to animals. Our fourth method exploited all 16 distances. We first fixed the order of the 16 source datasets, forming a 16-dimensional coordinate system, so that each image could be viewed as a vector in the resulting 16-dimensional simplex of all possible positive distances to them. All of the unlabeled images were then k-means clustered within this space; we used 240 cluster centers to imitate the size of the Nearest-2 space. These resultant clusters could not be expected to be of uniform size, so a second round of splitting and merging, based on relative distances, balanced the cluster membership to be nearly uniform, with an average of 1000 images per cluster and a standard deviation of 193. The "names" of these final clusters were used as the pseudolabels. Our fifth method accommodates those incoming datasets that are characterized by a wide variety of low-level image features, such as flowers. These labels were again devised as a sequence of three source names, but chosen to span the 16-dimensional space as widely as possible. The first source dataset was the closest one, and the second was the farthest one. The third was chosen from the remaining 14 to maximize the area of the resulting triangle in 16-space, computed via Heron's formula. In practice, this method only resulted in about 200 labels, with an average of about 6300 images per label, and with a very high standard deviation of about 17000 (see FIG1). As an example, consider the images in FIG2, taken from ImageNet1K. The nearest cluster (Nearest-1) for the body armour image is music. This is possibly due to the brass and metallic textures of the photo, similar to that seen in brass music instruments. The Nearest-2 and Nearest-3 labels are (music-weapon) and (music-weapon-person) respectively.
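The labeling rules above can be sketched as follows. This is our own illustration: we assume the feature vectors are normalized into distributions before the KL computation, and we assume the Heron-formula triangle has the three chosen source representatives as vertices (a detail the text leaves open); all names are ours.

```python
import numpy as np

SOURCES = ["animal", "plant", "weapon", "tools", "music", "fungus", "sport", "person",
           "food", "fruit", "garment", "building", "nature", "furniture", "vehicle", "fabric"]

def kl(p, q, eps=1e-8):
    """KL divergence between two feature vectors normalized into distributions."""
    p = np.clip(p, eps, None); p = p / p.sum()
    q = np.clip(q, eps, None); q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def nearest_n_label(image_feat, reps, n=3):
    """Concatenate the names of the n closest source datasets, closest first."""
    d = np.array([kl(image_feat, reps[s]) for s in SOURCES])
    return "-".join(SOURCES[i] for i in np.argsort(d)[:n])

def largest_area_label(image_feat, reps):
    """Closest source, farthest source, then the third source maximizing Heron's triangle area."""
    d = np.array([kl(image_feat, reps[s]) for s in SOURCES])
    first, second = SOURCES[int(np.argmin(d))], SOURCES[int(np.argmax(d))]
    def heron(a, b, c):
        s = (a + b + c) / 2.0
        return np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    best, best_area = None, -1.0
    for cand in SOURCES:
        if cand in (first, second):
            continue
        area = heron(kl(reps[first], reps[second]),
                     kl(reps[first], reps[cand]),
                     kl(reps[second], reps[cand]))
        if area > best_area:
            best, best_area = cand, area
    return f"{first}-{second}-{best}"
```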
In contrast, the label for Largest-Area is music-fungus-sport. So for this image, fungus is the source which is most unlike it, and sport maximizes the area of the triangle defined by the third source. Similarly, for the elephant image, the Nearest-3 label is tree-animal-fungus, and the Largest-Area label is tree-furniture-fungus. The Nearest-2 and Nearest-1 labels are tree-animal and tree, respectively. Experiment 1: Using the techniques described in Section 4, we first created five pseudo-labeled datasets for the images in ImageNet1K, as shown in Figure 4. We then trained ResNet27 using each of these pseudo-labeled datasets, creating base models for further transfer learning. We also created two baseline models, one using the vanilla ImageNet1K dataset of images and human-annotated labels, and a second by assigning random labels to the ImageNet1K images. For perspective, the figure shows the accuracy of these seven base models, but since we are interested in the transferability of representations from these base models to a target domain, their absolute accuracy is not the important measure. Then, for each of 12 candidate target datasets, we fine-tuned and calculated the transfer learning accuracy of each of the 7 base models; each of these 84 experiments was carried out with the same hyperparameters. As shown in Figure 4, the accuracy obtained with Vanilla as the base model can serve as an upper bound on transfer learning accuracy for experiments using Nearest-N, Uniform, and Largest-Area base models. Similarly, Random can provide a lower bound. Table 1 shows that Nearest-3 gives the best accuracy for 8 out of 12 datasets. For two datasets, Nearest-2 performs slightly better than Nearest-3, while Uniform and Largest-Area perform best for the person and the flowers datasets. Experiment 2: To capture the performance of pseudo-labeled vs. human-labeled datasets, we define in the usual way the relative error of transfer learning accuracy between a pseudo-labeled dataset, i, and the Vanilla base model, v, as Err_i = (Acc_v − Acc_i) / Acc_v × 100%. Figures 5a and 5b show plots of Err_i for 12 different target datasets. The average value of Err_i is 17.2%, with the minimum and maximum being 6.1% and 26.3%. Thus, using base models trained with automatically generated labels, transfer learning accuracy is on average only 17.2% worse when compared to base models trained using human labeled images. Further, the error shrinks when the divergence increases. This implies that when the base dataset is far away in feature space, the transferability of representations is less sensitive to noise in the labels. We have shown that generation of content-aware pseudolabels can provide transfer performance approaching that of human labels, and that models trained on pseudolabels can be used as source models for transfer learning. The automated approach presented here suggests that the internal representations of content models trained on specialized datasets contain some descriptive features of those datasets. By treating each of these specialized representations as a "word" in a longer "sentence" that describes a category of images, we can create labels such as "music-weapon-person" to describe a suit of armor, or "tree-animal-fungus" to describe an elephant. These rich labels capture features of these objects, such as visual information about the materials they are made out of, that better describe the contents than reliance on a single label would produce.
Using multiple, content-aware models to achieve greater descriptive power may be a valuable future avenue of research.
A technique for automatically labeling large unlabeled datasets so that they can train source models for transfer learning and its experimental evaluation.
Recent studies in attention modules have enabled higher performance in computer vision tasks by capturing global contexts and accordingly attending important features. In this paper, we propose a simple and highly parametrically efficient module named Tree-structured Attention Module (TAM) which recursively encourages neighboring channels to collaborate in order to produce a spatial attention map as an output. Unlike other attention modules which try to capture long-range dependencies at each channel, our module focuses on imposing non-linearities between channels by utilizing point-wise group convolution. This module not only strengthens the representational power of a model but also acts as a gate which controls signal flow. Our module allows a model to achieve higher performance in a highly parameter-efficient manner. We empirically validate the effectiveness of our module with extensive experiments on the CIFAR-10/100 and SVHN datasets. With our proposed attention module employed, ResNet50 and ResNet101 models gain 2.3% and 1.2% accuracy improvement with less than 1.5% parameter overhead. Our PyTorch implementation code is publicly available. Advancements in attention modules have boosted performance across the broad fields of deep learning where they are employed, such as machine translation, image generation, image and video classification, object detection, and segmentation. In computer vision tasks, numerous attention modules have been proposed such that one can attach them to a backbone network, obtaining an efficient trade-off between the additional parameters of the attached attention module and the model's performance. SENet (b) encodes global spatial information using global average pooling and captures channel-wise dependencies using two fully-connected layers over the previously encoded values at each channel. Input feature maps of the SE module are recalibrated with output values corresponding to each channel after applying a sigmoid activation function to produce the output feature maps of the module. In this manner, the model can distinguish which channels to attend to more than others. GENet (a) shows that simply gathering spatial information with depth-wise strided convolution and redistributing each gathered value across all positions with nearest neighbor upsampling can significantly help a network to understand global feature context. NLNet aggregates query-specific global context and adds values to each corresponding channel. GCNet simplifies NLNet in a computationally efficient way using the fact that a non-local block used in NLNet tends to produce an attention map independent of the query position. BAM efficiently enhances backbone networks by placing attention modules in bottleneck regions, which requires only a small increase in both parameters and computation. CBAM incorporates channel and spatial attentions and employs a max descriptor as well as an average descriptor for more precise attention. While it is clear that the proposed modules in the aforementioned studies have brought remarkable results, most of their main focus has been on how to capture long-range dependencies across the spatial dimension. That is, they mainly focus on contextual modeling rather than capturing inter-channel relations, both of which are regarded as indispensable for an attention module. In this work, we propose a module which strengthens model representational power by imposing nonlinearities between neighboring channels in a parameter efficient manner.
(Figure 1: An instance of our proposed module with group size 2. f_p denotes a point-wise convolution followed by an activation function which combines neighboring channels. C^m_n denotes the n-th channel after applying m point-wise group convolutions to the input feature map. A one-channel attention map followed by a sigmoid σ is produced. A color refers to the information a channel contains. The repetition of point-wise group convolution yields a tree-like structure.) While this work deviates from the current trend of capturing long-range dependencies within the spatial dimension, we argue that taking inter-channel relations into consideration can also achieve highly competitive results even without capturing any kind of spatial dependencies. Our module incorporates all channels to produce a single meaningful attention map as an output, whereas most previous studies restore the input channel dimension in order to attend to important channels and to suppress less meaningful ones. For this, we repeatedly apply light-weight point-wise group convolution with a fixed group size to an input feature map until the number of channels becomes one. While the increased parameters and computation are almost negligible, we find this simple design remarkably boosts the performance of various backbone networks. As we see in Section 3, the module performance is highly competitive to other attention modules and enhances baseline models with little additional parameter overhead. This gives one a clue to another notion of attention, deviating from the current trend of taking global context. Our contributions are two-fold:
• we propose the Tree-structured Attention Module (TAM) which allows the network to learn inter-channel relationships using light-weight point-wise group convolutions. This tree structure enables convolution filters in the mid and later phase of a network to have a higher variance so that they can have more representational power.
• by proving the validity of TAM with extensive experiments, we highlight the potential importance of inter-channel relations.
The Tree-structured Attention Module is a simple and light-weight module which helps the network to learn inter-channel relations by repeating a pairing function f_p. Given an input feature map X and an intermediate feature map Y of the module, where N, H, W, and g denote the number of input channels, height, width, and group size respectively, the output Z of the module is obtained by applying f_p repeatedly, i.e., m repetitive operations of f_p with a fixed group size g. To ensure the module looks through all channels, we repeat f_p until the number of remaining channels is not divisible by the factor g. Then, we apply a 1x1 convolution followed by a sigmoid function in order to produce a one-channel output. We replicate the output along the channel axis to restore the original input dimension in order to recalibrate the input feature map by element-wise multiplication. Figure 1 shows an instance of our model with g = 2. To permit parametric efficiency, we adopt point-wise group convolution for f_p. Formally, f_p(Y) = δ(g_conv(Y)), where g_conv and δ refer to a point-wise group convolution and a non-linear activation, respectively. For the activation we use a channel-wise PReLU function (b). Unlike an ordinary point-wise convolution, which increases the model parameters and computation proportional to N_input × N_output, a point-wise group convolution increases them proportional to N_input × N_output / n_groups, where n_groups refers to the number of groups, while still allowing a model to learn channel dependencies.
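As an illustration, a module following this recipe can be written compactly in PyTorch. The sketch below is a re-implementation from the description above (class name and minor details are ours), not the authors' released code; note that PyTorch's grouped convolution groups contiguous, i.e. neighboring, channels.

```python
import torch
import torch.nn as nn

class TreeAttention(nn.Module):
    """Tree-structured channel-combining attention (sketch), with group size g."""
    def __init__(self, channels: int, g: int = 2):
        super().__init__()
        layers, c = [], channels
        # Repeatedly merge groups of g neighboring channels: c -> c / g channels.
        while c % g == 0 and c > 1:
            layers += [nn.Conv2d(c, c // g, kernel_size=1, groups=c // g),
                       nn.PReLU(c // g)]
            c //= g
        self.tree = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.Conv2d(c, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        attn = self.head(self.tree(x))   # (B, 1, H, W) spatial attention map
        return x * attn                  # broadcasting over channels recalibrates x
```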
With a larger n_groups, that is, a smaller g, the increased parameters and computation decrease rapidly. As can be seen in Section 3, the increased parameters and computation are almost negligible throughout the experiments while offering large accuracy gains over baselines. To inspect the validity and efficiency of our module, we conduct extensive experiments on CIFAR-10/100 and SVHN. We attach our module to the last part of every residual block, before the addition. For fair comparisons, we fix training configurations and reimplement all code using the PyTorch framework. We first explore the model performance by varying the group size g. We experiment with the ResNet50 model with group sizes {2, 4, 8, 16, 32, 64} on the CIFAR-100 dataset, as shown in Figure 2. Both top-1/5 errors go down with a smaller group size, reaching the lowest point of 22.42/5.94% at g = 2. For reference, the original ResNet101 produces 23.17%. Even a simple repetition of point-wise group convolution offers large gains as it permits the model to learn channel dependencies. Note that we do not apply any kind of context modeling within the module. Even though we see a favorable tendency that the error reduces with a lower g, a large group size severely degrades the model performance. In the case of g = 64, it produces poorer results than the baseline model for both top-1/5. This indicates that for the module to learn channel dependencies, it is important to keep the number of channels interacting with each other at a time sufficiently small. It is intuitively understandable that the model can process channel information more finely with more non-linearities at a smaller group size, as the number of non-linear operations increases logarithmically as g decreases. For simplicity, we hereafter denote TAM as our module with g = 2. (Table 1: Top-1/5 errors of various backbone networks on the CIFAR-10/100 datasets. For CIFAR-10, we only report the top-1 error as we don't see much difference between performances. The value in parentheses denotes the increased parameters over the baseline network.) To prove the generality of the module, we experiment with other various architectures, namely ResNet18, ResNet50, ResNext50(8×64d), and WideResNet(WRN)16-8. We follow ordinary preprocessing, which is zero-padding 4 pixels around input pixels, normalizing each channel with the mean and standard deviation from the training set, allowing horizontal flips by half chance, and randomly cropping 32×32 pixels. We initialize weights following He et al. (2015a). We use a 128 mini-batch size and SGD with 0.9 momentum and 10^-4 weight decay. It is worth noticing that we allow the decay to affect PReLU weights, as we found it works better to do so in preliminary experiments. We train models for 300 epochs and set an initial learning rate of 0.1, decaying by 10 at epochs 150 and 225. Table 1 shows the results. Our model outperforms the baseline model and shows competitive results compared to ones with different attention modules. Unlike other modules, whose parametric overhead percentage largely increases with the depth of the backbone architecture, the overhead from our module is negligible regardless of depth. Interestingly, our module shows better performance than other attention modules with less than 1.5% parametric overhead on ResNet50 and ResNext50, where over 10% increases are made by other modules. This supports the effectiveness of capturing inter-channel relations.
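One way to attach the module to the last part of a residual block before the addition, as described above, is sketched below using torchvision's Bottleneck as the backbone block and the `TreeAttention` sketch from earlier. The exact integration point is our assumption of the paper's wiring.

```python
import torch.nn as nn
from torchvision.models.resnet import Bottleneck

class BottleneckWithTAM(Bottleneck):
    """Bottleneck block with attention applied to the residual branch before addition."""
    def __init__(self, *args, g: int = 2, **kwargs):
        super().__init__(*args, **kwargs)
        self.tam = TreeAttention(self.conv3.out_channels, g=g)

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out = self.tam(out)              # recalibrate the block output before the skip addition
        return self.relu(out + identity)
```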
The Street View House Numbers (SVHN) dataset is composed of 73,257 training and 26,032 test images of 32-by-32 RGB pixels and corresponding labels. We don't do any preprocessing except for normalizing the images. We set identical hyperparameters to the CIFAR-10/100 training, except that we train the model for 160 epochs and decay the learning rate at a fixed epoch. Table 2 shows the results. We find other modules sometimes work more poorly than the baseline network. We surmise this is because of their non-negligible increase in parameter count, which may cause overfitting on a relatively small dataset. For a spatial attention activation to be discriminable, it should have a large variance across the spatial dimension. For this reason, we derive the mean and variance of each activation from every attention module using ResNet50 on the CIFAR-100 test dataset. Here, "stage" means a stage where feature maps of a fixed spatial size are processed. The stage number starts with 1 and is increased whenever spatial pooling is applied. Figure 3 shows the variance versus mean scatter plot. Unexpectedly, all attention maps from TAM have zero variance, which means they do not contain spatial discriminability and are independent of the input feature map. Rather, each attention depends only on the location where the module is placed and controls the input signal accordingly. Figure 4a shows how TAM manages the signal flow depending on its location. Overall, a trend is found that TAM suppresses signals flowing through earlier blocks and gradually allows more signal through. As abstract features which contain a larger, thus more meaningful, context are delivered in the mid and later phase of a network, TAM preserves the information to some extent for the later part of the network. Also, TAM allows relatively more signal at the first block of every stage where spatial pooling is applied, that is, at blocks 4, 8, and 14. This can be explained as TAM compensating for the loss caused by spatial pooling. To investigate how these activation values affect performance, we conduct a lesion study as shown in Figure 4b. We omit one attention at a time and measure the top-1/5 accuracy. Where an attention is omitted, the flow is fully transmitted. Interestingly, when we disable the first attention within a stage, a rapid decrease is observed. This is counter-intuitive but explains why TAM does not fully allow the input signal but only to a certain degree. To further investigate the working mechanism of TAM, we eliminate non-linearities within the module. We first average the weights, i.e. negative slopes, of the PReLUs per layer within the attention module. We sort them from the one closest to 1 to the one farthest from 1. We then gradually replace the weights with 1, which eliminates the non-linearities of the layers. Figure 5a shows the results. Surprisingly, the performance remains almost constant over the elimination ratio. Unlike the conventional thought that non-linearities are an indispensable part of a neural network, TAM, once training is finished, does not depend on its activation functions. To answer how TAM offers benefits to a network, we plot the average variance of each convolutional filter weight within residual blocks from the trained ResNet50, ResNet50-SE, ResNet50-CBAM and ResNet50-TAM. Figure 5b shows this. Compared to the baseline and the ones with other attention modules, the average variances have higher values except for the last part of the network. We believe this is because it is more important to pass selected meaningful features to the classifier.
This indicates that TAM gives more choices of various filters in the early and mid phase and focuses on a few important filter weights in the last stage by allowing learning inter-channel relationships. As earlier noted in Hu et al. (2018a; b), TAM also shows faster convergence speed during training. Figure 6 presents this. We attribute this to its light-weight property of TAM. As TAM does not require many additional weights which may require more time to adjust while helping the network to understand inter-channel relations. In summary, • TAM behaves like a static gate which only depends on its location and controls signal flow. • TAM allows the model to have various filters in the early and mid phase of the network and narrows the choices the choices down to important ones in the last stage. • TAM helps a network to converge fast. In this paper, we propose Tree-structure Attention module which enables a network to learn interchannel relationships which deviates from the current trend of capturing long-range dependencies in attention literature. TAM adopts light-weight point-wise group convolutions to allow communication between neighboring channels. Once trained, TAM acts as a static gate controlling signal at a certain location which does not depend on input feature but on the location where it is placed. Moreover, TAM permits higher variances in filter weights in the early and mid phase and helps the filters to focus on important ones at the last phase before classifier. On top of that, TAM produces favorable performance gains with only a few additional parameters to a backbone network. These advantages of TAM shed a light on a new way to attend features.
Our paper proposes an attention module which captures inter-channel relationships and offers large performance gains.
A significant challenge for the practical application of reinforcement learning toreal world problems is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a "prior" that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior. Reinforcement learning (RL) algorithms have the potential to automate a wide range of decisionmaking and control tasks across a variety of different domains, as demonstrated by successful recent applications ranging from robotic control BID19 BID23 to game playing BID28 BID44. A key assumption of the RL problem statement is the availability of a reward function that accurately describes the desired tasks. For many real world tasks, reward functions can be challenging to manually specify, while being crucial to good performance BID1. Most real world tasks are multifaceted and require reasoning over multiple factors in a task (e.g. an autonomous vehicle navigating a city at night), while simultaneously providing appropriate reward shaping to make the task feasible with tractable exploration BID32. These challenges are compounded by the inherent difficulty of specifying rewards for tasks with high-dimensional observation spaces such as images. Inverse reinforcement learning (IRL) is an approach that aims to address this problem by instead inferring the reward function from demonstrations of the task BID31. This has the appealing benefit of taking a data-driven approach to reward specification in place of hand engineering. In practice however, rewards functions are rarely learned as it can be prohibitively expensive to provide demonstrations that cover the variability common in real world tasks (e.g., collecting demonstrations of opening every type of door knob). In addition, while learning a complex function from high dimensional observations might make an expressive function approximator seem like a reasonable modelling assumption, in the "few-shot" domain it is notoriously difficult to unambiguously recover a good reward function with expressive function approximators. Prior solutions have thus instead relied on low-dimensional linear models with handcrafted features that effectively encode a strong prior on the relevant features of a task. This requires engineering a set of features by hand that work well for a specific problem. In this work, we propose an approach that instead explicitly learns expressive features that are robust even when learning with limited demonstrations. Our approach relies on the key observation that related tasks share common structure that we can leverage when learning new tasks. To illustrate, considering a robot navigating through a home. 
While the exact reward function we provide to the robot may differ depending on the task, there is a structure amid the space of useful behaviours, such as navigating to a series of landmarks, and there are certain behaviors we always want to encourage or discourage, such as avoiding obstacles or staying a reasonable distance from humans. This notion agrees with our understanding of why humans can easily infer the intents and goals (i.e., reward functions) of even abstract agents from just one or a few demonstrations BID4, as humans have access to strong priors about how other humans accomplish similar tasks accrued over many years. Similarly, our objective is to discover the common structure among different tasks, and encode the structure in a way that can be used to infer reward functions from a few demonstrations. Figure 1: A diagram of our meta-inverse RL approach. Our approach attempts to remedy over-fitting in few-shot IRL by learning a "prior" that constraints the set of possible reward functions to lie within a few steps of gradient descent. Standard IRL attempts to recover the reward function directly from the available demonstrations. The shortcoming of this approach is that there is little reason to expect generalization as it is analogous to training a density model with only a few examples. More specifically, in this work we assume access to a set of tasks, along with demonstrations of the desired behaviors for those tasks, which we refer to as the meta-training set. From these tasks, we then learn a reward function parameterization that enables effective few-shot learning when used to initialize IRL in a novel task. Our method is summarized in Fig. 1. Our key contribution is an algorithm that enables efficient learning of new reward functions by using meta-training to build a rich "prior" for goal inference. Using our proposed approach, we show that we can learn deep neural network reward functions from raw pixel observations with substantially better data efficiency than existing methods and standard baselines. Inverse reinforcement learning (IRL) BID31 is the problem of inferring an expert's reward function directly from demonstrations. Prior methods for performing IRL range from margin based approaches BID0 BID34 to probabilistic approaches BID33 BID55. Although it is possible to extend our approach to any other IRL method, in this work we base on work on the maximum entropy (MaxEnt) framework BID55. In addition to allowing for sub-optimality in the expert demonstrations, MaxEnt-IRL can be re-framed as a maximum likelihood estimation problem. 4).In part to combat the under-specified nature of IRL, prior work has often used low-dimensional linear parameterizations with handcrafted features BID0 BID55. In order to learn from high dimensional input, BID52 proposed applying fully convolutional networks BID41 to the MaxEnt IRL framework BID55 for several navigation tasks BID51 BID4. Other methods that have incorporated neural network rewards include guided cost learning (GCL) BID11, which uses importance sampling and regularization for scalability to high-dimensional spaces, and adversarial IRL BID13. Several other methods have also proposed imitation learning approaches based on adversarial frameworks that resemble IRL, but do not aim to directly recover a reward function BID16 BID15 BID21. 
In this work, instead of improving the ability to learn reward functions on a single task, we focus on the problem of effectively learning to use prior demonstration data from other IRL tasks, allowing us to learn new tasks from a limited number demonstrations even with expressive non-linear reward functions. Prior work has explored the problem of multi-task IRL, where the demonstrated behavior is assumed to have originated from multiple experts achieving different goals. Some of these approaches include those that aim to incorporate a shared prior over reward functions through extending the Bayesian IRL BID33 framework to the multi-task setting BID7 BID6. Other approaches have clustered demonstrations while simultaneously inferring reward functions for each cluster BID3 or introduced regularization between rewards to a common "shared reward" BID25. Our work is similar in that we also seek to encode prior information common to the tasks. However, a critical difference is that our method specifically aims to distill the meta-training tasks into a prior that can then be used to learn rewards for new tasks efficiently. The goal therefore is not to acquire good reward functions that explain the meta-training tasks, but rather to use them to learn efficiently on new tasks. Our approach builds on work on the broader problem of meta-learning BID40 BID5 BID30 BID48 and generative modelling BID37 BID36 BID29. Prior work has proposed a variety of solutions for learning to learn including memory based methods BID8 BID38 BID27, methods that learn an optimizer and/or initialization BID2 BID35 BID11 BID24, and methods that compare new datapoints in a learned metric space BID20 BID49 BID42 BID46. Our work is motivated by the goal of broadening the applicability of IRL, but in principle it is possible to adapt many of these meta-learning approaches for our problem statement. We leave it to future work to do a comprehensive investigation of different meta-learning approaches which could broaden the applicability of IRL. In this section, we introduce our notation and describe the IRL and meta-learning problems. The standard Markov decision process (MDP) is defined by the tuple (S, A, p s, r, γ) where S and A denote the set of possible states and actions respectively, r: S × A → R is the reward function, γ ∈ is the discount factor and p s: S × S × A → denotes the transition distribution over the next state s t+1, given the current state s t and current action a t. Typically, the goal of "forward" RL is to maximize the expected discounted return R(τ) = T t=1 γ t−1 r(s t, a t).In IRL, we instead assume that the reward function is unknown but that we instead have access to a set of expert demonstrations D = {τ 1, . . ., τ K}, where DISPLAYFORM0 The goal of IRL is to recover the unknown reward function r from the set of demonstrations. We build on the maximum entropy (MaxEnt) IRL framework by BID55, which models the probability of the trajectories as being distributed proportional to their exponentiated return DISPLAYFORM1 where Z is the partition function, Z = τ exp(R(τ))dτ. This distribution can be shown to be induced by the optimal policy in entropy regularized forward RL problem: DISPLAYFORM2 This formulation allows us to pose the reward learning problem as a maximum likelihood estimation (MLE) problem in an energy-based model r φ: DISPLAYFORM3 Learning in general energy-based models of this form is common in many applications such as structured prediction. 
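For intuition, the MaxEnt model above can be made concrete in a small tabular MDP: soft value iteration yields the maximum entropy policy and its expected state visitations, and the gradient of the negative log-likelihood in Eq. 3 then reduces to the difference between policy and demonstration visitations. The sketch below is our own illustration (finite horizon, state-dependent neural reward; all function names are ours), not the paper's implementation:

```python
import numpy as np
import torch

def maxent_state_visitations(reward, transitions, p0, horizon):
    """Soft value iteration + forward pass: expected state visitations of the MaxEnt policy.

    reward: (S,) per-state rewards; transitions: (S, A, S) p(s'|s,a); p0: (S,) initial dist.
    """
    S, A, _ = transitions.shape
    V = np.zeros(S)
    for _ in range(horizon):                       # backward pass (soft Bellman backups)
        Q = reward[:, None] + transitions @ V      # (S, A)
        V = np.log(np.exp(Q).sum(axis=1))          # soft maximum over actions (use logsumexp for stability)
        policy = np.exp(Q - V[:, None])            # pi(a|s) proportional to exp(Q)
    d, mu = p0.copy(), np.zeros(S)
    for _ in range(horizon):                       # forward pass accumulates expected visitations
        mu += d
        d = np.einsum("s,sa,sat->t", d, policy, transitions)
    return mu

def irl_nll_grad_step(reward_net, state_feats, mu_demo, transitions, p0, horizon, lr=0.1):
    """One gradient step on the MaxEnt IRL negative log-likelihood for a neural reward."""
    r = reward_net(state_feats).squeeze(-1)        # (S,) rewards from features
    mu_pi = maxent_state_visitations(r.detach().cpu().numpy(), transitions, p0, horizon)
    diff = torch.as_tensor(mu_pi - mu_demo, dtype=r.dtype)
    surrogate = (diff * r).sum()                   # its parameter gradient equals the NLL gradient
    surrogate.backward()
    with torch.no_grad():
        for p in reward_net.parameters():
            p -= lr * p.grad
            p.grad = None
    return reward_net
```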
However, in contrast to applications where learning can be supervised by millions of labels (e.g. semantic segmentation), the learning problem in Eq. 3 must typically be performed with a relatively small number of example demonstrations. In this work, we seek to address this issue in IRL by providing a way to integrate information from prior tasks to constrain the optimization in Eq. 3 in the regime of limited demonstrations. The goal of meta-learning algorithms is to optimize for the ability to learn efficiently on new tasks. Rather than attempt to generalize to new datapoints, meta-learning can be understood as attempting to generalize to new tasks. It is assumed in the meta-learning setting that there are two sets of tasks that we refer to as the meta-training set {T i ; i = 1..N} and meta-test set {T j ; j = 1..M}, which are both drawn from a distribution p(T). During meta-training time, the meta-learner attempts to learn the structure of the tasks in the meta-training set, such that when it is presented with a new test task, it can leverage this structure to learn efficiently from a limited number of examples. To illustrate this distinction, consider the case of few-shot learning setting. Let f θ denote the learner, and let a task be defined by learning from K training examples DISPLAYFORM0 One approach to meta-learning is to directly parameterize the meta-learner with an expressive model such as a recurrent or recursive neural network BID8 BID27 conditioned on the task training data and the inputs for the test task: DISPLAYFORM1. Such a model is optimized using log-likelihood across all tasks. In this approach to meta-learning, since neural networks are known to be universal function approximators BID43, any desired structure between tasks can be implicitly encoded. Rather than learn a single black-box function, another approach to meta-learning is to learn components of the learning procedure such as the initialization BID11 or the optimization algorithm BID35 BID2. In this work we extend the approach of model agnostic meta-learning (MAML) introduced by BID11, which learns an initialization that is adapted by gradient descent. Concretely, in the supervised learning case, given a loss function L(θ, X T, Y T) (e.g. cross-entropy), MAML performs the following optimization DISPLAYFORM2 where the optimization is over an initial set of parameters θ and the loss on the held out tasks X test T becomes the signal for learning the initial parameters for gradient descent on X tr T. This optimization is analogous to adding a constraint in a multi-task setting, which we show in later sections is analogous in our setting to learning a prior over reward functions. Our goal in meta-IRL is to learn how to learn reward functions across many tasks such that the model can infer the reward function for a new task using only one or a few expert demonstrations. Intuitively, we can view this problem as aiming to learn a prior over the intentions of human demonstrators, such that when given just one or a few demonstrations of a new task, we can combine the learned prior with the new data to effectively determine the human's reward function. Such a prior is helpful in inverse reinforcement learning settings, since the space of relevant reward functions is much smaller than the space of all possible rewards definable on the raw observations. During meta-training, we have a set of tasks {T i ; i = 1..N}. Each task T i has a set of demonstrations D T = {τ 1, . . 
., τ K} from an expert policy which we partition into disjoint D tr T and D test T sets. The demonstrations for each meta-training task are assumed to be produced by the expert according to the maximum entropy model in Section 3.1. During meta-training, these tasks will be used to encodes common structure so that our model can quickly acquire rewards for new tasks from just a few demonstrations. After meta-training, our method is presented with a new task. During this meta-test phase, the algorithm must infer the parameters of the reward function r φ (s t, a t) for the new task from a few demonstrations. As is standard in meta-learning, we assume that the test task is from the same distribution of tasks seen during meta-training, a distribution that we denote as p(T). In order to meta-learn a reward function that can act as a prior for new tasks and new environments, we first formalize the notion of a good reward by defining a loss L T (θ) on the reward function r θ for a particular task T. We use the MaxEnt IRL loss L IRL discussed in Section 3, which, for a given D T, leads to the following gradient BID55: DISPLAYFORM0 DISPLAYFORM1 via chain rule 16:Compute updated parameters DISPLAYFORM2 Sample demos DISPLAYFORM3 via chain rule Compute meta-gradient 20: DISPLAYFORM4 Update initial parameters where µ τ are the state visitations under the optimal maximum entropy policy under r θ, and µ D are the mean state visitations under the demonstrated trajectories. If our end goal were to achieve a single reward function that works as well as possible across all tasks in {T i ; i = 1..N}, then we could simply follow the mean gradient across all tasks. However, our objective is different: instead of optimizing performance on the meta-training tasks, we aim to learn a reward function that can be quickly and efficiently adapted to new tasks at meta-test time. In doing so, we aim to encode prior information over the task distribution in this learned reward prior. We propose to implement such a learning algorithm by finding the parameters θ, such that starting from θ and taking a small number of gradient steps on a few demonstrations from given task leads to a reward function for which a set of test demonstrations have high likelihood, with respect to the MaxEnt IRL model. In particular, we would like to find a θ such that the parameters DISPLAYFORM5 lead to a reward function r φ T for task T, such that the IRL loss (corresponding to negative loglikelihood) for a disjoint set of test demonstrations, given by L DISPLAYFORM6 IRL, is minimized. The corresponding optimization problem for θ can therefore be written as follows: DISPLAYFORM7 Our method acquires this prior θ over rewards in the task distribution p(T) by optimizing this loss. This amounts to an extension of the MAML algorithm in Section 3.2 to the inverse reinforcement learning setting. This extension is quite challenging, because computing the MaxEnt IRL gradient requires repeatedly solving for the current maximum entropy policy and visitation frequencies, and the MAML objective requires computing derivatives through this gradient step. Next, we describe in detail how this is done. An overview of our method is also outlined in Alg. 1.Meta-training. The computation of the meta-gradient for the objective in Eq. 7 can be conceptually separated into two parts. First, we perform the update in Eq. 6 by computing the expected state visitations µ, which is the expected number of times an agent will visit each state. 
We denote this overall procedure as STATE-VISITATIONS-POLICY, and follow BID55 by first computing the maximum entropy optimal policy in Eq. 2 under the current r θ, and then approximating µ using dynamic programming. Next, we compute the state visitation distribution of the expert using a procedure which we denote as STATE-VISITATIONS-TRAJ. This can be done either empirically, by averaging the state visitation of the experts demonstrations, or by using STATE-VISITATIONS-POLICY if the true reward is available at meta-training time. This allows us to recover the IRL gradient according to Eq. 5, which we can then apply to compute φ T according to Eq. 6.Second, we need to differentiate through this update to compute the gradient of the meta-loss in Eq. 7. Note that the meta-loss itself is the IRL loss evaluated with a different set of test demonstrations. We follow the same procedure as above to evaluate the gradient of L IRL with respect to the post-update parameters φ T, and then apply the chain rule to compute the meta-gradient: DISPLAYFORM0 where on the second line we differentiate through the MaxEnt-IRL update. The derivation of this expression is somewhat more involved and provided in the supplementary Appendix D.Meta-testing. Once we have acquired the meta-trained parameters θ that encode a prior over p(T), we can leverage this prior to enable fast, few-shot IRL of novel tasks in {T j ; j = 1..M}. For each task, we first compute the state visitations from the available set of demonstrations for that task. Next, we use these state visitations to compute the gradient, which is the same as the inner loss gradient computation of the meta-training loop in Alg. 1. We apply this gradient to adapt the parameters θ to the new task. Even if the model was trained with only one to three inner gradient steps, we found in practice that it was beneficial to take substantially more gradient steps during meta-testing; performance continued to improve with up to 20 steps. The objective in Eq. 6 optimizes for parameters that enable that reward function to adapt and generalize efficiently on a wide range of tasks. Intuitively, constraining the space of reward functions to lie within a few steps of gradient descent can be interpreted as expressing a "locality" prior over reward function parameters. This intuition can be made more concrete with the following analysis. DISPLAYFORM0 Figure 2: Our approach can be understood as approximately learning a distribution over the demonstrations τ, in the factor graph p(τ) = 1 Z T t=1 Φr(φ T, st, at)Φdyn(st+1, st, at) (above) where we learn a prior over φT, which during meta-test is used for MAP inference over new expert demonstrations. By viewing IRL as maximum likelihood estimation, we can take the perspective of BID14 who showed that for a linear model, fast adaptation via a few steps of gradient descent in MAML is performing MAP inference over φ, under a Gaussian prior with the mean θ and a covariance that depends on the step size, number of steps and curvature of the loss. This is based on the connection between early stopping and regularization previously discussed in , which we refer the readers to for a more detailed discussion. The interpretation of MAML as imposing a Gaussian prior on the parameters is exact in the case of a likelihood that is quadratic in the parameters (such as the log-likelihood of a Gaussian in terms of its mean). For any non-quadratic likelihood, this is an approximation in a local neighborhood around θ (i.e. 
up to convex quadratic approximation). In the case of very complex parameterizations, such as deep function approximators, this is a coarse approximation and unlikely to be the mode of a posterior. However, we can still frame the effect of early stopping and initialization as serving as a prior in a similar way as prior work BID45 BID10 BID14. More importantly, this interpretation hints at future extensions to our approach that could benefit from employing more fully Bayesian approaches to reward and goal inference. Our evaluation seeks to answer two questions. First, we aim to test our core hypothesis that leveraging prior task experience enables reward learning for new tasks with just a few demonstrations. Second, we compare our method with alternative algorithms that make use of multi-task experience. We test our core hypothesis by comparing learning performance on a new task starting from the learned initialization produced by MandRIL, compared to starting from scratch with a random initialization. This comparison is meant to evaluate whether prior experience on other tasks can in fact make inverse RL more efficient. To our knowledge, there is no prior work that addresses the meta-inverse reinforcement learning problem introduced in this paper. Thus, to provide a point of comparison and calibrate the difficulty of the tasks, we adapt two alternative black-box meta-learning methods to the IRL setting. The comparisons to both of the black-box methods described below evaluate the importance of incorporating the IRL gradient into the meta-learning process, rather than learning the adaptation process entirely from scratch.• Demo conditional model: Our method implicitly conditions on the demonstrations through the gradient update. In principle, a conditional deep model with sufficient capacity could implicitly implement a similar learning rule. Thus, we consider a conditional model (often referred to as a "contextual model" BID12), which receives the demonstration as an additional input.• Recurrent meta-learner: We additionally compare to an RNN-based meta-learner BID38 BID9. Specifically, we implement a conditional model by feeding both images and sequences of states visited by the demonstrations to an LSTM.Our approach can be understood as explicitly optimizing for an effective parameter initialization for the IRL problem. In order to test the benefits of our proposed formulation, we also compare with finetuning an initialization obtained with the same set of prior tasks, but with supervised pretraining as follows:• Supervised pre-training: We compare to following the average gradient during meta-training, averaged across tasks, and fine-tuning at meta-test time (as discussed in Section 4). This comparison evaluates the benefits of optimizing explicitly for weights that perform well under fine-tuning. We compare to pre-training on a single task as well as all the meta-training tasks. Figure 3: An example task: When learning a task, the agent has access to the image (left) and demonstrations (red arrows). To evaluate the agent's learning (right), the agent is tested for its ability to recover the reward for the task when the objects have been rearranged. The reward structure we wish to capture can be illustrated by considering the initial state in blue. An policy acting optimally under a correctly inferred reward should interpret the other objects as obstacles, and prefer a path on the dirt in between them. Next, we describe our environment and evaluation. Spriteworld navigation domain. 
Since most prior IRL works (and multi-task IRL works) have studied settings where linear reward function approximators suffice (i.e. low-dimensional state spaces made up from hand-designed features), we design an experiment that is significantly more challenging-that requires learning rewards on raw pixels-while still exhibiting the multi-task structure needed to test our core hypothesis. We consider a navigation problem where we aim to learn a convolutional neural network that directly maps image pixels to rewards. To do so, we introduce "SpriteWorld," which is a synthetically generated task, some examples of which are shown in Fig. 3. The task visuals are inspired by Starcraft and work applying learning algorithms to perform micromanagement (e.g. BID47), although we do not use the game engine. Tasks involve navigating to goal objects while exhibiting preference over terrain types (e.g. the agent prefers to traverse dirt tiles over traversing grass tiles). At meta-test time, we provide one or a few demonstrations in a single training environment and evaluate the reward learned using these demonstrations in a new, test environment that contains the same objects as the training environment, but arranged differently. Evaluating in a new test environment is critical to measure that the reward learned the correct visual cues, rather than simply memorizing the demonstration trajectory. (FIG0 caption: held-out tasks test performance (top), and test performance on held-out tasks with novel sprites (bottom). All methods are capable of overfitting to the training environment (see Appendix A). However, in both test settings, MandRIL achieves performance comparable to the training environment, while the other methods overfit until they receive at least 10 demonstrations. The recurrent meta-learner has a value difference larger than 60 in both test settings. Shaded regions show 95% confidence intervals.) The underlying MDP structure of SpriteWorld is a grid, where the states are each of the grid cells, and the actions enable the agent to move to any one of its 8-connected neighbors. We generate unique tasks from this domain as follows. First, we randomly choose a set of 3 sprites from a total of 100 sprites from the original game (creating a total of 161,700 unique tasks). We randomly place these three sprites within a randomly generated terrain tiling; we designate one of the sprites to be the goal of the navigation task, i.e. the object to which the agent must navigate. The other two objects are treated as obstacles, and the agent incurs a large negative reward for failing to avoid them. In each task, we optimize our model on a meta-training set and evaluate the ability of the reward function to generalize to a rearrangement of the same objects. For example, suppose the goal of the task is to navigate to sprite A, while avoiding sprites B and C. Then, to generate an environment for evaluation, we resample the positions of the sprites, while the underlying task remains the same (i.e., navigate to A). This requires the model to make use of the visual patterns in the scene to generalize effectively, rather than simply memorizing positions of sprites. We evaluate on novel combinations of units seen in meta-training, as well as the ability to generalize to new unseen units.
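As a concrete illustration of this setup, the sketch below builds an 8-connected grid MDP of the kind described above (a toy deterministic layout with a goal cell, obstacle cells carrying a large negative reward, and per-cell terrain costs) and computes a value difference for a learned reward, i.e., the sub-optimality under the true reward of the policy that is optimal for the learned reward. The grid size, penalty magnitude, and the use of discounted value iteration are assumptions made here for illustration only.

```python
import numpy as np

N = 20                                                  # 20 x 20 grid -> 400 states (assumed size)
MOVES = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def build_transitions(n=N):
    """Deterministic transition tensor P[s, a, s'] for 8-connected moves, clipped at the borders."""
    S, A = n * n, len(MOVES)
    P = np.zeros((S, A, S))
    for r in range(n):
        for c in range(n):
            for a, (dr, dc) in enumerate(MOVES):
                r2, c2 = min(max(r + dr, 0), n - 1), min(max(c + dc, 0), n - 1)
                P[r * n + c, a, r2 * n + c2] = 1.0
    return P

def layout_reward(goal, obstacles, terrain_cost, n=N):
    """Per-cell reward: reach the goal, avoid obstacles, prefer cheap terrain."""
    r = -np.asarray(terrain_cost, dtype=float).reshape(n * n)   # e.g. dirt cheaper than grass
    for (orow, ocol) in obstacles:
        r[orow * n + ocol] = -10.0                              # assumed penalty magnitude
    r[goal[0] * n + goal[1]] = 1.0
    return r

def value_iteration(r, P, gamma=0.95, iters=300):
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        Q = r[:, None] + gamma * P @ V
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)

def expected_value_difference(r_true, r_learned, P, gamma=0.95):
    """Sub-optimality, under the true reward, of the policy optimal for the learned reward."""
    V_opt, _ = value_iteration(r_true, P, gamma)
    _, pi_hat = value_iteration(r_learned, P, gamma)
    Ppi = P[np.arange(P.shape[0]), pi_hat]                      # transitions under pi_hat
    V_hat = np.zeros(P.shape[0])
    for _ in range(300):                                        # evaluate pi_hat under the true reward
        V_hat = r_true + gamma * Ppi @ V_hat
    return (V_opt - V_hat).mean()                               # averaged over a uniform initial state

P = build_transitions()
r_true = layout_reward(goal=(3, 17), obstacles=[(10, 10), (5, 4)], terrain_cost=np.random.rand(N, N))
print(expected_value_difference(r_true, np.random.randn(N * N), P))
```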
We provide further details on our setup in Appendices A and B.We measure performance using the expected value difference, which measures the sub-optimality of a policy learned under the learned reward; this is a standard performance metric used in prior IRL work BID22 BID52. The metric is computed by taking the difference between the value of the optimal policy under the learned reward and the value of the optimal policy under the true reward. Evaluation protocol. We evaluate on held-out tasks that were unseen during meta-training. We consider two settings: tasks involving new combinations and placements of sprites, with individual sprites that were present during meta-training, and tasks with combinations of entirely new sprites which we refer to as "out of domain objects." For each task, we generate one environment (set of sprite positions) along with one or a few demonstrations for adapting the reward, and generate a second environment (with new sprite positions) where we evaluate the adapted reward. In these meta-test time evaluations, we refer to the performance on the first environment as "training performance" (not to be confused with meta-training) and to performance on the second as "testing performance". We evaluate on 32 meta-test randomly generated tasks. Results. The are shown in FIG0, which illustrate test performance with in-distribution and out-of-distribution sprites. Our approach, MandRIL, achieves consistently better performance in both settings. Most significantly, our approach performs well even with single-digit numbers of demonstrations. By comparison, alternative meta-learning methods generally overfit considerably, attaining good training performance (see Appendix. A for curves) but poor test performance. Learning the reward function from scratch is in fact the most competitive baseline -as the number of demonstrations increases, simply training the fully convolutional reward function from scratch on the new task is the only method that matches the performance of MandRIL when provided 20 or more demonstrations. However, with only a few demonstrations, MandRIL has substantially lower value difference. It is worth noting the performance of MandRIL on the out of distribution test setting FIG0: although the evaluation is on new sprites, MandRIL is still able to adapt via gradient descent and exceed the performance of learning from scratch and all other methods. Figure 5: Meta-test performance comparison with varying number of demonstrations using pre-trained weights fine-tuned over either a single task or all tasks. We find that pre-training on the full set of tasks hurts adaptation, while pre-training on a single task does not improve performance (comparable to random initialization). Our approach ManDRIL outperforms all these methods, which shows that explicitly optimizing for initial weights for fine-tuning robustly improves performance. Shaded regions show 95% confidence intervals. Finally, as it is common practice to fine-tune representations obtained from a supervised pre-training phase, we perform this comparison in Figure 5. We compare against an approach that follows the mean gradient across the tasks at meta-training time and is fine-tuned at meta-test time which we find is less effective than learning from a random initialization. We conclude that fine tuning reward functions learned in this manner is not an effective way of using prior task information. When using a single task for pre-training, we find that it performs comparable to random initialization. 
In contrast, we find that our approach, which explicitly optimizes for initial weights for fine-tuning, robustly improves performance. In this work, we present an approach that enables few-shot learning for reward functions of new tasks. We achieve this through a novel formulation of inverse reinforcement learning that learns to encode common structure across tasks. Using our meta-IRL approach, we show that we can leverage data from previous tasks to effectively learn deep neural network reward functions from raw pixel observations for new tasks, from only a handful of demonstrations. Our work paves the way for future work that considers environments with unknown dynamics, or more fully probabilistic approaches to reward and goal inference. The input to our reward function for all experiments is a resized 80 × 80 RGB image, with an output space of 20 × 20 in the underlying MDP state space s. In order to compute the optimal policy, we use Q-iteration. In our experiments, we parameterize all reward functions starting from the same base learner. The first layer is an 8 × 8 convolution with a stride of 2, 256 filters, and symmetric padding of 4. The second layer is a 4 × 4 convolution with a stride of 2, 128 filters, and symmetric padding of 1. The third and fourth layers are 3 × 3 convolutions with a stride of 1, 64 filters, and symmetric padding of 1. The final layer is a 1 × 1 convolution. Our LSTM BID17 implementation is based on the variant used by Zaremba et al. BID54. The input to the LSTM at each time step is the location of the agent in the grid; we separately embed the (x, y)-coordinates. This is then used to predict an additional channel into the base CNN architecture described above. We also experimented with conditioning the initial hidden state on image features from a separate CNN, but found that this did not improve performance. In our demo conditional model, we preserve the spatial information of the demonstrations by feeding in the state visitation map as an image grid, upsampled with bi-linear interpolation, as an additional channel to the image. In our setup, both demo-conditional models share the same convolutional architecture, but differ only in how they condition on the demonstrations. For all our methods, we optimized our models with Adam. We tuned the learning rate α, the inner learning rate β of our approach, and the ℓ2 weight decay on the initial parameters. In our LSTM learner, we experimented with different embedding sizes, as well as the dimensionality of the LSTM, although we found that these hyperparameters did not impact performance. A negative result we found was that the bias transformation BID12 did not help in our experimental setting. The sprites in our environment are extracted directly from the StarCraft files. We used in total 100 random units for meta-training. Evaluation on new objects was performed with 5 randomly selected sprites. For computational efficiency, we create a meta-training set of 1000 tasks and cache the optimal policy and state visitations under the true cost. Our evaluation is over 32 tasks. Our set of sprites was divided into two categories: buildings and characters. Each character had multiple poses (taken from different frames of animation, such as walking/running/flying), whereas buildings only had a single pose.
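The fully convolutional reward architecture described above (an 80 × 80 RGB input mapped to a 20 × 20 reward over the grid) can be written down directly. The sketch below uses PyTorch and inserts ReLU nonlinearities between layers; both the framework and the nonlinearities are our choices, since the text does not specify them, and "symmetric padding" is read as zero padding of the stated width.

```python
import torch
import torch.nn as nn

class RewardCNN(nn.Module):
    """Maps an 80x80 RGB observation to a 20x20 per-cell reward map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 256, kernel_size=8, stride=2, padding=4), nn.ReLU(),   # 80 -> 41
            nn.Conv2d(256, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(), # 41 -> 20
            nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),                                    # final 1x1 conv -> reward
        )

    def forward(self, img):                        # img: (batch, 3, 80, 80)
        return self.net(img).squeeze(1)            # (batch, 20, 20) reward map

rewards = RewardCNN()(torch.zeros(1, 3, 80, 80))
assert rewards.shape == (1, 20, 20)
```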
During meta-training the units were randomly placed, but to avoid the possibility that the agent would not need to actively avoid obstacles, the units were placed away from the boundary of the image in both the meta-validation and meta-test set. The terrain in each environment was randomly generated using a set of tiles, each belonging to a specific category (e.g. grass, dirt, water). For each tile, we also specified a set of possible tiles for each of the 4-neighbors. Using these constraints on the neighbors, we generated random environment terrains using a graph traversal algorithm, where successor tiles were sampled randomly from this set of possible tiles. This process ed in randomly generated, seamless environments. The names of the units used in our experiments are as follows (names are from the original game files):The list of buildings used is: academy, assim, barrack, beacon, cerebrat, chemlab, chrysal, cocoon, comsat, control, depot, drydock, egg, extract, factory, fcolony, forge, gateway, genelab, geyser, hatchery, hive, infest, lair, larva, mutapit, nest, nexus, nukesilo, nydustpit, overlord, physics, probe, pylon, prism, pillbox, queen, rcluster, refinery, research, robotic, sbattery, scolony, spire, starbase, stargate, starport, temple, warm, weaponpl, wessel. The list of characters used is: acritter, arbiter, archives, archon, avenger, battlecr, brood, bugguy, carrier, civilian, defiler, dragoon, drone, dropship, firebat, gencore, ghost, guardian, hydra, intercep, jcritter, lurker, marine, missile, mutacham, mutalid, sapper, scout, scv, shuttle, snakey, spider, stank, tank, templar, trilob, ucereb, uikerr, ultra, vulture, witness, zealot, zergling. C META-TEST TRAINING PERFORMANCE Figure 6: Meta-test "training" performance with varying numbers of demonstrations (lower is better). This is the performance on the environment for which demonstrations are provided for adaptation. As the number of demonstrations increase, all methods are able to perform well in terms of training performance as they can simply overfit to the training environment without acquiring the right visual cues that allow them to generalize. However, we find comes at the cost comes of considerable overfitting as we discuss in Section. 5. We define the quality of reward function r θ parameterized by θ ∈ R k on task T with the MaxEnt IRL loss, L T IRL (θ), described in Section 4. The corresponding gradient is DISPLAYFORM0 where ∂ r θ /∂ θ is the k ×|S||A|-dimensional Jacobian matrix of the reward function r θ with respect to the parameters θ. Here, µ τ ∈ R |S||A| is the vector of state visitations under the trajectory τ (i.e. the vector whose elements are 1 if the corresponding state-action pair has been visited by the trajectory τ, and 0 otherwise), and µ D T = 1 |D T | τ ∈D T µ τ is the mean state visitations over all demonstrated trajectories in D T. Let φ T ∈ R k be the updated parameters after a single gradient step. Then DISPLAYFORM1 Let L test T be the MaxEnt IRL loss, where the expectation over trajectories is computed with respect to a test set that is disjoint from the set of demonstrations used to compute L test T (θ) in Eq. 10. We seek to minimize DISPLAYFORM2 over the parameters θ. To do so, we first compute the gradient of Eq. 11, which we derive here. Applying the chain rule DISPLAYFORM3 where in the last line we substitute in the gradient of the MaxEnt IRL loss in Eq. 9. In Eq. 
12, we use the following notation:• ∂ φ T /∂ θ denotes the k × k-dimensional vector of partial derivatives ∂ φ T,i /∂ θ j,• ∂ r φ T /∂ φ T denotes the k × |S||A|-dimensional matrix of partial derivatives ∂ r φ T,i /∂ φ T,j,• and, ∂ L test T /∂ r φ T denotes the k-dimensional gradient vector of L test T with respect to r φ T.We will now focus on the term inside of the parentheses in Eq. 12, which is a k × k-dimensional matrix of partial derivatives. DISPLAYFORM4 where between the first and second lines, we apply the chain rule to expand the second term. In this expression, we make use of the following notation:• ∂ 2 r θ /∂ θ 2 denotes the k × |S||A|-dimensional matrix of second-order partial derivatives of the form ∂ 2 r θ,i /∂ θ 2 j, DISPLAYFORM5 • ∂ r θ /∂ θ denotes the k × |S||A|-dimensional matrix of partial derivatives of the form ∂ r θ,i /∂ θ j,• and, DISPLAYFORM6 ] is the |S||A|-dimensional Jacobian matrix of E τ [µ τ] with respect to the reward function r θ (we will examine in more detail exactly what this is below).When substituted back into Eq. 12, the ing gradient is equivalent to that in Eq. 8 in Section 4. In order to compute this gradient, however, we must take the gradient of the expectation E τ [µ τ] with respect to the reward function r θ. This can be done by expanding the expectation as follows DISPLAYFORM7 τ exp(µ τ r θ)(µ τ µ τ) − exp(µ τ r θ) (τ exp(µ τ r θ)) 2 τ (µ τ µ τ) exp(µ τ r θ) DISPLAYFORM8
The applicability of inverse reinforcement learning is often hampered by the expense of collecting expert demonstrations; this paper seeks to broaden its applicability by incorporating prior task information through meta-learning.
1,386
scitldr
Recent work has focused on combining kernel methods and deep learning. With this in mind, we introduce Deepström networks -- a new architecture of neural networks which we use to replace top dense layers of standard convolutional architectures with an approximation of a kernel function by relying on the Nyström approximation. Our approach is easy highly flexible. It is compatible with any kernel function and it allows exploiting multiple kernels. We show that Deepström networks reach state-of-the-art performance on standard datasets like SVHN and CIFAR100. One benefit of the method lies in its limited number of learnable parameters which make it particularly suited for small training set sizes, e.g. from 5 to 20 samples per class. Finally we illustrate two ways of using multiple kernels, including a multiple Deepström setting, that exploits a kernel on each feature map output by the convolutional part of the model. Kernel machines and deep learning have mostly been investigated separately. Both have strengths and weaknesses and appear as complementary family of methods with respect to the settings where they are most relevant. Deep learning methods may learn from scratch relevant features from data and may work with huge quantities of data. Yet they actually require large amount of data to fully exploit their potential and may not perform well with limited training datasets. Moreover deep networks are complex and difficult to design and require lots of computing and memory resources both for training and for inference. Kernel machines are powerful tools for learning nonlinear relations in data and are well suited for problems with limited training sets. Their power comes from their ability to extend linear methods to nonlinear ones with theoretical guarantees. However, they do not scale well to the size of the training datasets and do not learn features from the data. They usually require a prior choice of a relevant kernel amongst the well known ones, or even require defining an appropriate kernel for the data at hand. Although most research in the field of deep learning seems to have evolved as a "parallel learning strategy" to the field of kernel methods, there are a number of studies at the interface of the two domains which investigated how some concepts can be transferred from one field to another. Mainly, there are two types of approaches that have been investigated to mix deep learning and kernels. Few works explored the design of deep kernels that would allow working with a hierarchy of representations as the one that has been popularized with deep learning (2; 14; 7; 6; 20; 23). Other studies focused on various ways to plug kernels into deep networks (13; 24; 5; 12; 25). This paper follows this latter line of research, it focuses on convolutional networks. Specifically, we propose Deepström networks which are built by replacing dense layers of a convolutional neural network by an adaptive approximation of a kernel function. Our work is inspired from Deep Fried Convnets which brings together convolutional neural networks and kernels via Fastfood, a kernel approximation technique based on random feature maps. We revisit this concept in the context of Nyström kernel approximation BID21. One key advantage of our method is its flexibility that enables the use of any kernel function. 
Indeed, since the Nyström approximation uses an explicit feature map from the data kernel matrix, it is not restricted to a specific kernel function and not limited only to RBF kernels, as in Fastfood approximation. This is particularly useful when one wants to use or learn multiple different kernels instead of a single kernel function, as we demonstrate here. In particular we investigate two different ways of using multiple kernels, one is a straightforward extension to using multiple kernels while the second is a multiple Deepström variant that exploits a Nyström kernel approximation for each of the feature map output by the convolutional part of the neural network. Furthermore the specific nature of our architecture makes it use only a limited number of parameters, which favours learning with small training sets as we demonstrate on targeted experiments. Our experiments on four datasets (MNIST, SVHN, CIFAR10 and CIFAR100) highlight three important features of our method. First our approach compares well to standard approaches in standard settings (using ful training sets) while requiring a reduced number of parameters compared to full deep networks and of the same order of magnitude as Deep Fried Convnets. This specific feature of our proposal makes it suitable for dealing with limited training set sizes as we show by considering experiments with tens or even fewer training samples per class. Finally the method may exploit multiple kernels, providing a new tool with which to approach the problem of multiple kernel learning (MKL), and enabling taking into account the rich information in multiple feature maps of convolution networks through multiple Deepström layers. The rest of the paper is organized as follows. We provide on kernel approximation via the Nyström and the random Fourier features methods and describe the Deep Fried Convnet architecture in Section 2. The detailed configuration of the proposed Deepström network is described in Section 3. We also show in Section 3 how Deepström networks can be used with multiple kernels. Section 4 reports experimental on MNIST, SVHN, CIFAR10 and CIFAR100 datasets to first provide a deeper understanding of the behaviour of our method with respect to the choice of the kernels and the combination of these, and second to compare it to state of the art baselines on classification tasks with respect to accuracy and to complexity issues, in particular in the small training set size setting. Kernel approximation methods have been proposed to make kernel methods scalable. Two popular methods are Nyström approximation BID21 and random features approximation BID15. The former approximates the kernel matrix by an efficient low-rank decomposition, while the latter is based on mapping input features into a low-dimensional feature space where dot products between features approximate well the kernel function. Nyström approximation BID21 It computes a low-rank approximation of the kernel matrix by randomly subsampling a subset of instances. Let consider a training set of n training samples, DISPLAYFORM0 which is selected from the training set. Assuming the subset includes the first samples, or rearranging the training samples this way, K may be rewritten as: DISPLAYFORM1 where K 11 is the Gram matrix on subset L. 
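This block structure yields the standard Nyström reconstruction of the full Gram matrix from only the kernel values between every sample and the subsample L. A small numpy sketch (the RBF kernel and the pseudo-inverse, used for numerical safety, are illustrative choices):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_approximation(X, landmark_idx, kernel=rbf_kernel):
    """Approximate the full Gram matrix of X using only the landmark rows/columns."""
    L = X[landmark_idx]
    K11 = kernel(L, L)                       # Gram matrix on the subsample
    C = kernel(X, L)                         # kernel between every sample and the subsample
    return C @ np.linalg.pinv(K11) @ C.T     # low-rank reconstruction of K

X = np.random.randn(200, 3)
K_hat = nystrom_approximation(X, np.arange(20))
print(np.linalg.norm(K_hat - rbf_kernel(X, X)))   # reconstruction error of the approximation
```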
Nyström approximation is obtained as follows DISPLAYFORM2 From this approximation the Nyström nonlinear representation of a single example x is given by DISPLAYFORM3 where DISPLAYFORM4 Random features approximation It computes a low-dimensional feature mapφ of dimension q such that φ (·),φ(·) =k(·, ·) k(·, ·). Two instances of this method are Random Kitchen Sinks (RKS) and Fastfood (17; 9). RKS approximates a Radial Basis Function (RBF) kernel using a random feature map defined as DISPLAYFORM5 where z ∈ R p, Q ∈ R q×p and Q i,j are drawn randomly. If Q i,j are drawn according to a Gaussian distribution then the method is shown to approximate the Gaussian kernel, i.e. φ rks (DISPLAYFORM6 where σ is the hyper-parameter of the kernel. Note that σ is related to the parameters of the Gaussian distribution that generate the random features. The Fastfood method is a variant of RKS with reduced computational cost for the Gaussian kernel. It is based on approximating the matrix Q in Eq. 2, when q = p, by a product of diagonal and hadamard matrices according to DISPLAYFORM7 where S,G and B are diagonal matrices of size p × p, Π ∈ {0, 1} p×p is a random permutation matrix, H is a Hadamard matrix which does not requite to be stored, and σ is an hyperparameter. Matrix V may be used in place of Q in Eq. 2 to define the Fastfood nonlinear representation map DISPLAYFORM8 Note that this definition requires p to be a power of 2 to take advantage of the recursive structure of the Hadamard matrix. Note also that to reach a representation dimension q > p one may compute multiple V and concatenate the corresponding φ f f.Deep Fried Convnets Our attention in this work is especially focused on combining kernel approximation with deep learning architecture. Deep Fried Convnets is a deep learning architecture that replaces dense layers of a convolutional neural architecture by a Fastfood approximation of a kernel. This allows to take advantage of the low complexity cost in terms of computation and memory of Fastfood to reduce significantly the computation cost and the number of parameters of the fully-connected layers of the deep convolutional neural network. More formally, let conv(x) be the representation of the data sample x learned by a convolutional neural network. It may include a number of convolution blocks, each including convolution and pooling layers, batch normalization and nonlinear activation. In Deep Fried Convnets, an input x ∈ R d is mapped to the representation spaces conv(x) ∈ R p and then the Fastfood feature map φ f f is applied to the convolutional representation conv(x) instead of the fully-connected layers. The feature representation of x with Deep Fried Convnets is then (φ f f •conv)(x) ∈ R q. It is of note that this method is dedicated to RBF kernels. In, wo architectures have been proposed. The first one relies on the Fastfood kernel approximation method as described above. The second one is a variant of Fastfood called Adaptive-Fastfood. It involves learning the weights of matrices S, G and B through gradient descent rather than setting them randomly, while matrices Π and H are kept unchanged. In the next section we introduce Deepström Networks as an alternative to Deep Fried Convnets. They are based on Nyström approximation and are not limited to RBF kernels. 
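Both approximations above can be written as explicit feature maps whose inner products approximate the kernel. The sketch below gives the Nyström map φ_nys(x) = K_11^{-1/2} [k(x, x_1), ..., k(x, x_l)]^T and a Gaussian random-features map in its real (cosine) form, which matches the complex formulation in expectation; Fastfood would replace the dense Gaussian matrix Q by the structured product S H G Π H B and is omitted here for brevity.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_feature_map(landmarks, kernel):
    """phi_nys: R^p -> R^l, built from the (pseudo) inverse square root of K11."""
    vals, vecs = np.linalg.eigh(kernel(landmarks, landmarks))
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.clip(vals, 1e-12, None))) @ vecs.T
    return lambda X: kernel(X, landmarks) @ inv_sqrt

def rks_feature_map(p, q, sigma=1.0, seed=0):
    """phi_rks: R^p -> R^q, random Fourier features for the Gaussian kernel."""
    rng = np.random.default_rng(seed)
    Q = rng.normal(scale=1.0 / sigma, size=(q, p))       # rows drawn from N(0, sigma^{-2} I)
    b = rng.uniform(0, 2 * np.pi, size=q)
    return lambda X: np.sqrt(2.0 / q) * np.cos(X @ Q.T + b)

X = np.random.randn(50, 8)
phi = rks_feature_map(p=8, q=2048)
print(np.abs(phi(X) @ phi(X).T - rbf(X, X)).max())        # random-features approximation error
print(nystrom_feature_map(X[:10], rbf)(X).shape)          # (50, 10) Nystrom features
```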
They also allow the use of multiple different kernels and find an appropriate kernel function.3 Deepström NETWORKS In this section, we describe our new Deepström model which combines the desirable characteristics of Nyström approximation and convolutional neural networks. First, we start by revisiting the concept of Nyström kernel approximation from a feature map perspective. Nyström approximation from an empirical kernel map perspective The empirical kernel map is an explicit n-dimensional feature map that is obtained by applying the kernel function on the training data x i. It is defined as DISPLAYFORM9 An interesting feature of the empirical kernel map is that if we consider the inner product in R n ·, · M = ·, M · with a positive semi-definite (psd) matrix M, we can recover the kernel matrix K using the empirical kernel map by setting M equals to the inverse of the kernel matrix. In other words, DISPLAYFORM10 = K. Since K is a psd matrix, one can consider the feature φ emp: DISPLAYFORM11 as an explicit feature map that allows to reconstruct the DISPLAYFORM12 The Deepström network architecture involves a usual convolutional part, conv, including multiple convolutional blocks, and a Deepström layer which is then fed to (eventually multiple) standard dense layers up to the classification layer. The Deepström layer computes the kernel between the output of the conv block for a given input and the corrsponding representations of the trains samples in the subsample L.kernel matrix. This feature map is of dimension n and then is not interesting when the number of example is large. The feature map of the Nyström approximation is given by DISPLAYFORM13 T with x i ∈ L. From an empirical kernel map point of view,φ nys (x) can be seen as an "empirical kernel map" and K DISPLAYFORM14 11 as a metric in the "empirical feature space". From this viewpoint, we think that it could be useful to learn a metric W in the empirical feature space instead of assuming it to be equal to K 11. In a sense, this should allow to learn a kernel by learning its Nyström feature representation. In the following, we call the setting where W is learned by the network Adapative Deepström Network. Principle Deepström networks we propose are an alternative to Deep Fried Convnets. They are based on using the Nyström approximation rather than the Fastfood one to integrate any kernel function on top of convolutional layers of a deep net. Indeed, although Deep Fried Convnets yield state-of-the-art with a significant gain with respect to memory resource and to inference complexity, it is restricted to the Gaussian kernel Fastfood, which may not be always the best choice in practice. In addition, our method can deal with multiple kernels. Deepström nets are Neural Networks that make use of a nonlinear representation function computed with the Nyström approximation (see Figure 1). Starting from Deep Fried Convnets we replace φ f f with φ nys so that a Deepström net implements a function f (x) = (lc • φ nys • conv)(x). In order to compute the above Nyström representation of a sample x one must consider a subsample L of training instances. Since the kernel k is computed on the representations given by convolutional layers, the samples in L must be represented in the same space, and hence must be processed by the convolutional layers as well. 
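A minimal sketch of the adaptive Deepström layer described above: the input's convolutional representation is compared, through a kernel, to the conv representations of the subsample L, and the resulting kernel vector k_{x,L} is transformed by a learnable matrix W before the classification layer. The framework (PyTorch), the RBF kernel, and the stand-in conv backbone are choices made here for illustration only; any kernel and any pretrained conv model could be plugged in.

```python
import torch
import torch.nn as nn

class DeepstromLayer(nn.Module):
    def __init__(self, landmark_reprs, out_dim, gamma=1.0):
        super().__init__()
        self.register_buffer("landmarks", landmark_reprs)               # conv(x_i) for x_i in L, shape (l, p)
        self.W = nn.Parameter(torch.randn(landmark_reprs.shape[0], out_dim) * 0.01)  # learned "metric"
        self.gamma = gamma

    def forward(self, h):                                               # h = conv(x), shape (batch, p)
        d2 = torch.cdist(h, self.landmarks) ** 2                        # squared distances to the subsample
        k = torch.exp(-self.gamma * d2)                                 # kernel vector k_{x,L} per sample
        return k @ self.W                                               # adaptive Nystrom representation

# Putting it together with a (stand-in) conv backbone and a linear classification layer:
conv = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))          # placeholder for a pretrained conv net
landmarks = conv(torch.randn(16, 3, 32, 32)).detach()                   # subsample L of size 16
model = nn.Sequential(conv, DeepstromLayer(landmarks, out_dim=16), nn.Linear(16, 10))
logits = model(torch.randn(4, 3, 32, 32))
```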
Once convolutional representations are calculated, the kernel function may be computed with an input sample and each instance in L in order to get the k x,L, which is then linearly transformed by W before the linear classification layer (see Figure 1).Two main structural differences between Deep Fried Convnets and our Deepström nets: (I) Nyström has the flexibility to use different kernel functions and to combine multiple kernels, and (II) in contrast to Fastfood the Nyström approximation is data dependent. However, one problem arises with the computation of K We present a series of experimental that explore the potential of Deepström networks with respect to various classification settings. First we consider a rather standard setting and compare our approach with standard models on image classification tasks. We explore in particular the behaviour of Deepström networks with various kernels and stress the very limited subsample size needed to reach state-of-the-art accuracy. Next we investigate the use of Deepström networks in a small training set setting, which shows that our approach may allow to learn new classes with only very few training samples, taking advantage of the reduced number of parameters learned by our model. Before describing all these we detail the datasets used. Finally we investigate first the multiple kernel architecture and illustrate its interest when learning with RBF kernel to overcome the hyperparameter selection, and second we demonstrate the benefit of a multiple Deepström approach, combining kernels computed from individual feature maps. We conducted experiments on four well known image classification datasets: MNIST, SVHN, CIFAR10 and CIFAR100 BID7, details on these datasets are provided in Table 1. We pretrained the convolutional layers using standard architectures on both datasets: Lenet for MNIST and VGG19 BID18 for SVHN, CIFAR10 and CIFAR100. We slightly modified the filters' sizes in Lenet network to ensure that the dimension of data after the convolution blocks is a power of 2 (needed for the Deep Fried Convnets architecture).We compare three convolutional architectures in all conducted experiments. Pretrained convolutional parts are shared by the three architectures, which differ from the layers on top of it: Dense architectures use dense hidden layers, i.e. these are classical convnets architectures; Deep Fried implements the Fastfood approximation (Equation 3); Deepstrom stands for our proposal. For Dense architectures, we considered one hidden layer with relu activation function, and varied the output dimension as {2, 4, 8, 16, 32, 64, 128, 1024} in order to highlight accuracies as a function of the number of parameters. For the Fastfood approximation in Deep Fried Convnets we consider that φ f f is gained with one stack of random features to form V in equation 3, except in the experiments of section 4.3 which yields a representation dimension up to 5 times larger. Regarding our approach φ nys, we varied the subset size L ∈ {2, 4, 8, 16, 32, 64, 128}, we tested with the linear, the RBF, and the Chi2 kernels, and we chose as output dimension the same size as the subset sample size. Finally we explored the adaptive as well as non-adaptive variants. Models were learned to optimize the cross entropy criterion with Adam optimizer and a gradient step fixed to 1e −4. Dropout was used on representation layers with probability equal to 0.5. 
By default the RBF bandwidth was set to the inverse of the mean distance between the representations, after the convolutional part, of pairs of training samples. All experiments were performed with Keras and Tensorflow. Note that the aim of all the experiments below is to investigate the potential of out architecture, not to reach or beat state of the art on the datasets considered. We then compare gained with our architecture and with state-of-the-art models, given a shared convolutional model. Consequently, we did not use tricks such as data augmentation and extensive tuning and, in particular, we did not use the best known convolutional architecture for each of the dataset, we rather used a reasonable deep architecture, VGG19, for the three datasets CIFAR10, CIFAR100 and SVHN. We compare now Deepström networks to two similar architectures, Deep Fried Convnets and classical convolutional networks (inspired from VGG19 and Lenet depending on the dataset). We vary the number of parameters of each architecture in order to highlight classification accuracy with respect to needed memory space. FIG2 shows the compared networks accuracy with respect to the number of parameters, and ignore parameters for convolutions layers to ease the readability. We repeated each experiments 10 times and plot average scores with standard deviations. Deepström models of increasing complexity (number of parameters) correspond to the use of subsample of increasing size from 2 (leftmost point) to 128 (rightmost point). One may see that there is no need of a large subsample here. This may be explained since the convolutional part of the network has been learned to yield quite robust and stable representations of input images. We provide a figure in the Appendix that illustrates this. The Deepström network is able to reach state-of-the-art performance using much fewer parameters than both classical networks and Deep Fried Convnets. Moreover, we also observe smaller variations that points out the robustness of our model. The flexibility in the choice of the kernel function is a clear advantage of out method. The best kernel is clearly dependent on the dataset (linear on MNIST, Chi2 on SVHN and CIFAR100, RBF on CIFAR10). While Random Features in DeepFried are restricted to RBF kernels, we show for instance a gain by using the Chi2 Kernel (k(x 1, x 2) = ||x 1 −x 2 || 2 /(x 1 +x 2)) that had been used for image classification BID20. We also notice the benefit of adaptive variants of Deepström model, suggesting that our model is able to learn and adapt useful Kernel function. Finally, note that we obtained very similar with neural architectures exploiting two hidden layers instead of one after the convolution module conv. Here we explore the ability of our model to work with few training samples, from very few to tens of samples per class. It is an expected benefit of the method since the use of kernels could take advantage of small training samples. Note that we do not exactly deal with a real small training set setting. These preliminary experiments aim to show how the final layers of a convolutional model may be learned from very few samples, given a frozen convolutional model. We actually performed the following experiments by exploiting a trained convolution model conv that has been learned on the full CIFAR100 training set and investigate the performance of Deepström architectures as a function of the training set used to learn the classification layers. 
One perspective of this work is to exploit such a strategy for domain adaptation settings where the convolutional model is trained on a training set within a different domain as the classes to be recognized. Having at our disposal such a trained convolution model conv, we leverage on the additional information that one may easily include in our models, which is brought by the subsample set. Notice that this subsample may include unlabeled samples since their labels are not used for optimizing the model. Table 2 reports the comparison of network architectures on four datasets. We consider Adaptive Deepström using Linear, RBF or Chi2 kernels and compare with Dense and Adaptive Deepfried for training set sizes of 5, 10 and 20 samples per class. We only consider here adaptive variants since they brought better than their non adaptive counterparts. We obtain models with different complexities: by increasing the hidden layer size in standard convolutional models, or by stacking the number of matrices V in DeepFried (up to 8 times, more was untractable on our machines), and by increasing the subset size in Deepström. Reported are averaged over 30 runs. One may see first that Deepstrom architectures outperfom baselines on every setting except for 5 training samples per class on MNIST. The linear kernel performs well on MNIST but is significantly worse than baselines on harder datasets. At the opposite, both ADSR and ADSC significantly outperfom Adaptive DeepFried for any dataset and perform on par or significantly better than Dense architectures on the hardest CIFAR100 dataset. Moreover one sees that no single kernel based Deepstrom architecture dominate on all settings, showing the potential interest of combining multiple kernels as following experiments will show. We report here gained using multiple kernels in two different ways. First we exploited the Multiple Kernels strategy that we described in section 3 for exploiting multiple kernels in the output of the convolutional blocks, conv. FIG4 reports gained when using a combination of RBF kernels with various bandwidths and for different subsample sizes. Our multiple kernel strategy, exploiting kernels defined with various values of the hyperparameter allows automatically handling this hyper-parameter which usually requires to be tuned either through cross validation or to be manually chosen. The plots show the accuracy on the CIFAR10 dataset as a function of σ value, where the performance of the multiple kernel Deepström is shown as a horizontal line. Plots report for various subsample size equal to 2 (left), 4 (middle) and 8 (right), averaged over 10 runs. As may be seen, using our Multiple kernel strategy allows adapting the kernel combination optimally from the data without requiring any prior choice on the RBF bandwith hyper-parameter. Second, we investigated another architecture that exploits Multiple Deepström approximations as presented in section 3. Here we use in parallel multiple Nyström approximations where kernels are dedicated to deal each with the output of a single feature map of the conv part. TAB4 reports on CIFAR100. We show the best performances obtained for each method by grid-searching on various hyper-parameters depending on the models, within a similar range of number of parameters. For Dense model, we considered one or two hidden layers of 16, 64, 128, 1024, 2048 or 4096 neurons. Deepfried is the adaptive variant where we varied the number of stacks in 1, 3, 5, 7. 
Deepström is also the adaptive variant where the subsample size is in 16, 64, 128, 256, 512. We observe that both Deepström models outperform the considered baselines, demonstrating the interest in combining Deepström approximations. We proposed Deepström, a new hybrid architecture that mixes deep networks and kernel methods. It is based on the Nyström approximation that allow considering any kind of kernel function in contrast to Deep Fried Convnets. Our proposal allows reaching state of the art while significantly reducing the number of parameters on various datasets, enabling in particular learning from few samples. Moreover the method allows to easily deal with multiple kernels and with multiple Deepström architectures. FIG5 plots the 2-dimensional φ nys representations of some CIFAR10 test samples obtained with a subsample of size equal to 2 (while the number of classes is 10) and two different kernels. One may see here that the 10 classes are already significantly well separated in this low dimensional representation space, illustrating that a very small sized subsammple is already powerfull. Beside, we experienced that designing Deepström Convnets on lower level features output by lower level convolution blocks may yield state-of-the-art performance as well while requiring larger subsamples.
A new neural architecture where top dense layers of standard convolutional architectures are replaced with an approximation of a kernel function by relying on the Nyström approximation.
1,387
scitldr
The main goal of this short paper is to inform the neural art community at large on the ethical ramifications of using models trained on the imagenet dataset, or using seed images from classes 445 -n02892767- [’bikini, two-piece’] and 459- n02837789- [’brassiere, bra, bandeau’] of the same. We discovered that many of the images belong to these classes were verifiably pornographic, shot in a non-consensual setting, voyeuristic and also entailed underage nudity. Akin to the \textit{ivory carving-illegal poaching} and \textit{diamond jewelry art-blood diamond} nexuses, we posit there is a similar moral conundrum at play here and would like to instigate a conversation amongst the neural artists in the community. The emergence of tools such as BigGAN and GAN-breeder has ushered in an exciting new flavor of generative digital art, generated using deep neural networks (See for a survey). A cursory search on twitter 1 reveals hundreds of interesting art-works created using BigGANs. There are many detailed blog-posts 2 on generating neural art by beginning with seed images and performing nifty experiments in the latent space of BigGANs. At the point of authoring this paper, (8 September 2019, 4 :54 PM PST),users on the GanBreeder app 3 had discovered 49652500 images. Further, Christie's, the British auction house behemoth, recently hailed the selling of the neural network generated Portrait of Edmond Belamy for an incredible $432, 500 as signalling the arrival of AI art on the world auction stage. Given the rapid growth of this field, we believe this to be the right time to have a conversation about a particularly dark ethical consequence of using such frameworks that entail models trained on the ImageNet dataset which has many images that are pornographic, non-consensual, voyeuristic and also entail underage nudity. We argue that this lack of consent in the seed images used to train the models trickles down to the final artform in a way similar to the blood-diamond syndrome in jewelry art. An example: Consider the neural art image in Fig 1 we generated using the GanBreeder app. On first appearance, it is not very evident as to what the constituent seed classes are that went into the creation of this neural artwork image. When we solicited volunteers online to critique the artwork (See the collection of responses (Table 2) in the supplementary material), none had an inkling regarding a rather sinister trickle down effect at play here. As it turns out, we craftily generated this image using hand-picked specific instances of children images emanating from what we will showcase are two problematic seed image classes: Bikini and Brassiere. More specifically, for this particular image, we set the Gene weights to be: [Bikini: 42.35, Brassiere: 31.66, Comic Book -84.84]. We'd like to strongly emphasize at this juncture that the problem does not emanate from a visual patriarchal mindset, whereby we associate female undergarment imagery to be somehow unethical, but the root cause lies in the fact that many of the images were curated into the dataset (at least with regards to the 2 above mentioned classes) were voyeuristic, pornographic, non-consensual and also entailed underage nudity. 2 Root cause: Absence of referencing consent during the curation of the imagenet dataset The emergence of the ImageNet dataset is widely considered to be a pivotal moment 4 in the deep learning revolution that transformed the domain computer vision. 
Two highly cited papers (with more than 10000 citations each), authored by Deng et al in 2009 and authored by Russakovsky et al in 2015, provide deep insights into the procedure used to curate the dataset. In the 2009 paper, subsections 3.1-Collecting Candidate Images and 3.2-Cleaning Candidate Images are dedicated towards the algorithms used to collect and clean the dataset and also to elucidate the specific ways in which the Amazon Mechanical Turk (AMT) platform was harnessed to scale the dataset. Similarly the entirety of Section-3-Dataset construction at large scale in is dedicated towards extending the procedures for the 2015 release. It is indeed disappointing that neither the 2009 nor the 2015 versions of the endeavors required the AMT workers to check if the images they were asked to categorize and draw bounding boxes over, were ethically viable for usage. More specifically, in imagery pertaining to anthropocentric content, such as undergarment clothing, there was no attempt made towards assessing if the images entailed explicit consent given by the people in the images. In fact, none of the following words in the set [ethics, permission, voyeurism, consent] are mentioned in either of the two papers. As such, we have a plethora of images specifically belonging to the two categories detailed in Table 1, that have serious ethical shortcomings. In Fig 2, we showcase the gallery of images from the two classes categorized into four sub-categories: Non-consensual/Voyeuristic, Personal, Verifiably pornographic and Underage / Children. In Fig 2, we also include images that were also incorrectly categorized (Specifically there was no brassieres being sported by the subjects in the images) (Sub-figure (a) ) and those that involved male subjects indulging in lecherous tomfoolery (Sub-figure (e) ). In this paper, we expose a certain unethical facet of neural art that emerges from usage of nonconsensual images that are present in certain specific classes of the imagenet dataset. These images born out of an unethical (and in some cases, illegal) act of voyeuristic non-consensual photography predominantly targeting women (as well as children), might implicitly poison the sanctity of the artworks that eventually emerge. This work is complementary to works such as [10; 11] that have explored the unspoken ethical dimensions of harnessing cheap crowd-sourced platforms such as AMT for scientific research in the first place, which we firmly believe is also an important issue to be considered. We'd like to begin by admitting the self-contradictory facet of raising this specific issue by using a flagship example image that was generated using the very same ethically dubious procedure that we are targeting to root out. Secondly, we'd like to inform the reader that we have indeed raised the issue with curators of the imagenet dataset, but to no avail and plan to update this dissemination if and when they respond. This one reminds me of a mix between graffiti and paper mache using newspaper with color images or magazines. My attention is immediately drawn to near the top of the image which, at first glance, appears to be a red halo of sorts, but upon further consideration, looks to be long black branching horns on a glowing red . My attention then went to the center top portion, where the "horns" were coming from, which appeared to be the head or skull of a moose or something similar. The body of the creature appears to be of human-like form in a crucifix position, of sorts. 
The image appears more and more chaotic the further down one looks. Antisymmetric: left side is very artistic, rich in flavor and shades; right is more monotonic but has more texture. Reminds me of the two different sides of the brain through the anti-symmetry C-Data Scientist, Facebook Inc Futurism It's visually confusing in the sense that I couldn't tell if I was looking at a 3D object with a colorful or a painting. It's not just abstract, but also mysteriously detailed in areas to the point that I doubt that a human created these E -Senior software engineer, Mt View The symmetry implies a sort of intentionally. I get a sense of Picasso mixed with Frieda Callo here. F-Data Scientist, SF Reminds me of a bee and very colorful flowers, but with some nightmarish masks hidden in some places. Very tropical Table 2: Responses received for the neural art image in Fig 1
There are non-consensual and pornographic images in the ImageNet dataset
1,388
scitldr
Inductive and unsupervised graph learning is a critical technique for predictive or information retrieval tasks where label information is difficult to obtain. It is also challenging to make graph learning inductive and unsupervised at the same time, as learning processes guided by reconstruction error based loss functions inevitably demand graph similarity evaluation that is usually computationally intractable. In this paper, we propose a general framework SEED (Sampling, Encoding, and Embedding Distributions) for inductive and unsupervised representation learning on graph structured objects. Instead of directly dealing with the computational challenges raised by graph similarity evaluation, given an input graph, the SEED framework samples a number of subgraphs whose reconstruction errors could be efficiently evaluated, encodes the subgraph samples into a collection of subgraph vectors, and employs the embedding of the subgraph vector distribution as the output vector representation for the input graph. By theoretical analysis, we demonstrate the close connection between SEED and graph isomorphism. Using public benchmark datasets, our empirical study suggests the proposed SEED framework is able to achieve up to 10% improvement, compared with competitive baseline methods. Representation learning has been the core problem of machine learning tasks on graphs. Given a graph structured object, the goal is to represent the input graph as a dense low-dimensional vector so that we are able to feed this vector into off-the-shelf machine learning or data management techniques for a wide spectrum of downstream tasks, such as classification , anomaly detection , information retrieval , and many others (b;). In this paper, our work focuses on learning graph representations in an inductive and unsupervised manner. As inductive methods provide high efficiency and generalization for making inference over unseen data, they are desired in critical applications. For example, we could train a model that encodes graphs generated from computer program execution traces into vectors so that we can perform malware detection in a vector space. During real-time inference, efficient encoding and the capability of processing unseen programs are expected for practical usage. Meanwhile, for real-life applications where labels are expensive or difficult to obtain, such as anomaly detection and information retrieval , unsupervised methods could provide effective feature representations shared among different tasks. Inductive and unsupervised graph learning is challenging, even compared with its transductive or supervised counterparts. First, when inductive capability is required, it is inevitable to deal with the problem of node alignment such that we can discover common patterns across graphs. Second, in the case of unsupervised learning, we have limited options to design objectives that guide learning processes. To evaluate the quality of learned latent representations, reconstruction errors are commonly adopted. When node alignment meets reconstruction error, we have to answer a basic question: Given two graphs G 1 and G 2, are they identical or isomorphic ? To this end, it could be computationally intractable to compute reconstruction errors (e.g., using graph edit distance as the metric) in order to capture detailed structural information. Given an input graph, its vector representation can be obtained by going through the components. 
Previous deep graph learning techniques mainly focus on transductive or supervised settings . A few recent studies focus on autoencoding specific structures, such as directed acyclic graphs , trees or graphs that can be decomposed into trees , and so on. From the perspective of graph generation, propose to generate graphs of similar graph statistics (e.g., degree distribution), and provide a GAN based method to generate graphs of similar random walks. In this paper, we propose a general framework SEED (Sampling, Encoding, and Embedding Distributions) for inductive and unsupervised representation learning on graph structured objects. As shown in Figure 1, SEED consists of three major components: subgraph sampling, subgraph encoding, and embedding subgraph distributions. SEED takes arbitrary graphs as input, where nodes and edges could have rich features, or have no features at all. By sequentially going through the three components, SEED outputs a vector representation for an input graph. One can further feed such vector representations to off-the-shelf machine learning or data management tools for downstream learning or retrieval tasks. Instead of directly addressing the computational challenge raised by evaluation of graph reconstruction errors, SEED decomposes the reconstruction problem into the following two sub-problems. Q1: How to efficiently autoencode and compare structural data in an unsupervised fashion? SEED focuses on a class of subgraphs whose encoding, decoding, and reconstruction errors can be evaluated in polynomial time. In particular, we propose random walks with earliest visiting time (WEAVE) serving as the subgraph class, and utilize deep architectures to efficiently autoencode WEAVEs. Note that reconstruction errors with respect to WEAVEs are evaluated in linear time. Q2: How to measure the difference of two graphs in a tractable way? As one subgraph only covers partial information of an input graph, SEED samples a number of subgraphs to enhance information coverage. With each subgraph encoded as a vector, an input graph is represented by a collection of vectors. If two graphs are similar, their subgraph distribution will also be similar. Based on this intuition, we evaluate graph similarity by computing distribution distance between two collections of vectors. By embedding distribution of subgraph representations, SEED outputs a vector representation for an input graph, where distance between two graphs' vector representations reflects the distance between their subgraph distributions. Unlike existing message-passing based graph learning techniques whose expressive power is upper bounded by Weisfeiler-Lehman graph kernels , we show the direct relationship between SEED and graph isomorphism in Section 3.5. We empirically evaluate the effectiveness of the SEED framework via classification and clustering tasks on public benchmark datasets. We observe that graph representations generated by SEED are able to effectively capture structural information, and maintain stable performance even when the node attributes are not available. Compared with competitive baseline methods, the proposed SEED framework could achieve up to 10% improvement in prediction accuracy. In addition, SEED achieves high-quality representations when a reasonable number of small subgraph are sampled. By adjusting sample size, we are able to make trade-off between effectiveness and efficiency. Kernel methods. Similarity evaluation is one of the key operations in graph learning. 
Conventional graph kernels rely on handcrafted substructures or graph statistics to build vector representations for graphs (; ; ; Horváth et al., 2004; ;). Although kernel methods are potentially unsupervised and inductive, it is difficult to make them handle rich node and edge attributes in many applications, because of the rigid definition of substructures. Deep learning. Deep graph representation learning suggests a promising direction where one can learn unified vector representations for graphs by jointly considering both structural and attribute information. While most of existing works are either transductive (; ;) or supervised settings (; ; ; ; ; Veličković et al., 2018; a; ;), a few recent studies focus on autoencoding specific structures, such as directed acyclic graphs , trees or graphs that can be decomposed into trees , and so on. In the case of graph generation, propose to generate graphs of similar graph statistics (e.g., degree distribution), and provide a method to generate graphs of similar random walks. In addition, propose a supervised method to learn graph similarity, and theoretically analyses the expressive power of existing message-passing based graph neural networks. Unlike existing kernel or deep learning methods, our SEED framework is unsupervised with inductive capability, and naturally supports complex attributes on nodes and edges. Moreover, it works for arbitrary graphs, and provides graph representations that simultaneously capture both structural and attribute information. The core idea of SEED is to efficiently encode subgraphs as vectors so that we can utilize subgraph distribution distance to reflect graph similarity. We first give an abstract overview on the SEED framework in Section 3.1, and then discuss concrete implementations for each component in Section 3.2, 3.3, and 3.4, respectively. In Section 3.5, we share the theoretical insights in SEED. For the ease of presentation, we focus on undirected graphs with rich node attributes in the following discussion. With minor modification, our technique can also handle directed graphs with rich node and edge attributes. SEED encodes an arbitrary graph into a vector by the following three major components, as shown in Figure 1. • Sampling. A number of subgraphs are sampled from an input graph in this component. The design goal of this component is to find a class of subgraphs that can be efficiently encoded and decoded so that we are able to evaluate their reconstruction errors in a tractable way. • Encoding. Each sampled subgraph is encoded into a vector in this component. Intuitively, if a subgraph vector representation has good quality, we should be able to reconstruct the original subgraph well based on the vector representation. Therefore, the design goal of this component is to find an autoencoding system that provides such encoding functionality. • Embedding distribution. A collection of subgraph vector representations are aggregated into one vector serving as the input graph's representation. For two graphs, their distance in the output vector space approximates their subgraph distribution distance. The design goal of this component is to find such a aggregation function that preserves a pre-defined distribution distance. Although there could be many possible implementations for the above three components, we propose a competitive implementation in this paper, and discuss them in details in the rest of this section. 
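To make the three-stage pipeline concrete, the sketch below shows how the components could be composed. This is an illustrative outline rather than the authors' implementation; the helper names (sample_weaves, encode_subgraphs, embed_distribution) and the default sample size and walk length are assumptions for exposition.

```python
def seed_graph_vector(graph, sample_weaves, encode_subgraphs, embed_distribution,
                      num_samples=200, walk_length=10):
    """Compose the three SEED stages: sampling, encoding, and embedding distributions."""
    # 1) Sampling: draw a collection of subgraphs (WEAVEs) from the input graph.
    subgraphs = sample_weaves(graph, num_samples, walk_length)
    # 2) Encoding: map each sampled subgraph to a dense vector with an autoencoder.
    subgraph_vectors = encode_subgraphs(subgraphs)
    # 3) Embedding distributions: aggregate the collection into one graph-level vector,
    #    so that distances between graph vectors reflect subgraph-distribution distances.
    return embed_distribution(subgraph_vectors)
```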
In this paper, we propose to sample a class of subgraphs called WEAVE (random Walk with EArliest Visit timE). Let G be an input graph of a node set V (G) and an edge set E(G). A WEAVE of length k is sampled from G as follows. Figure 2: Expressive power comparison between WEAVEs and vanilla random walks: while blue and orange walks cannot be differentiated in terms of vanilla random walks, the difference under WEAVEs is outstanding. • Initialization. A starting node v is randomly drawn from V (G) at timestamp 0, and its earliest visiting time is set to 0. • Next-hop selection. Without loss of generality, assume v (p) is the node visited at timestamp p (0 ≤ p < k). We randomly draw a node v (p+1) from v (p)'s one-hop neighborhood as the node to be visited at timestamp p + 1. If v (p+1) is a node that we have not visited before, its earliest visiting time is set to p + 1; otherwise, its earliest visiting is unchanged. We hop to v (p+1). • Termination. The sampling process ends when timestamp reaches k. In practical computation, a WEAVE is denoted as a matrix In particular, t ] is a concatenation of two vectors, where a includes attribute information for the node visited at timestamp p, and x (p) t contains its earliest visit time. As earliest visit time is discrete, we use one-hot scheme to represent such information, where x Difference between WEAVEs and vanilla random walks. The key distinction comes from the information of earliest visit time. Vanilla random walks include coarser-granularity structural information, such as neighborhood density and neighborhood attribute distribution . As vanilla random walks have no memory on visit history, detailed structural information related to loops or circles is ignored. While it is also efficient to encode and decode vanilla random walk, it is difficult to evaluate finer-granularity structural difference between graphs. Unlike vanilla random walks, WEAVEs utilize earliest visit time to preserve loop information in sampled subgraphs. As shown in Figure 2, while we cannot tell the difference between walk w 1 and walk w 2 using vanilla random walk, the distinction is outstanding under WEAVEs. Note that it is equally efficient to encode and decode WEAVEs, compared with vanilla random walks. Given a set of sampled WEAVEs of length k {X 1, X 2, ..., X s}, the goal is to encode each sampled WEAVE into a dense low-dimensional vector. As sampled WEAVEs share same length, their matrix representations also have identical shapes. Given a WEAVE X, one could encode it by an autoencoder as follows. where z is the dense low-dimensional representation for the input WEAVE, f (·) is the encoding function implemented by an MLP with parameters θ e, and g(·) is the decoding function implemented by another MLP with parameters θ d. The quality of z is evaluated through reconstruction errors as follows, By conventional gradient descent based backpropagation , one could optimize θ e and θ d via minimizing reconstruction error L. After such an autoencoder is well trained, the latent representation z includes both node attribute information and finer-granularity structural information simultaneously. Given s sampled WEAVEs of an input graph, the output of this component is s dense low-dimensional vectors {z 1, z 2, · · ·, z s}. Let G and H be two arbitrary graphs. Suppose subgraph (e.g., WEAVE) distributions for G and H are P G and P H, respectively. In this component, we are interested in evaluating the distance between P G and P H. 
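The sampling step just described can be written down directly. The following is a minimal sketch, assuming the graph is given as an adjacency dictionary with a (possibly empty) attribute vector per node; it returns the matrix described above, one row per timestamp, concatenating the visited node's attributes with a one-hot encoding of its earliest visit time.

```python
import numpy as np

def sample_weave(adj, node_attrs, k, rng=None):
    """Sample one WEAVE of length k (timestamps 0..k) from a graph.

    adj        : dict mapping each node to a list of its neighbours
    node_attrs : dict mapping each node to a 1-D attribute vector
    Returns a (k + 1) x (d + k + 1) matrix whose p-th row concatenates the attributes
    of the node visited at timestamp p with a one-hot code of its earliest visit time.
    """
    rng = rng or np.random.default_rng()
    v = rng.choice(list(adj))                  # initialization at timestamp 0
    earliest, walk = {v: 0}, [v]
    for p in range(k):                         # next-hop selection
        v = rng.choice(adj[walk[-1]])
        if v not in earliest:                  # earliest visit time is set only once
            earliest[v] = p + 1
        walk.append(v)

    d = len(next(iter(node_attrs.values())))
    X = np.zeros((k + 1, d + k + 1))
    for p, u in enumerate(walk):
        X[p, :d] = node_attrs[u]
        X[p, d + earliest[u]] = 1.0            # one-hot earliest visit time
    return X
```

Because the earliest visit time is recorded, two walks that traverse the same sequence of attributes but differ in whether they revisit a node (a backtracking walk versus a loop-free walk) receive different codes, which is exactly the distinction vanilla random walks miss.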
In this work, we investigate the feasibility of employing empirical estimate of the maximum mean discrepancy (MMD) to evaluate subgraph distribution distances, without assumptions on prior distributions, while there are multiple candidate metrics for distribution distance evaluation, such as KL-divergence and Wasserstein distance . We leave the detailed comparison among different choices of distance metrics in our future work. Given s subgraphs sampled from G as {z 1, · · ·, z s} and s subgraphs sampled from H as {h 1, · · ·, h s}, we can estimate the distance between P G and P H under the MMD framework: µ G andμ H are empirical kernel embeddings of P G and P H, respectively, and are defined as follows, where φ(·) is the implicit feature mapping function with respect to the kernel function k(·, ·). To this end,μ G andμ H are the output vector representation for G and H, respectively. In terms of kernel selection, we find the following options are effective in practice. Identity kernel. Under this kernel, pairwise similarity evaluation is performed in original input space. Its implementation is simple, but surprisingly effective in real-life datasets, where output representations are obtained by average aggregation over subgraph representations. Commonly adopted kernels. For popular kernels (e.g., RBF kernel, inverse multi-quadratics kernel, and so on), it could be difficult to find and adopt their feature mapping functions. While approximation methods could be developed for individual kernels , we could train a deep neural network that approximates such feature mapping functions. In particular, whereφ(·; θ m) is an MLP with parameters θ m, and D(·, ·) is the approximation to the empirical estimate of MMD. Note thatμ G andμ H are output representations for G and H, respectively. To train the functionφ(·; θ m), we evaluate the approximation error by where θ m is optimized by minimizing J(θ m). In this section, we sketch the theoretical connection between SEED and well-known graph isomorphism , and show how walk length in WEAVE impacts the effectiveness in graph isomorphism tests. The full proof of theorems and lemmas is detailed in Appendix. To make the discussion self-contained, we define graph isomorphism and its variant with node attributes as follows. Graph isomorphism with node attributes. be two attributed graphs, where l 1, l 2 are attribute mapping functions l 1:, and node attributes are denoted as d-dimensional vectors. Then G and H are isomorphic with node attributes if there is a bijection f: Identical distributions. Two distributions P and Q are identical if and only if their 1st order Wasserstein distance (Rüschendorf, 1985) is W 1 (P, Q) = 0. The following theory suggests the minimum walk length for WEAVEs, if every edge in a graph is expected to be visited. ) be a connected graph, then there exists a walk of length k which can visit all the edges of G, where k ≥ 2|E(G)| − 1. Now, we are ready to present the connection between SEED and graph isomorphism. ) and H = (V (H), E(H)) be two connected graphs. Suppose we can enumerate all possible WEAVEs from G and H with a fixed-length k ≥ 2 max{|E(G)|, |E(H)|}−1, where each WEAVE has a unique vector representation generated from a well-trained autoencoder. The Wasserstein distance between G's and H's WEAVE distributions is 0 if and only if G and H are isomorphic. The following theory shows the connection in the case of graphs with nodes attributes. be two connected graphs with node attributes. 
Suppose we can enumerate all possible WEAVEs on G and H with a fixed-length k ≥ 2 max{|E(G)|, |E(H)|}−1, where each WEAVE has a unique vector representation generated from a well-trained autoencoder. The Wasserstein distance between G's and H's WEAVE distributions is 0 if and only if G and H are isomorphic with node attributes. Note that similar can be easily extended to the cases with both node and edge attributes, and the details can be found in Appendix E. The theoretical suggest the potential power of the SEED framework in capturing structural difference in graph data. As shown above, in order to achieve the same expressive power of graph isomorphism, we need to sample a large number of WEAVEs with a long walk length so that all possible WEAVEs can be enumerated. The resource demand is impractical. However, in the empirical study in Section 4, we show that SEED can achieve state-of-the-art performance, when we sample a small number of WEAVEs with a reasonably short walk length. We employ 7 public benchmark datasets to evaluate the effectiveness of SEED. In this section, we mainly report the for two representative datasets Deezer and MUTAG from online social network and chemistry domain. The detailed descriptions for all the 7 datasets are presented in Appendix F. • Deezer User-User Friendship Networks (Deezer) is a social network dataset collected from the music streaming service Deezer. In this network, nodes are users, edges denote mutual friendships between users, and genre notations are extracted as node features. In particular, to align this dataset with the SEED framework, for each node u, we generate its ego-graph which consists of the nodes and edges within its 3-hop neighborhood, and the egograph's label is assigned as node u's label (this user's nationality). • Three state-of-the-art representative techniques are implemented as baselines in the experiments. • Graph Sample and Aggregate (GraphSAGE) is an inductive graph representation learning approach which can learn the structural information of the nodes. We evaluate GraphSAGE in its unsupervised setting. • Graph Matching Network (GMN) utilizes graph neural networks to obtain graph representations for graph matching applications. In particular, we employ its Graph Embedding Networks and deploy graph-based loss functions for unsupervised learning. • Graph Isomorphism Network (GIN) provides an effective sum-based aggregator for graph representation learning. We modify its objective for unsupervised learning. In addition, we focus on downstream tasks, including classification and clustering. Using the downstream tasks, we evaluate the quality of learned graph representations. For classification tasks, a simple multi-layer fully connected neural network is built as a classifier, and the prediction accuracy (ACC) is used as the evaluation metric. For clustering tasks, an effective conventional clustering approach, Normalized Cuts (NCut) , is used to cluster graph representations. Prediction accuracy (ACC) and Normalized Mutual Information (NMI) are used as its evaluation metrics. More details of the baselines and downstream tasks are discussed in Appendix G. In this section, we discuss the performance of SEED and its baselines in the downstream tasks with respect to Deezer and MUTAG. The full evaluation on the 7 datasets is detailed in Appendix G. In this set of experiments, SEED adopts identity kernel in the component of embedding distributions. 
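The downstream evaluation described above could be run roughly as in the sketch below. It only approximates the protocol in the text: scikit-learn's SpectralClustering stands in for Normalized Cuts, the classifier's hidden-layer size and iteration budget are assumptions, and clustering ACC (best-mapping accuracy) is omitted for brevity.

```python
from sklearn.cluster import SpectralClustering
from sklearn.metrics import normalized_mutual_info_score
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def evaluate_graph_vectors(X, y, n_classes, seed=0):
    """X: (n_graphs, d) graph representations; y: ground-truth graph labels."""
    # Classification: a small fully connected network, scored by 10-fold CV accuracy.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=seed)
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()

    # Clustering: spectral clustering as a stand-in for Normalized Cuts, scored by NMI.
    pred = SpectralClustering(n_clusters=n_classes, random_state=seed).fit_predict(X)
    nmi = normalized_mutual_info_score(y, pred)
    return acc, nmi
```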
Table 1: Evaluating graph representation quality by classification and clustering tasks As shown in Table 1, SEED consistently outperforms the baseline methods in both classification and clustering tasks with up to 0.18 absolute improvement in terms of classification accuracy. For GIN and GMN, supervision information could be crucial in order to differentiate structural variations. As GraphSAGE mainly focuses on aggregating feature information from neighbor nodes, it could be difficult for GraphSAGE to extract effective structural information from an unsupervised manner. In the unsupervised setting, SEED is able to differentiate structural difference at finer granularity and capture rich attribute information, leading to high-quality graph representations with superior performance in downstream tasks. Walk length and sample numbers are two meta-parameters in the SEED framework. By adjusting these two meta-parameters, we can make trade-off between effectiveness and computational efficiency. In the experiment, we empirically evaluate the impact of the two meta-parameters on the Table 3: Representation quality with different walk lengths MUTAG dataset. In Table 2, each row denotes the performance with different sampling numbers (from 25 to 800) while the walk length is fixed to 10. Moreover, we adjust the walk length from 5 to 25 while sampling number is fixed to 200 in Table 3. We can see that the performance of SEED in both classification and clustering tasks increases as there are more subgraphs sampled, especially for the changes from 25 to 200. Meanwhile, we observe the increasing rates diminish dramatically when sampling number ranges from 200 to 800. Similarly, the performance of SEED increase as the walk length grows from 5 to 20, and the performance starts to converge when the length goes beyond 20. Red and blue colors indicate two labels. We observe that the boundary becomes clearer when sample number or walk length increases. In this paper, we propose a novel framework SEED (Sampling, Encoding, and Embedding distribution) framework for unsupervised and inductive graph learning. Instead of directly dealing with the computational challenges raised by graph similarity evaluation, given an input graph, the SEED framework samples a number of subgraphs whose reconstruction errors could be efficiently evaluated, encodes the subgraph samples into a collection of subgraph vectors, and employs the embedding of the subgraph vector distribution as the output vector representation for the input graph. By theoretical analysis, we demonstrate the close connection between SEED and graph isomorphism. Our experimental suggest the SEED framework is effective, and achieves state-of-the-art predictive performance on public benchmark datasets. Proof. We will use induction on |E(G)| to complete the proof. Basic case: Let |E(G)| = 1, the only possible graph is a line graph of length 1. For such a graph, the walk from one node to another can cover the only edge on the graph, which has length 1 ≥ 2 · 1 − 1. Induction: Suppose that for all the connected graphs on less than m edges (i.e., |E(G)| ≤ m − 1), there exist a walk of length k which can visit all the edges if k ≥ 2|E(G)| − 1. Then we will show for any connected graph with m edges, there also exists a walk which can cover all the edges on the graph with length k ≥ 2|E(G)| − 1. Let G = (V (G), E(G)) be a connected graph with |E(G)| = m. Firstly, we assume G is not a tree, which means there exist a cycle on G. 
By removing an edge e = (v i, v j) from the cycle, we can get a graph G on m − 1 edges which is still connected. This is because any edge on a cycle is not bridge. Then according to the induction hypothesis, there exists a walk w = v 1 v 2... v i... v j... v t of length k ≥ 2(m − 1) + 1 which can visit all the edges on G (The walk does not necessarily start from node 1, v 1 just represents the first node appears in this walk). Next, we will go back to our graph G, as G is a subgraph of G, w is also a walk on G. By replacing the first appeared node v i on walk w with a walk v i v j v i, we can obtain a new walk As w can cover all the edges on G and the edge e with length k = k + 2 ≥ 2(m − 1) − 1 + 2 = 2m − 1, which means it can cover all the edges on G with length k ≥ 2|E(G)| − 1. Next, consider graph G which is a tree. In this case, we can remove a leaf v j and its incident edge e = (v i, v j) from G, then we can also obtain a connected graph G with |E(G)| = m − 1. Similarly, according to the induction hypothesis, we can find a walk w = v 1 v 2... v i... v t on G which can visit all the m − 1 edges of G of length k, where k ≥ 2(m − 1) − 1. As G is a subgraph of G, any walk on G is also a walk on G including walk w. Then we can also extend walk w on G by replacing the first appeared v i with a walk v i v j v i, which produce a new walk w can visit all the edges of G as well as the edge e with length k = k + 2 ≥ 2(m − 1) − 1 + 2 = 2m − 1. In other words, w can visit all the edges on G with length k ≥ 2|E(G)| − 1. Now, we have verified our assumption works for all the connected graphs with m edges, hence we complete our proof. (To give an intuition for our proof of lemma 1, we provide an example of 5 edges in Figure 5 Figure 5 (b1) shows an example graph G which is a tree on 5 edges. By removing the leaf v 4 and its incident edge (v 4, v 3), we can get a tree G with 4 edges (Figure 5 (b2) ). G has a walk w = v 1 v 2 v 3 v 5 which covers all the edges of G, as w is also a walk on G, by replacing v 3 with v 3 v 4 v 3 in w we can get a walk w = v 1 v 2 v 3 v 4 v 3 v 5 which can cover all the edges of G. The following lemma is crucial for the proof of Theorem 1. Lemma 2. Suppose that w, w are two random walks on graph G and graph H respectively, if the representation of w and w are the same, i.e., r w = r w, the number of the distinct edges on w and w are the same, as well as the number of the distinct nodes on w and w. Proof. Let n 1, n 2 be the number of distinct nodes on w, w respectively, let m 1, m 2 be the number of distinct edges on w and w respectively. First, let's prove n 1 = n 2. We will prove this by contradiction. Assume n 1 = n 2, without loss of generality, let n 1 > n 2. According to our encoding rule, the largest number appears in a representation vector is the number of the distinct nodes in the corresponding walk. Hence, the largest element in vector r w is n 1 while the largest element in vector r w is n 2. Thus, r w = r w, which contradicts our assumption. Therefore, we have n 1 = n 2. Next, we will show m 1 = m 2. We will also prove this point by contradiction. Assume m 1 = m 2, without loss of generality, let m 1 > m 2. As we have proved n 1 = n 2, each edge on w and w will be encoded as a vector like [k 1, k 2], where k 1, k 2 ∈ [n 1]. A walk consists of edges, hence the representation of a walk is formed by the representation of edges. 
Since m 1 > m 2, which means there exists at least two consecutive element [k 1, k 2] in r w which will not appear in r w, thus r w = r w, which is a contradiction of our assumption. As a , we can prove m 1 = m 2. Proof. We will first prove the sufficiency of the theorem, i.e., suppose graphs G = (V (G), E(G)) and H = (V (H), E(H)) are two isomorphic graphs, we will show that the WEAVE's distribution on G and H are the same. Let A be the set of all the possible walks with length k on G, B be the set of all the possible walks with length k on H. Each element of A and B represents one unique walk on G and H respectively. As we have assumed a WEAVE is a class of subgraphs, which means a WEAVE may corresponds to multiple unique walks in A or B. Consider a walk w = v 1 v 2... v i... v t ∈ A (v i represent the ith node appears in the walk), for any edge e = (v i, v j) on w i, as e ∈ E(G), according to the definition of isomorphism, there exists a mapping f: If we map each node on w i to graph H, we can get a new walk, besides, as the length of w i is also k, we have w i ∈ B. Hence, we can define a new mapping g: A → B, s.t. Next, we will show that g is a bijective mapping. Firstly, we will show that f is injective. Suppose g(w 1) = g(w 2), we want to show w 1 = w 2. Assume w 1 = w 2, there must exists one step i such that is the ith step of g(w 2), thus the walk g(w 1) = g(w 2), which contradicts our assumption. Therefore, the assumption is false, we have w 1 = w 2. Then we will show that g is surjective, i.e., for any w ∈ B, there exists a w ∈ A such that g(w) = w. We will also prove this by contradiction, suppose there exists a walk w ∈ B such that we can't find any w ∈ A to make g(w) = w. Let w = v 1 v 2... v t, according to the definition of isomorphism, for any edge, where f −1 represents the inverse mapping of f. Hence as w is a walk on graph H with length k. Now consider g(w), based on the mapping rule of g, we need map each node on w via f, i.e., which is contradiction to our assumption. Thus we have proved g is an injective mapping as well as a surgective mapping, then we can conclude that g is a bijective mapping. Then we will show the WEAVEs' distribution of G and H are the same. Since in our assumption, |E(G)| is limited, then |A| and |B| are limited, besides, according to our encoding rule, different walks may correspond to one specific WEAVE while each WEAVE corresponds a unique representation vector, thus the number of all the possible representation vectors is limited for both G and H. Thus, the representation vector's distributions P G for graph G and representation's distributions P H for graph H are both discrete distributions. To compare the similarity of two discrete probability distributions, we can adopt the following equation to compute the Wasserstein distance and check if it is 0. where W 1 (P, Q) is the Wasserstein distance of probability distribution P and Q, π(i, j) is the cost function and s(i, j) is a distance function, w qj and w pj are the probabilities of q j and p j respectively. Since we have proved g: A → B is a bijection, besides, according to our encoding rule, g(w) and w will corresponds to the same WEAVE, hence they will share the same representation vector. As a consequence, for each point (g i, w gi) (g i corresponds to a representation vector, w gi represents the probability of g i) in the distribution P G, we can find a point (h i, w hi) in P H such that g i = h i, and w gi = w hi. 
Then consider, for P G and P H, if we let π be a diagonal matrix with [w p1, w p2, . . ., w pm] on the diagonal and all the other elements be 0, we can make each element in the sum m i=1 n j=1 π(i, j)s(i, j) be 0, as this sum is supposed to be nonegative, its minimum is 0, hence W 1 (P G, P H) = 0, which means for two isomorphic graphs G and H, their WEAVE's distributions P G and P H are the same. Next we will prove the necessity of this theorem. Suppose that the Wasserstein distance between the walk representation distributions P G and P H is 0, we will show that graph G and H are isomorphic. Let the number of the nodes of graph G is n 1, the number of the nodes of graph H is n 2, let the number of the edges on graph G is m 1, the number if the edges on graph H is m 2. Let k = 2 max{m 1, m 2} − 1. Now, we will give a bijective mapping f: V (G) → v(H). First, consider the walks on graph G, as k = 2 max{m 1, m 2} − 1 ≥ 2m 1 − 1, according to Lemma 1, there exists at least one walk of length k on graph G which can cover all the edges of G. Consider such a walk w G, let r G = [1, 2, 3, ..., t] be the representation vector (corresponds to a WEAVE) we obtained according to our encoding rule. Now, we will use this representation to mark the nodes on graph G. Mark the first node in this walk as u 1 (corresponds to 1 in the representation), the second node as u 2, the ith appearing node in w G is u i, continue this process untill we marked all the new appearing nodes in this walk. Since w G can visit all the edges of graph G, all the nodes on this graph will definitely be marked, hence the last new appearing node will be marked as u n1. Now, let's consider the walks on graph H. As we have assumed that W 1 (P G, P H) = 0, which means that for each point (g i, w gi) on P G, we can find a point (h i, w hi) in P H such that g i = h i, and w gi = w hi. As a consequence, as r g is a point on P G, there must be a point r h on H such that r h = r g = [1, 2, 3, ..., t]. Then choose any walk w h on H which produce r h, and apply the same method to mark the nodes in this walk in order as v 1, v 2,..., v n1. Now we can define the mapping f, let f:, which is exactly the mapping we are looking for. Next, we just need show for each edge (u i, u j) ∈ E(G), we have (f (u i), f (u j)) ∈ E(H), and vice versa, then we can prove G and H are isomorphic. The first direction is obviously true as w G covers all the edges on G, for any edge (u i, u j) in w G, we have (f (u i), f (u j)) = (v i, v j) which belongs to w h, since w h is walk on H, we have (v i, v j) ∈ E(H). Then we will prove the reverse direction, i.e., for any To prove this, we will first show that the number of edges of graph G and H are the same, i.e., m 1 = m 2. Suppose this is not true, without loss of generality, let m 1 > m 2. Since P G and P H are the of random walks for infinite times. Then there must exists some walks which can visit the additional edges on G, as a consequence, we can obtain some representation vector which will not appear Figure 6: The walk representation distributions of graphs without attributes, with discrete attributes and with continuous attributes in P H, which contradicts our assumption. Hence, we have m 1 = m 2. Besides, since we have r g = r h, according to Lemma 2, we can derive that the number of distinct edges on w g and w h are the same. As w g covers all the edges on G, hence the number of distinct edges on w g is m 1. 
Therefore, the number of distinct edges on w h is also m 1, which means w h also has visited all the edges on H. As for any edge (v i, v j) on w h, we have (u i, u j) on w h, in other words, we have Hence we complete the proof. Figure 6 shows the walk representation distributions for a 4 nodes ring with walk length k = 2 in three different cases: without node attributes, with discrete node attributes, and with continuous node attributes. We can see the attributes will have an influence to the distributions, more specifically, the probability of each unique walk keeps the same no matter what the attributes are, however, the probability of each representation vector may vary as different unique walks may correspond to one representation vector, and the attributes may influence how many representation vectors there will be and how many unique walks correspond to a representation vector. To clarify, in Figure 6 (a), the ring graph does not have nodes attributes, there exists 16 unique walks in total, among them walk ABD, BDC, DCA, CAB, DBA, CDB, ACD, BAC will all be encoded as r 1 = , walk ABA, BAB, BDB, DBD, CDC, DCD, CAC, ACA will be encoded as r 2 = . Hence, for a graph in Figure 6 (a), we have P r(r 1) = 8 16, P r(r 2) = 8 16. In Figure 6 (b), each node has a discrete attribute, i.e., red or green, there are still 16 unique walks in total. However, in this case, there exits four different representation vectors, walk ABC, CBA, ADC, CDA will be encoded as r 1 = [1R 2G 3R], where R represents Red while G represents Green; walk BCD, DCB, DAB, DCB correspond to r 2 = [1G 2R 3G]; walk ABA, ADA, CDC, CBC correspond to r 3 = [1R 2G 3R]; walk BAB, BCB, DCD, DAD correspond to r 3 = [1R 2G 3R]. In this case, we have P r(r 1) = P r(r 2) = P r(r 3) = P r(r 4) = 4 16. In the last, let's consider the case when there exists continuous nodes attributes, for such a graph, the value of nodes attributes has infinite choices, hence, it is very likely that each node may have different attribute. As a consequence, each unique walk will correspond to a unique representation vector. In our example Figure 6 (c), there also exists 16 unique walks, each walk has a particular representation vector, hence, the probability of each representation vector is Proof. The proof for Theorem 2 is quite similar as the proof of Theorem 1, this is because the attributes just influence the representation vector form and how many unique walks correspond to a representation vector, however, the probability of each unique walk keeps same. Hence, we can use a similar method to complete the proof. Similarly, we will first prove the sufficiency. Let G and H be two isomorphic graphs with attributes, we will prove that the walk representations distribution of G and H are the same. Suppose that A and B are the sets of possible walks of length k on G and H respectively. By applying the same analysis method as in the proof of Theorem 1, we can show that there exists a bijective mapping g: A → B such that for where f: ) ∈ E(H) and for ∀v i ∈ V (G), the attribute of v i and f (v i) are the same. Hence, according to our encoding rule, w i and f (w i) will be encoded as the same representation vector, which means for each point (r gi, P r(r gi)) in the representation distribution of G, we can find a point (r hi, P r(r hi)) in the distribution of H such that r gi = r hi, P r(r gi) = P r(r hi). Thus, we can obtain the Wasserstein distance of distribution P G and the distribution P H is W 1 (P G, P H) = 0 via a similar approach as in Theorem 1. 
In other words, we have P G = P H. In addition, the necessity proof of Theorem 2 is the same as Theorem 1. If both the nodes and edges in a graph have attributes, the graph is an attributed graph denoted by G = (V, E, α, β), where α: V → L N and β: E → L E are nodes and edges labeling functions, L N, L E are sets of labels for nodes and edges. In this case, the graph isomorphism are defined as: G and H are isomorphic with node attributes as well as edge attributes if there is a bijection f: Corollary 1. Let G = (V (G), E(G)) and H = (V (H), E(H)) be two connected graphs with node attributes. Suppose we can enumerate all possible WEAVEs on G and H with a fixed-length k ≥ 2 max{|E(G)|, |E(H)|}−1, where each WEAVE has a unique vector representation generated from a well-trained autoencoder. The Wasserstein distance between G's and H's WEAVE distributions is 0 if and only if G and H are isomorphic with both node attributes and edge attributes. Proof. When both nodes and edges of a graph are given attributes, the representation vectors of random walks will be different. However, just like the cases with only nodes attributes, the probability of each unique walk on the graph keeps same. Hence, we can follow a similar analysis method as Theorem 2 to complete this proof. We include the following 7 datasets in our experimental study. • Deezer User-User Friendship Networks (Deezer) is a social network dataset which is collected from the music streaming service Deezer. It represents friendship network of users from three European countries (i.e., Romania, Croatia and Hungary). There are three graphs which corresponds to the three countries. Nodes represent the users and edges are the mutual friendships. For the three graphs, the numbers of nodes are 41, 773, 54, 573, and 47, 538, respectively ) is a bioinformatics dataset. The proteins in the dataset are converted to graphs based on the sub-structures and physical connections of the proteins. Specifically, nodes are secondary structure elements (SSEs), and edges represent the amino-acid sequence between the two neighbors. PROTEINS has 3 discrete labels (i.e., helix, sheet, and turn). There are 1, 113 graphs in total with 43, 471 edges. • COLLAB is a scientific collaboration dataset. It belongs to a social connection network in general. COLLAB is collected from 3 public collaboration datasets (i.e., High Energy Physics, Condensed Matter Physics, and Astro Physics). The ego-networks are generated for individual researchers. The label of each graph represents the field which this researcher belongs to. There are 5, 000 graphs with 24, 574, 995 edges. • IMDB-BINARY is a movie collaboration dataset. Each graph corresponds to an ego-network for each actor/actress, where nodes correspond to actors/actresses and an edge is drawn between two actors/actresses if they appear in the same movie. Each graph is derived from a pre-specified genre of movies. IMDB-BINARY has 1, 000 graphs associated with 19, 773 edges in total. • IMDB-MULTI is multi-class version of IMDB-BINARY and contains a balanced set of egonetworks derived from Comedy, Romance, and Sci-Fi genres. Specifically, there are 1, 500 graphs with 19, 502 edges in total. • Graph Sample and Aggregate (GraphSAGE) is an inductive graph representation learning approach in either supervised or unsupervised manner. GraphSAGE explores node and structure information by sampling and aggregating features from the local neighborhood of each node. 
A forward propagation algorithm is specifically designed to aggregates the information together. We evaluate GraphSAGE in its unsupervised setting. • Graph Matching Network (GMN) utilizes graph neural networks to obtain graph representations for graph matching applications. A novel Graph Embedding Network is designed for better preserving node features and graph structures. In particular, Graph Matching Network is proposed to directly obtain the similarity score of each pair of graphs. In our implementation, we utilize the Graph Embedding Networks and deploy the graph-based loss function proposed in for unsupervised learning fashion. • Graph Isomorphism Network (GIN) provides a simple yet effective neural network architecture for graph representation learning. It deploys the sum aggregator to achieve more comprehensive representations. The original GIN is a supervised learning method. Thus, we follow the GraphSAGE approach, and modify its objective to fit an unsupervised setting. Two downstream tasks, classification and clustering, are deployed to evaluate the quality of learned graph representations. For classification task, a simple multi-layer fully connected neural network is built as a classifier. Using 10-fold cross-validation, we report the average accuracy. For clustering task, an effective conventional clustering approach, Normalized Cuts (NCut) , is used to cluster graph representations. We consider two widely used metrics for clustering performance, including Accuracy (ACC) and Normalized Mutual Information (NMI) . ACC comes from classification with the best mapping, and NMI evaluates the mutual information across the ground truth and the recovered cluster labels based on a normalization operation. Both ACC and NMI are positive measurements (i.e., the higher the metric is, the better the performance will be). Table 4 (for NCI1 and PROTEINS) and in Table 5 (for COLLAB, IMDB-BINARY, and IMDB-MULTI). We observe that SEED outperforms the baseline methods in 12 out of 15 cases, with up to 0.06 absolute improvement in classification and clustering accuracy. In the rest 3 cases, SEED also achieves competitive performance. Interestingly, for NCI and PROTEINS datasets, we see node features bring little improvement in the unsupervised setting. One possible reason could be node feature information has high correlation with structural information in these cases. H EMBEDDING DISTRIBUTION Identity kernels or commonly adopted kernels could be deployed in the component of embedding subgraph distributions. In our implementation, we utilize a multi-layer deep neural network to approximate a feature mapping function, for kernels whose feature mapping function is difficult to obtain. Figure 7 shows the t-SNE visualization of learned graph representations based on identity kernel and RBF kernel. As shown in Table 6, SEED variants with different kernels for distribution embedding could distinguish different classes with similar performance on the MUTAG dataset. In this section, we investigate whether DeepSet is an effective technique for distribution embedding. In particular, we employ DeepSet to replace the multi-layer neural network for feature mapping function approximation, and similarity values generated by MMD serve Table 7: Representation evaluation based on classification and clustering down-stream tasks as supervision signals to guide DeepSet training. 
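To make the kernel options above concrete: with the identity kernel the graph-level vector is simply the average of the subgraph vectors, and the squared MMD between two graphs' subgraph collections can be estimated directly under, for example, an RBF kernel. This is a generic sketch; the bandwidth gamma is a placeholder rather than a value from the paper, and the learned feature-mapping network and DeepSet variants are not shown.

```python
import numpy as np

def identity_kernel_graph_vector(Z):
    # Identity kernel: the empirical kernel mean embedding is the average subgraph vector.
    return Z.mean(axis=0)

def rbf_gram(A, B, gamma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2_rbf(Z_g, Z_h, gamma=1.0):
    """Biased empirical estimate of the squared MMD between two sets of subgraph vectors."""
    return (rbf_gram(Z_g, Z_g, gamma).mean()
            + rbf_gram(Z_h, Z_h, gamma).mean()
            - 2.0 * rbf_gram(Z_g, Z_h, gamma).mean())
```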
In our experiments, we compare the SEED implementation based on DeepSet with MMD (DeepSet in Table 7) with the SEED implementation based on the identity kernel (Identity Kernel in Table 7). We observe no significant performance difference between the two. In this section, we investigate the impact of node features and earliest visit time in WEAVE. In Table 8, Node feature means only node features in WEAVE are utilized for subgraph encoding (which is equivalent to vanilla random walks), Earliest visit time means only earliest visit time information in WEAVE is used for subgraph encoding, and Node feature + earliest visit time means both types of information are employed. We evaluate the impact on the MUTAG dataset. As shown above, it is crucial to use both node feature and earliest visit time information in order to achieve the best performance. Interestingly, on the MUTAG dataset, we observe that clustering could be easier if we only consider earliest visit time information. On the MUTAG dataset, node features seem to be noisy for the clustering task. As the clustering task is unsupervised, noisy node features could negatively impact its performance when both node features and earliest visit time information are considered. In this section, we evaluate the impact of Nyström based kernel approximation on the component of embedding distributions. First, we investigate its impact on effectiveness in the downstream tasks. In this set of experiments, we implement a baseline named SEED-Nyström, where the Nyström method is applied to approximate RBF kernel based MMD during training phases with 200 sampled WEAVEs. In particular, the top 30 eigenvalues and the corresponding eigenvectors are selected for the approximation. As shown in Table 9 (Representation evaluation based on classification and clustering down-stream tasks), SEED-Nyström achieves comparable performance across five datasets, compared with the case where an identity kernel is adopted. In addition, we evaluate the response time of exact RBF kernel based MMD and its Nyström approximation. As shown in Figure 8, when we range the number of WEAVE samples from 100 to 2000, the Nyström approximation scales better than the exact MMD evaluation. In summary, the Nyström method is a promising method that can further improve the scalability of the SEED framework in training phases, especially for the cases where a large number of WEAVE samples are required.
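As a rough illustration of the Nyström idea mentioned above, the sketch below approximates RBF-kernel feature maps from a set of landmark vectors and uses them to estimate the squared MMD as a distance between mean embeddings. How landmarks are chosen and the bandwidth gamma are assumptions here, not details taken from the paper.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom_features(Z, landmarks, top_r=30, gamma=1.0):
    """Approximate feature maps phi(z) so that phi(z_i) . phi(z_j) ~ k(z_i, z_j)."""
    W = rbf_gram(landmarks, landmarks, gamma)          # landmark-landmark kernel matrix
    eigval, eigvec = np.linalg.eigh(W)
    idx = np.argsort(eigval)[::-1][:top_r]             # keep the top eigenpairs
    eigval = np.maximum(eigval[idx], 1e-12)
    eigvec = eigvec[:, idx]
    return rbf_gram(Z, landmarks, gamma) @ eigvec / np.sqrt(eigval)

def approx_mmd2(Z_g, Z_h, landmarks, top_r=30, gamma=1.0):
    # Squared distance between mean embeddings in the approximate feature space.
    mu_g = nystrom_features(Z_g, landmarks, top_r, gamma).mean(axis=0)
    mu_h = nystrom_features(Z_h, landmarks, top_r, gamma).mean(axis=0)
    return float(np.sum((mu_g - mu_h) ** 2))
```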
This paper proposes a novel framework for graph similarity learning in an inductive and unsupervised scenario.
1,389
scitldr
Neural population responses to sensory stimuli can exhibit both nonlinear stimulus- dependence and richly structured shared variability. Here, we show how adversarial training can be used to optimize neural encoding models to capture both the deterministic and stochastic components of neural population data. To account for the discrete nature of neural spike trains, we use the REBAR method to estimate unbiased gradients for adversarial optimization of neural encoding models. We illustrate our approach on population recordings from primary visual cortex. We show that adding latent noise-sources to a convolutional neural network yields a model which captures both the stimulus-dependence and noise correlations of the population activity. Neural population activity contains both nonlinear stimulus-dependence and richly structured neural variability. An important challenge for neural encoding models is to generate spike trains that match the statistics of experimentally measured neural population spike trains. Such synthetic spike trains can be used to explore limitations of a model, or as realistic inputs for simulation or stimulation experiments. Most encoding models either focus on modelling the relationship between stimuli and mean-firing rates e.g., or on the statistics of correlated variability ('noise correlations'), e.g.. They are typically fit with likelihood-based approaches (e.g. maximum likelihood estimation MLE, or variational methods for latent variable models). While this approach is very flexible and powerful, it has mostly been applied to simple models of variability (e.g. Gaussian inputs). Furthermore, MLE-based models are not guaranteed to yield synthetic data that matches the statistics of empirical data, particularly in the presence of latent variables. Generative adversarial networks (GANs) are an alternative to fitting the parameters of probabilistic models. In adversarial training, the objective is to find parameters which match the statistics of empirical data, using a pair of competing neural networks -a generator and discriminator. The generator maps the distribution of some input random variable onto the empirical data distribution to try and fool the discriminator. The discriminator attempts to classify input data as samples from the true data distribution or from the generator. This approach has been used extensively to produce realistic images and for text generation. Recently, Molano-Mazon et al. trained a generative model of spike trains, and Arakaki et al., rate models of neural populations, using GANs. However, to the best of our knowledge, adversarial training has not yet been used to train spiking models which produce discrete outputs and which aim to capture both the stimulusdependence of firing rates and shared variability. We propose to use conditional GANs for training neural encoding models, as an alternative to likelihood-based approaches. A key difficulty in using GANs for neural population data is the discrete nature of neural spike trains: Adversarial training requires calculation of gradients through the generative model, which is not possible for models with a discrete sampling step, and hence, requires the application of gradient estimators. While many applications of discrete GANs use biased gradient estimators based on the concrete relaxation technique, we find that unbiased gradient estimators REINFORCE and REBAR lead to better fitting performance. 
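To illustrate why the discrete sampling step matters, here is a minimal PyTorch sketch of a REINFORCE-style surrogate loss for the generator. It shows only the plain score-function estimator; the REBAR estimator additionally subtracts a concrete-relaxation control variate, which is omitted here, and the function and variable names are assumptions for exposition.

```python
import torch

def reinforce_generator_loss(spikes, rates, disc_scores):
    """Surrogate whose gradient w.r.t. the generator is the REINFORCE estimate.

    spikes      : sampled binary spike trains (no gradient path), shape (batch, n_units)
    rates       : Bernoulli probabilities output by the generator (differentiable)
    disc_scores : discriminator outputs D(x, spikes) in (0, 1), shape (batch,)
    """
    # Reward: make the discriminator assign a high probability of "real" to the samples.
    reward = torch.log(disc_scores + 1e-8).detach()
    # log p(spikes | rates) for independent Bernoulli units.
    log_prob = (spikes * torch.log(rates + 1e-8)
                + (1 - spikes) * torch.log(1 - rates + 1e-8)).sum(dim=1)
    # Minimizing this surrogate increases the expected reward of the generator.
    return -(reward * log_prob).mean()
```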
We demonstrate our approach by fitting a convolutional neural network model with shared noise sources to multi-electrode recordings from V1. We want to train a GAN, conditioned on the visual input, to generate multivariate binary spike counts y which match the statistics of empirical data. We model binary spike trains (i.e. each bin t, neuron n and trial i corresponds to an independent draw from a Bernoulli distribution) conditioned on the stimulus x and latent variable z which induces shared variability. We use a convolutional neural network (CNN) f with parameters θ to capture the mapping from x and z to the firing rate λ (Fig. 1), with a sigmoid nonlinearity σ in the last layer, Generator architecture The discriminator D (Fig. 1) receives both the stimulus and the corresponding (simulated or experimental) spike trains. It uses a CNN (similar in architecture to the generator) to embed the stimulus, and combines it with the spike train via fully connected layers. For each timebin and trial, D outputs the probability of the input spike train being real. The objective is to find the Nash equilibrium of a minmax game between the generator G and the discriminator D, Due to this objective, GANs are notoriously challenging to train as the training algorithm is sensitive to the gradients with respect to the discriminator parameters. We used the cross-entropy objective as in equation 3, but constrained the discriminator gradients using spectral normalisation, and employed gradient norm clipping for the generator gradients. Dealing with discrete data Obtaining the gradients for the generator requires backpropagation through both generator and discriminator networks. Most applications of GANs have been on continuous data. However, spikes are discrete and thus, the generator has a discrete sampling step which blocks backpropagation. Previous attempts to overcome this problem include using concrete relaxation, REINFORCE or REBAR to estimate gradients. Concrete relaxation approximates the binary variables as continuous values which are close to 0 and 1. This allows backpropagation through the sampling step, but leads to biased gradients. The REINFORCE gradient estimator provides unbiased but high-variance gradients using the log-derivative trick. The REBAR gradient estimator combines concrete relaxation and REINFORCE by using the relaxed outputs as a control variate to the REINFORCE gradient. For the applications and hyper-parameter settings we used, we found that both REINFORCE and REBAR performed better than concrete relaxation, and that REBAR did not have much improvement over REINFORCE (Fig. 2). Architecture and dataset We fit the GANs to a publicly available dataset, consisting of 69 cells recorded from macaque V1, while animals watched a 30s movie repeated 120 times. The movie consisted of 750 frames of size 320 x 320 pixels, which we downsampled to 27 x 27 pixels. We binned the spikes at a 40ms resolution, and binarized the ing spike trains. Since only 5% of the spike counts in each bin were non-binary after re-binning, we assumed that the binarization would not significantly alter the . For the generator, which receives 10 consecutive movie frames of size 27 x 27 pixels as input, we used a 3-layer CNN architecture similar to Kindel et al. (Fig. 1A) -layers 1+2: convolutional with 16 and 32 filters, size 7 by 7, each followed by a MaxPool layer with kernel size 3 and stride 2, followed by LeakyRELUs with slope 0.2. 
The final layer of the CNN was a fully connected layer with units equal to the number of neurons in the dataset. To capture the stimulus-independent variability shared between the neurons, we added Gaussian white noise to the units of the convolutional layers. The noise was shared between the units of these layers, multiplied by a separate weight for each unit. The discriminator network consisted of a CNN embedding for the stimulus, similar in structure to the generator, but without the added shared noise (Fig. 1B), and 5 fully connected ReLU layers. Training We trained the two networks in parallel for 15k epochs, each consisting of 2 discriminator and 1 generator update. With batch size 50, we used ADAM with learning rate 0.0001, β 1 = 0.9 and β 2 = 0.999 to optimise the network parameters. The first 650 timebins were used for training the networks and the last 100 timebins for validation. All hyper-parameters were set by hand. We fit a 3-layer CNN generative model of binary spike counts to neural population data recorded in the V1 area of macaque visual cortex, using adversarial training as described above. For comparison, we fit a CNN with a similar architecture to the GAN generator -but without the shared noise layersto the same dataset, using supervised learning, i.e. by optimizing the cross-entropy between predicted firing probabilities and experimentally observed spike trains. We also fit a Dichotomised Gaussian (DG) model, which explicitly represents shared variability via the covariance matrix of a multi-variate Gaussian distribution. On the training data, all approaches reproduced the gross structure in the spike train rasters (Fig. 3A) and accurately captured the firing rates (here: spike-probabilities per bin, Fig. 3B left). However, the supervised model did not accurately reproduce total pairwise correlations between the neurons (Fig. 3B center), since its noise-correlations 3C) are constrained to be 0. Thus, the histogram of population spike counts for data generated from the supervised model is also substantially different from that of (Fig. 3B right). The DG model accurately captured the total correlations and the pairwise noise-correlation matrix since it designed to match the peri-stimulus time histogram (PSTH) and the noise-correlations of the spike trains. However, it did not perfectly capture the population spike count histogram, as it only models second order correlation between neurons, while the population spike count histogram also depends on higher order correlations. In contrast, the GAN generator, with the addition of just a few shared noise parameters to the supervised model, was able to accurately capture the total correlation, the population spike histogram and pairwise noise-correlation matrix. However, we found that neither model generalised well to the held-out test-dataset, possibly because of the short training-set and high variability of this dataset. When we fit a CNN with the exact same architecture to simulated data with higher SNR and the same dimensions as the V1 dataset, we found that the CNN was able to capture the PSTH and SNR in the test data (Fig. 4). On the V1 dataset, the adversarially trained CNN and the DG model were similarly good in reproducing correlations and spike count histograms. This might occur if the spike trains of the neural population are homoscedastic (i.e. the variability does not depend on time or stimuli), as assumed by the DG model. 
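Putting the architectural details above together, a simplified PyTorch version of the generator might look like the sketch below. It follows the filter counts, kernel sizes, pooling, and LeakyReLU slope given in the text, but padding, the exact placement of the noise injection, and the use of per-channel (rather than per-unit) noise weights are simplifying assumptions.

```python
import torch
import torch.nn as nn

class SpikeGenerator(nn.Module):
    """rates = sigmoid(f_theta(x, z)); spikes ~ Bernoulli(rates), per neuron and time bin."""

    def __init__(self, n_neurons, n_frames=10):
        super().__init__()
        self.conv1 = nn.Conv2d(n_frames, 16, kernel_size=7)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=7)
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2)
        self.act = nn.LeakyReLU(0.2)
        self.fc = nn.LazyLinear(n_neurons)
        # Weights scaling a single shared Gaussian noise source per conv layer
        # (the paper uses a separate weight for every unit; per-channel is a simplification).
        self.noise_w1 = nn.Parameter(torch.zeros(16, 1, 1))
        self.noise_w2 = nn.Parameter(torch.zeros(32, 1, 1))

    def forward(self, x):                                    # x: (batch, 10, 27, 27)
        b = x.shape[0]
        z1 = torch.randn(b, 1, 1, 1, device=x.device)        # shared noise, layer 1
        h = self.act(self.pool(self.conv1(x) + self.noise_w1 * z1))
        z2 = torch.randn(b, 1, 1, 1, device=x.device)        # shared noise, layer 2
        h = self.act(self.pool(self.conv2(h) + self.noise_w2 * z2))
        rates = torch.sigmoid(self.fc(h.flatten(1)))          # firing probability per neuron
        spikes = torch.bernoulli(rates)                       # discrete, non-differentiable
        return spikes, rates
```

In training, the sampled spikes (not the rates) are passed to the discriminator, the surrogate loss from the earlier sketch provides a gradient path back to the rates, and the two networks are updated alternately (two discriminator steps per generator step, as described above).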
Adversarial training is limited only by the flexibility of the network, and can also capture heteroscedastic, i.e. stimulus-dependent noise. Hence, we simulated heteroscedastic data with the same dimensionality as the V1 dataset. We added latent noise to the second layer of the CNN, with variance proportional to the squared input from the previous layer. We fit both a CNN and a DG model to this simulated dataset. We found that CNN trained with the GAN framework was able to accurately capture the covariances estimated from'low signal' and'high signal' timebins separately (Fig. 5), unlike the DG model. We here showed how adversarial training of conditional generative models that produce discrete outputs (i.e. neural spike trains) can be used to generate data that matches the distribution of spike trains recorded in-vivo, and in particular, its firing rates and correlations. We used unbiased gradient estimators to train conditional GANs on discrete spike trains and spectral normalisation to stabilise training. However, training of discrete GANs remains sensitive to the architecture of the discriminator, as well as hyper-parameter settings. We showed that we are able to successfully train adversarial models in cases where supervised and Dichotomised Gaussian models fail. In future, adversarial training could be used to capture higher-order structure in neural data, and could be combined with discriminators that target certain statistics of the data that might be of particular interest, in a spirit similar to maximum entropy models. Similarly, this approach could also be extended to capture temporal features in neural population data such as spike-history dependence or adaptation effects. Since we condition the discriminator on the input stimulus, adversarial training could be used for transfer learning across multiple datasets. Generative models trained this way to produce realistic spike trains to various input stimuli, may be used to probe the range of spiking behaviour in a neural population under different kinds of stimulus or noise perturbations.
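The heteroscedastic simulation described above can be summarised in a few lines: the shared noise added to a layer has a standard deviation that scales with the magnitude of the incoming activation, so its variance is proportional to the squared input. The weight parameterisation below is an assumption rather than the paper's exact setup.

```python
import torch

def heteroscedastic_noise(h, weight):
    """Add shared noise whose variance grows with the squared input activation.

    h      : activations from the previous layer, shape (batch, channels, H, W)
    weight : noise weights broadcastable to h (e.g. one weight per channel)
    """
    z = torch.randn(h.shape[0], 1, 1, 1, device=h.device)   # one shared draw per trial
    std = weight * h.abs()                                   # std ~ |input|, variance ~ input^2
    return h + std * z
```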
We show how neural encoding models can be trained to capture both the signal and spiking variability of neural population data using GANs.
A weakly supervised learning based clustering framework is proposed in this paper. As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag. In this task, no annotations on individual instances inside the bag are needed during training of the models. We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training. We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known. Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model. In machine learning, there are two main learning tasks on two ends of scale bar: unsupervised learning and supervised learning. Generally, performance of supervised models is better than that of unsupervised models since the mapping between data and associated labels is provided explicitly in supervised learning. This performance advantage of supervised learning requires a lot of labelled data, which is expensive. Any other learning tasks reside in between these two tasks, so are their performances. Weakly supervised learning is an example of such tasks. There are three types of supervision in weakly supervised learning: incomplete, inexact and inaccurate supervision. Multiple instance learning (MIL) is a special type of weakly supervised learning and a typical example of inexact supervision . In MIL, data consists of bags of instances and their corresponding bag level labels. Although the labels are somehow related to instances inside the bags, the instances are not explicitly labeled. In traditional MIL, given the bags and corresponding bag level labels, task is to learn the mapping between bags and labels while the goal is to predict labels of unseen bags . In this paper, we explore the feasibility of finding out labels of individual instances inside the bags only given the bag level labels, i.e. there is no individual instance level labels. One important application of this task is semantic segmentation of breast cancer metastases in histological lymph node sections, which is a crucial step in staging of breast cancer . In this task, each pathology image of a lymph node section is a bag and each pixel inside that image is an instance. Then, given the bag level label that whether the image contains metastases or not, the task is to label each pixel as either metastases or normal. This task can be achieved by asking experts to exhaustively annotate each metastases region in each image. However, this exhaustive annotation process is tedious, time consuming and more importantly not a part of clinical workflow. In many complex systems, such as in many types of cancers, measurements can only be obtained at coarse level (bag level), but information at fine level (individual instance level) is of paramount importance. To achieve this, we propose a weakly supervised learning based clustering framework. 
Given a dataset consisting of instances with unknown labels, our ultimate objective is to cluster the instances in this dataset. To achieve this objective, we introduce a novel MIL task based on a new kind of bag level label called unique class count (ucc), which is the number of unique classes or the number of clusters among all the instances inside the bag. We organize the dataset into non-empty bags, where each bag is a subset of individual instances from this dataset. Each bag is associated with a bag level ucc label. Then, our MIL task is to learn mapping between the bags and their associated bag level ucc labels and then to predict the ucc labels of unseen bags. We mathematically show that a ucc classifier trained on this task can be used to perform unsupervised clustering on individual instances in the dataset. Intuitively, for a ucc classifier to count the number of unique classes in a bag, it has to first learn discriminant features for underlying classes. Then, it can group the features obtained from the bag and count the number of groups, so the number of unique classes. Our weakly supervised clustering framework is illustrated in Figure 1. It consists of a neural network based ucc classifier, which is called as Unique Class Count (U CC) model, and an unsupervised clustering branch. The U CC model accepts any bag of instances as input and uses ucc labels for supervised training. Then, the trained U CC model is used as a feature extractor and unsupervised clustering is performed on the extracted features of individual instances inside the bags in the clustering branch. One application of our framework is the semantic segmentation of breast cancer metastases in lymph node sections (see Figure 4). The problem can be formulated as follows. The input is a set of images. Each image (bag) has a label of ucc1 (image is fully normal or fully metastases) or ucc2 (image is a mixture of normal and metastases). Our aim is to segment the pixels (instances) in the image into normal and metastases. A U CC model can be trained to predict ucc labels of individual images in a fully supervised manner; and the trained model can be used to extract features of pixels (intances) inside the images (bags). Then, semantic segmentation masks can be obtained by unsupervised clustering of the pixels (each is represented by the extracted features) into two clusters (metastases or normal). Note that ucc does not directly provide an exact label for each individual instance. Therefore, our framework is a weakly supervised clustering framework. Finally, we have constructed ucc classifiers and experimentally shown that clustering performance of our framework with our ucc classifiers is better than the performance of unsupervised models and comparable to performance of fully supervised learning models. We have also tested the performance of our model on the real world task of semantic segmentation of breast cancer metastases in lymph node sections. We have compared the performance of our model with the performance of popular medical image segmentation architecture of U net and shown that our weakly supervised model approximates the performance of fully supervised U net model 1. Hence, there are three main contributions of this paper: 1. We have defined unique class count as a bag level label in MIL setup and mathematically proved that a perfect ucc classifier, in principle, can be used to perfectly cluster the individual instances inside the bags. 2. 
We have constructed a neural network based ucc classifier by incorporating kernel density estimation (KDE) as a layer into our model architecture, which provided us with end-to-end training capability. 3. We have experimentally shown that clustering performance of our framework is better than the performance of unsupervised models and comparable to performance of fully supervised learning models. The rest of the paper is organized such that related work is in Section 2, details of our weakly supervised clustering framework are in Section 3, of the experiments on MNIST, CIFAR10 and CIFAR100 datasets are in Section 4, the of the experiments in semantic segmentation of breast cancer metastases are in Section 5, and Section 6 concludes the paper. This work is partly related to MIL which was first introduced in for drug activity prediction. Different types of MIL were derived with different assumptions (Gärtner et al., 2002; ; ; ;, which are reviewed in detail in , and they were used for many different applications such as, image annotation/categorization/retrieval , text categorization , spam detection , medical diagnosis , face/object detection and object tracking . In MIL, different types of pooling layers are used to combine extracted features of instances inside the bags, such as max-pooling and log-sum-exp pooling (; ; ;). On the other hand, our U CC model uses KDE layer in order to estimate the distribution of extracted features. The advantage of KDE over pooling layers is that it embeds the instance level features into distribution space rather than summarizing them. There are also methods modeling cardinality and set distributions (; ;). However, cardinality of a set and ucc are completely different from each other. It is also important to state that ucc is obviously different from object/crowd counting (; ; ; since the task in object/crowd counting is to count the instances of the same type of object or people. Lastly, we compare clustering accuracies of our models with clustering accuracies of unsupervised baseline models: K-means and Spectral Clustering ; state of the art unsupervised models: JULE , GMVAE , DAC , DEPICT and DEC ; and state of the art semi-supervised models: AAE , CatGAN , LN and ADGM (Maaløe et al., 2016). In this section, we state our machine learning objective and formally define our novel MIL task, which is the core of our weakly supervised clustering framework. Finally, we explain details of the two main components of our framework, namely U CC model and unsupervised clustering branch. Objective: Let X = {x 1, x 2, · · ·, x n} be a dataset such that each instance x i ∈ X belongs to a class, but its label is unknown. In this paper, we assume that total number of classes K is known. Hence, each instance x i is endowed with an underlying, but unkown, label L(x i) = l i ∈ {1, 2, · · ·, K}. Further assume that for each class k ∈ {1, 2, · · · K}, there exist at least one element Our eventual objective is to derive a predicted class labell i for each instance x i that tends towards underlying truth class l i, i.e.l i → L(x i) = l i. In this novel MIL task, unique class count is used as an inexact, weak, bag level label and is defined in Definition 1. Assume that we are given subsets σ ζ ⊂ X, ζ = 1, 2, · · ·, N and unique class counts }. Then, our MIL task is to learn the mapping between the bags and their associated bag level ucc labels while the goal is to predict the ucc labels of unseen bags. 
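As a concrete illustration of how such bags can be formed when simulating this setup on a labelled benchmark (as in the experiments reported below), the sketch assembles a bag with a requested unique class count; the instance labels are used only to construct the bag-level ucc label and are never exposed to the model. The bag size of 32 and the roughly even split across classes are illustrative choices.

```python
import numpy as np

def sample_bag(instances, labels, ucc, bag_size=32, rng=None):
    """Sample a bag of `bag_size` instances drawn from exactly `ucc` classes.

    instances: array (n, ...) of data points; labels: array (n,) of class ids.
    Returns (bag, ucc); the per-instance labels are discarded."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=ucc, replace=False)
    counts = np.full(ucc, bag_size // ucc)      # split the bag across the chosen classes
    counts[: bag_size % ucc] += 1
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=k, replace=True)
        for c, k in zip(classes, counts)
    ])
    return instances[idx], ucc
```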
Definition 1 Given a subset σ ζ ⊂ X, unique class count, η σ ζ, is defined as the number of unique classes that all instances in the subset σ ζ belong to, i.e. η σ ζ = |{L(x i)|x i ∈ σ ζ }|. Recall that each instance belongs to an underlying unknown class. Given a dataset D, our eventual objective is to assign a label to each instance x i ∈ X such that assigned labels and underlying unknown classes are consistent. To achieve this eventual objective, a deep learning model is designed such that the following intermediate objectives can be achieved while it is being trained on our MIL task: 1. Unique class count: Given an unseen set σ ζ, the deep learning model, which is trained on D, can predict its unique class count η σ ζ correctly. 2. Labels on sets: Let σ, the deep learning model should enable us to develop an unsupervised learning model to label instances in σ pure ζ and σ pure ξ as belonging to different classes. Note that the underlying classes for instances in the sets are unknown. 3. Labels on instances: Given individual instances x i ∈ X, the deep learning model should enable us to assign a label to each individual instance x i such that all instances with different/same underlying unknown classes are assigned different/same labels. This is the eventual unsupervised learning objective. Definition 2 A set σ is called a pure set if its unique class count equals one. All pure sets is denoted by the symbol σ pure in this paper. In order to achieve the stated objectives, we have designed a deep learning based Unique Class Count (U CC) model. Our U CC model consists of three neural network modules (θ feature, θ drn, θ decoder) and can be trained end-to-end. The first module θ feature extracts features from individual instances; then distributions of features are constructed from extracted features. The second module θ drn is used to predict ucc label from these distributions. The last module θ decoder is used to construct an autoencoder together with θ feature so as to improve the extracted features by ensuring that extracted features contain semantic information for reconstruction. Formally, for x i ∈ σ ζ, i = {1, 2, · · ·, |σ ζ |}, feature extractor module θ feature extracts J features {f As a short hand, we write the operator θ feature as operating element wise on the set to generate a feature matrix θ feature (σ ζ) = f σ ζ with matrix elements f j,i σ ζ ∈ R, representing the j th feature of the i th instance. After obtaining features for all instances in σ ζ, a kernel density estimation (KDE) module is used to accumulate feature distributions ). Concurrently, decoder module θ decoder in autoencoder branch is used to reconstruct the input images from the extracted features in an unsupervised fashion,x i = θ decoder (θ feature (x i)). Hence, U CC model, main modules of which are illustrated in Figure 2 (a), optimizes two losses concurrently:'ucc loss' and'autoencoder loss'. While'ucc loss' is cross-entropy loss,'autoencoder loss' is mean square error loss. Loss for one bag is given in Equation 1. In U CC model, input is a set σ ζ and output is corresponding ucc labelη σ ζ, which does not depend on permutation of the instances in σ ζ. KDE module provides U CC model with permutation- invariant property. Moreover, KDE module uses the Gaussian kernel and it is differentiable, so our model can be trained end-to-end (Appendix A). KDE module also enables our theoretical analysis thanks to its decomposability property (Appendix B). 
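To make the role of the KDE module concrete, the sketch below shows a minimal differentiable Gaussian-kernel KDE layer in PyTorch. The 11 bins and σ = 0.1 follow the hyper-parameters reported later in the appendix; the assumption that the extracted features lie in [0, 1] and the exact normalisation are illustrative choices rather than the authors' code. Because every operation is smooth, gradients flow from the ucc loss through this layer back to the feature extractor.

```python
import torch

def kde_layer(features, n_bins=11, sigma=0.1):
    """Differentiable Gaussian KDE of per-feature value distributions.

    features: (bag_size, n_features) activations, assumed to lie in [0, 1].
    Returns:  (n_features, n_bins) normalised histograms, one per feature."""
    bins = torch.linspace(0.0, 1.0, n_bins, device=features.device)   # bin centres
    diff = features.unsqueeze(-1) - bins                               # (bag, feat, bins)
    kernels = torch.exp(-0.5 * (diff / sigma) ** 2)                    # Gaussian kernels
    hist = kernels.sum(dim=0)                                          # accumulate over instances
    return hist / hist.sum(dim=-1, keepdim=True)                       # normalise per feature
```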
Lastly, KDE module estimates the probability distribution of extracted features and enables θ drn to fully utilize the information in the shape of the distribution rather than looking at point estimates of distribution obtained by other types of pooling layers (; ;) (Appendix C.6). This section mathematically proves that the U CC model guarantees, in principle, to achieve the stated intermediate objectives in Section 3.1. Proof of propositions are given in Appendix B. Proposition 1 Let σ ζ, σ ξ be disjoint subsets of X with predicted unique class countsη σ ζ =η σ ξ = 1. If the predicted unique class count of Definition 3 A perfect unique class count classifier takes in any set σ and output the correct predicted unique class countη σ = η σ. Proposition 2 Given a perfect unique class count classifier. The dataset X can be perfectly clustered into K subsets σ Proposition 3 Given a perfect unique class count classifier. Decompose the dataset X into K subsets σ Suppose we have a perfect ucc classifier. For any two pure sets σ pure ζ and σ pure ξ, which consist of instances of two different underlying classes, ucc labels must be predicted correctly by the perfect ucc classifier. Hence, the conditions of Proposition 1 are satisfied, so we have h σ pure ζ = h σ pure ξ. Therefore, we can, in principle, perform an unsupervised clustering on the distributions of the sets without knowing the underlying truth classes of the instances. Hence, the perfect ucc classifier enables us to achieve our intermediate objective of "Labels on sets". Furthermore, given a perfect ucc classifier, Proposition 2 states that by performing predictions of ucc labels alone, without any knowledge of underlying truth classes for instances, one can in principle perform perfect clustering for individual instances. Hence, a perfect ucc classifier enables us to achieve our intermediate objective of "Labels on instances". In order to achieve our ultimate objective of developing an unsupervised learning model for clustering all the instances in dataset X, we add this unsupervised clustering branch into our framework. Theoreticallly, we have shown in Proposition 3 that given a perfect ucc classifier, distributions of pure subsets of instances coming from different underlying classes are different. In practice, it may not be always possible (probably most of the times) to train a perfect ucc classifier, so we try to approximate it. First of all, we train our ucc classifier on our novel MIL task and save our trained model (θ feature,θ drn,θ decoder). Then, we use trained feature extractorθ feature to obtain feature matrix f X =θ feature (X). Finally, extracted features are clustered in an unsupervised fashion, by using simple k-means and spectral clustering methods. Figure 2 (b) illustrates the unsupervised clustering process in our framework. A good feature extractorθ feature is of paramount importance in this task. Relatively poorθ feature may in a poor unsupervised clustering performance in practice even if we have a strongθ drn. To obtain a strongθ feature, we employ an autoencoder branch, so as to achieve high clustering performance in our unsupervised instance clustering task. The autoencoder branch ensures that features extracted byθ feature contain semantic information for reconstruction. This section analyzes the performances of our U CC models and fully supervised models in terms of our eventual objective of unsupervised instance clustering on MNIST (10 clusters) , CIFAR10 (10 clusters) and CIFAR100 (20 clusters) datasets . 
To analyze different characteristics of our framework, different kinds of unique class count models were trained during our experiments: U CC, U CC 2+, U CC α=1 and U CC 2+ α=1. These unique class count models took sets of instances as inputs and were trained on ucc labels. While U CC and U CC 2+ models had autoencoder branch in their architecture and they were optimized jointly over both autoencoder loss and ucc loss, U CC α=1 and U CC 2+ α=1 models did not have autoencoder branch in their architecture and they were optimized over ucc loss only (i.e. α = 1 in Equation 1). The aim of training unique class count models with and without autoencoder branch was to show the effect of autoencoder branch in the robustness of clustering performance with respect to ucc classification performance. U CC and U CC α=1 models were trained on bags with labels of ucc1 to ucc4. On the other hand, U CC 2+ and U CC 2+ α=1 models were trained on bags with labels ucc2 to ucc4. Our models were trained on ucc labels up to ucc4 instead of ucc10 (ucc20 in CIFAR100) since the performance was almost the same for both cases and training with ucc1 to ucc4 was much faster (Appendix C.2). Please note that for perfect clustering of instances inside the bags, it is enough to have a perfect ucc classifier that can perfectly discriminate ucc1 and ucc2 bags from Proposition 2. The aim of traininig U CC 2+ and U CC 2+ α=1 models was to experimentally check whether these models can perform as good as U CC and U CC α=1 models even if there is no pure subsets during training. In addition to our unique class count models, for benchmarking purposes, we also trained fully supervised models, F ullySupervised, and unsupervised autoencoder models, Autoencoder. F ullySupervised models took individual instances as inputs and used instance level ground truths as labels during training. On the other hand, Autoencoder models were trained in an unsupervised manner by optimizing autoencoder loss (i.e. α = 0 in Equation 1). It is important to note that all models for a dataset shared the same architecture for feature extractor module and all the modules in our models are fine tuned for optimum performance and training time as explained in Appendix C.1. We trained and tested our models on MNIST, CIFAR10 and CIFAR100 datasets. We have X mnist,tr, X mnist,val and X mnist,test for MNIST; X cif ar10,tr, X cif ar10,val and X cif ar10,test for CIFAR10; and X cif ar100,tr, X cif ar100,val and X cif ar100,test for CIFAR100. Note that tr, val and test subscripts stand for'training','validation' and'test' sets, respectively. All the presented in this paper were obtained on hold-out test sets X mnist,test, X cif ar10,test and X cif ar100,test. F ullySupervised models took individual instances as inputs and were trained on instance level ground truths. Unique class count models took sets of instances as inputs, which were sampled from the power sets 2 Xmnist,tr, 2 X cif ar10,tr and 2 X cif ar100,tr, and were trained on ucc labels (Appendix C.2). While all 0.964 0.272 0.137 GMVAE 0.885 --DAC ( 0.696 0.247 0.136 K-means 0.572 0.229 0.130 ADGM (Maaløe et al., 2016) 0.990 --Ladder Networks 0.989 0.796 -AAE 0.981 --CatGAN 0.981 0.804 - * Models do not separate training and testing data, i.e. their are not on hold-out test sets. the models were trained in a supervised setup, either on ucc labels or instance level ground truths, all of them were used to extract features for unsupervised clustering of individual instances. 
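The unsupervised clustering branch itself is simple once a feature extractor has been trained. A minimal sketch of the k-means variant is shown below (spectral clustering was used analogously in the experiments); the function interfaces are illustrative assumptions.

```python
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def cluster_instances(feature_extractor, instances, n_clusters):
    """Embed every instance with the trained feature extractor module,
    then cluster the embeddings with k-means.

    feature_extractor: trained module mapping an instance batch to features.
    instances:         tensor of shape (n_instances, ...).
    n_clusters:        number of ground-truth classes (10 or 20 here)."""
    feats = feature_extractor(instances).cpu().numpy()          # (n_instances, n_features)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```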
Preceeding sections showed, in theory, that a perfect ucc classifier can perform'weakly' supervised clustering perfectly. We evaluate ucc prediction accuracy of our unique class count models in accordance with our first intermediate objective that unique class count models should predict ucc labels of unseen subsets correctly. We randomly sampled subsets for each ucc label from the power sets of test sets and predicted the ucc labels by using trained models. Then, we calculated the ucc prediction accuracies by using predicted and truth ucc labels, which are summarized in Table 1 (Appendix C.3). We observed that as the task becomes harder (from MNIST to CIFAR100), it also becomes harder to approximate the perfect ucc classifier. Moreover, U CC and U CC α=1 models, in general, have higher scores than their counterpart models of U CC 2+ and U CC 2+ α=1, which is expected since the ucc prediction task becomes easier at the absence of pure sets and models reach to early stopping condition (Appendix C.1) more easily. This is also supported by annother interesting, yet reasonable, observation that U CC 2+ models have higher ucc accuracies than U CC 2+ α=1 models thanks to the autoencoder branch which makes U CC 2+ harder to reach to early stopping condition. Jensen-Shannon (JS) divergence value between feature distributions of two pure sets consisting of instances of two different underlying classes is defined as inter-class JS divergence in this paper and used for comparison on'Labels on sets' objective of assigning labels to pure sets. Higher values of inter-class JS divergence are desired since it means that feature distributions of pure sets of underlying classes are far apart from each other. The features of all the instances in a particular class are extracted by using a trained model and feature distributions associated to that class obtained by performing kernel density estimation on these extracted features. Then, for each Figure 3: Clustering accuracy vs ucc accuracy plots of U CC and U CC α=1 models together with k-means and spectral clustering accuracy baselines on MNIST, CIFAR10 and CIFAR100 datasets. pair of classes, inter-class JS divergence values are calculated (Appendix C.4). For a particular model, which is used in feature extraction, the minimum of these pairwise inter-class JS divergence values is used as a metric in the comparison of models. We have observed that as the task gets more challenging and the number of clusters increases, there is a drop in minimum inter-class JS divergence values, which is summarized in Table 1. For our eventual objective of'Labels on instances', we have used'clustering accuracy' as a comparison metric, which is calculated similar to. By using our trained models, we extracted features of individual instances of all classes in test sets. Then, we performed unsupervised clustering over these features by using k-means and spectral clustering. We used number of classes in ground truth as number of clusters (MNIST: 10, CIFAR10: 10, CIFAR100: 20 clusters) during clustering and gave the best clustering accuracy for each model in Table 1 (Appendix C.5). In Table 1, we compare clustering accuracies of our models together with baseline and state of the art models in the literature: baseline unsupervised (K-means, Spectral Clustering ); state of the art unsupervised (JULE , GMVAE , DAC , DEPICT , DEC ) and state of the art semi-supervised (AAE , CatGAN , LN , ADGM (Maaløe et al., 2016) ). 
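For reference, the clustering accuracy reported in these comparisons is the standard unsupervised clustering accuracy: predicted cluster ids are matched one-to-one to ground-truth classes so that the number of correctly assigned instances is maximised. The sketch below uses the Hungarian algorithm for that matching; whether this exactly reproduces the cited evaluation protocol is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, cluster_ids):
    """Unsupervised clustering accuracy under an optimal cluster-to-class matching."""
    true_labels = np.asarray(true_labels)
    cluster_ids = np.asarray(cluster_ids)
    n = max(true_labels.max(), cluster_ids.max()) + 1
    # confusion[i, j] = number of instances with cluster id i and true class j
    confusion = np.zeros((n, n), dtype=np.int64)
    for c, t in zip(cluster_ids, true_labels):
        confusion[c, t] += 1
    row, col = linear_sum_assignment(-confusion)   # maximise matched counts
    return confusion[row, col].sum() / true_labels.size
```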
Clustering performance of our unique class count models is better than the performance of unsupervised models in all datasets and comparable to performance of fully supervised learning models in MNIST and CI-FAR10 datasets. The performance gap gets larger in CIFAR100 dataset as the task becomes harder. Although semi-supervised methods use some part of the dataset with'exact' labels during training, our models perform on par with AAE and CatGAN models and comparable to LN and ADGM models on MNIST dataset. ADGM and LN even reach to the performance of the F ullySupervised model since they exploit training with'exact' labeled data. On CIFAR10 dataset, LN and CatGAN models are slightly better than our unique class count models; however, they use 10% of instances with'exact' labels, which is not a small portion. In general, our U CC and U CC α=1 models have similar performance, and they are better than their counterpart models of U CC 2+ and U CC 2+ α=1 due to the absence of pure sets during training. However, in the real world tasks, the absence of pure sets heavily depends on the nature of the problem. In our task of semantic segmentation of breast cancer metastases in histological lymph node sections, for example, there are many pure sets. Furthermore, we observed that there is a performance gap between U CC 2+ and U CC 2+ α=1 models: U CC 2+ models perform better than U CC 2+ α=1 models thanks to the autoencoder branch. The effect of autoencoder branch is also apparent in Figure 3, which shows clustering accuracy vs ucc accuracy curves for different datasets. For MNIST dataset, while U CC model gives clustering accuracy values proportional to ucc accuracy, U CC α=1 model cannot reach to high clustering accuracy values until it reaches to high ucc accuracies. The reason is that autoencoder branch in U CC helps θ feature module to extract better features during the initial phases of the training process, where the ucc classification accuracy is low. Compared to other datasets, this effect is more significant in MNIST dataset since itself is clusterable. Although autoencoder branch helps in CIFAR10 and CIFAR100 datasets as well, improvements in clustering accuracy coming from autoencoder branch seems to be limited, so two models U CC and U CC α=1 Figure 4: Example images from hold-out test dataset with corresponding ucc labels, ground truth masks and predicted masks by U CC segment, U net and K-means clustering models. follow nearly the same trend in the plots. The reason is that CIFAR10 and CIFAR100 datasets are more complex than MNIST dataset, so autoencoder is not powerful enough to contribute to extract discrimant features, which is also confirmed by the limited improvements of Autoencoder models over baseline performance in these datasets. Semantic segmentation of breast cancer metastases in histological lymph node sections is a crucial step in staging of breast cancer, which is the major determinant of the treatment and prognosis . Given the images of lymph node sections, the task is to detect and locate, i.e. semantically segment out, metastases regions in the images. We have formulated this task in our novel MIL framework such that each image is treated as a bag and corresponding ucc label is obtained based on whether the image is from fully normal or metastases region, which is labeled by ucc1, or from boundary region (i.e. image with both normal and metastases regions), which is labeled by ucc2. 
We have shown that this segmentation task can be achieved by using our weakly supervised clustering framework without knowing the ground truth metastases region masks of images, which require experts to exhaustively annotate each metastases region in each image. This annotation process is tedious, time consuming and more importantly not a part of clinical workflow. We have used 512 × 512 image crops from publicly available CAMELYON dataset and constructed our bags by using 32 × 32 patches over these images. We trained our unique class count model U CC segment on ucc labels. Then, we used the trained model as a feature extractor and conducted unsupervised clustering over the patches of the images in the hold-out test dataset to obtain semantic segmentation masks. For benchmarking purposes, we have also trained a fully supervised U net model , which is a well-known biomedical image segmentation architecture, by using the ground truth masks and predicted the segmentation maps in the test set. The aim of this comparison was to show that at the absence of ground truth masks, our model can approximate the performance of a fully supervised model. Moreover, we have obtained semantic segmentation maps in the test dataset by using k-means clustering as a baseline study. Example images from test dataset with corresponding ground truth masks, ucc labels and predicted masks by different models are shown in When we look at the performance of unsupervised baseline method of K-means clustering, it is obvious that semantic segmentation of metastases regions in lymph node sections is not an easy task. Baseline method achieves a very low TPR value of 0.370 and almost random score of 0.512 in PA. On the other hand, both our weakly supervised model U CC segment and fully supervised model U net outperform the baseline method. When we compare our model U CC segment with U net model, we see that both models behave similarly. They have reasonably high TPR and TNR scores, and low FPR and FNR scores. Moreover, they have lower FPR values than FNR values, which is more favorable than vice-versa since pathologists opt to use immunohistochemistry (IHC) to confirm negative cases . However, there is a performance gap between two models, which is mainly due to the fact that U net model is a fully supervised model and it is trained on ground truth masks, which requires exhaustive annotations by experts. On the contrary, U CC segment model is trained on ucc labels and approximates to the performance of the U net model. ucc label is obtained based on whether the image is metastatic, non-metastatic or mixture, which is much cheaper and easier to obtain compared to exhaustive mask annotations. Another factor affecting the performance of U CC segment model is that ucc1 labels can sometimes be noisy. It is possible to have some small portion of normal cells in cancer regions and vice-versa due to the nature of the cancer. However, our U CC segment is robust to this noise and gives reasonably good , which approximates the performance of U net model. In this paper, we proposed a weakly supervised learning based clustering framework and introduce a novel MIL task as the core of this framework. We defined ucc as a bag level label in MIL setup and mathematically proved that a perfect ucc classifier can be used to perfectly cluster individual instances inside the bags. 
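The segmentation pipeline described above, i.e. embedding the 32 × 32 patches of an image with the trained feature extractor and clustering them into two groups, can be sketched as follows. Treating the patches as a non-overlapping grid and the function interfaces are assumptions; deciding which of the two clusters corresponds to metastases (for example via the bag labels) is a separate step that is omitted here.

```python
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def segment_image(feature_extractor, image, patch=32, n_clusters=2):
    """Produce a coarse segmentation mask for a (C, H, W) image by clustering
    the features of its patch grid into two groups (metastases vs. normal)."""
    c, h, w = image.shape
    rows, cols = h // patch, w // patch
    patches = (image.unfold(1, patch, patch)        # split height into rows of patches
                    .unfold(2, patch, patch)        # split width into columns of patches
                    .permute(1, 2, 0, 3, 4)         # (rows, cols, C, patch, patch)
                    .reshape(rows * cols, c, patch, patch))
    feats = feature_extractor(patches).cpu().numpy()
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return labels.reshape(rows, cols)               # one cluster id per patch
```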
We designed a neural network based ucc classifer and experimentally showed that clustering performance of our framework with our ucc classifiers are better than the performance of unsupervised models and comparable to performance of fully supervised learning models. Finally, we showed that our weakly supervised unique class count model, U CC segment, can be used for semantic segmentation of breast cancer metastases in histological lymph node sections. We compared the performance of our model U CC segment with the performance of a U net model and showed that our weakly supervised model approximates the performance of fully supervised U net model. In the future, we want to check the performance of our U CC segment model with other medical image datasets and use it to discover new morphological patterns in cancer that had been overlooked in traditional pathology workflow. Kernel density estimation is a statistical method to estimate underlying unknown probability distribution in data . It works based on fitting kernels at sample points of an unknown distribution and adding them up to construct the estimated probability distribution. Kernel density estimation process is illustrated in Figure 5.. In order to be able to train our unique class count model end-to-end, we need to show that KDE module is differentiable, so that we can pass the gradients from θ drn to θ feature during back-propagation. Derivative of h After showing that KDE module is differentiable, we can show the weight update process for θ feature module in our model. Feature extractor module θ feature is shared by both autoencoder branch and ucc branch in our model. During back-propagation phase of the end-to-end training process, the weight updates of θ feature comprise the gradients coming from both branches (Equation 5). Gradients coming from autoencoder branch follow the traditional neural network back-propagation flow through the convolutional and fully connected layers. Different than that, gradients coming from ucc branch (Equation 6) also back-propagate through the custom KDE layer according to Equation 3. B PROOFS OF PROPOSITIONS Before proceeding to the formal proofs, it is helpful to emphasize the decomposability property of kernel density estimation here. For any set, σ ζ, one could partition it into a set of M disjoint subsets ). As a direct consequence, one could decompose any set into its pure subsets. This is an important decomposition which will be used in the proofs of propositions later. Now, we can proceed to formally state our propositions. Definition 1 Given a subset σ ζ ⊂ X, unique class count, η σ ζ, is defined as the number of unique classes that all instances in the subset σ ζ belong to, i.e. η σ ζ = |{L(x i)|x i ∈ σ ζ }|. Recall that each instance belongs to an underlying unknown class. Definition 2 A set σ is called a pure set if its unique class count equals one. All pure sets are denoted by the symbol σ pure in this paper. Proposition B. 1 For any set σ ζ ⊂ X, the unique class count η σ ζ of σ ζ does not depend on the number of instances in σ ζ belonging to a certain class. Proof: This is obvious from the definition of unique class count in Definition 1. Proposition B. 2 θ drn is non-linear. Proof: We give a proof by contradiction using Proposition B.1. Suppose θ drn is linear, then Hence, θ drn is linear only when Equation 8 holds. However, by Proposition B.1, (θ feature, θ drn) should count correctly regardless of the proportion of the size of the sets |σ ζ | and |σ ξ |. 
Hence, Equation 8 cannot hold true and θ drn by contradiction cannot be linear. Proposition B. 3 Let σ ζ, σ ξ be disjoint subsets of X with predicted unique class countsη σ ζ and η σ ξ, respectively. Letη σν be the predicted unique class count of Proof: The distribution of set σ ν can be decomposed into distribution of subsets, Hence,η σν =η σ ζ =η σ ξ. Proposition 1 Let σ ζ, σ ξ be disjoint subsets of X with predicted unique class countsη σ ζ =η σ ξ = 1. If the predicted unique class count of Proof: Proof of this proposition follows immediately from the contra-positive of Proposition B.3. Definition 3 A perfect unique class count classifier takes in any set σ and output the correct predicted unique class countη σ = η σ. Proposition 2 Given a perfect unique class count classifier. The dataset X can be perfectly clustered into K subsets σ Proof: First note that this proposition holds because the "perfect unique class count classifier" is a very strong condition. Decompose X into subsets with single instance and then apply the unique class count on each subset, by definition, unique class counts of all subsets are one. Randomly pair up the subsets and merge them if their union still yield unique class count of one. Recursively apply merging on this condition until no subsets can be merged. Proposition 3 Given a perfect unique class count classifier. Decompose the dataset X into K subsets σ Proof: Since in Proposition 1, the subsets are arbitrary, it holds for any two subsets with unique class count of one. By pairing up all combinations, one arrives at this proposition. Note that for a perfect unique class count classifier, η =η. Feature extractor module θ feature has convolutional blocks similar to the wide residual blocks in. However, the parameters of architectures, number of convolutional and fully connected layers, number of filters in convolutional layers, number of nodes in fully-connected layers, number of bins and σ value in KDE module, were decided based on models' performance and training times. While increasing number of convolutional layers or filters were not improving performance of the models substantialy, they were putting a heavy computation burden. For determining the architecture of θ drn, we checked the performances of different number of fully connected layers. As the number of layers increased, the ucc classification performance of the models increased. However, we want θ feature to be powerful, so we stopped to increase number of layers as soon as we got good . For KDE module, we have tried parameters of 11 bins, 21 bins, σ = 0.1 and σ = 0.01. Best were obtained with 11 bins and σ = 0.1. Similarly, we have tested different number of features at the output of θ feature module and we decided to use 10 features for MNIST and CIFAR10 datasets and 16 features for CIFAR100 dataset based on the clustering performance and computation burden. During training, loss value of validation sets was observed as early stopping criteria. Training of the models was stopped if the validation loss didn't drop for some certain amount of training iterations. For the final set of hyperparameters and details of architectures, please see the code for our experiments: http://bit.ly/uniqueclasscount We trained and tested our models on MNIST, CIFAR10 and CIFAR100 datasets. While MNIST and CIFAR10 datasets have 10 classes, CIFAR100 dataset has 20 classes. 
For MNIST, we randomly splitted 10,000 images from training set as validation set, so we had 50,000, 10,000 and 10,000 images in our training X mnist,tr, validation X mnist,val and test sets X mnist,test, respectively. In CI-FAR10 dataset, there are 50,000 and 10,000 images with equal number of instances from each class in training and testing sets, respectively. Similar to MNIST dataset, we randomly splitted 10,000 images from the training set as validation set. Hence, we had 40,000, 10,000 and 10,000 images in our training X cif ar10,tr, validation X cif ar10,val and testing X cif ar10,test sets for CIFAR10, respectively. In CIFAR100 dataset, there are 50,000 and 10,000 images with equal number of instances from each class in training and testing sets, respectively. Similar to other datasets, we randomly splitted 10,000 images from the training set as validation set. Hence, we had 40,000, 10,000 and 10,000 images in our training X cif ar100,tr, validation X cif ar100,val and testing X cif ar100,test sets for CIFAR10, respectively. F ullySupervised models took individual instances as inputs and were trained on instance level ground truths. X mnist,tr, X cif ar10,tr and X cif ar100,tr were used for training of F ullySupervised models. Unique class count models took sets of instances as inputs and were trained on ucc labels. Inputs to unique class count models were sampled from the power sets of MNIST, CIFAR10 and CIFAR100 datasets, i.e. 2 Xmnist,tr, 2 X cif ar10,tr and 2 X cif ar100,tr. For MNIST and CIFAR10 datasets, the subsets (bags) with 32 instances and for CIFAR100 dataset, the subsets (bags) with 128 instances are used in our experiments. While U CC and U CC α=1 models are trained on ucc1 to ucc4 labels, U CC 2+ and U CC 2+ α=1 models are trained on ucc2 to ucc4 labels. Our models were trained on ucc labels up to ucc4 instead of ucc10 (ucc20 in CIFAR100) since the performance was almost the same for both cases in our experiment with MNIST dataset, of which are shown in Table 3. On the other hand, training with ucc1 to ucc4 was much faster than ucc1 to ucc10 because as the ucc label gets larger, the number of instances in a bag is required to be larger in order to represent each class and number of elements in powerset also grows exponentially. Please note that for perfect clustering of instances, it is enough to have a perfect ucc classifier that can discriminate ucc1 and ucc2 from Proposition 2. All the presented in this paper were obtained on hold-out test sets X mnist,test, X cif ar10,test and X cif ar100,test. We randomly sampled subsets for each ucc label from the power sets of test sets and predicted the ucc labels by using trained models. Then, we calculated the ucc prediction accuracies by using predicted and truth ucc labels, which are summarized in Table 1. Here, we show confusion matrices of our U CC and U CC 2+ models on MNIST, CIFAR10 and CIFAR100 datasets as examples in Figure 6, 7 and 8, respectively. The features of all the instances in a particular class are extracted by using a trained model and feature distributions associated to that class obtained by performing kernel density estimation on these extracted features. Then, for each pair of classes, inter-class JS divergence values are calculated. We show inter-class JS divergence matrices for our F ullySupervised and U CC models on MNIST test dataset in Figure 9. We also show the underlying distributions for F ullySupervised and U CC models in Figure 10 and 11, respectively. 
We performed unsupervised clustering by using k-means and spectral clustering and gave the best clustering accuracy for each model on each dataset in Table 1 in the main text. Here, we present all the clustering accuracies for our models in Table 4. KDE layer is chosen as MIL pooling layer in U CC model because of its four main properties, first three of which are essential for the proper operation of proposed framework and validity of the propositions in the paper: 1. KDE layer is permutation-invariant, i.e. the output of KDE layer does not depend on the permutation of its inputs, which is important for the stability of θ drn module. 2. KDE layer is differentiable, so U CC model can be trained end-to-end. 3. KDE layer has decomposability property which enables our theoretical analysis (Appendix B). 4. KDE layer enables θ drn to fully utilize the information in the shape of the distribution rather than looking at point estimates of distribution. Averaging layer as an MIL pooling layer, which also has the first three properties, can be an alternative to KDE layer in U CC model. We have conducted additional experiments by replacing KDE layer with'averaging layer' and compare the clustering accuracy values of the models with averaging layer and the models with KDE layer in Table 5. Our model U CC segment has the same architecture with the U CC model in CIFAR10 dataset, but this time we have used 16 features. We have also constructed the U net model with the same blocks used in U CC segment model in order to ensure a fair comparison. The details of the models can be seen in our code: http://bit.ly/uniqueclasscount We have used 512 × 512 image crops from publicly available CAMELYON dataset . CAMELYON dataset is a public Whole Slide Image (WSI) dataset of histological lymph node sections. It also provides the exhaustive annotations for metastases regions inside the slides which enables us to train fully supervised models for benchmarking of our weakly supervised unique class count model. We randomly crop 512×512 images over the WSIs of CAMELYON dataset and associate a ucc label to each image based on whether it is fully metastases/normal (ucc1) or mixture (ucc2). We assigned ucc labels based on provided ground truths since they are readily available. However, please note that in case no annotations provided, obtaining ucc labels is much cheaper and easier compared to tedious and time consuming exhaustive metastases region annotations. We assigned ucc1 label to an image if the metastases region in the corresponding ground truth mask is either less than 20% (i.e. normal) or more than 80% (i.e metastases). On the other hand, we assigned ucc2 label to an image if the metastases region in the corresponding ground truth mask is more than 30% and less than 70% (i.e. mixture). Actually, this labeling scheme imitates the noise that would have been introduced if ucc labeling had been done directly by the user instead of using ground truth masks. Beyond that, ucc1 labels in this task can naturally be noisy since it is possible to have some small portion of normal cells in cancer regions and vice-versa due to the nature of the cancer. In this way, we have constructed our segmentation dataset consisting of training, validation and testing sets. The images in training and validation sets are cropped randomly over the WSIs in training set of CAMELYON dataset and the images in testing set are cropped randomly over the test set of CAMELYON dataset. 
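The labelling rule just described can be written compactly as below; the thresholds follow the text, while returning None for crops whose metastases fraction falls in the intermediate, unlabelled ranges is an assumption about how those crops are handled.

```python
def ucc_label_from_mask(mask):
    """Assign a bag-level ucc label to a 512x512 crop from its metastases mask.

    mask: binary array where 1 marks metastases pixels.
    Returns 1 (pure normal or pure metastases), 2 (mixture), or None when the
    metastases fraction falls in the ambiguous ranges."""
    frac = float(mask.mean())
    if frac < 0.2 or frac > 0.8:
        return 1        # fully normal or fully metastases
    if 0.3 < frac < 0.7:
        return 2        # mixture of normal and metastases
    return None         # crop not assigned a label
```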
Then, the bags in our MIL dataset to train U CC segment model are constructed by using 32 × 32 patches over these images. Each bag contains 32 instances, where each instance is a 32 × 32 patch. The details of our segmentation dataset are shown in Table 6. We have provided the segmentation dataset under the "./data/camelyon/" folder inside our code folder. If you want to use this dataset for benchmarking purposes, please cite our paper (referenced later) together with the original CAMELYON dataset paper.

Table 6: Details of our segmentation dataset: number of WSIs used to crop the images in each set, number of images in each set, and the corresponding label distributions in each set.

             ucc1: normal | ucc1: metastases | ucc1: total | ucc2: mixture | # of images | # of WSIs
Training          461     |       322        |     783     |      310      |    1093     |    159
Validation        278     |       245        |     523     |      211      |     734     |    106
Testing           282     |       668        |     950     |      228      |    1178     |    126

We have given the confusion matrix for ucc predictions of our U CC segment model in Figure 12. For the U net model, we have shown the loss curves of the training and validation sets during training in Figure 13. Figure 12: Confusion matrix of our U CC segment model for ucc predictions on our segmentation dataset. Figure 13: Training and validation loss curves during training of our U net model. We have used the best model weights, which were saved at iteration 58000 during training. The model starts to overfit after iteration 60000 and early stopping terminates the training. In this section, we have defined our pixel level evaluation metrics used for performance comparison of our weakly supervised U CC segment model, fully supervised U net model and unsupervised baseline K-means model:

TPR = TP / (TP + FN), TNR = TN / (TN + FP), FNR = FN / (FN + TP), PA = (TP + TN) / (TP + FP + TN + FN)
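These pixel-level metrics translate directly into code; the short sketch below also includes the false positive rate (FPR = FP / (FP + TN), the complement of TNR), which is discussed in the comparison above.

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Pixel-level TPR, TNR, FNR, FPR and pixel accuracy (PA).

    pred, truth: binary arrays of the same shape (1 = metastases)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "TPR": tp / (tp + fn),
        "TNR": tn / (tn + fp),
        "FNR": fn / (fn + tp),
        "FPR": fp / (fp + tn),
        "PA": (tp + tn) / (tp + fp + tn + fn),
    }
```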
A weakly supervised learning based clustering framework performs comparably to fully supervised learning models by exploiting unique class count.
Model-free reinforcement learning (RL) methods are succeeding in a growing number of tasks, aided by recent advances in deep learning. However, they tend to suffer from high sample complexity, which hinders their use in real-world domains. Alternatively, model-based reinforcement learning promises to reduce sample complexity, but tends to require careful tuning and to date have succeeded mainly in restrictive domains where simple models are sufficient for learning. In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training. To overcome this issue, we propose to use an ensemble of models to maintain the model uncertainty and regularize the learning process. We further show that the use of likelihood ratio derivatives yields much more stable learning than backpropagation through time. Altogether, our approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) significantly reduces the sample complexity compared to model-free deep RL methods on challenging continuous control benchmark tasks. Deep reinforcement learning has achieved many impressive in recent years, including learning to play Atari games from raw-pixel inputs BID0, mastering the game of Go BID1, as well as learning advanced locomotion and manipulation skills from raw sensory inputs BID3 BID4. Many of these were achieved using model-free reinforcement learning algorithms, which do not attempt to build a model of the environment. These algorithms are generally applicable, require relatively little tuning, and can easily incorporate powerful function approximators such as deep neural networks. However, they tend to suffer from high sample complexity, especially when such powerful function approximators are used, and hence their applications have been mostly limited to simulated environments. In comparison, model-based reinforcement learning algorithms utilize a learned model of the environment to assist learning. These methods can potentially be much more sample efficient than model-free algorithms, and hence can be applied to real-world tasks where low sample complexity is crucial BID7 BID3 BID8. However, so far such methods have required very restrictive forms of the learned models, as well as careful tuning for them to be applicable. Although it is a straightforward idea to extend model-based algorithms to deep neural network models, so far there has been comparatively fewer successful applications. The standard approach for model-based reinforcement learning alternates between model learning and policy optimization. In the model learning stage, samples are collected from interaction with the environment, and supervised learning is used to fit a dynamics model to the observations. In the policy optimization stage, the learned model is used to search for an improved policy. The underlying assumption in this approach, henceforth termed vanilla model-based RL, is that with enough data, the learned model will be accurate enough, such that a policy optimized on it will also perform well in the real environment. Although vanilla model-based RL can work well on low-dimensional tasks with relatively simple dynamics, we find that on more challenging continuous control tasks, performance was highly unstable. 
The reason is that the policy optimization tends to exploit regions where insufficient data is available to train the model, leading to catastrophic failures. Previous work has pointed out this issue as model bias, i.e. BID7 BID9 BID10. While this issue can be regarded as a form of overfitting, we emphasize that standard countermeasures from the supervised learning literature, such as regularization or cross validation, are not sufficient here -supervised learning can guarantee generalization to states from the same distribution as the data, but the policy optimization stage steers the optimization exactly towards areas where data is scarce and the model is inaccurate. This problem is severely aggravated when expressive models such as deep neural networks are employed. To resolve this issue, we propose to use an ensemble of deep neural networks to maintain model uncertainty given the data collected from the environment. During model learning, we differentiate the neural networks by varying their weight initialization and training input sequences. Then, during policy learning, we regularize the policy updates by combining the gradients from the imagined stochastic roll-outs. Each imagined step is uniformly sampled from the ensemble predictions. Using this technique, the policy learns to become robust against various possible scenarios it may encounter in the real environment. To avoid overfitting to this regularized objective, we use the model ensemble for early stopping policy training. Standard model-based techniques require differentiating through the model over many time steps, a procedure known as backpropagation through time (BPTT). It is well-known in the literature that BPTT can lead to exploding and vanishing gradients BID11. Even when gradient clipping is applied, BPTT can still get stuck in bad local optima. We propose to use likelihood ratio methods instead of BPTT to estimate the gradient, which only make use of the model as a simulator rather than for direct gradient computation. In particular, we use Trust Region Policy Optimization (TRPO) BID4, which imposes a trust region constraint on the policy to further stabilize learning. In this work, we propose Model-Ensemble Trust-Region Policy Optimization (ME-TRPO), a modelbased algorithm that achieves the same level of performance as state-of-the-art model-free algorithms with 100× reduction in sample complexity. We show that the model ensemble technique is an effective approach to overcome the challenge of model bias in model-based reinforcement learning. We demonstrate that replacing BPTT by TRPO yields significantly more stable learning and much better final performance. Finally, we provide an empirical analysis of vanilla model-based RL using neural networks as function approximators, and identify its flaws when applied to challenging continuous control tasks. There has been a large body of work on model-based reinforcement learning. They differ by the choice of model parameterization, which is associated with different ways of utilizing the model for policy learning. Interestingly, the most impressive robotic learning applications so far were achieved using the simplest possible model parameterization, namely linear models BID12 BID13 BID14 BID15 BID3 BID16, where the model either operates directly over the raw state, or over a feature representation of the state. Such models are very data efficient, and allows for very efficient policy optimization through techniques from optimal control. 
However, they only have limited expressiveness, and do not scale well to complicated nonlinear dynamics or high-dimensional state spaces, unless a separate feature learning phase is used BID15 ).An alternative is to use nonparametric models such as Gaussian Processes (GPs) BID17 BID18 BID7. Such models can effectively maintain uncertainty over the predictions, and have infinite representation power as long as enough data is available. However, they suffer from the curse of dimensionality, and so far their applications have been limited to relatively low-dimensional settings. The computational expense of incorporating the uncertainty estimates from GPs into the policy update also imposes an additional challenge. Deep neural networks have shown great success in scaling up model-free reinforcement learning algorithms to challenging scenarios BID0 BID1 BID4. However, there has been only limited success in applying them to model-based RL. Although many previous studies have shown promising on relatively simple domains BID19 BID20 BID21 BID22, so far their applications on more challenging domains have either required a combination with model-free techniques BID23 BID25, or domain-specific policy learning or planning algorithms BID26 BID27 BID28 BID29 BID30 BID31. In this work, we show that our purely model-based approach improves the sample complexity compared to methods that combine model-based and model-free elements. Two recent studies have shown promising signs towards a more generally applicable model-based RL algorithm. BID32 utilize Bayesian neural networks (BNNs) to learn a distribution over dynamics models, and train a policy using gradient-based optimization over a collection of models sampled from this distribution. BID33 learn a latent variable dynamic model over temporally extended segments of the trajectory, and train a policy using gradient-based optimization over the latent space. Both of these approaches have been shown to work on a fixed dataset of samples which are collected before the algorithm starts operating. Hence, their evaluations have been limited to domains where random exploration is sufficient to collect data for model learning. In comparison, our approach utilizes an iterative process of alternatively performing model learning and policy learning, and hence can be applied to more challenging domains. Additionally, our proposed improvements are orthogonal to both approaches, and can be potentially combined to yield even better . This paper assumes a discrete-time finite-horizon Markov decision process (MDP), defined by (S, A, f, r, ρ 0, T), in which S ⊆ R n is the state space, A ⊆ R m the action space, f: S × A → S a deterministic transition function, r: S × A → R a bounded reward function, ρ 0: S → R + an initial state distribution, and T the horizon. We denote a stochastic policy π θ (a|s) as the probability of taking action a at state s. Let η(θ) denote its expected return: DISPLAYFORM0 where τ = (s 0, a 0, . . ., a T −1, s T) denotes the whole trajectory, s 0 ∼ ρ 0 , a t ∼ π θ (.|s t), and s t+1 = f (s t, a t) for all t. We assume that the reward function is known but the transition function is unknown. Our goal is to find an optimal policy that maximizes the expected return η(θ). In the most successful methods of model-free reinforcement learning, we iteratively collect data, estimate the gradient of the policy, improve the policy, and then discard the data. 
Conversely, model-based reinforcement learning makes more extensive use of the data; it uses all the data collected to train a model of the dynamics of the environment. The trained model can be used as a simulator in which the policy can be trained, and it also provides gradient information BID34 BID7 BID32 BID35. In the following section, we describe the vanilla model-based reinforcement learning algorithm (see Algorithm 1). We assume that the model and the policy are represented by neural networks, but the methodology is valid for other types of function approximators.

The transition dynamics are modeled with a feed-forward neural network, using the standard practice of training the network to predict the change in state (rather than the next state) given a state and an action as inputs. This relieves the neural network from memorizing the input state, especially when the change is small BID7 BID36 BID25. We denote the function approximator for the next state, which is the sum of the input state and the output of the neural network, as f̂_φ. The objective of model learning is to find a parameter φ that minimizes the L2 one-step prediction loss:
$$\min_{\phi}\ \frac{1}{|D|} \sum_{(s_t, a_t, s_{t+1}) \in D} \big\| s_{t+1} - \hat{f}_{\phi}(s_t, a_t) \big\|_2^2,$$
where D is the training dataset that stores the transitions the agent has experienced. We use the Adam optimizer BID38 to solve this supervised learning problem. Standard techniques are followed to avoid overfitting and to facilitate learning, such as separating out a validation dataset to early stop the training, and normalizing the inputs and outputs of the neural network.

Given an MDP M, the goal of reinforcement learning is to maximize the expected sum of rewards. During training, model-based methods maintain an approximate MDP M̂, where the transition function is given by a parameterized model f̂_φ learned from data. The policy is then updated with respect to the approximate MDP. Hence, the objective we maximize is
$$\hat{\eta}(\theta; \phi) = \mathbb{E}_{\hat{\tau}}\Big[\sum_{t=0}^{T-1} r(s_t, a_t)\Big],$$
where τ̂ = (s_0, a_0, ..., a_{T−1}, s_T), s_0 ∼ ρ_0, a_t ∼ π_θ(·|s_t), and s_{t+1} = f̂_φ(s_t, a_t). We represent the stochastic policy as a conditional multivariate normal distribution with a parametrized mean µ_θ: S → A and a parametrized standard deviation σ_θ: S → R^m. Using the re-parametrization trick, we can write down an action sampled from π_θ at state s as µ_θ(s) + σ_θ(s) ⊙ ζ, where ζ ∼ N(0, I_m). Given a trajectory τ̂ sampled using the policy π_θ, we can recover the noise vectors {ζ_0, ..., ζ_T}. Thus, the gradient of the objective η̂(θ; φ) can simply be estimated by Monte-Carlo methods:
$$\nabla_\theta \hat{\eta}(\theta; \phi) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \sum_{t=0}^{T-1} r\big(s_t^{(i)}, a_t^{(i)}\big),$$
where each of the N imagined trajectories is unrolled through f̂_φ using the recovered noise vectors. This method of gradient computation is called backpropagation through time (BPTT), which can be easily performed using an automatic differentiation library. We apply gradient clipping BID39 to deal with exploding gradients, and we use the Adam optimizer BID38 for more stable learning. We perform the updates until the policy no longer improves its estimated performance η̂ over a period of time (controlled by a hyperparameter), and then we repeat the process in the outer loop by using the policy to collect more data from the real environment. The whole procedure terminates when the desired performance according to the real environment is accomplished. The procedure is summarized in Algorithm 1:

Algorithm 1 (vanilla model-based deep reinforcement learning):
  repeat
    Collect samples from the real environment f using π_θ and add them to D.
    Train the model f̂_φ using D.
    repeat
      Collect fictitious samples from f̂_φ using π_θ.
      Update the policy using BPTT on the fictitious samples.
      Estimate the performance η̂(θ; φ).
    until the performance stops improving.
  until the policy performs well in the real environment f.
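The model-learning step of Algorithm 1 can be illustrated with the following hedged sketch: a feed-forward network is trained with Adam to predict the change in state, so the next-state prediction is the input state plus the network output. This is not the authors' code; the hidden size of 1024 follows the description in Appendix A.1, and the remaining details (class and function names, training-loop structure) are assumptions.

```python
import torch
import torch.nn as nn

class DeltaDynamicsModel(nn.Module):
    """Predicts s_{t+1} - s_t; the next state is s_t plus the network output."""
    def __init__(self, state_dim, action_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s, a):
        return s + self.net(torch.cat([s, a], dim=-1))

def train_model(model, s, a, s_next, epochs=50, lr=1e-3, batch_size=1000):
    """Minimize the L2 one-step prediction loss with Adam."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    n = s.shape[0]
    for _ in range(epochs):
        perm = torch.randperm(n)
        for i in range(0, n, batch_size):
            idx = perm[i:i + batch_size]
            pred = model(s[idx], a[idx])
            loss = ((pred - s_next[idx]) ** 2).sum(dim=-1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```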
Using the vanilla approach described in Section 4, we find that the learned policy often exploits regions where scarce training data is available for the dynamics model. Since we are improving the policy with respect to the approximate MDP instead of the real one, the predictions can then be erroneous to the policy's advantage. This overfitting issue can be partly alleviated by early stopping using validation initial states, in a similar manner to supervised learning. However, we found this insufficient, since the performance is still evaluated using the same learned model, which tends to make consistent mistakes. Furthermore, although gradient clipping can usually resolve exploding gradients, BPTT still suffers from vanishing gradients, which cause the policy to get stuck in bad local optima BID11 BID39. These problems are especially aggravated when optimizing over long horizons, which is very common in reinforcement learning problems.

We now present our method, Model-Ensemble Trust-Region Policy Optimization (ME-TRPO). The pseudocode is shown in Algorithm 2. ME-TRPO combines three modifications to the vanilla approach. First, we fit a set of dynamics models {f̂_φ1, ..., f̂_φK} (termed a model ensemble) using the same real-world data. These models are trained via standard supervised learning, as described in Section 4.1, and they differ only by their initial weights and the order in which mini-batches are sampled. Second, we use Trust Region Policy Optimization (TRPO) to optimize the policy over the model ensemble. Third, we use the model ensemble to monitor the policy's performance on validation data, and stop the current iteration when the policy stops improving. The second and third modifications are described in detail below.

Policy Optimization. To overcome the issues with BPTT, we use likelihood-ratio methods from the model-free RL literature. We evaluated Vanilla Policy Gradient (VPG) BID40, Proximal Policy Optimization (PPO), and Trust Region Policy Optimization (TRPO) BID4. The best results were achieved by TRPO. In order to estimate the gradient, we use the learned models to simulate trajectories as follows: at every step, we randomly choose a model to predict the next state given the current state and action. This prevents the policy from overfitting to any single model during an episode, leading to more stable learning.

Policy Validation. We monitor the policy's performance using the K learned models. Specifically, we compute the ratio of models in which the policy improves:
$$\frac{1}{K} \sum_{k=1}^{K} \mathbf{1}\big[\hat{\eta}(\theta_{\text{new}};\, \phi_k) > \hat{\eta}(\theta_{\text{old}};\, \phi_k)\big].$$
The current iteration continues as long as this ratio exceeds a certain threshold. In practice, we validate the policy after every 5 gradient updates and we use 70% as the threshold. If the ratio falls below the threshold, a small number of further updates is tolerated in case the performance recovers; otherwise, the current iteration is terminated. Then, we repeat the overall process of using the policy to collect more real-world data, optimizing the model ensemble, and using the model ensemble to improve the policy. This process continues until the desired performance in the real environment is reached. The model ensemble serves as effective regularization for policy learning: by using the model ensemble for policy optimization and validation, the policy is forced to perform well over a vast number of possible alternative futures. Even though any of the individual models can still incur model bias, our experiments below suggest that combining these models yields stable and effective policy improvement.

Algorithm 2 (ME-TRPO):
  repeat
    Collect samples from the real system f using π_θ and add them to D.
    Train all models {f̂_φi} for i = 1, ..., K using D.
    repeat (optimize π_θ using all models)
      Collect fictitious samples from {f̂_φi} for i = 1, ..., K using π_θ.
      Update the policy using TRPO on the fictitious samples.
      Estimate the performances η̂(θ; φ_i) for i = 1, ..., K.
    until the performances stop improving.
  until the policy performs well in the real environment f.
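Two pieces of the inner loop of Algorithm 2 can be sketched directly from the description above: imagined roll-outs in which a model is sampled uniformly from the ensemble at every step, and the validation ratio that counts the fraction of models under which the policy improves. This is an illustrative sketch, not the authors' implementation; the models and policy are assumed to be simple callables.

```python
import numpy as np

def ensemble_rollout(models, policy, s0, horizon, rng):
    """Imagined roll-out: at every step a model is sampled uniformly
    from the ensemble to predict the next state."""
    s, traj = s0, []
    for _ in range(horizon):
        a = policy(s, rng)
        model = models[rng.integers(len(models))]   # uniform model choice
        s_next = model(s, a)
        traj.append((s, a, s_next))
        s = s_next
    return traj

def improvement_ratio(returns_new, returns_old):
    """Fraction of models under which the updated policy improves.
    The current iteration continues while this exceeds 0.7."""
    better = [new > old for new, old in zip(returns_new, returns_old)]
    return sum(better) / len(better)
```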
We design the experiments to answer the following questions: (1) How does our approach compare against state-of-the-art methods in terms of sample complexity and final performance? (2) What are the failure scenarios of the vanilla algorithm? (3) How does our method overcome these failures? We also provide in Appendix D an ablation study to characterize the effect of each component of our algorithm. To answer these questions, we evaluate our method and various baselines over six standard continuous control benchmark tasks BID42 BID43 in MuJoCo BID44: Swimmer, Snake, Hopper, Ant, Half Cheetah, and Humanoid, shown in FIG1. The details of the tasks can be found in Appendix A.2. We compare our method with the following state-of-the-art reinforcement learning algorithms in terms of sample complexity and performance: Trust Region Policy Optimization (TRPO) BID4, Proximal Policy Optimization (PPO), Deep Deterministic Policy Gradient (DDPG), and Stochastic Value Gradient (SVG). The results are shown in FIG2. Prior model-based methods appear to achieve worse performance compared with model-free methods. In addition, we find that model-based methods tend to be difficult to train over long horizons. In particular, SVG, not presented in the plots, is very unstable in our experiments. While SVG(∞) is more stable, it fails to achieve the same level of performance as model-free methods. In contrast, our proposed method reaches the same level of performance as model-free approaches with ≈ 100× less data. To the best of our knowledge, it is the first purely model-based approach that can optimize policies over high-dimensional motor-control tasks such as Humanoid. For experiment details please refer to Appendix A.

In this section we explain and quantify the failure cases of vanilla model-based reinforcement learning, and how our approach overcomes such failures. We analyze the effect of each of our proposed modifications by studying the learning behavior of replacing BPTT with TRPO in vanilla model-based RL using just a single model, and then the effect of using an ensemble of models. As discussed above, BPTT suffers from exploding and vanishing gradients, especially when optimizing over long horizons. Furthermore, one of the principal drawbacks of BPTT is the assumption that the model derivatives should match those of the real dynamics, even though the model has not been explicitly trained to provide accurate gradient information. In FIG3 we demonstrate the effect of using policy gradient methods that make use of a score function estimator, such as VPG and TRPO, while using a single learned model. The results suggest that, in comparison with BPTT, policy gradient methods are more stable and lead to much better final performance. By using such model-free algorithms, we require less information from the learned model, which only acts as a simulator. Gradient information through the dynamics model is no longer needed to optimize the policy. However, while replacing BPTT by TRPO helps optimization, the learned policy can still suffer from model bias. The learning procedure tends to steer the policy towards regions that it has rarely visited, so that the model makes erroneous predictions to its advantage.
The estimated performances of the policy often end up with high rewards according to the learned model, and low rewards according to the real one (see Appendix B for further discussion). In Figure 4, we analyze the effect of using various numbers of ensemble models for sampling trajectories and validating the policy's performance. The results indicate that as more models are used in the model ensemble, the learning is better regularized and the performance continually improves. The improvement is even more noticeable in more challenging environments like HalfCheetah and Ant, which require more complex dynamics models to be learned, leaving more room for the policy to exploit when a model ensemble is not used.

Figure 4: Comparison among different numbers of models that the policy is trained on. TRPO is used for the policy optimization. We illustrate the improvement when using 5, 10, and 20 models over a single model (best viewed in color).

In this work, we present a simple and robust model-based reinforcement learning algorithm that is able to learn neural network policies across different challenging domains. We show that our approach significantly reduces the sample complexity compared to state-of-the-art methods while reaching the same level of performance. In contrast, our analysis suggests that vanilla model-based RL tends to suffer from model bias and numerical instability, and fails to learn a good policy. We further evaluate the effect of each key component of our algorithm, showing that both the use of TRPO and the model ensemble are essential for successful applications of deep model-based RL. We also confirm the findings of previous work BID7 BID32 BID22 that using model uncertainty is a principled way to reduce model bias. One question that merits future investigation is how to use the model ensemble to encourage the policy to explore the state space where the different models disagree, so that more data can be collected to resolve their disagreement. Another enticing direction for future work would be the application of ME-TRPO to real-world robotics systems.

Our algorithm can be broken down into three parts: data collection, model learning, and policy learning. We describe the numerical details for each part below.

A.1.1 DATA COLLECTION In each outer iteration, we use the stochastic policy to collect 3000 timesteps of real-world data for every environment, except Humanoid, for which we collect 6000 timesteps. At the beginning of every roll-out we sample the policy standard deviation randomly from U[0.0, 3.0], and we keep the value fixed throughout the episode. Furthermore, we perturb the policy's parameters by adding white Gaussian noise with standard deviation proportional to the absolute difference between the current parameters and the previous ones BID45 BID46. Finally, we split the collected data using a 2-to-1 ratio for training and validation datasets.
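A hedged sketch of the per-roll-out exploration scheme described in A.1.1 above: the policy standard deviation is drawn from U[0, 3] and held fixed for the episode, and the policy parameters are perturbed with white Gaussian noise scaled by the element-wise difference from the previous parameters. The unit proportionality constant and the function name are assumptions.

```python
import numpy as np

def exploration_setup(theta, theta_prev, rng):
    """Per-roll-out exploration (illustrative sketch of A.1.1):
    - sample a fixed policy standard deviation from U[0, 3] for the episode,
    - perturb the policy parameters with white Gaussian noise whose scale is
      proportional to |theta - theta_prev| (proportionality constant = 1,
      an assumption)."""
    policy_std = rng.uniform(0.0, 3.0)
    noise_scale = np.abs(theta - theta_prev)
    theta_perturbed = theta + rng.normal(0.0, 1.0, size=theta.shape) * noise_scale
    return policy_std, theta_perturbed
```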
We represent the dynamics model with a 2-hidden-layer feed-forward neural network with hidden sizes 1024-1024 and ReLU nonlinearities. We train the model with the Adam optimizer with learning rate 0.001 using a batch size of 1000. The model is trained until the validation loss has not decreased for 25 passes over the entire training dataset (we validate the training every 5 passes). We represent the policy with a 2-hidden-layer feed-forward neural network with hidden sizes 32-32 and tanh nonlinearities for all the environments, except Humanoid, for which we use hidden sizes 100-50-25. The policy is trained with TRPO on the learned models using an initial standard deviation of 1.0, step size δ_KL = 0.01, and batch size 50000. If the policy fails the validation for 25 updates (we do the validation every 5 updates), we stop the learning and repeat the overall process.

The environments we use are adopted from rllab BID42. Note that in Hopper we relax the early stopping criterion to a soft constraint in the reward function, whereas in Ant we early stop when the center of mass along the z-axis is outside [0.2, 1.0] and include a survival reward when alive. The state in each environment is composed of the joint angles, the joint velocities, and the Cartesian position of the center of mass of a part of the simulated robot. We do not use the contact information, which makes the environments effectively POMDPs in Half Cheetah, Ant, Hopper, and Humanoid. We also eliminate the redundancies in the state space in order to avoid infeasible states in the prediction.

In Section 6.2 we compare our method against TRPO, PPO, DDPG, and SVG. For every environment we represent the policy with a feed-forward neural network of the same size, and use the same horizon and discount factor, as specified in Appendix A.1.3. In the following we provide the hyper-parameter details. Trust Region Policy Optimization: we used the implementation of BID42 with a batch size of 50000, and we train the policies for 1000 iterations; the step size δ_KL that we used in all the experiments was 0.05. Proximal Policy Optimization: we referred to the implementation of BID43; the policies were trained for 10^7 steps using the default hyper-parameters across all tasks. Deep Deterministic Policy Gradient: we also use the implementation of BID43, with 2000 epochs; the rest of the hyper-parameters used were the default ones. Stochastic Value Gradient: we parametrized the dynamics model as a feed-forward neural network with two hidden layers of 512 units each and ReLU non-linearities; the model was trained after every episode with the data available in the replay buffer, using the Adam optimizer with a learning rate of 10^-4 and a batch size of 128; we additionally clipped the gradient when its norm was larger than 10.

We show that replacing the ensemble with just one model leads to policy over-optimization. In each outer iteration, we see that at the end of the policy optimization step the estimated performance increases while the real performance is in fact decreasing (see FIG4).

C REAL-TIME COMPLEXITY We provide wall-clock time for ME-TRPO from Figure 2 in the table below:

Finally, we provide a study of the different ways of validating the policy. We compare the following techniques: using the real performance (i.e., using an oracle) (real), using the average return in the TRPO roll-outs (trpo mean), stopping the policy after 50 gradient updates (no early 50) or after 5 gradient updates (no early 5), using one model to predict the performances (one model), and using an ensemble of models (ensemble); simple sketches of these stopping rules are given below. The experiments are designed to use the same number of models and hyper-parameters for the other components of the algorithm. In FIG5 we can see the effectiveness of each approach. It is noteworthy that having an oracle of the real performance is not the best approach. Such validation is over-cautious, and does not give room for exploration, resulting in a poorly trained dynamics model. Stopping the gradient updates after a fixed number of steps results in good performance if the right number of updates is set.
This burdens the hyper-parameter search with one more hyper-parameter. On the other hand, using the ensemble of models has good performance across environments without adding extra hyper-parameters.
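The policy-validation variants compared above can be sketched as simple stopping rules; this is illustrative only, and the function names are not from the paper.

```python
def stop_ensemble(returns_new, returns_old, threshold=0.7):
    """'ensemble': stop when the policy improves on fewer than
    `threshold` of the learned models."""
    improved = sum(n > o for n, o in zip(returns_new, returns_old))
    return improved / len(returns_new) < threshold

def stop_one_model(return_new, return_old):
    """'one model': stop when a single learned model no longer predicts
    an improvement."""
    return return_new <= return_old

def stop_fixed(num_updates, budget=50):
    """'no early 50' / 'no early 5': stop after a fixed number of updates."""
    return num_updates >= budget
```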
Deep Model-Based RL that works well.
1,392
scitldr
The high computational and parameter complexity of neural networks makes their training very slow and difficult to deploy on energy- and storage-constrained computing systems. Many network complexity reduction techniques have been proposed, including fixed-point implementation. However, a systematic approach for designing full fixed-point training and inference of deep neural networks remains elusive. We describe a precision assignment methodology for neural network training in which all network parameters, i.e., activations and weights in the feedforward path, gradients and weight accumulators in the feedback path, are assigned close to minimal precision. The precision assignment is derived analytically and enables tracking the convergence behavior of the full precision training, known to converge a priori. Thus, our work leads to a systematic methodology of determining suitable precision for fixed-point training. The near optimality (minimality) of the resulting precision assignment is validated empirically for four networks on the CIFAR-10, CIFAR-100, and SVHN datasets. The complexity reduction arising from our approach is compared with other fixed-point neural network designs.

Though deep neural networks (DNNs) have established themselves as powerful predictive models achieving human-level accuracy on many machine learning tasks BID12, their excellent performance has been achieved at the expense of a very high computational and parameter complexity. For instance, AlexNet BID17 requires over 800 × 10^6 multiply-accumulates (MACs) per image and has 60 million parameters, while Deepface requires over 500 × 10^6 MACs/image and involves more than 120 million parameters. DNNs' enormous computational and parameter complexity leads to high energy consumption BID4, makes their training via the stochastic gradient descent (SGD) algorithm very slow, often requiring hours and days BID9, and inhibits their deployment on energy- and resource-constrained platforms such as mobile devices and autonomous agents.

A fundamental problem contributing to the high computational and parameter complexity of DNNs is their realization using 32-b floating-point (FL) arithmetic in GPUs and CPUs. Reduced-precision representations such as quantized FL (QFL) and fixed-point (FX) have been applied in various combinations to both training and inference. Many works employ FX during inference but train in FL, e.g., fully binarized neural networks BID13 use 1-b FX in the forward inference path but the network is trained in 32-b FL. Similarly, BID10 employs 16-b FX for all tensors except for the internal accumulators, which use 32-b FL, and 3-level QFL gradients were employed (BID0) to accelerate training in a distributed setting. Note that while QFL reduces storage and communication costs, it does not reduce the computational complexity, as the arithmetic remains in 32-b FL. Thus, none of the previous works address the fundamental problem of realizing true fixed-point DNN training, i.e., an SGD algorithm in which all parameters/variables and all computations are implemented in FX with the minimum precision required to guarantee the network's inference/prediction accuracy and training convergence.
The reasons for this gap are numerous, including: 1) quantization errors propagate to the network output, thereby directly affecting its accuracy; 2) precision requirements of different variables in a network are interdependent and involve hard-to-quantify trade-offs; 3) proper quantization requires knowledge of the dynamic range, which may not be available; and 4) quantization errors may accumulate during training and can lead to stability issues BID10.

Our work makes a major advance in closing this gap by proposing a systematic methodology to obtain close-to-minimum per-layer precision requirements of an FX network that guarantees statistical similarity with full precision training. In particular, we jointly address the challenges of quantization noise, inter-layer and intra-layer precision trade-offs, dynamic range, and stability. As in , we do assume that a fully-trained baseline FL network exists and one can observe its learning behavior. While, in principle, such an assumption requires extra FL computation prior to FX training, it is to be noted that much of training is done in FL anyway. For instance, FL training is used in order to establish benchmarking baselines such as AlexNet BID17, VGG-Net , and ResNet BID12, to name a few. Even if that is not the case, in practice, this assumption can be accounted for via a warm-up FL training on a small held-out portion of the dataset BID6.

Applying our methodology to three benchmarks reveals several lessons. First and foremost, our work shows that it is possible to FX quantize all variables including back-propagated gradients even though their dynamic range is unknown BID15. Second, we find that the per-layer weight precision requirements decrease from the input to the output while those of the activation gradients and weight accumulators increase. Furthermore, the precision requirements for residual networks are found to be uniform across layers. Finally, hyper-precision reduction techniques such as weight and activation binarization BID13 or gradient ternarization are not as efficient as our methodology since these do not address the fundamental problem of realizing true fixed-point DNN training. We demonstrate FX training on three deep learning benchmarks (CIFAR-10, CIFAR-100, SVHN), achieving high fidelity to our FL baseline in that we observe no loss of accuracy higher than 0.56% in all of our experiments. Our precision assignment is further shown to be within 1-b per-tensor of the minimum. We show that our precision assignment methodology reduces representational, computational, and communication costs of training by up to 6×, 8×, and 4×, respectively, compared to the FL baseline and related works.

We consider an L-layer DNN deployed on an M-class classification task using the setup in FIG1, which depicts the three steps of training: forward propagation, back propagation, and update. We denote the precision configuration as the L × 5 matrix whose l-th row consists of the precisions (in bits) of the weight (W_l), activation (A_l), weight gradient (G_l^(W)), activation gradient (G_l^(A)), and internal weight accumulator (W_l^(acc)) tensors at layer l. This DNN quantization setup is summarized in Appendix A.
We present definitions/constraints related to fixed-point arithmetic based on the design of fixed-point adaptive filters and signal processing systems:

• A signed fixed-point scalar a with precision B_A and binary representation {a_0, a_1, ..., a_{B_A−1}} takes the value a = r_A(−a_0 + Σ_{i=1}^{B_A−1} 2^{−i} a_i), where r_A is the predetermined dynamic range (PDR) of a. The PDR is constrained to be a constant power of 2 to minimize hardware overhead.
• An unsigned fixed-point scalar a with precision B_A and binary representation {a_1, ..., a_{B_A}} takes the value a = r_A Σ_{i=1}^{B_A} 2^{−i} a_i.
• The precision B_A is determined as B_A = log_2(r_A/Δ_A) + 1, where Δ_A is the quantization step size, which is the value of the least significant bit (LSB).
• An additive model for quantization is assumed: a = ã + q_a, where a is the fixed-point number obtained by quantizing the floating-point scalar ã, and q_a is a random variable uniformly distributed on the interval [−Δ_A/2, Δ_A/2]. The notion of quantization noise is most useful when there is limited knowledge of the distribution of ã.
• The relative quantization bias η_A is the offset: DISPLAYFORM2, where the first unbiased quantization level is µ_A = E[ã | ã ∈ I_1] and DISPLAYFORM3. The notion of quantization bias is useful when there is some knowledge of the distribution of ã.
• The reflected quantization noise variance from a tensor T to a scalar α = f(T), for an arbitrary function f, is V_{T→α} = (Δ_T²/12)·E_{T→α}, where Δ_T is the quantization step of T and E_{T→α} is the quantization noise gain from T to α.
• The clipping rate β_T of a tensor T is the probability β_T = Pr({|t| ≥ r_T : t ∈ T}), where r_T is the PDR of T.
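A minimal sketch of the signed fixed-point quantizer implied by the definitions above: given a precision B and a power-of-two PDR r, the step size follows from B = log2(r/Δ) + 1, and out-of-range values are clipped (the exact clipping convention at the range boundary is an assumption, as is the function name).

```python
import numpy as np

def quantize_signed(x, precision_bits, pdr):
    """Uniform signed fixed-point quantization.
    Step size follows B = log2(pdr / delta) + 1  =>  delta = pdr * 2**(1 - B).
    Values outside the predetermined dynamic range (PDR) are clipped."""
    delta = pdr * 2.0 ** (1 - precision_bits)
    x = np.clip(x, -pdr, pdr - delta)          # clipping convention assumed
    return np.round(x / delta) * delta

# Example: quantize a small tensor to 6 bits with PDR = 1 (a power of two).
x = np.random.randn(4)
print(quantize_signed(x, precision_bits=6, pdr=1.0))
```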
We use a set of metrics inspired by those introduced by , which have also been used by Wu et al. (2018a). These metrics are algorithmic in nature, which makes them easily reproducible.

• Representational Cost for weights (C_W) and activations (C_A): C_W = Σ_{l=1}^{L} |W_l|·(B_{W_l} + B_{G_l^(W)} + B_{W_l^(acc)}) and C_A = Σ_{l=1}^{L} |A_l|·(B_{A_l} + B_{G_l^(A)}), which equal the total number of bits needed to represent the weights, weight gradients, and internal weight accumulators (C_W), and those for activations and activation gradients (C_A).
• Computational Cost (C_M): DISPLAYFORM1, where D_l is the dimensionality of the dot product needed to compute one output activation at layer l. This cost is a measure of the number of 1-b full adders (FAs) utilized for all multiplications in one back-prop iteration.
• Communication Cost (C_C): C_C = Σ_{l=1}^{L} |W_l|·B_{G_l^(W)}, which represents the cost of communicating weight gradients in a distributed setting (BID0).

We aim to obtain a minimal or close-to-minimal precision configuration C_o of an FX network such that the mismatch probability p_m = Pr{Ŷ_fl ≠ Ŷ_fx} between its predicted label (Ŷ_fx) and that of an associated FL network (Ŷ_fl) is bounded, and the convergence behavior of the two networks is similar. Hence, we require that: (1) all quantization noise sources in the forward path contribute identically to the mismatch budget p_m; (2) the gradients be properly clipped in order to limit the dynamic range; (3) the accumulation of quantization noise bias in the weight updates be limited BID10; (4) the quantization noise in activation gradients be limited, as these are back-propagated to calculate the weight gradients; and (5) the precision of weight accumulators be set so as to avoid premature stoppage of convergence BID7. The above insights can be formally described via the following five quantization criteria.

Criterion 1. Equalizing Feedforward Quantization Noise (EFQN) Criterion. The reflected quantization noise variances onto the mismatch probability p_m from all feedforward weights ({W_l}_{l=1}^L) and activations ({A_l}_{l=1}^L) should be equal: V_{W_1→p_m} = ... = V_{W_L→p_m} = V_{A_1→p_m} = ... = V_{A_L→p_m}.

Criterion 2. Gradient Clipping (GC) Criterion. The clipping rates of weight ({β_{G_l^(W)}}_{l=1}^L) and activation ({β_{G_l^(A)}}_{l=1}^L) gradients should be less than a maximum value β_0: β_{G_l^(W)} ≤ β_0 and β_{G_l^(A)} ≤ β_0 for l = 1, ..., L.

Criterion 3. Relative Quantization Bias (RQB) Criterion. The relative quantization bias of weight gradients ({η_{G_l^(W)}}_{l=1}^L) should be less than a maximum value η_0: η_{G_l^(W)} ≤ η_0 for l = 1, ..., L.

Criterion 4. Bounded Quantization Noise (BQN) Criterion. The quantization noise contributed by the activation gradients to the weight gradients should be bounded: DISPLAYFORM4, where Σ_l is the total sum of element-wise variances of G_l^(W).

Criterion 5. Accumulator Stopping (AS) Criterion. The quantization noise of the internal accumulator should be zero; equivalently, DISPLAYFORM5, where the left-hand side is the reflected quantization noise variance from W_l^(acc) and DISPLAYFORM6 its total sum of element-wise variances.

Further explanations and motivations behind the above criteria are presented in Appendix B. The following claim ensures the satisfiability of the above criteria. This leads to closed-form expressions for the precision requirements we are seeking and completes our methodology. The validity of the claim is proved in Appendix C.

Claim 1. Satisfiability of Quantization Criteria. The five quantization criteria (EFQN, GC, RQB, BQN, AS) are satisfied if:
• The precisions B_{W_l} and B_{A_l} are set as follows: B_{W_l} = B^(min) + rnd(log_2 √(E_{W_l→p_m}/E^(min))) and B_{A_l} = B^(min) + rnd(log_2 √(E_{A_l→p_m}/E^(min))) for l = 1, ..., L, where rnd denotes the rounding operation, E_{W_l→p_m} and E_{A_l→p_m} are the weight and activation quantization noise gains at layer l, respectively, B^(min) is a reference minimum precision, and E^(min) = min(min_l E_{W_l→p_m}, min_l E_{A_l→p_m}).
• The weight and activation gradient PDRs are lower bounded as follows: DISPLAYFORM9, where σ^(max)_{G_l^(W)} and σ^(max)_{G_l^(A)} are the largest recorded estimates of the weight and activation gradient standard deviations, respectively.
• The weight and activation gradient quantization step sizes are upper bounded as follows: DISPLAYFORM12, where the bound involves the largest singular value of the square-Jacobian (Jacobian matrix with squared entries) of G_l^(W) with respect to G_{l+1}^(A).
• The accumulator PDR and step size satisfy: DISPLAYFORM15, where γ^(min) is the smallest value of the learning rate used during training.

Practical considerations: Note that one of the 2L feedforward precisions will equal B^(min). The formulas to compute the quantization noise gains are given in Appendix C and require only one forward-backward pass on an estimation set. We would like the EFQN criterion to hold upon convergence; hence, FORMULA19 is computed using the converged model from the FL baseline. For backward signals, setting the values of PDR and LSB is sufficient to determine the precision using the identity B_A = log_2(r_A/Δ_A) + 1, as explained in Section 2.1. As per Claim 1, estimates of the second-order statistics of the gradient tensors are required. These are obtained via tensor spatial averaging, so that one estimate per tensor is required, and updated in a moving-window fashion, as is done for normalization parameters in BatchNorm BID14. Furthermore, it might seem that computing the Jacobian is a difficult task; however, the values of its elements are already computed by the back-prop algorithm, requiring no additional computations (see Appendix C). Thus, the Jacobians (at different layers) are also estimated during training. Due to the typically very large size of modern neural networks, we average the Jacobians spatially, i.e., the activations are aggregated across channels and mini-batches while weights are aggregated across filters. This is again inspired by the work on Batch Normalization BID14 and makes the probed Jacobians much smaller.
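Claim 1 sets the gradient PDRs from recorded gradient statistics. As a purely illustrative stand-in (not the paper's formula), one can instead pick the smallest power-of-two PDR whose empirical clipping rate stays below the target β0 of the GC criterion; the function names below are assumptions.

```python
import numpy as np

def clipping_rate(tensor, pdr):
    """beta_T = Pr(|t| >= r_T), estimated on observed tensor values."""
    return float(np.mean(np.abs(tensor) >= pdr))

def choose_pdr(tensor, beta0=0.05, max_exp=16):
    """Smallest power-of-two PDR whose empirical clipping rate is <= beta0.
    Illustrative only; the paper derives the PDR from gradient statistics."""
    for e in range(-max_exp, max_exp + 1):
        r = 2.0 ** e
        if clipping_rate(tensor, r) <= beta0:
            return r
    return 2.0 ** max_exp
```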
We conduct numerical simulations in order to illustrate the validity of the predicted precision configuration C_o and investigate its minimality and benefits. We employ three deep learning benchmarking datasets: CIFAR-10, CIFAR-100 BID16, and SVHN . All experiments were done using a Pascal P100 NVIDIA GPU. We train the following networks:

• CIFAR-10 ConvNet: a 9-layer convolutional neural network trained on the CIFAR-10 dataset, with architecture DISPLAYFORM0, where C3 denotes 3 × 3 convolutions, MP2 denotes a 2 × 2 max-pooling operation, and FC denotes fully connected layers.
• SVHN ConvNet: the same network as the CIFAR-10 ConvNet, but trained on the SVHN dataset.
• CIFAR-10 ResNet: a wide deep residual network with the ResNet-20 architecture but having 8 times as many channels per layer compared to BID12.
• CIFAR-100 ResNet: the same network as the CIFAR-10 ResNet save for the last layer, to match the number of classes in CIFAR-100.

A step-by-step description of the application of our method to the above four networks is provided in Appendix E. We hope the inclusion of these steps will clarify any ambiguity the reader may have from the previous section and facilitate the reproduction of our results. The precision configuration C_o, with targets p_m ≤ 1%, β_0 ≤ 5%, and η_0 ≤ 1%, obtained via our proposed method is depicted in FIG4 for each of the four networks considered. We observe that C_o is dependent on the network type. Indeed, the precisions of the two ConvNets follow similar trends, as do those of the two ResNets. Furthermore, the following observations are made for the ConvNets:

• the weight precision B_{W_l} decreases as depth increases. This is consistent with the observation that weight perturbations in the earlier layers are the most destructive .
• the precisions of activation gradients (B_{G_l^(A)}) and internal weight accumulators (B_{W_l^(acc)}) increase as depth increases, which we interpret as follows: the back-propagation of gradients is the dual of the forward-propagation of activations, and accumulators store the most information, as their precision is the highest.
• the precisions of the weight gradients (B_{G_l^(W)}) and activations (B_{A_l}) are relatively constant across layers.

Interestingly, for ResNets, the precision is mostly uniform across the layers. Furthermore, the gap between B_{W_l^(acc)} and the other precisions is not as pronounced as in the case of ConvNets. This suggests that information is spread equally among all signals, which we speculate is due to the shortcut connections preventing the shattering of information BID2.

The curves in FIG5 indicate that C_o leads to convergence and consistently tracks the FL curves with close fidelity. This validates our analysis and justifies the choice of C_o. To determine that C_o is a close-to-minimal precision assignment, we compare it with: (a) C_{+1} = C_o + 1 and (b) C_{−1} = C_o − 1, where 1 is an L × 5 matrix with each entry equal to 1, i.e., we perturb C_o by 1-b in either direction. FIG5 also contains the convergence curves for the two new configurations. As shown, C_{−1} always results in a noticeable gap compared to C_o for both the loss function (except for the CIFAR-10 ResNet) and the test error. Furthermore, C_{+1} offers no observable improvements over C_o (except for the test error of the CIFAR-10 ConvNet). These results support our contention that C_o is close-to-minimal, in that increasing the precision above C_o leads to diminishing returns while reducing precision below C_o leads to a noticeable degradation in accuracy. Additional experimental results provided in Appendix D support our contention regarding the near minimality of C_o. Furthermore, by studying the impact of quantizing specific tensors, we determine that the accuracy is most sensitive to the precision assigned to weights and activation gradients.
We would like to quantify the reduction in training cost, and the expense in terms of accuracy, resulting from our proposed method, and compare them with those of other methods. Importantly, for a fair comparison, the same network architecture and training procedure are used. We report C_W, C_A, C_M, C_C, and test error for each of the four networks considered, for the following training methods:

• baseline FL training and FX training using C_o;
• binarized network (BN) training, where feedforward weights and activations are binary (constrained to ±1) while gradients and accumulators are in floating-point, and activation gradients are back-propagated via the straight-through estimator BID3, as was done in BID13;
• fixed-point training with stochastic quantization (SQ). As was done in BID10, we quantize feedforward weights and activations as well as all gradients, but accumulators are kept in floating-point. The precision configuration (excluding accumulators) is inherited from C_o (hence we determine exactly how much stochastic quantization helps);
• training with ternarized gradients (TG) as was done in TernGrad . All computations are done in floating-point but weight gradients are ternarized according to the instantaneous tensor spatial standard deviations {−2.5σ, 0, 2.5σ}, as was suggested by . To compute costs, we assume all weight gradients use two bits, although they are not really fixed-point and do require computation of 32-b floating-point scalars for every tensor.

The comparison is presented in TAB3. The first observation is a massive complexity reduction compared to FL. For instance, for the CIFAR-10 ConvNet, the complexity reduction is 2.6× (= 148/56.5), 5.5× (= 9.3/1.7), 7.9× (= 94.4/11.9), and 3.5× (= 49/14) for C_W, C_A, C_M, and C_C, respectively. Similar trends are observed for the other networks. Such complexity reduction comes at the expense of no more than a 0.56% increase in test error. For the CIFAR-100 network, the accuracy when training in fixed-point is even better than that of the baseline. The representational and communication costs of BN are significantly greater than those of FX because the gradients and accumulators are kept in full precision, which masks the benefits of binarizing feedforward tensors. However, benefits are noticeable when considering the computational cost, which is lowest, as binarization eliminates multiplications. Furthermore, binarization causes a severe accuracy drop for the ConvNets but surprisingly not for the ResNets. We speculate that this is due to the high-dimensional geometry of ResNets BID1. As for SQ, since C_o was inherited, all costs are identical to FX, save for C_W, which is larger due to full-precision accumulators. Furthermore, SQ has a positive effect only on the CIFAR-10 ConvNet, where it clearly acted as a regularizer. TG does not provide complexity reductions in terms of representational and computational costs, which is expected as it only compresses weight gradients. Additionally, the resulting accuracy is slightly worse than that of all other considered schemes, including FX. Naturally, it has the lowest communication cost, as weight gradients are quantized to just 2-b.
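The cost comparison above uses the metrics of Section 2.2. The sketch below follows their verbal definitions (bits for weights, weight gradients, and accumulators in C_W; bits for activations and activation gradients in C_A; weight-gradient bits for C_C); the exact expressions in the paper may differ, and the layer-description format is an assumption.

```python
def representational_costs(layers):
    """layers: list of dicts with tensor sizes and per-tensor precisions (bits).
    Returns (C_W, C_A) in bits: C_W counts weights, weight gradients and
    accumulators; C_A counts activations and activation gradients."""
    c_w = sum(l["n_weights"] * (l["B_W"] + l["B_GW"] + l["B_acc"]) for l in layers)
    c_a = sum(l["n_acts"] * (l["B_A"] + l["B_GA"]) for l in layers)
    return c_w, c_a

def communication_cost(layers):
    """C_C: bits needed to communicate the weight gradients."""
    return sum(l["n_weights"] * l["B_GW"] for l in layers)

# Example: a single layer with 1000 weights and 500 activations.
layers = [dict(n_weights=1000, n_acts=500, B_W=8, B_A=6, B_GW=8, B_GA=8, B_acc=16)]
print(representational_costs(layers), communication_cost(layers))
```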
Many works have addressed the general problem of reduced precision/complexity deep learning. Reducing the complexity of inference (forward path): several research efforts have addressed the problem of realizing a DNN's inference path in FX. For instance, the works in address the problem of precision assignment. While Lin et al. FORMULA19 proposed a non-uniform precision assignment using the signal-to-quantization-noise ratio (SQNR) metric, subsequent work analytically quantified the trade-off between activation and weight precisions while providing minimal precision requirements for the inference-path computations that bound the probability p_m of a mismatch between the predicted labels of the FX network and its FL counterpart. An orthogonal approach, which can be applied on top of quantization, is pruning BID11. While significant inference efficiency can be achieved, this approach incurs a substantial training overhead. A subset of the FX training problem was addressed in binary-weighted neural networks BID5 and fully binarized neural networks BID13, where direct training of neural networks with pre-determined precisions in the inference path was explored, with the feedback-path computations being done in 32-b FL.

Reducing the complexity of training (backward path): finite-precision training was explored in BID10, which employed stochastic quantization in order to counter quantization bias accumulation in the weight updates. This was done by quantizing all tensors to 16-b FX, except for the internal accumulators, which were stored in a 32-b floating-point format. An important distinction our work makes is the circumvention of the overhead of implementing stochastic quantization BID13. Similarly, DoReFa-Net stores internal weight representations in 32-b FL, but quantizes the remaining tensors more aggressively. Thus arises the need to re-scale and re-compute in floating-point format, which our work avoids. Finally, BID15 suggest a new number format, Flexpoint, and were able to train neural networks using slightly more than 16-b per tensor element, with 5 shared exponent bits and a per-tensor dynamic range tracking algorithm. Such tracking causes a hardware overhead, bypassed by our work since the arithmetic is purely FX. Augmenting Flexpoint with stochastic quantization effectively results in WAGE (b), which enables integer quantization of each tensor. As seen above, none of the prior works address the problem of predicting the precision requirements of all training signals. Furthermore, the choice of precision is made in an ad-hoc manner. In contrast, we propose a systematic methodology to determine close-to-minimal precision requirements for FX-only training of deep neural networks.

In this paper, we have presented a study of precision requirements in a typical back-propagation based training procedure of neural networks. Using a set of quantization criteria, we have presented a precision assignment methodology for which FX training is made statistically similar to the FL baseline, known to converge a priori. We realized FX training of four networks on the CIFAR-10, CIFAR-100, and SVHN datasets and quantified the associated complexity reduction gains in terms of training costs. We also showed that our precision assignment is nearly minimal. The presented work relies on the statistics of all tensors being quantized during training. This necessitates an initial baseline run in floating-point, which can be costly. An open problem is to predict a suitable precision configuration by only observing the data statistics and the network architecture. Future work can leverage the analysis presented in this paper to enhance the effectiveness of other network complexity reduction approaches.
For instance, weight pruning can be viewed as a coarse quantization process (quantize to zero) and thus can potentially be done in a targeted manner by leveraging the information provided by noise gains. Furthermore, parameter sharing and clustering can be viewed as a form of vector quantization which presents yet another opportunity to leverage our method for complexity reduction. The quantization setup depicted in FIG1 is summarized as follows:• Feedforward computation at layer l: DISPLAYFORM0 where f l is the function implemented at layer l, A l (A l+1) is the activation tensor at layer l (l + 1) quantized to a normalized unsigned fixed-point format with precision B A l (B A l+1), and W l is the weight tensor at layer l quantized to a normalized signed fixed-point format with precision B W l. We further assume the use of a ReLU-like activation function with a clipping level of 2 and a max-norm constraint on the weights which are clipped between [−1, 1] at every iteration.• Back-propagation of activation gradients at layer l: DISPLAYFORM1 where g l (A) is the function that back-propagates the activation gradients at layer l, G DISPLAYFORM2 l+1 ) is the activation gradient tensor at layer l (l + 1) quantized to a signed fixed-point format with precision B G DISPLAYFORM3 ).• Back-propagation of weight gradient tensor G (W) l at layer l: DISPLAYFORM4 where g • Internal weight accumulator update at layer l: DISPLAYFORM5 where U is the update function, γ is the learning rate, and W (acc) l is the internal weight accumulator tensor at layer l quantized to signed fixed-point with precision B W Criterion 1 (EFQN) is used to ensure that all feedforward quantization noise sources contribute equally to the p m budget. Indeed, if one of the 2L reflected quantization noise variances from the feedforward tensors onto p m, say V Wi→pm for i ∈ {1, . . ., L}, largely dominates all others, it would imply that all tensors but W i are overly quantized. It would therefore be necessary to either increase the precision of W i or decrease the precisions of all other tensors. The application of Criterion 1 (EFQN) through the closed form expression in Claim 1 solves this issue avoiding the need for a trial-and-error approach. Because FX numbers require a constant PDR, clipping of gradients is needed since their dynamic range is arbitrary. Ideally, a very small PDR would be preferred in order to obtain quantization steps of small magnitude, and hence less quantization noise. We can draw parallels from signal processing theory, where it is known that for a given quantizer, the signal-to-quantization-noise ratio (SQNR) is equal to SQN R(dB) = 6B + 4.78 − P AR where P AR is the peak-to-average ratio, proportional to the PDR. Thus, we would like to reduce the PDR as much as possible in order to increase the SQNR for a given precision. However, this comes at the risk of overflows (due to clipping). Criterion 2 (GC) addresses this trade-off between quantization noise and overflow errors. Since the back-propagation training procedure is an iterative one, it is important to ensure that any form of bias does not corrupt the weight update accumulation in a positive feedback manner. FX quantization, being a uniform one, is likely to induce such bias when quantized quantities, most notable gradients, are not uniformly distributed. Criterion 3 (RQB) addresses this issue by using η as proxy to this bias accumulation a function of quantization step size and ensuring that its worst case value is small in magnitude. 
Criterion 4 (BQN) is in fact an extension of Criterion 1 (EFQN), but for the back-propagation phase. Indeed, once the precision (and hence quantization noise) of weight gradients is set as per Criterion 3 (RQB), it is needed to ensure that the quantization noise source at the activation gradients would not contribute more noise to the updates. This criterion sets the quantization step of the activation gradients. Criterion 5 (AS) ties together feedforward and gradient precisions through the weight accumulators. It is required to increment/decrement the feedforward weights whenever the accumulated updates cross-over the weight quantization threshold. This is used to set the PDR of the weight accumulators. Furthermore, since the precision of weight gradients has already been designed to account for quantization noise (through Criteria 2-4), the criterion requires that the accumulators do not cause additional noise. The validity of Claim 1 is derived from the following five lemmas. Note that each lemma addresses the satisfiability of one of the five quantization criteria presented in the main text and corresponds to part of Claim 1. Lemma 1. The EFQN criterion holds if the precisions B W l and B A l are set as follows: DISPLAYFORM0 for l = 1... L, where rnd denotes the rounding operation, B (min) is a reference minimum precision, and E (min) is given by: DISPLAYFORM1 Proof. By definition of the reflected quantization noise variance, the EFQN, by definition, is satisfied if: DISPLAYFORM2 where the quantization noise gains are given by: DISPLAYFORM3 are the soft outputs and ZŶ f l is the soft output corresponding toŶ f l. The expressions for these quantization gains are obtained by linearly expanding (across layers) those used in . Note that a second order upper bound is used as a surrogate expression for p m.From the definition of quantization step size, the above is equivalent to: DISPLAYFORM4 Let E (min) be as defined in FORMULA38: DISPLAYFORM5. We can divide each term by E (min): DISPLAYFORM6 where each term is positive, so that we can take square roots and logarithms such that: DISPLAYFORM7 Thus we equate all of the above to a reference precision B (min) yielding: DISPLAYFORM8 for l = 1... L. Note that because E (min) is the least quantization noise gain, it is equal to one of the above quantization noise gains so that the corresponding precision actually equates B (min). As precisions must be integer valued, each of DISPLAYFORM9 have to be integers, and thus a rounding operation is to be applied on all logarithm terms. Doing so in from Lemma 1 which completes this proof. Lemma 2. The GC criterion holds for β 0 = 5% provided the weight and activation gradients pre-defined dynamic ranges (PDRs) are lower bounded as follows: DISPLAYFORM10 are the largest ever recorded estimates of the weight and activation DISPLAYFORM11, respectively. Proof. Let us consider the case of weight gradients. The GC criterion, by definition requires: DISPLAYFORM12 Typically, weight gradients are obtained by computing the derivatives of a loss function with respect to a mini-batch. By linearity of derivatives, weight gradients are themselves averages of instantaneous derivatives and are hence expected to follow a Gaussian distribution by application of the Central Limit Theorem. Furthermore, the gradient mean was estimated during baseline training and was found to oscillate around zero. 
DISPLAYFORM13 where we used the fact that a Gaussian distribution is symmetric and Q is the elementary Q-function, which is a decreasing function. Thus, in the worst case, we have: DISPLAYFORM14 Hence, for a PDR as suggested by the lower bound in: DISPLAYFORM15 in Lemma 2, we obtain the upper bound: DISPLAYFORM16 which means the GC criterion holds and completes the proof. For activation gradients, the same reasoning applies, but the choice of a larger PDR in: DISPLAYFORM17 than for weight gradients is due to the fact that the true dynamic range of the activation gradients is larger than the value indicated by the second moment. This stems from the use of activation functions such as ReLU which make the activation gradients sparse. We also recommend increasing the PDR even more when using regularizers that sparsify gradients such as Dropout or Maxout BID8.Lemma 3. The RQB criterion holds for η 0 = 1% provided the weight gradient quantization step size is upper bounded as follows: DISPLAYFORM18 is the smallest ever recorded estimate of σ G DISPLAYFORM19 We close this appendix by discussing the approximation made by invoking the Central Limit Theorem (CLT) in the proofs of Lemmas 2 & 3. This approximation was made because, typically, a backpropagation iteration computes gradients of a loss function being averaged over a mini-batch of samples. By linearity of derivatives, the gradients themselves are averages, which warrants the invocation of the CLT. However, the CLT is an asymptotic which might be imprecise for a finite number of samples. In typical training of neural networks, the number of samples, or mini-batch size, is in the range of hundreds or thousands BID9. It is therefore important to quantify the preciseness, or lack thereof, of the CLT approximation. On way to do so is via the Berry-Essen Theorem which considers the average of n independent, identically distributed random variables with finite absolute third moment ρ and standard deviation σ. The worst case deviation of the cumulative distribution of the true average from the of the approximated Gaussian random variable (via the CLT), also known as the Kolmogorov-Smirnov distance, KS, is upper bounded as follows: KS < Cρ √ nσ 3, where C < 0.4785 . Observe that the quantity ρ σ 3 is data dependent. To estimate this quantity, we performed a forward-backward pass for all training samples at the start of each epoch for our four networks considered. The statistics ρ and σ were estimated by spatial (over tensors) and sample (over training samples) averages. The maximum value of the ratio ρ σ 3 for all gradient tensors was found to be 2.8097. The mini-batch size we used in all our experiments was 256. Hence, we claim that the CLT approximation in Lemmas 2 & 3 is valid in our context up to a worst case Kolmogorov-Smirnov distance of KS < The minimality experiments in the main paper only consider a full 1-b perturbation to the full precision configuration matrix. We further investigate the minimality of C o and its sensitivity to precision perturbation per tenor type. The of this investigation are presented in FIG3. First, we consider random fractional precision perturbations, meaning perturbations to the precision configuration matrix where only a random fraction p of the 5L precision assignments is incremented or decremented. A fractional precision perturbation of 1 (-1) corresponds to C +1 (C −1). 
A fractional precision perturbation of 0.5 (-0.5) means that a randomly chosen half of the precision assignments is incremented (decremented). FIG3 shows the relative test error deviation compared to the test error associated with C o for various fractional precision perturbations. The error deviation is taken in a relative fashion to account for the variability of the different networks' accuracies. For instance, an absolute 1% difference in accuracy on a network trained on SVHN is significantly more severe than one on a network trained on CIFAR-100. It is observed that for negative precision perturbations the variation in test error is more important than for the case of positive perturbations. This is further encouraging evidence that C o is nearly minimal, in that a negative perturbation causes significant accuracy degradation while a positive one offers diminishing returns. It is also interesting to study which of the 5L tensor types is most sensitive to precision reduction. To do so, we perform a similar experiment whereby we selectively decrement the precision of all tensors belonging to the same type (weights, activations, weight gradients, activation gradients, weight accumulators). The of this experiment are found in FIG3. It is found that the most sensitive tensor types are weights and activation gradients while the least sensitive ones are activations and weight gradients. This is an interesting finding raising further evidence that there exists some form of duality between the forward propagation of activations and back propagation of derivatives as far as numerical precision is concerned. We illustrate a step by step description of the application of our precision assignment methodology to the four networks we report on. Feedforward Precisions: The first step in our methodology consists of setting the feedforward precisions B W l and B A l. As per Claim 1, this requires using. To do so, it is first needed to compute the quantization noise gains using. Using the converged weights from the baseline run we obtain: And therefore, E (min) = 94.7 and the feedforward precisions should be set according to as follows: DISPLAYFORM0 The value of B (min) is swept and p m i evaluated on the validation set. It is found that the smallest value of B (min) ing in p m < 1% is equal to 4 bits. Hence the feedforward precisions are set as follows and as illustrated in gradients. As per Claim 1, an important statistic is the spatial variance of the gradient tensors. We estimate these variances via moving window averages, where at each iteration, the running variance estimateσ 2 is updated using the instantaneous varianceσ 2 as follows: DISPLAYFORM1 where θ is the running average factor, chosen to be 0.1. The running variance estimate of each gradient tensor is dumped every epoch. Using the maximum recorded estimate and we compute the PDRs of the gradient tensors (as a reminder, the PDR is forced to be a power of 2):Layer (min) (min) (min)The value of B (min) is again swept, and it is found that the p m < 1% for B (min) = 3. The feedforward precisions are therefore set as follows and as illustrated in FIG4 Layer Index l 1 2 3 4 5 6 7 8 9 B W l 9 8 9 9 10 9 7 5 5 B A l 8 4 5 4 6 6 7 4 3 Note that for weights, layer depths 21 and 22 correspond to the strided convolutions in the shortcut connections of residual blocks 4 and 7, respectively. The value of B (min) is again swept, and it is found that the p m < 1% for B (min) = 3. 
The feedforward precisions are therefore set as follows and as illustrated in FIG4 (min)The value of B (min) is again swept, and it is found that the p m < 1% for B (min) = 3. The feedforward precisions are therefore set as follows and as illustrated in
We analyze and determine the precision requirements for training neural networks when all tensors, including back-propagated signals and weight accumulators, are quantized to fixed-point format.
1,393
scitldr
Machine learning (ML) research has investigated prototypes: examples that are representative of the behavior to be learned. We systematically evaluate five methods for identifying prototypes, both ones previously introduced as well as new ones we propose, finding all of them to provide meaningful but different interpretations. Through a human study, we confirm that all five metrics are well matched to human intuition. Examining cases where the metrics disagree offers an informative perspective on the properties of data and algorithms used in learning, with implications for data-corpus construction, efficiency, adversarial robustness, interpretability, and other ML aspects. In particular, we confirm that the "train on hard" curriculum approach can improve accuracy on many datasets and tasks, but that it is strictly worse when there are many mislabeled or ambiguous examples. When reasoning about ML tasks, it is natural to look for a set of training or test examples that is somehow prototypical-i.e., that is representative of the desired learned behavior. Although such prototypical examples have been central to several research efforts, e.g., in interpretability BID5 and curriculum learning BID3, no generally-agreed-upon definition seems to exist for prototypes, or their characteristics. For modern deep-learning models, whose behavior is often inscrutable, even the very existence and usefulness of prototypical examples has seemed uncertain until the recent work of.Inspired by that work we identify a set of desirable properties for prototypicality definitions; systematically explore different metrics used in prior work, as well as new metrics we develop, for identifying prototypical examples in both training and test data; study the characteristics of those metrics' prototypes and their complement set-the outliers-using both quantitative measures and a qualitative human study; and, evaluate the usefulness of prototypes for machine-learning purposes such as reducing sample complexity or improving adversarial robustness and interpretability. Our prototypicality metrics are based on adversarial robustness, retraining stability, ensemble agreement, and differentially-private learning. As an independent , we show that predictive stability under retraining strongly correlates with adversarial distance, and may be used as an approximation. Unequivocally, we find that distinct sets of prototypical and outlier examples exist for the datasets we consider: MNIST , Fashion-MNIST , CIFAR-10 , and ImageNet . Between all of our metrics, as well as human evaluators, there is overall agreement on the examples that are prototypes and those that are outliers. Furthermore, the differences between metrics constitute informative exceptions, e.g., identifying uncommon submodes in the data as well as spurious, ambiguous, or misleading examples. Usefully, there are advantages to training models using only prototypical examples: the models learn much faster, their accuracy loss is not great and occurs almost entirely on outlier test examples, and the models are both easier to interpret and more adversarially robust. Conversely, at the same sample complexity, significantly higher overall accuracy can be achieved by training models exclusively on outliers-once erroneous and misleading examples have been eliminated from the dataset. In designing a metric to identify prototypes, many approaches may seem intuitive. 
We identify the following properties as desirable for any prototypicality metric:• Independent of the learning task: A prototypicality metric should be applicable to all types of machine-learning tasks, whether they are based upon unsupervised or supervised approaches, or whether they constitute classification tasks or generative tasks, etc.• Independent of the modeling approach: A metric should identify the overall same prototypical examples regardless of the machine-learning paradigm, model architecture, capacity, or hyperparameters that are used to learn the task.• Aligned with human intuition: At least for tasks on which humans do well, prototypical examples identified by a metric should strongly overlap with those identified by humans.• Covers all apparent data modes: A metric should provide a balanced view of all modes of prototypical examples (e.g., even when multiple disparate modes have a single output classification label). In the presence of imbalance in the frequency of data modes, the metric should provide coverage while reflecting frequency: a mode supported by only a handful of examples need not be as prototypical as other modes in the same class.• Provides a stable ranking: A metric should not only identify examples, but also rank them in terms of prototypicality, and this rank should be stable (i.e., have low variance) even though the metric is likely computed by a randomized procedure.• Applies to both training and test data: A metric should allow both training examples and test examples to be ranked in terms of prototypicality.• Predicts test accuracy: Models that are trained only on prototypical training examples should still achieve good accuracy on the prototypical test examples-at least when the training and test datasets are balanced and the models use machine-learning methods known to perform well overall. Other cases need not be so predictive (e.g., if training is done only on outliers); we explore this experimentally in Section 3 and Section 4.Although metrics for prototypicality should generally satisfy the above properties, they need not do so perfectly. In fact, our experimental show that the differences between metrics can be highly informative. In particular, as described in Section 3.3, such differences can improve interpretability and provide insights into model behavior that are not achievable by a single, overall accuracy number, e.g., by giving explanations of failures by example . A number of metrics for identifying prototypes might satisfy the desirable properties we identified above, possibly even the early methods based on concept similarity introduced by. In addition to the work presented below, a complete survey of related work is found in the Appendix. To start with, consider two strawmen prototypicality metrics based on either learning order or gradient magnitude. A model may be expected to learn prototypes early in training because they are presumably more common and somehow "simpler" than all the edge-case outliers. Conversely, near the end of training, one might expect the relative magnitude of the gradient ∇ θ (f θ (x)) to be small for prototypical examples x, as each such example should have very little to teach the model. From this, two metrics can be defined, which we evaluated in experiments that averaged hundreds of model training runs to minimize random-initialization and stochastic-learning effects. 
Unfortunately, both of these metrics exhibited very high variance as well as apparently low signal, defining prototype sets that appeared random upon inspection and that did not satisfy our stated desirable property of predicting test accuracy. Thus, we do not present results for these two metrics. Instead, we define and apply the following five metrics, each of which ranks examples by their relative ordinal number (i.e., position in the sorted order of the measured value): Adversarial Robustness (adv): Prototypical examples should be more adversarially robust. As a measure of prototypicality, the distance to the decision boundary measured by an adversarial-example attack was recently proposed and utilized in prior work. Specifically, for an example x, the measure finds the perturbation δ with minimal ‖δ‖ such that the original x and the adversarial example x + δ are classified differently. To compare prototypicality, the work that inspired our current work used a simple and efficient ℓ∞-based adversarial-example attack based on iterative gradient descent. That attack procedure computes gradients to find directions that will increase the model's loss on the input within an ℓ∞-norm ball. They define prototypicality as the number of gradient descent iterations necessary to change the class of the perturbed input. Instead, the adv metric ranks by the ℓ2 norm (or the faster, less accurate ℓ∞ norm) of the minimal-found adversarial perturbation. This is generally more accurate at measuring the distance to the decision boundary, but comes at a performance cost (it is on average 10-100× slower). Holdout Retraining (ret): The intuition behind our ret metric is that a model should treat a prototypical example the same regardless of whether, or when, it was used in the training process. Assume we are given a training dataset X, a disjoint holdout dataset X̃, and an example x ∈ X for which to assess prototypicality. To begin, we train a model f(·) on the training data X to obtain model weights θ. We train this model just as we typically would, i.e., with the same learning rate schedule, hyper-parameter settings, etc. Then, we fine-tune the weights of this first model f_θ(·) on the held-out training data X̃ to obtain new weights θ̃. To perform this fine-tuning, we use a smaller learning rate and train until the training loss stops decreasing. (We have found it is important to obtain θ̃ by fine-tuning θ as opposed to training from scratch; otherwise, the randomness of training leads to unstable rankings that yield specious results.) Finally, given these two models, we compute the prototypicality of x as the difference ‖f_θ(x) − f_θ̃(x)‖. The exact choice of metric ‖·‖ is not important; the results in this paper use the symmetric KL-divergence. While this metric is similar to one considered in prior work, it differs in important ways: notably, our holdout retraining metric is conceptually simpler, more stable numerically, and more computationally efficient (because it does not require a backward pass to estimate gradients in addition to the forward pass needed to compare model outputs). Because our metric is only meaningful for data used to train the initial model, in order to measure the prototypicality of arbitrary test points, we actually train on the test data and perform holdout retraining on the original training data. Ensemble Agreement (agr): Prototypical examples should be easy for many types of models to learn. We train multiple models of varying capacity on different subsets of the training data (see Appendix C).
The agr metric ranks examples' prototypicality based on the agreement within this ensemble, as measured by the symmetric KL-divergence between the models' output. Concretely, we train many models f θi (·) and, for each example x, evaluate the model predictions, and then compute DISPLAYFORM0 to rank the example's prototypicality. We expect models to be confident on prototypical examples. Based on an ensemble of models like that used by the agr metric, the conf metric ranks examples by the mean confidence in the models' predictions, i.e., ranking each example x by DISPLAYFORM0 Privacy-preserving Training (priv): We can expect prototypical, well-represented modes of examples to be classified properly by models (e.g., deep neural networks) even when trained with guarantees of differential privacy BID0 ). However, such privacypreserving models should exhibit significantly reduced accuracy on any rare or exceptional examples, because differentially-private learning attenuates gradients and introduces noise to prevent the details about any specific training examples from being memorized. Outliers are disproportionally likely to be impacted by this attenuation and added noise, whereas the common signal found across many prototypes must have been preserved in models trained to reasonable accuracy. Our priv metric is based on training an ensemble of models with increasingly greater ε privacy (i.e., more attenuation and noise) using ε-differentially-private stochastic gradient descent BID0. Our metric then ranks the prototypicality of an example based on the maximum ε privacy at which the example is correctly classified in a reliable manner (which we take as being also classified correctly in 90% of less-private models). This ranking embodies the intuition that the more tolerant an example is to noise and attenuation during learning, the more prototypical it is. As the first step in an evaluation, it is natural to consider how the above five metrics satisfy the properties we identified earlier-in Section 2-as being desirable for all prototypicality metrics. In general, all our five metrics satisfy these desirable properties. All five metrics can be applied to both training and test data to induce a ranking and, empirically, we find this ranking is stable when computed multiple times. Furthermore, overall, each of the five metrics exhibits good coverage, with a few informative exceptions, and provides a view of all distinct, prototypical data modes that is proportionally balanced-even in the presence of data skew-in particular across output class labels. Notably, none of the metrics fails by ranking some class of labeled examples (e.g., the easiest-tolearn class) as being strictly more prototypical than all examples with other labels; even within each class, any substantial fraction of the most prototypical examples exhibits good modal coverage. Our metrics are widely applicable, as they are not specific to any learning task or model (some, like ret and priv might be applicable even to unsupervised learning), and experimentally we have confirmed that the metrics give overall the same despite large changes in hyperparameters or even the model architecture. Finally, as described further in Section 4, all of our metrics provide strong predictive accuracy: training on prototypes gives good test performance on prototypes. (Experimental supporting the above observations can be seen in the Appendices.) 
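To make the rankings above concrete, here is a minimal NumPy sketch of the ret, agr, and conf scores, assuming the relevant softmax outputs have already been collected. The use of the maximum softmax probability for conf and the pairwise symmetric KL-divergence for agr are assumptions where the exact formulas are not reproduced above, and all function names are illustrative rather than the paper's own.

```python
import numpy as np

def _sym_kl(p, q, eps=1e-12):
    # Symmetric KL divergence between rows of two (N, C) probability arrays.
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q) + q * np.log(q / p), axis=-1)

def ret_scores(probs_theta, probs_theta_tilde):
    """Holdout retraining (ret): divergence between the model trained on X and
    the same model fine-tuned on the holdout set; low divergence = prototypical."""
    return _sym_kl(probs_theta, probs_theta_tilde)

def agr_scores(ensemble_probs):
    """Ensemble agreement (agr): sum of pairwise symmetric KL divergences between
    the M ensemble members' outputs; ensemble_probs has shape (M, N, C)."""
    M = ensemble_probs.shape[0]
    total = np.zeros(ensemble_probs.shape[1])
    for i in range(M):
        for j in range(i + 1, M):
            total += _sym_kl(ensemble_probs[i], ensemble_probs[j])
    return total

def conf_scores(ensemble_probs):
    """Model confidence (conf): mean over models of the predicted-class
    probability, here taken as the maximum softmax value (an assumption)."""
    return ensemble_probs.max(axis=2).mean(axis=0)

# Smaller ret/agr and larger conf indicate more prototypical examples, so e.g.
# np.argsort(ret_scores(...)) lists prototypes first.
```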
FIG0 shows the correlation coefficients computed pairwise between each of our metrics for all our datasets, as well as on three metrics for ImageNet (the tables are symmetric across the diagonal). The metrics are overall strongly correlated, and the differences in correlation are informative. Unsurprisingly, since they measure very similar properties, the agr (ensemble agreement) and the conf (model confidence) metrics show the highest correlation. However, somewhat unexpectedly, we find that adv (adversarial robustness) correlates very strongly with ret (retraining distance). This is presumably because these two metrics both measure the distance to a model's decision boundary, even though adv measures this distance by perturbing each example while ret measures how the evaluation of each example is affected when models' decision boundaries themselves are perturbed. (This strong correlation between adv and ret is a new result that may be of independent interest and some significance. Measurement of adversarial distance is a useful and highly utilized technique, but it is undefined or ill-defined on many learning tasks and its computation is difficult, expensive, and hard to calibrate. On the other hand, given any holdout dataset and any measure of divergence, the ret metric we define in Section 2.1 should be easily computable for any ML model or task.) It remains to be established that our five metrics rank examples in a manner that corresponds to human intuition. For this, we perform a subjective visual inspection of how the different metrics rank the example training and test data for different machine-learning tasks. We establish that there is a clear, intuitive difference between the prototype and outlier extremes of the ranking. Informally, the separation is clearly informative, e.g., revealing an outlier "9" mislabeled as a three; also, looking at the conf prototypes reveals an atypical dress-like "shirt" that was memorized during training. [Table 1 values omitted: for MNIST, for example, the per-decile selection percentages range from 1% up to 39%.] Table 1: Results of a human study of Mechanical Turk workers selecting the best or worst example among a random collection of 9 training-data images. For each prototypicality metric, the tables show what percent of workers selected examples in each 10% split of the metric's sorted ranking (e.g., 52% of the MNIST images picked as the best one rank in the 90th percentile on the agr metric). Figure 2 and the figures in Appendix C confirm that there is an obviously apparent difference between at least the extreme outliers and prototypes in the MNIST, Fashion-MNIST, and CIFAR-10 training examples, and the ImageNet validation examples, although between datasets and classes, the five metrics differ in how clearly this difference can be seen. To validate and quantify how our metrics correlate with human perception, we performed an online human study using Amazon's Mechanical Turk service. For each output class in the training and test data of MNIST, Fashion-MNIST, and CIFAR-10, the study had human evaluators choose the image that was most or least representative of the class from amongst 9 randomly-selected images. In the study, over 100,000 images were assessed by over 400 different human evaluators. Concretely, in this study (as shown in Appendix E), each human evaluator saw a 3x3 grid of 9 random images and was asked to pick the worst image, or the best image, and this was repeated multiple times.
Evaluators exclusively picked either best or worst images and were only shown random images from one output class under a heading with the label name of that class; thus one person would pick only the best MNIST digits "7" while another picked only the worst CIFAR-10 "cars." (As dictated by good study design, we inserted "Gold Standard" questions with known answers to catch workers answering randomly or incorrectly, eliminating the few such workers from our data.) For all datasets, picking non-representative images proved to be the easier task: in a side study where 50 evaluators were shown the same identical 3x3 grids, agreement was 80% on the worst image but only 27% on the best image (random choice would give 11% agreement).The of our human study are presented in Table 1. One of the takeaways is that the assessment of human evaluators is strongly correlated with each one of our metrics: humans mostly picked low-prototypicality images as the worst examples and examples with significantly higherprototypicality as being the best. Somewhat surprisingly, there are some large, notable differences between metrics and datasets in their correspondence to human perception-e.g., for Pick Worst, the lowest percentile split of the priv metric does poorly (at 8%) whereas the next does extremely well (at 29%)-suggesting that further investigation is warranted. Because our five metrics for prototypicality are not perfectly correlated, there are likely to be many examples that are highly prototypical under one metric but not under another. To quantify the number and types of those differences we can try looking at their visual correlation in a scatter plot; doing so can be informative, as can be seen in FIG3 (a) where the easily-learned, yet fragile, examples of class "1" in MNIST models have high confidence but low adversarial robustness. To further quantify, we can also compute the Jaccard distance to assess the relative size of the intersections between different sets of prototypes and outliers; the rest of FIG3 shows the of doing so at the 25% threshold for Fashion-MNIST. The show substantial disagreement between metrics. To understand disagreements, we can consider examples that are prototypical in one metric but outliers in others, first combining the union of adv and ret prototypes into a single boundary metric, and the union of adv and ret prototypes into an ensemble metric, because of their high correlation. Memorized exceptions: Recalling the unusual dress-looking "shirt" of FIG1, and how it seemed to have been memorized with high confidence, we can intersect the top 25% prototypical ensemble images with the bottom-half outliers in both the boundary and priv metrics. For the Fashion-MNIST "shirt" class, this set-visually shown in FIG4 on the right-includes not only the dress-looking example but a number of other atypical "shirt" images, including some looking like shorts. Also apparent in the set are a number of T-shirt-like and pullover-like images, which are misleading, given the other output classes of Fashion-MNIST. For these sets, which are likely to include spurious, erroneously-labeled, and inherently ambiguous examples, we use the name memorized exceptions because they must be memorized as exceptions for models to have been able to reach very high confidence during training. 
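As an aside, the set-overlap measurement used above is simple to reproduce. The sketch below, which assumes per-example prototypicality scores where larger means more prototypical, computes the Jaccard distance between two metrics' top-25% prototype (or bottom-25% outlier) sets; the naming and sign convention are ours.

```python
import numpy as np

def jaccard_distance(scores_a, scores_b, frac=0.25, prototypes=True):
    """Jaccard distance between the prototype (or outlier) sets selected by two
    prototypicality metrics at a given threshold fraction.

    scores_a, scores_b: (N,) arrays, larger = more prototypical (assumed convention).
    """
    k = int(frac * len(scores_a))
    if prototypes:
        a, b = set(np.argsort(scores_a)[-k:]), set(np.argsort(scores_b)[-k:])
    else:  # bottom-frac outlier sets instead
        a, b = set(np.argsort(scores_a)[:k]), set(np.argsort(scores_b)[:k])
    return 1.0 - len(a & b) / len(a | b)
```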
Similarly, FIG5 (a) shows a large (green) cluster of highly ambiguous boot-like sneakers, which appear indistinguishable from a cluster of memorized exceptions in the Fashion-MNIST "ankle boot" class (see Appendix C).Uncommon submodes: On the other hand, the priv metric is based on differentially-private learning which ensures that no small group of examples can possibly be memorized: the privacy stems from adding noise and attenuating gradients in a manner that will mask the signal from rare examples during training. This suggests that we can find uncommon submodes of the examples in learning tasks by intersecting the bottom-most outlier examples on the priv metric with the union of top prototypes in the boundary and ensamble metrics. FIG5 (b) shows uncommon submodes discovered in MNIST using the 25% lowest outliers on priv and top 50% prototypes on other metrics. Notably, all of the "serif 1s" in the entire MNIST training set are found as a submode. Canonical prototypes: Finally, we can simply consider the intersection of the sets of all the topmost prototypical examples in all of our metrics. The differences between our metrics should ensure that this intersection is free of spurious or misleading examples; yet, our experiments and human study suggest the set will provide good coverage. Hence, we call this set canonical prototypes. (c) Canonical prototypes in the CIFAR-10 "airplane" class. Figure 5: Our metrics' prototype and outlier sets reveal interesting examples, which can be clustered. To further aid interpretability in Figures 4 and 5, we perform a combination of dimensionality reduction and clustering. Concretely, we apply t-SNE on the pixel space (for MNIST and Fashion-MNIST) or ResNetv2 feature space (for CIFAR10) to project the example sets into two dimensions. We then cluster this two-dimensional data using HDBSCAN BID8, a hierarchical and density-based clustering algorithm which does not try to assign all points to clusters-which not only can improve clusters but also identify spurious data. We believe that other types of data projection and clustering could also be usefully applied to our metrics, and offer significant insight into ML datasets. (See Appendix G for this section's figures shown larger.) By using prototype metrics, we can improve models' sample complexity, accuracy, or robustness. We perform two experiments on the three datasets to investigate whether it is better to train on outliers or on prototypes-exploring the "train on hard data" vs. "train on easy data" question of curriculum learning , which is discussed in Appendix A. To begin, we order all training data according to its prototypicality as measured by our adv metric.2 First, we experiment with training on splits of 5, 000 training examples (approximately 10% of the training data) chosen by taking the k-th most prototypical example to the (k+5000)-th most prototypical. As shown in FIG6, we find that the split that yields the most accurate model varies substantially across the datasets and tasks. On MNIST, training on the least prototypical examples gives the highest accuracy; conversely, on CIFAR-10, training on nearly-the-most prototypical examples gives the highest accuracy. We conjecture this is due to the dataset complexity: because nearly all of MNIST is very easy, it makes sense to train on the hardest, most outlier examples. However, because CIFAR-10 is very difficult, training on very prototypical examples is better. 
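The split-selection step of the training experiment just described is easy to state in code. The following sketch assumes one prototypicality score per training example (larger = more prototypical, e.g., the adversarial distance of the adv metric) and is only illustrative of how the k-th through (k+5000)-th most prototypical examples are picked; the function name is ours.

```python
import numpy as np

def prototypicality_split(X, y, proto_scores, k, split_size=5000):
    """Return the k-th through (k + split_size)-th most prototypical examples.

    proto_scores: (N,) array, larger = more prototypical (e.g., adversarial
    distance under the adv metric).  X, y are the training inputs and labels.
    """
    order = np.argsort(-proto_scores)        # most prototypical first
    idx = order[k:k + split_size]
    return X[idx], y[idx]

# Sweeping k from 0 (most prototypical split) to N - split_size (most outlier
# split) and retraining a fixed model on each slice reproduces the protocol
# described above.
```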
Notably, many of the CIFAR-10 and Fashion-MNIST outliers appear to be inherently misleading or ambiguous examples, and several are simply erroneously labeled. We find that about 10% of the first 5,000 outliers meet our definition of memorized exceptions. Also, we find that inserting 10% label noise causes model accuracy to decrease by about 10%, regardless of the split trained oni.e., that to achieve high accuracy on small training data erroneous and misleading outliers must be removed-and explaining the low accuracy shown on the left in the graph of Figures 6(b) and 6(c).The prior experiment assumes the amount of data is fixed, and we must choose which percentile of data to use. Now, we examine what the best strategy is to apply if the amount of training data is not fixed, and ask: is it better to train on the k-most or k-least prototypical examples? Again, we find the answer depends on the dataset. On MNIST, training on the k-least prototypical examples is always better. However, on Fashion-MNIST and CIFAR-10, training on the prototypical examples is better when k is small, but as soon as we begin to collect more than roughly 10, 000 examples for Fashion-MNIST or 20, 000 for CIFAR-10, training on the outliers begins to give more accurate models. However, we find that training only on the most prototypical examples found in the training data gives extremely high test accuracy on the prototypical examples found in the test data.. On MNIST (a) training on the outliers gives strictly higher accuracy. However, on Fashion-MNIST (b) and CIFAR-10 (c), given only a limited amount of training data, prototype-training is better; but if more data is available outlier-training becomes superior. At least for some datasets, heavy data augmentation and regularization techniques like dropout can partially recover the utility loss that from training only with more protoypical examples; the dotted line in subfigure (a) shows the of one such experiment for MNIST.The second row of subfigures (d), (e), and (f) plots the model accuracy ing from training on a fractional slice of prototypicality-ranked training examples, with the fraction increased from small percentages (on the left) to the entire training data (extreme right). One solid line (blue) shows the accuracy ing from training on training data fractions that are becoming increasingly more prototypical (i.e., the left-most part of the blue line from training on the most outlier fraction of training examples); conversely, the other solid line (yellow) shows accuracy from training on an increasing, less-and-less prototypical fraction of training data, from left to right. Finally, the dotted line (yellow) in subfigures (d), (e), and (f) shows the models' test accuracy on only those examples in the test data that have been deemed prototypical; as can be seen, even training only on a small fraction of the most prototypical training examples can suffice to achieve high accuracy on prototypical test examples. While training exclusively on prototypes often gives inferior accuracy compared to training on the outliers, the former has the benefit of obtaining models with simpler decision boundaries. Thus, it is natural to ask whether training on prototypes gives models that are more robust to adversarial examples. 
In fact, prior work has found that discarding outliers from the training data can help with both classifying and detecting adversarial examples .To show that simpler boundaries can lead to more robustness, we train models on fixed-sized subsets of the data where we vary the prototypicality of the 5, 000 training points included in the subset. For each model, we then compute the mean ∞ adversarial distance needed to find adversarial examples. As shown in FIG0 of Appendix F, the Fashion-MNIST and CIFAR-10 models that are trained on prototypical examples are more robust to adversarial examples than those trained on a slice of training data that is mostly made up of outliers. However, these models trained on a slice of 5, 000 prototypical examples remain comparably robust to a baseline model trained on the entire data. This paper explores prototypes: starting with the properties we would like them to satisfy, then evaluating metrics for computing them, and discussing how we can utilize them during training. The five metrics we study all are highly correlated, and capture human intuition behind what is meant by "prototypical". When the metrics disagree on the prototypicality of an example, we can often learn something interesting about that example (e.g., that it is from a rare submode of a class). Further, we explore the many reasons to utalize prototypes: we find that models trained on prototypes often have simpler decision boundaries and are thus more adversarially robust. However, training only on prototypes often yields inferior accuracy compared to training on outliers. We believe that further exploring metrics for identifying prototypes and developing methods for using them during training is an important area of future work, and hope that our analysis will be useful towards that end goal. Prototypes. At least since the work of which was based on intra-and inter-concept similarity, prototypes have been examined using several metrics derived from the intuitive notion that one could find "quintessential observations that best represent clusters in a dataset" . Several more formal variants of this definition were proposed in the literature-along with corresponding techniques for finding prototypes. select prototypes according to their maximum mean discrepancy with the data, which assumes the existence of an appropriate kernel for the data of interest. circumvent this limitation by prepending classifiers with an autoencoder projecting the input data on a manifold of reduced dimensionality. A prototype layer, which serves as the classifier's input, is then trained to minimize the distance between inputs and a set of prototypes on this manifold. While this method improves interpretability by ensuring that prototypes are central to the classifier's logic, it does require that one modify the model's architecture. Instead, metrics considered in our manuscript all operate on existing architectures. proposed to use distance to the boundary-approximately measured with an adversarial example algorithm-as a proxy for prototypicality. Other interpretability approaches. Prototypes enable interpretability because they provide a subset of examples that summarize the original dataset and best explain a particular decision made at test time BID5. Other approaches like saliency maps instead synthetize new inputs to visualize what a neural network has learned. This is typically done by gradient descent with respect to the input space . 
Because they rely on model gradients, saliency maps can be fragile and only locally applicable .Beyond interpretability, prototypes are also motivated by additional use cases, some of which we discussed in Section 4. Next, we review related work in two of these applications: namely, curriculum learning and reducing sample complexity. Curriculum learning. Based on the observation that the order in which training data is presented to the model can improve performance (e.g., convergence) of optimization during learning and circumvent limitations of the dataset (e.g., data imbalance or noisy labels), curriculum learning seeks to find the best order in which to analyze training data BID3 ). This first effort further hypothesizes that easy-to-classify samples should be presented early in training while complex samples gradually inserted as learning progresses. While BID3 assumed the existence of hard-coded curriculum labels in the dataset, sample an order for the training set by assigning each point a sampling probability proportional to its leverage score-the distance between the point and a linear model fitted to the whole data. Instead, we use metrics that also apply to data that cannot be modelled linearly. The curriculum may also be generated online during training, so as to take into account progress made by the learner . For instance, train an auxiliary LSTM model to predict the loss of training samples, which they use to sample a subset of training points analyzed by the learner at each training iteration. have an auxiliary model predict the curriculum. This auxiliary model is trained using the learner's current feature representation of a smaller holdout set of data for which ground-truth curriculum is known. However, as reported in our experiments, training on easy samples is beneficial when the dataset is noisy, whereas training on hard examples is on the contrary more effective when data is clean. These observations oppose self-paced learning with hard example mining . Several strategies have been proposed to perform better in both settings. Assuming the existence of a holdout set as well, assign a weight to each training example that characterizes the alignment of both the logits and gradients of the learner on training and heldout data. propose to train on points with high prediction variance or whose average prediction is close from the decision threshold. Both the variance and average are estimated by analyzing a sliding window of the history of prediction probabilities throughout training epochs. Sample complexity. Prototypes of a given task share some intuition with the notion of coresets BID1; BID2 ) because both prototypes and coresets describe the dataset in a more compact way-by returning a (potentially weighted) subset of the original dataset. For instance, clustering algorithms may rely on both prototypes BID4 or coresets BID4 to cope with the high dimensionality of a task. However, prototypes and coresets differ in essential ways. In particular, coresets are defined according to a metric of interest (e.g., the loss that one would like to minimize during training) whereas prototypes are independent of any machine-learning aspects as indicated in our list of desirable properties for prototypicality metrics from Section 2.Taking a different approach, The following are training examples from MNIST, FashionMNIST, and CIFAR10 that are identified as most outlier (left of the red bar) or prototypical (right of the green bar). Images are presented in groups by class. 
Each row in these groups corresponds to one of the five metrics in Section 2.1. All MNIST results were obtained with a CNN made up of two convolutional layers (each with a kernel size of 5x5 and followed by a 2x2 max-pooling layer) and a fully-connected layer of 256 units. It was trained with Adam at a learning rate of 10^-3 with a 10^-3 decay. When an ensemble of models was needed (e.g., for the agr metric), the members were obtained by using different random initializations. The Fashion-MNIST model architecture is identical to the one used for MNIST, and it was trained with the same optimizer and hyper-parameters. All CIFAR results were obtained with a ResNetv2 trained on batches of 32 points with the Adam optimizer for 100 epochs, at an initial learning rate of 10^-3 decayed down to 10^-4 after 80 epochs. We adapted the following data augmentation and training script: https://raw.githubusercontent.com/keras-team/keras/master/examples/cifar10_resnet.py. When an ensemble of models was needed (e.g., for the agr metric), the members were again obtained by using different random initializations. The three matrices that follow respectively report the accuracy of MNIST, Fashion-MNIST, and CIFAR-10 models learned on training examples with varying degrees of prototypicality and evaluated on test examples also with varying degrees of prototypicality. Specifically, the model used to compute cell (i, j) of a matrix is learned on training data that is ranked in the i-th percentile of adv prototypicality, and is then evaluated on the test examples whose adv prototypicality falls under the j-th prototypicality percentile. For all datasets, these matrices show that performing well on prototypes is possible even when the model is trained on outliers. For MNIST, this shows again that training on outliers provides better performance across the range of test data (from outliers to prototypes). For Fashion-MNIST and CIFAR-10, the best performance is achieved by training on examples that are neither prototypical nor outliers. [Accuracy matrices omitted: rows index the prototypicality percentile of the training split in steps of 4%, columns the prototypicality percentile of the test split; the visible MNIST values range from roughly 0.81 on the most-outlier cells to 1.00 on the most-prototypical cells.]
We can identify prototypical and outlier examples in machine learning that are quantifiably very different, and make use of them to improve many aspects of neural networks.
1,394
scitldr
In this work, we propose the Sparse Deep Scattering Croisé Network (SDCSN) a novel architecture based on the Deep Scattering Network (DSN). The DSN is achieved by cascading wavelet transform convolutions with a complex modulus and a time-invariant operator. We extend this work by first, crossing multiple wavelet family transforms to increase the feature diversity while avoiding any learning. Thus providing a more informative latent representation and benefit from the development of highly specialized wavelet filters over the last decades. Beside, by combining all the different wavelet representations, we reduce the amount of prior information needed regarding the signals at hand. Secondly, we develop an optimal thresholding strategy for over-complete filter banks that regularizes the network and controls instabilities such as inherent non-stationary noise in the signal. Our systematic and principled solution sparsifies the latent representation of the network by acting as a local mask distinguishing between activity and noise. Thus, we propose to enhance the DSN by increasing the variance of the scattering coefficients representation as well as improve its robustness with respect to non-stationary noise. We show that our new approach is more robust and outperforms the DSN on a bird detection task. Modern Machine Learning focuses on developing algorithms to tackle natural machine perception tasks such as speech recognition, computer vision, recommendation among others. Historically, some of the proposed models were based on well-justified mathematical tools from signal processing such as Fourier analysis. Hand-crafted features were then computed based on those tools and a classifier was trained supervised for the task of interest. However, such theory-guided approaches have become almost obsolete with the growth of computational power and the advent of high-capacity models. As such, over the past decade the standard solution evolved around deep neural networks (DNNs). While providing state-of-the-art performance on many benchmarks, at least two pernicious problems still plague DNNs: First, the absence of stability in the DNN's input-output mapping. This has famously led to adversarial attacks where small perturbations of the input lead to dramatically different outputs. In addition, this lack of control manifests in the detection thresholds (i.e: ReLU bias) of DNNs, rendering them prone to instabilities when their inputs exhibit non-stationary noise and discontinuities. Second, when inputs have low SNR, or classes are unbalanced, the stability of DNNs is cantilevered. A common approach to tackle this difficulty is to increase both the size of the training set and the number of parameters of the network ing in a longer training time and a costly labeling process. In order to alleviate these issues we propose the use of the DSN by creating a new non-linearity based on continuous wavelet thresholding. Thus our model, inherits the mathematical guarantees intrinsic to the DSN regarding the stability, and improves the control via wavelet thresholding method. Then, in order to produce time-frequency representation that are not biased toward a single wavelet family, we propose to combine diverse wavelet families throughout the network. Increasing the variability of the scattering coefficient, we improve the linearization capability of the DSN and reduce the need of an expert knowledge regarding the choice of specific filter bank with respect to each input signal. 
The paper is organized as follows: 1.1 and 1.2 are devoted to the related work and contribution of the paper, the section 2 shows the theoretical , where 2.1 is dedicated to the network architecture and its properties, and 2.2 provides the milestone of our thresholding method, then section 2.3 shows the characterization, via latent representation, of our network on different events by on the Freefield1010 1 audio scenes dataset. Finally, we evaluate our architecture and compare it to the DSN on a bird detection task are shown in 2.4. The appendix in divided into three parts, Appendix A provides both, the pre-requisite and details about building the wavelets dictionary to create our architecture; Appendix B shows additional on the sparsity of the SDCSN latent representations; Appendix C shows mathematical details and proofs for the over-complete thresholding non-linearity. We extend the Deep Scattering Network, first developed in BID29 and first successfully applied in BID9; Andén & Mallat. The Scattering Network (SN) is a cascade of linear and non-linear operators on the input signal. The linear transformation is a wavelet transform, and the nonlinear transformation is a complex modulus. For each layer, the scattering coefficients are computed according to the application of the scaling function on the representation. This network is stable (Lipschitz-continuous) and suitable for machine learning tasks as it removes spatiotemporal nuisances by building space/time-invariant features. The translation invariant property is provided by the scaling function that acts as an averaging operator on each layer of the transform leading to an exponential decay of the scattering coefficients. Since the continuous wavelet transform increases the number of features, the complex modulus is used as its contractive property reduces the variance of the projected space BID30. Two extensions of this architecture have been already developed: the Joint Scattering Network BID2 and the time-chromafrequency scattering BID27. They introduced extra parameterization of the wavelets coefficient in the second layer of the network to capture frequency correlations allowing the scattering coefficient to represent the transient structure of harmonic sounds. Thresholding in the wavelet domain remains a powerful approach for signal denoising as it exploits the edge-detector property of wavelets, providing a sparse representation of the input signal in the time-frequency plane. This property is characterized for each wavelet by its vanishing moments expressing the orthogonality of the wavelet with respect to a given order of smoothness in the input signal. We base our approach on the theories relating the thresholding of signal in the wavelet basis and the evaluation of the best basis. Both are realized via a risk evaluation that arose from different perspectives: statistical signal processing BID11; BID23, information theory BID12;; BID13, and signal processing BID31 BID28. However, to the best of our knowledge, there is no thresholding method developed for continuous wavelet transform. We will thus extend the work of on thresholding over-complete dictionnary in the case of TIDWT and Biorthogonal-DWT to build a risk evaluation in the case of over-complete continuous wavelet transform. 
As opposed to the chroma-time-frequency scattering, using one wavelet family filter bank but deriving many symmetries of the latter, we propose to use multiple wavelet families having complementary properties (described in A.2) within a unified network yielding cross connections. It helps the architecture to provide higher dimensional and uncorrelated features, reducing the need of an expert to hand-choose the DSN wavelet filters, and also enables any downstream classifier to have greater orbit learning capacity. Therefore our architecture, the Deep Croisé Scattering Network (DCSN), leverages the simultaneous decomposition of complementary filter-banks as well as their crossed decomposition, hence the term "croisé." Then, endowing this architecture with our novel thresholding operator, we build the SDSCN providing new features based on the reconstruction risk of each wavelet dictionary. This method based on empirical risk minimization will bring several advantages. First, it enables us to insure and control the stability of the input-output mapping via thresholding the coefficients. Second, the model has sparse latent representations that ease the learning of decision boundaries as well as increases generalization performance. Finally, the risk associated with each wavelet family provides a characterization of the time-frequency components of the analyzed signal, that, when combines with scattering features enhances the high linearization capacity of DSN. As opposed to ReLU-based nonlinearities that impose sparsity by thresholding coefficients based on a fixed learned scalar threshold, we propose an input-dependant locally adaptive thresholding method. Therefore, our contribution leading to the Sparse Deep Croisé Network is twofold:• Deep Croisé Scattering Network: a natural extension of the DSN allowing the use of multiple wavelet families and their crossed representations.• Derivation of optimal non-orthogonal thresholding for overcomplete dictionaries: empirical risk minimization leads to an analytical solution for the denoising mask, allowing deterministic per-input solutions, and endowing the DSN with sparse latent representations. The Deep Croisé Scattering Network (DCSN) is a tree architecture (2 layers of such model are shown in FIG0) based on the Deep Scattering Network (DSN). The first layer of a scattering transform corresponding to standard scalogram is now replaced with a 3D tensor by adding the wavelet family dimension. Hence, it can be seen as a stacked version of multiple scalograms, one per wavelet family. The second layer of the DCSN brings inter and intra wavelet family decompositions. In fact, each wavelet family of the second layer will be applied on all the first layer scalograms, the same process is successively applied for building deeper model. We first proceed by describing the formalism of the DCSN ∀x ∈ L 2, details on wavelets and filter bank design are provided in Appendix A. We denote by DISPLAYFORM0 the collection of B mother wavelets for the first layer. We also denote by, DISPLAYFORM1 the resolution coefficients for this first layer with J representing the number of octave to decompose and Q the quality coefficients a.k.a the number of wavelets per octave. Based on those configuration coefficients, the filter-banks can be derived by scaling of the mother wavelets through the resolution coefficients. 
We thus denote the filter-bank creation operator W by DISPLAYFORM2 To avoid redundant notation, we thus denote this filter-bank as W (1,b) with implicit parameters Ψ and Λ. We now developed of the needed tools to explicit define the filter layer of the DCSN. We denote by U the output of this first layer and as previously mentioned it consist of a 3D tensor of shape (B, J Q, N ) with N the length of the input signal denoted as x. We omit here boundary conditions, sub-sampling, and consider a constant shape of N throughout the representations. We thus obtain DISPLAYFORM3 where |.| operator corresponds to an element-wise complex modulus application. We define the convolution operation between those two objects as DISPLAYFORM4 From this, the second layer we present below will introduce the cross family representations. First, we denote by λ and Ψ the internal parameters of layer 2 analogous to the first layer definition. We now denote the second layer representation as DISPLAYFORM5. This object is a 5D tensor introduced 2 extra dimension on the previous tensor shape. In fact, is it defined as DISPLAYFORM6 from this, we denote by croisé representation all the terms in DISPLAYFORM7. Based on those notations it is straightforward to extend those representation to layer as U b1→···→b j2,...,j[x]. We however limit ourselves in practice to 2 layers as usually done with the standard scattering networks. Given those representations, the scattering coefficients, the features per say, are defined as follows: S b1→···→b j2,...,j DISPLAYFORM8 with φ is a scaling function. This application of a low frequency band-pass filter allows for symmetries invariances, inversely proportional to the cut-off frequency of ψ. We present an illustration of the network computation in Fig. 2.As can be seen in the proposed example, while the first layer provides time-frequency information, the second layer characterizes transients as demonstrated in. With this extended framework, we now dive into the problem of thresholding over complete basis, cases where the quality factor, Q, is greater than 1 which are in practice needed to bring enough frequency precision. Sparsity in the latent representation of connectivists models have been praised many times Narang et al. FORMULA0; BID24; BID34. It represents the fitness of the internal parameters of a model needed with only few nonzeros coefficients to perform the task at hand. Furthermore, sparsity is synonym of simpler models as directly related with the Minimum Description Length Dhillon et al. FORMULA0 guaranteeing increased generalization performances. In addition of those concepts, thresholding brings in practice robustness to noise. In particular, as we will demonstrate, even in large scale configuration, non-stationnary noise can not be completely handled by common ML approaches on their own. To do so, we propose to extend standard wavelet thresholding techniques for non-orthogonal filter-banks. Our approach aims at minimizing the reconstruction error of the thresholded signal in the wavelet domain via an oracle decision. Through this formulation, we are able to derive an analytical thresholding based on the input representation and the filter-bank redundancy. We now propose to derive this scheme and then provide interpretation on its underlying tests and computations. 
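Before turning to the thresholding derivation, the construction above can be summarized in a short sketch. It is only illustrative: sampling the wavelets directly in the Fourier domain, the particular scale progression, and the omission of sub-sampling and boundary handling are simplifying assumptions, and all function names are ours rather than the paper's.

```python
import numpy as np

def filter_bank_freq(mother_hat, J, Q, N):
    """Sample a Fourier-domain mother wavelet (callable omega -> complex) at
    scales lambda in a geometric progression to obtain a (J*Q, N) filter bank."""
    omega = 2 * np.pi * np.fft.fftfreq(N)
    scales = 2.0 ** (np.arange(1, J * Q + 1) / Q)
    return np.stack([mother_hat(lam * omega) for lam in scales])

def modulus_layer(U, bank_freq):
    """|U * psi_lambda| for every filter, via FFT convolution.
    U: (..., N) signal(s); bank_freq: (L, N) Fourier-domain filters."""
    Uf = np.fft.fft(U, axis=-1)
    return np.abs(np.fft.ifft(Uf[..., None, :] * bank_freq, axis=-1))

def croise_scattering(x, banks1, banks2, phi_freq):
    """Two-layer croisé scattering: each second-layer family is applied to every
    first-layer family's scalogram, then low-passed by the scaling function."""
    U1 = [modulus_layer(x, W1) for W1 in banks1]      # one (L1, N) tensor per family
    S2 = []
    for U in U1:                                      # family b1
        for W2 in banks2:                             # family b2 (croisé paths)
            U2 = modulus_layer(U, W2)                 # (L1, L2, N)
            S2.append(np.real(np.fft.ifft(
                np.fft.fft(U2, axis=-1) * phi_freq, axis=-1)))
    return U1, S2
```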
As the decomposition is not orthogonal, the first point to be tackle is the difference of the L 2 approximation errors in between the original basis and the over-complete wavelet basis as Parseval equality does not hold. Beside, the transpose of the change of basis matrix is not anymore the inverse transform. Berknet et. al. in proposed the use of the Moore pseudo inverse to build the reconstruction dictionary. In the following we develop an upper bound to the ideal risk such that we benefit an explicit equation for the thresholding operator that is adapted to any over-complete transformation. Let's assume the observed signal, denoted by y, is corrupted with white noise such that y = x+ where x is the signal of interest and ∼ N (0, σ 2). We now denote by W ∈ C N (J * Q+1)×n the matrix composed by the the wavelets at each time and scale (i.e: explicit convolution version of W) such that ∀x ∈ R N, W x is the wavelet transform. We denote by W † ∈ C n×n(J * Q+1) the generalized inverse such that DISPLAYFORM0 Because of the correlation implied by the redundant information contained in the filter banks, the ideal risk is now dependent on all the possible pairs in the frequency axis. However,the independence in time remains. Since this optimization problem does not have an analytic expression, we propose the following upper bound explicitly derived in Appendix C.1. The upper-bound on the optimal risk is denoted by R up and defined as, DISPLAYFORM1 where we denote by R U up the upper bound error term corresponding to unselected coefficients: DISPLAYFORM2 and by R S up the upper bound error term corresponding to the selected coefficients: DISPLAYFORM3 Now, one way to evaluate this upper-bound is to assume an orthogonal basis, and to compare it with the optimal risk in the orthogonal case which leads to the following proposition. Proposition 1. Assuming orthogonal filter matrix W O, the upper bound ideal risk coincides with the orthogonal ideal risk: DISPLAYFORM4 the proof is derived in C.2 In order to apply the ideal risk derive, ones needs an oracle decision regarding the signal of interest. In real application, the signal of interest x is unknown. We thus propose the following empirical risk: DISPLAYFORM5 This risk corresponds to the empirical version of the ideal risk where the observed signal y is evaluate in the left part of the minimization function. In order to compare this empirical risk with the ideal version, we propose their comparison the following extreme cases: Proposition 2. In the case where D S = I, the empirical risk coincides with the upper bound ideal risk:R (y, W) = R up (x, W). Proposition 3. In the case where D U = I, the following bound shows the distance between the empirical risk and the upper bound ideal risk: DISPLAYFORM6 where, DISPLAYFORM7 Refer to C.3 for proofs. As the empirical risk introduces the noise in the left part of the risk expression, this term represents the propagation of this noise throughout the decomposition. We provided a generic development of the risk minimization process. When applied to a particular path of the scattering network, it is denoted as, DISPLAYFORM8 with DISPLAYFORM9 ×N and R representing the risk minimization operator based on a given representation and the associated filter-bank. We define by T the tresholding operator minimizing the the empirical risk, DISPLAYFORM0 In particular when applied to a specific path of the tree, this thresholding operator is denoted as DISPLAYFORM1 We provide in Fig. 
2 an illustration showing the effect of this thresholding operator at each layer of the network. We demonstrated in the last section the importance of the risk in the optimal thresholding optimization. The empirical version of this risk represents the ability of the induced representation to perform efficient denoising and signal reconstruction. This concept is identical to that of function fitness when considering the denoised ideal signal x and the thresholded reconstruction. As a result, it is clear that the optimal basis given a signal is the one with minimal empirical risk. We thus propose here a simple visualization and discussion of this concept and motivate the need to use the optimal empirical risk as part of the features characterizing the input signal y along all the representations. In Fig. 3, we provide two samples from the dataset corresponding to very different acoustic scenes. The one on the right represents transients, while the one on the left is a mixture of natural sounds. Risk-based analysis of the filter-banks' fitness provides information consistent with the specificities of the selected wavelets. In fact, the Paul family is known to be well adapted to transient characterization via its high time localization. On the other hand, the Morlet wavelet is optimal in terms of the Heisenberg principle and thus suitable for natural sounds such as bird songs, speech, and music. We propose to validate the two contributions over a large-scale audio dataset. As we will demonstrate below, our method, with each contribution taken independently as well as jointly, leads to a significant increase in final performance. We compare our results against the standard SN. In all cases, the scattering coefficients are fed into a random forest classifier (Breiman). The task can be formally defined as a binary classification task, where each label corresponds to the presence or absence of birds. Each signal is 10 sec. long and has been sampled at 44.1kHz. The evaluation of the results is performed via the Area Under Curve metric on 33% of the data. The experiments are repeated 50 times. The total audio length of this dataset is thus slightly more than 11 hours of audio recordings. For comparison, it is about 10× larger than CIFAR10 in terms of the number of scalar values in the dataset. The results comparing our algorithm to the DSN with each of the wavelet families used in both the SDCSN and DCSN are in TAB0. Both the SDCSN and DCSN outperform any DSN by at least 20% accuracy, proving the enhancement of the scattering features by including the crossed latent representations. For all the architectures, the octave and quality parameters of the layers are J1 = 5, Q1 = 8, J2 = 4, Q2 = 1. As the features of interest are bird songs, only high-frequency content requires high resolution; the thresholding is applied per window of 2^16 samples, representing ≈ 1.5 sec. When considering different dataset sizes, the impact of denoising can be analyzed in detail in Fig. 4a. As the dataset becomes smaller, the thresholding operator removing the noise perturbations becomes mandatory. With infinite data and a very high capacity classifier, a priori denoising becomes redundant, as it is possible for the classifier to leverage the variance of the data to correctly adjust the hyperplanes delimiting the class regions. However, such learning is not possible with a small-scale dataset, hence requiring a priori and deterministic denoising.
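For completeness, a crude stand-in for the per-window thresholding step is sketched below. It does not reproduce the closed-form mask derived from the over-complete empirical risk above; instead it applies the classical Donoho-Johnstone universal hard threshold with a median-absolute-deviation noise estimate, which is only exact for orthonormal bases. Treat it purely as an illustration of where the operator sits in the pipeline; the function name and defaults are ours.

```python
import numpy as np

def universal_hard_threshold(Wy, sigma=None):
    """Stand-in thresholding mask for one window of wavelet coefficients Wy.

    The paper's mask minimizes the over-complete empirical risk; here we fall
    back to tau = sigma * sqrt(2 log N) (orthonormal-case rule), with sigma
    estimated from the median absolute deviation of the coefficients; both
    choices are simplifying assumptions.
    """
    mag = np.abs(Wy)
    if sigma is None:
        sigma = np.median(mag) / 0.6745   # crude noise estimate (usually taken
                                          # on the finest scale only)
    tau = sigma * np.sqrt(2.0 * np.log(Wy.size))
    mask = mag >= tau
    return np.where(mask, Wy, 0.0), mask
```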
Another experiment highlighting the need for denoising in practical applications comes from the possibility of having different noise levels between the training set and the test set. Thus we propose in Fig. 4b the following experiment. For both the SDCSN and DCSN models, training is done on the denoised dataset. Then, the testing phase is performed on the raw dataset. Clearly, performance degrades strongly for the DCSN, showing the inability of the classifier, even after a standard scattering network transform, to be robust to noise-level changes between training and testing. This shows empirically the need for a thresholding non-linearity to make scattering networks more robust. We now propose to visualize the sparsity induced by our thresholding technique (FIG0). To do so we present histograms of the representation with and without thresholding. Greater sparsity comes coupled with better performance and is closely related to better linearization capacity, which greatly benefits the classifier when the dataset is small (Fig. 4a). We presented an extension of the scattering network so that one can leverage multiple wavelet families simultaneously. Via a specific topology, cross-family representations are computed that carry crucial information and, as we demonstrated experimentally, allow our model to significantly outperform standard scattering networks. We then motivated and proposed an analytical derivation of an optimal, input-adaptive thresholding for over-complete bases, providing both greater sparsity in the representation and a measure of filter-bank fitness. Again, we provided experimental validation of our thresholding technique, demonstrating the robustness implied by such a non-linearity. Finally, the ability to perform active denoising has been shown to be crucial, as we demonstrated that even in a large-scale setting, standard machine-learning approaches coupled with the SN fail to discard non-stationary noise. This, coupled with the denoising ability of our approach, should give real-world applications the stability needed for consistent prediction and control. Among the possible extensions is adapting the technique to convolutional neural networks such that it provides robustness with respect to adversarial attacks. Furthermore, a joint scattering-and-DNN architecture will inherit the benefits of our technique, as our layers are the ones closest to the input; hence, denoising will benefit the inner, unconstrained standard DNN layers. Finally, it is possible to perform more consistent best-basis selection à la maxout networks. In fact, our thresholding technique can be linked to an optimized ReLU-based thresholding. In this scheme, applying best-basis selection based on the empirical risk would thus become equivalent to the pooling operator of a maxout network.
Withal, the continuous wavelets have an explicit formulation and build an over-complete dictionary when successively scaled. In this work, we will focus on the continuous wavelets as they provide a more complete tool for analysis of signals. In order to perform a time-frequency transform of a signal, we first build a filter bank based on the mother wavelet. This wavelet is names the mother wavelet since it will be dilated and translated in order to create the filters that will constitute the filter bank. Notice that wavelets have a constant-Q property, thereby the ratio bandwidth to center frequency of the children wavelets are identical to the one of the mother. Then, the more the wavelet atom is high frequency the more it will be localized in time. The usual dilation parameters follows a geometric progression and belongs to the following set: DISPLAYFORM1. Where the integers J and Q denote respectively the number of octaves, and the number of wavelets per octave. In order to develop a systematic and general principle to develop a filter bank for any wavelet family, we will consider the weighted version of the geometric progression mentioned above, that is: DISPLAYFORM2. In fact, the implementation of wavelet filter bank can be delicate since the mother wavelet has to be define at a proper center frequency such that no artifact or redundant information will appear in the final representation. Thus, in the section A.3 we propose a principled approach that allows the computation of the filter bank of any continuous wavelet. Beside, this re-normalized scaled is crucial to the comparison between different continuous wavelet. Having selected a geometric progression ensemble, the dilated version of the mother wavelet in the time are computed as follows: DISPLAYFORM3, and can be calculated in the Fourier domain as follows: DISPLAYFORM4 Notice that in practice the wavelets are computed in the Fourier domain as the wavelet transform will be based on a convolution operation which can be achieved with more efficiency. By construction the children wavelets have the same properties than the mother one. As a , in the Fourier domain:ψ λ = 0, ∀λ ∈ Λ. Thus, to create a filter bank that cover all the frequency support, one needs a function that captures the low frequencies contents. The function is called the scaling function and satisfies the following criteria: DISPLAYFORM5 Finally, we denote by W x, where W ∈ C N * (J * Q)×N is a block matrix such that each block corresponds to the filters at all scales for a given time. Also, we denote by S(W x)(λ, t) the reshape operator such that, DISPLAYFORM6 where ψ is the complex conjugate of ψ λ. Among the continuous wavelets, different selection of mother wavelet is possible. Each one posses different properties, such as bandwidth, center frequency. This section is dedicated to the development of the families that are important for the analysis of diverse signals. The Morlet wavelet FIG3 is built by modulating a complex exponential and a Gaussian window defined in the time domain by, DISPLAYFORM0 where ω 0 defines the frequency plane. In the frequency domain, we denote it byψ M (t), DISPLAYFORM1 thus, it is clear that ω 0 defines the center frequency of the mother wavelet. With associated frequency center and standard deviation denoted respectively by ω λi c and ∆ λi ω, ∀j ∈ {0, ..., J * Q − 1} are: DISPLAYFORM2 Notice that for the admissibility criteria ω 0 = 6, however one can impose that zeros-mean condition facilely in the Fourier domain. 
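As an illustration of the Fourier-domain construction described above, the following minimal numpy sketch builds a Morlet filter bank over the geometric scale set. It assumes the standard approximate Morlet form without the small correction term that enforces exact zero mean (that condition is instead imposed by zeroing the DC bin, as suggested in the text), and it uses a simple choice of α that places the mother wavelet near the Nyquist frequency; the principled placement rule of Section A.3 and the low-pass scaling function are omitted.

```python
import numpy as np

def morlet_hat(omega, omega0=6.0, sigma=1.0):
    """Approximate analytic Morlet atom in the Fourier domain, centered at omega0."""
    return np.exp(-0.5 * (sigma * (omega - omega0)) ** 2)

def morlet_filter_bank(n_samples, J, Q, omega0=6.0, sigma=1.0):
    """Children psi_hat_lambda(omega) = psi_hat(lambda * omega) for the geometric
    scale set lambda_i = alpha * 2^(i/Q), i = 0, ..., J*Q - 1, where alpha is
    chosen so that the mother's centre frequency omega0 / alpha sits at the
    Nyquist frequency pi (a simple stand-in for the rule of Section A.3)."""
    alpha = omega0 / np.pi                          # place the mother near Nyquist
    omega = 2 * np.pi * np.fft.fftfreq(n_samples)   # discrete frequency grid
    scales = alpha * 2.0 ** (np.arange(J * Q) / Q)
    bank = np.stack([morlet_hat(lam * omega, omega0, sigma) for lam in scales])
    bank[:, 0] = 0.0                                # impose the zero-mean condition
    bank /= np.linalg.norm(bank, axis=1, keepdims=True)  # unit-norm atoms (admissibility)
    return bank, scales
```

Here ω0 = 6 is the nominal centre frequency of the analytic Morlet atom before dilation; the choice of α then maps it near the Nyquist frequency.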
Usually, this parameter is assign to the control of the center frequency of the mother wavelet, however in our case, we will see in the section A.3 a simple way to select a mother wavelet close enough to the Nyquist frequency such that all its contracted versions are properly defined. Then, we are able to vary the parameter ω 0 in order to have different support of Morlet wavelet. The Morlet wavelet, is optimal from the uncertainty principle point of view BID28. The uncertainty principle, when given a time-frequency atoms, is the area of the rectangle of its joint time-frequency resolution. In the case of wavelet, given the fact that their ratio bandwidth to center frequency is equal implies that this area is equal for the mother wavelets and its scaled versions. As a , because of its time-frequency versatility this wavelet is wildly used for biological signals such as bio-acoustic BID5, seismic traces Chopra* & Marfurt FORMULA0, EEG DAvanzoa et al. data. The Gammatone wavelet is a complex-valued wavelet that has been developed by BID36 via a transformation of the real-valued Gammatone auditory filter which provides a good approximation of the basilar membrane filter BID20. Because of its origin and properties, this wavelet has been successfully applied for classification of acoustic scene Lostanlen & Andén. The Gammatone wavelet FIG4 is defined in the time domain by, DISPLAYFORM0 and in the frequency domain by,ψ DISPLAYFORM1 A precise work on this wavelet achieved by V. Lostalnen in BID25 allows us to have an explicit formulation of the parameter σ such that the wavelet can be scaled while respecting the admissibility criteria: DISPLAYFORM2 where ξ is the center frequency and B is the bandwidth parameter. Notice that B = (1 − 2 DISPLAYFORM3 induce a quasi orthogonal filter bank. The associated frequency center and standard deviation denoted respectively by ω λi c and ∆ λi ω, ∀j ∈ {0, ..., J * Q − 1} are thus: DISPLAYFORM4 For this wavelet, thanks to the derivation in BID25, we can manually select for each order m the center frequency and bandwidth of the mother wavelet, which ease the filter bank design. An important property that is directly related to the auditory response system is the asymmetric envelop, thereby the Gammatone wavelet is not invariant to time reversal to the contrary of the Morlet wavelet that behaves as a Gaussian function. Thus, for task such as sound classifications, this wavelet provides an efficient filter that will be prone to perceive the sound attack's. Beside this suitable property for specific analysis, this wavelet is near optimal with respect to the uncertainty principle. Notice that, when m → ∞ it yields the Gabor wavelet BID11. Another interesting property of this wavelet is the causality, by taking into account only the previous and present information, there is no bias implied by some future information and thus it is suitable for real time signal analysis. The Paul wavelet is a complex-valued wavelet which is highly localized in time, thereby has a poor frequency resolution. Because of its precision in the time domain, this wavelet is an ideal candidate to perform transient detection. 
The Paul wavelet of order m FIG5 is defined in the time domain by, DISPLAYFORM0 and in the frequency domain by, DISPLAYFORM1 With associated frequency center and standard deviation denoted respectively by ω λi c and ∆ λi ω, ∀j ∈ {0, ..., J * Q − 1} are: DISPLAYFORM2 In BID35 they provide a clear and explicit formulation of some wavelet families applied the Paul wavelet in order to capture irregularly periodical variation in winds and sea surface temperatures over the tropical eastern Pacific Ocean. In addition, it directly represents the phase gradient from a single fringe pattern, yet providing a powerful tool in order to perform optical phase evaluation BID0. In the previous section, we defined and develop the properties of several families of wavelets. Thereby, we can now consider the creation of the filter bank by means of these wavelets. Notice that we propose a simple manner to obtain the filter bank in the Fourier domain. Two reasons are at the origin of this choice: first, the wavelet transform is often computed in the Fourier domain because of its efficiency, secondly the wavelets are derived according to geometric progression scales, these scales can directly be represented in the frequency domain, thereby it provided us a way of knowing the position of the wavelet. However, in the time domain they are not directly quantifiable. Our systematic framework is based on the intuitive consideration of the problem: we have to select a wavelet, named mother wavelet, that when contracted will create the filter bank derived from the selected scales. Assuming that the signals we will use are real valued, then the information represented in [−π, 0] and [0, π] are the same if extracted with a symmetric atom. Now, two kind of wavelets are considered, if the wavelet is complex-valued then its support is in [0, π], thus the choice of the mother wavelet should be around π and the contracted all along the frequency axis until the total number of octave are covered. In the case of real-valued wavelet, if the wavelet is not symmetric then it will capture other phase information in the frequency band: [−π, 0]. Still, the mother wavelet can be selected to be close to π for its positive part, and −π for its negative one. After defining the routine in order to select the mother wavelet, we propose a simple way to set the position of the mother wavelet. For each family, the center frequency and standard deviation are derived by finding α such that: DISPLAYFORM0 where λ 0 = α * 2 0/Q denotes the first wavelet position. Given this equation, one create the mother wavelet such that it avoids capturing elements after the Nyquist frequency and avert the spectral mirror effect and artifacts. Given the value of α for a wavelet family, one can derive the wavelet filter bank according to the Algorithm 1. The wavelet filter banks generated by this algorithm for the different families aforementioned can be seen in FIG6. Notice that for sake of clarity, the scaling functions are not shown in FIG6. Finally, in order to guarantee the admissibility criterion one has to verify that all the wavelets are zeros-mean and square norm one. The first one is easily imposed by setting the wavelet to be null around ω = 0 as it has been done to efficiently use the Morlet wavelet by Antoine et. al; BID3. 
Then, because of the Parseval equality and the energy conservation principle, the second one can be achieved by a re-normalization of each atom in the frequency domain. For the thresholding estimate, let U and S denote respectively the sets of selected and unselected wavelet coefficients, and define the selection operator D_U (with complement D_S) such that I = D_U + D_S. This estimate corresponds to a thresholding operation in the new basis followed by the inverse transform of this truncated representation.
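A minimal sketch of this truncated-reconstruction estimate is given below, assuming the analysis filter bank is applied in the Fourier domain and that its adjoint is used as an approximate inverse, which is exact only for a properly re-normalized tight frame; the names and the hard-threshold rule are illustrative.

```python
import numpy as np

def threshold_and_reconstruct(y, bank_hat, tau):
    """Hard-threshold the wavelet coefficients of y and map the kept (selected)
    coefficients back to the signal domain.  D_U is realised implicitly by the
    boolean mask `keep`; its complement plays the role of D_S."""
    w = np.fft.ifft(bank_hat * np.fft.fft(y), axis=-1)   # analysis: W y
    keep = np.abs(w) >= tau                               # selected set U
    w_kept = np.where(keep, w, 0.0)                       # D_U W y
    # Synthesis with the adjoint filters; for a re-normalized tight frame this
    # approximates the inverse transform of the truncated representation.
    y_hat = np.sum(np.conj(bank_hat) * np.fft.fft(w_kept, axis=-1), axis=0)
    return np.real(np.fft.ifft(y_hat)), keep
```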
We propose to enhance the Deep Scattering Network with a continuous wavelet thresholding scheme in order to improve the control and stability of any given machine learning pipeline.
1,395
scitldr
We propose a neural clustering model that jointly learns both latent features and how they cluster. Unlike similar methods our model does not require a predefined number of clusters. Using a supervised approach, we agglomerate latent features towards randomly sampled targets within the same space whilst progressively removing the targets until we are left with only targets which represent cluster centroids. To show the behavior of our model across different modalities we apply our model on both text and image data and very competitive on MNIST. Finally, we also provide against baseline models for fashion-MNIST, the 20 newsgroups dataset, and a Twitter dataset we ourselves create. Clustering is one of the fundamental problems of unsupervised learning. It involves the grouping of items into clusters such that items within the same cluster are more similar than items in different clusters. Crucially, the ability to do this often hinges upon learning latent features in the input data which can be used to differentiate items from each other in some feature space. Two key questions thus arise: How do we decide upon cluster membership? and How do we learn good representations of data in feature space?Spurred initially by studies into the division of animals into taxa BID31, cluster analysis matured as a field in the subsequent decades with the advent of various models. These included distribution-based models, such as Gaussian mixture models BID9; densitybased models, such as DBSCAN BID11; centroid-based models, such as k-means.2 and hierarchical models, including agglomerative BID29 and divisive models BID13.While the cluster analysis community has focused on the unsupervised learning of cluster membership, the deep learning community has a long history of unsupervised representation learning, yielding models such as variational autoencoders BID21, generative adversarial networks BID12, and vector space word models BID28.In this paper, we propose using noise as targets for agglomerative clustering (or NATAC). As in BID1 we begin by sampling points in features space called noise targets which we match with latent features. During training we progressively remove targets and thus agglomerate latent features around fewer and fewer target centroids using a simple heuristic. To tackle the instability of such training we augment our objective with an auxiliary loss which prevents the model from collapsing and helps it learn better representations. We explore the performance of our model across different modalities in Section 3.Recently, there have been several attempts at jointly learning both cluster membership and good representations using end-to-end differentiable methods. Similarly to us, BID37 use a policy to agglomerate points at each training step but they require a given number of clusters to stop agglomerating at. BID23 propose a form of supervised neural clustering which can then be used to cluster new data containing different categories. BID25 propose jointly learning representations and clusters by using a k-means style objective. 
BID36 introduce deep embedding clustering (DEC) which learns a mapping from data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective (however, as opposed to the hard assignment we use, they optimize based on soft assignment).Additionally, there have been unsupervised clustering methods using nonnegative low-rank approximations BID40 which perform competitively to current neural methods on datasets such as MNIST.Unlike all of the above papers, our method does not require a predefined number of clusters. We begin by discussing the use of noise as targets (NAT) introduced in BID1 which is crucial to the understanding of our model. We then describe the intuition behind our approach, then proceed to describe the mechanism itself. proposed a new form of unsupervised learning called "Noise as Targets" (NAT), aimed at extracting useful features from a set of objects.3 Roughly speaking, this approach selects a set of points, referred to as "targets" uniformly at random from the unit sphere. It then aims to find a mapping from the raw representations of objects to points on the unit sphere, so that these points are close to the corresponding targets; the correspondence (matching) between mapped representations and the target points is done so as to minimize the sum of the distances between the mapped points and the targets. The intuition behind the NAT approach is that the model learns to map raw inputs to the latent space in a way that both covers the entire latent space well, and that places "similar" inputs in neighborhoods of similar targets. More formally, the NAT task is to learn an encoder function f θ: X → Z from input space to a latent representation space. The objective is to minimize the L 2 loss between representations z i ∈ Z, where points in Z are unit normalized, and corresponding targets y k ∈ Y where the targets are uniformly sampled from the L 2 unit sphere (thus inhabiting the same space as the z i).Instead of being tied to corresponding representations as in classic regression, during the model fitting, inputs are considered in batches. Each batch consists of inputs and targets; when processing a batch the targets in the batch are permuted so as to minimize the batch-wise loss. To compute the optimal (loss-minimizing) one-to-one assignment of latent representations and targets of a batch, the Hungarian Method is used BID22. This target re-assignment pushes the representations z i of similar inputs x i ∈ X to neighborhoods of similar targets y k.Importantly, every example must be paired to a single noise target. Minimizing the L 2 loss requires each latent representation to be close to its assigned target. Therefore, the model learns a mapping to latent space that very closely matches the distribution of the noise targets. The motivation behind using NAT was to learn unsupervised features from input data. The authors show their method performs on par with state-of-the-art unsupervised representation learning methods. Viewed from a clustering perspective we can think of the targets y i as cluster centroids to which the latent representations z i are (one-to-one) assigned. Note that although the method introduced in BID1 brings the representations of similar x i closer together (by matching and moving them closer to neighborhoods of similar targets) it cannot produce many-to-one matchings or match multiple similar z i with a single centroid thus forming a cluster. 
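The batch-wise target re-assignment at the core of NAT can be written in a few lines with an off-the-shelf Hungarian-method solver; note that on the unit sphere, minimizing the squared distance is equivalent to maximizing the inner product. The names below are illustrative, and the encoder update itself is left to the surrounding training loop.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def nat_reassign(z, y):
    """One NAT re-assignment step: permute the batch's targets so that the
    total squared distance between matched (z_i, y_k) pairs is minimal.
    z, y: (batch, d) arrays of unit-normalised representations and targets."""
    cost = np.sum((z[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # pairwise squared L2
    rows, cols = linear_sum_assignment(cost)                       # Hungarian method
    return y[cols], cost[rows, cols].sum()                         # permuted targets, batch loss

# The permuted targets then serve as regression targets for the encoder, i.e.
# sum_i ||z_i - y_new_i||^2 is minimised by gradient descent on the encoder parameters.
```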
Simply changing the re-assignment policy to allow for many-to-one matchings is difficult because it causes the model to In this case, The delete-and-copy policy removes the target on the top-left of the sphere and copies the target at the top-right. This leads to an agglomeration of the two latent representations at the top of the sphere into the same cluster.collapse in on a single target. In this paper we use the above to propose a new form of neural clustering where we progressively delete targets over time and re-assign representations to other nearby targets, all while keeping the model stable using an auxiliary objective. Delete-and-copy To be able to cluster latent representations we need a way of assigning them to cluster centroids. We propose doing this via an additional delete-and-copy step which allows for many-to-one matchings. Similarly to the NAT method, we first assign representations z to targets y using the Hungarian method. In some cases, the optimally assigned target y opt is not the nearest target to a latent representation z. In this case, we remove the assigned target and reassign a copy of the nearest target for that z with some probability α as in Algorithm 1 (see also FIG1). This has the effect of not only reassigning targets so as to minimize the distance between matched pairs, but also to encourage the model to allow similar examples to be assigned to the same target. The new assignments are denoted as y new i. The loss is then defined as: DISPLAYFORM0 Auxiliary objective To prevent the model from collapsing to a single point, we introduce an auxiliary objective in addition to the loss between representations and targets. In our case we set the auxiliary objective L aux to be the reconstruction loss i x i − f dec (z i) 2 where f dec is some decoder network. The final objective L is then a weighted sum of the NAT loss L NAT (which in our case is the L 2 loss) and the auxiliary objective L aux: DISPLAYFORM1 Importantly, the auxiliary objective not only prevents the model from collapsing, it also informs how the model clusters. For example, when clustering images the reconstruction loss encourages the model to cluster on similar pixel values. Alternative forms of auxiliary objectives could allow for a discriminator loss or a classification loss for tackling semi-supervised settings. As our goal is unsupervised clustering, we only consider the reconstruction loss. Algorithm 1: The delete-and-copy policy used in our experiments. Input: A batch of latent representations z, the optimal NAT assignment y opt, and a probability of copying α Output: A new NAT assignment y new = y Model definition During initialization each example x i in the dataset is paired with a random target y i, uniformly sampled from a d-dimensional sphere. The NATAC model is then trained using minibatches from the dataset. Each training step can be broken down as follows: DISPLAYFORM2 The examples x from a random batch of example-target pairs DISPLAYFORM3 2. Re-assignment step: Using the Hungarian method from BID22, the representations z are optimally one-to-one matched with targets y so as to minimize the total sum of distances between matched pairs in the batch. The newly assigned example-target pair- DISPLAYFORM4 has the permutation of labels within the batch to minimize the batch-wise loss.3. Delete-and-copy (DnC): With a probability α, delete the optimal target y The L 2 loss between targets and latent representations is taken and combined with the auxiliary loss. Gradients w.r.t. 
θ are then taken and back-propagated along. Notice that although the (re)-assignment step follows a non-differentiable policy, the model is still end-to-end differentiable. Finally, the new example-target assignments are kept after the training step, and persist into the next training step where they are reassigned again. Stopping criterion During training the number of unique targets is tracked. We stop training when the number of unique targets stops decreasing after an epoch of training. Multi-stage training We found that an initial period of training where the auxiliary objective is prioritized (i.e. the NAT loss is multiplied by a very small coefficient) and the DnC policy is not used, improved overall performance. Transitioning to a higher NAT loss and turning on the deleteand-copy policy later on in training increased the stability of our model. We therefore propose training NATAC models as follows:1. Warm-Up stage: α = 0 and λ is very small.2. Transition stage: α increases gradually from 0 to 1, λ also increases gradually to a larger value (approximately 100× larger than its initial value).3. Clustering stage: α = 1, λ is large. Table 1: NMI scores from varying the dimensionality of the latent space in the NATAC and baseline models. The baselines use k-means with the same number of clusters as the repsective NATAC model converged to. We include NATAC models with a latent dimensionality of d = 3, whose latent representations can be viewed without dimensionality reduction. Appendix A contains links to the visualizations hosted on the TensorFlow embedding projector. DISPLAYFORM0 Dimensionality of the latent space In all of our experiments, we found that the best performing models tend to have a latent space dimensionality between 4 and 12.At dimensionalities much larger than this, the model collapses to very few points during the transition stage of training, possibly due to the high expressiveness of the latent space. On the other hand, using a low dimensional representation in an information bottleneck too small to sufficiently learn the auxiliary objective. For example, when clustering tweets from our Twitter dataset, a latent space of two dimensions was too small for the decoder to reliably reconstruct tweets from latent vectors. With an auxiliary objective that cannot be effectively learned, centroids collapse to a single point. We now describe the datasets and evaluation metrics used in our experiments followed by the presentation and analysis of our in comparison to others. The full details regarding the hyperparameters used in our experiments can be found in appendix D.Datasets We evaluate our models on four different datasets -two image and two text datasets. For images we use MNIST BID24 and Fashion-MNIST BID35. For text we use 20 Newsgroups BID19 and a Twitter dataset which we gather ourselves. Our key evaluation metric is the normalized mutual information (NMI) score BID33 which measures the information gained from knowing cluster membership whilst taking the number of clusters into account. Values of NMI range from 0, where clustering gives no information, to 1, where clustering gives perfect information i.e. cluster membership is identical to class membership. In our experiments, we train models on the concatenation of the train and validation sets and evaluate them on the test set. This is done by computing the latent representations of test examples and then assigning them to their nearest respective centroids, then computing the NMI score. 
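For concreteness, the re-assignment-with-agglomeration at the heart of each training step (steps 2-3 above, cf. Algorithm 1) can be sketched as follows. The global bookkeeping that removes displaced targets from the pool of alive targets is omitted, and the names are illustrative.

```python
import numpy as np

def delete_and_copy(z, y_opt, targets, alpha, rng=np.random):
    """Delete-and-copy policy: with probability alpha, replace an example's
    Hungarian-assigned target y_opt_i (from nat_reassign above) by a copy of
    its *nearest* target, allowing many-to-one matchings (agglomeration).
    z: (batch, d) latents, y_opt: (batch, d) assigned targets,
    targets: (m, d) pool of currently alive targets."""
    y_new = y_opt.copy()
    d_all = np.sum((z[:, None, :] - targets[None, :, :]) ** 2, axis=-1)
    nearest = targets[np.argmin(d_all, axis=1)]
    d_opt = np.sum((z - y_opt) ** 2, axis=1)
    d_near = np.min(d_all, axis=1)
    copy_mask = (d_near < d_opt) & (rng.rand(len(z)) < alpha)
    y_new[copy_mask] = nearest[copy_mask]   # the displaced targets are then deleted
    return y_new

# Training objective (cf. Eq. 2): L = lambda_ * mean(||z - y_new||^2) + L_aux,
# where L_aux is e.g. the reconstruction loss of the decoder.
```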
Additionally, we provide classification error scores on the MNIST dataset to compare ourselves to other related methods. We also compare our model to clustering methods trained on the 20 Newsgroups dataset. The guiding motivation behind our experiments is to analyze how well our models learns cluster membership and latent representations. Introduced by BID24, MNIST is canonical dataset for numerous machine learning tasks including clustering. MNIST has ten classes, corresponding to the ten digits 0-9, and contains 60,000 train and 10,000 test examples. We train our model with an auxiliary reconstruction loss and use small convolutional architectures (see Figure 5) for our encoder and decoder networks. As points of comparison we also provide of using k-means on the latent representations learned by our model (NATAC-k in Table 1) and k-means on representations learned by a simple autoencoder with the same encoder and decoder architecture (AE-k in Table 1). Table 1 shows our model's performance is best when d = 10 and worse for much lower or higher values of d. This indicates that the dimensionality of the latent space impacts our model's performance (see Section 2.3 for further discussion).The ability of our model to cluster MNIST examples well is shown by two keys . First, it beats both NATAC-k and AE-k (Table 1). Second, it achieves very competitive when compared to other methods (see NMI column in Table 2). To re-iterate, other clustering techniques cited in Table 2 required a predefined number of clusters (in the case of MNIST k=10).We note that NATAC-k beats AE-k indicating that our model learns representations that suit k-means clustering more than a simple autoencoder. However, we note that this is not consistent across all modalities in this paper (see in section 3.4).Finally, we discuss the number of centroids our model converges on (see Table 1). We show that our model is successfully capable of finding centroids that represent different digits, as shown in the top row of FIG4. However, the model also learns centroids that contain very few examples for which the decoded images do not represent any handwritten digits, as shown in the second row of FIG4. Even with these "dead centroids", the model still performs well. Indeed, the twelve most dense centroids contain 98% of all of the examples in MNIST (out of a total of 61).Interestingly, the model also differentiates between ones with different slopes. This suggests that the latent representations of these digits are sufficiently far apart to warrant splitting them into different clusters. Introduced in BID35, Fashion-MNIST is a convenient swap-in dataset for MNIST. Instead of digits the dataset consists of ten different types of clothes. There are 60,000 train and 10,000 test examples just like in MNIST. Fashion-MNIST is generally considered more difficult than MNIST, with classifiers scoring consistently lower on it. The model and analysis from the previous section carry over for fashion-MNIST with a few additional important points. First, the differences between NATAC-k and AE-k are less pronounced (see Table 1) in fashion-MNIST which indicates that the representations learned by NATAC in comparison to a simple autoencoder are not as important for k-means clustering. Interestingly, our model still outperforms both NATAC-k and AE-k, with one exception being when d = 12.Qualitatively, FIG4 shows that the model separates garments into slightly different categories than the labels in the dataset. 
For example, the most dense cluster seems to be a merging of both "pullovers" and "shirts", suggesting that the model finds it difficult to separate the two different garments. We ourselves find it difficult to discriminate between the two categories, as the lowresolution images do not easily show whether or not the garment has buttons. Additionally, the "sandal" class has been split into two separate clusters: flip-flops and high-heeled shoes with straps. This indicates that our model has found an important distinction between these two type of shoes, that the original Fashion-MNIST labels ignore. Similarly to MNIST, our model also learns "dead clusters", which the model does not decode into any discernible garment. Further visualizations of these experiments can be found appendix section A. Introduced in BID19 the 20 Newsgroups dataset is a collection of 18,846 documents pertaining to twenty different news categories. We use the commonly used 60:40 temporal train-test split. Interestingly, because of the temporal split in the data, the test set contains documents which differ considerably from the train set. We calculate NMI on the news categories in the dataset. BID27 -4.10 Deep Gaussian Mixture VAE BID6 -3.08 -1.6 Autencoder based clustering BID32 0.669 -Task-specific Clustering With Deep Model BID34 0.651 -Agglomerative Clustering Using Average Linkage BID18 0.686 -Large-Scale Spectral Clustering BID4 0 Table 2: Comparison of our best performing NATAC model (with d = 10) on the entire MNIST dataset. NMI and classification error are calculated from the entire data set. We report the evaluation metric used by the authors of each respective model. Precision of values are the same as those reported by the original paper. Note that many of the best-performing methods (DCD, IMSAT, Adversarial Autoencoders) also assume a uniform class distribution along with a pre-set number of clusters. We use an auxiliary reconstruction loss and a two layer fully connected network for both the encoder and decoder, both with hidden layer sizes of 256 and ReLU nonlinearities. We represent each article as an L2 normalized term-frequency-inverse-document-frequency (TF-IDF) vector of the 5000 most occurring words in the train set. BID15 0.08 LSD BID0 0.44 MTV BID2 0.13 PLSI BID16 0.47 SSC BID10 0.29 LSC BID5 0.48 PNMF BID38 0.37 Ncut BID30 0.52 ONMF BID7 0.38 NSC BID8 0.52 k-means BID26 0.44 DCD BID40 0.54 NATAC Autoencoder (425 clusters) 0.479 Table 4: Comparison of our best performing NATAC model (with d = 4) on the entire 20 Newsgroups dataset. NMI is calculated from the entire data set. Figures for other methods taken from BID40 Along with NATAC-k and AE-k comparisons, we also use a spherical k-means model. Spherical k-means is a commonly used technique of unsupervised document clustering, a good description of it can be found in BID3. TAB2 shows how the performance of the each model varies with different dimensionalities of the latent space. The best NATAC models with a latent dimensionality of 3 to 6 centroids outperform a spherical k-means model with 1000 clusters, far more clusters than any of the NATAC models. Although we could not find any neural clustering techniques which report performance on 20 Newsgroups, many non-neural methods report NMI on the whole dataset. Table 4 shows the NATAC model performs competitively to these methods. However, our method does converge on a higher number of clusters (other methods are trained with a pre-defined number of 20 clusters). 
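The document representation and the simplest clustering baseline used in this experiment can be reproduced with a few lines of scikit-learn. The snippet below uses ordinary k-means on the L2-normalized TF-IDF vectors as a stand-in for spherical k-means, and the cluster count of 20 is illustrative (the baselines in TAB2 range up to 1000 clusters); any additional text preprocessing performed by the authors is not reproduced here.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# The scikit-learn loader ships the usual 60:40 temporal train/test split.
train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

# L2-normalised TF-IDF over the 5000 most frequent words of the train set
# (TfidfVectorizer applies L2 normalisation by default).
vec = TfidfVectorizer(max_features=5000)
x_train, x_test = vec.fit_transform(train.data), vec.transform(test.data)

# k-means on the normalised TF-IDF vectors, scored with NMI on the test split.
km = KMeans(n_clusters=20, n_init=10).fit(x_train)
print(normalized_mutual_info_score(test.target, km.predict(x_test)))
```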
To further explore the performance of our model on text data we build a dataset of 38,309 ASCIIonly tweets of English speaking users containing exactly one hashtag. The dataset has 647 different hashtags, with each hashtag having at least ten tweets containing it. We use 10, 000 of the tweets as the test set. As a preprocessing step, URLs and hashtags are replaced with special characters. We calculate NMI on the hashtag used in each tweet. We train a character-based Sequence-to-Sequence autoencoder on the Twitter dataset. Just as before we use an auxiliary reconstruction loss. We set the encoder to be a bidirectional GRU with a hidden size of 64 followed by a fully connected layer which takes the GRU's final output and maps it to the latent space. The decoder uses a fully connected layer to map the latent vectors to a 128 dimensional vector which is then used as the initial hidden state for the decoder GRU.Similarly to section 3.4, we compare our approach to using spherical k-means along with the NATAC-k and AE-k baselines. As shown in table 3.5, we see that NATAC-k outperforms NATAC and AE-k models on all of the reported latent dimensionalities. This suggests that the latent mapping learned by the NATAC models does improve on a vanilla autoencoder, but the centroid assignment of a trained NATAC model is less effective than using k-means. Finally, all of the neural models outperform the spherical k-means baseline. However, this baseline is much more competitive to the neural methods reported in this experiment than those reported in the 20 Newsgroups experiments. In this section, we explore how sensitive the NATAC framework is to changes in our hyperparameters. For these experiments, we use the model and dataset from the 20 Newsgroups experiments. We take the best performing hyperparameters from these experiments (d = 4, see TAB2) and observe how the end performance of the model is affected by changing the hyperparameters. We show that our method is reasonably robust to differing values of hyperparameters, although extreme changes, such as skipping pre-training, do adversely affect performance. The NATAC training method contains four sources of randomness: parameter initialization the delete-and-copy policy random batching the sampling of noise as targets. We train 50 NATAC models with the same hyperparameters (but different random seeds) and measure the variation in NMI and the number of converged clusters. Figure 3 shows the variability of NMI and the converged number of clusters from training the best performing model on the 20 Newsgroups dataset. We observe that the NMI varies with a small standard deviation of 0.007 (mean 0.465) regardless of how many clusters the model converged to. In contrast, we observe a higher relative standard deviation of 24 with the converged number of clusters (mean 420). Qualitatively, we observe that the variance in the number of converged clusters is mostly due to dead centroids. The value of α varies throughout the training of a NATAC model. In our experiments, we initially set α to zero for a period of pre-training, after which we incrementally increase the value to 1 over several epochs. TAB5: Mean NMI and converged number of centroids when training a NATAC model with varying amounts of pre-training. Mean taken from 5 consecutive runs using the same hyperparameters. 
Models trained on the train set of 20 Newsgroups and evaluated on the test set.λfinal \λinital 10 but after 100 epochs of pre-training, the model does not significantly benefit from any more pretraining. Interestingly, the longer the period of pre-training, the more clusters the model converges to. We believe that models which have longer to pre-train before clustering learn a more uniform mapping to latent space. When the clustering phase of training occurs, the latent representations are more uniformly spread across latent space, and thus agglomerate less readily. Alongside changing the value of α, the coefficient for the NAT loss in NATAC models is also varied. Similarly to α, we set λ to a small value for warm-up stage of training, and then progressively increase λ to a larger value afterwards. In the other experiments involving 20 Newsgroups, warm-up training uses λ inital = 10 −4 and a final value of λ final = 10 −2. The transition happens at the same time as the change in α during training. TAB6 shows how the final NMI of NATAC models vary with differing values for λ inital and λ final. We notice that the value of λ final does not seem to greatly impact the number of clusters or NMI, and that the smaller values of λ inital have very similar NMI scores and number of clusters. Interestingly, when trained with large λ inital of 10 −3, the model scores higher in NMI than models with smaller λ inital and also converges to more clusters. Here, we believe that a larger λ inital forces the model to learn a more uniform mapping to latent space (to minimize the NAT loss) during the warm-up stage of training. Similar to increasing the length of pre-training, this causes the model to agglomerate less readily. In this paper, we present a novel neural clustering method which does not depend on a predefined number of clusters. Our empirical evaluation shows that our model works well across modalities. We show that NATAC has competitive performance to other methods which require a pre-defined number of clusters. Further, it outperforms powerful baselines on Fashion-MNIST and text datasets (20 Newsgroups and a Twitter hashtag dataset). However, NATAC does require some hyperparameters to be tuned, namely the dimensionality of the latent space, the length of warm-up training and the values for the loss coefficient λ. However, our experiments indicate that NATAC models are fairly robust to hyperparameter changes. Future work Several avenues of investigation could flow from this work. Firstly, the effectiveness of this method in a semi-supervised setting could be explored using a joint reconstruction and classi-fication auxiliary objective. Another interesting avenue to explore would be different agglomerative policies other than delete-and-copy. Different geometries of the latent space could also be considered other than a unit normalized hypersphere. To remove the need of setting hyperparameters by hand, work into automatically controlling the coefficients (e.g. using proportional control) could be studied. Finally, it would be interesting to see whether clustering jointly across different feature spaces would help with learning better representations. B EXAMPLES FROM THE FASHION-MNIST DATASET. We experimented with using polar coordinates early on in our experiments. Rather than using euclidean coordinates as the latent representation, z is considered a list of angles θ 1, θ 2 · · · θ n where θ 1 · · · θ n−1 ∈ [0, π] and θ n ∈ [0, 2π]. 
However, we found that the models using polar geometry performed significantly worse than those with euclidean geometry. Additionally, we also experimented with not L2 normalizing the output of the encoder network. We hypothesized that the model would learn a better representation of the latent space by also "learning" the geometry of the noise targets. Unfortunately, the unnormalized representation caused the noise targets to quickly collapse to a single point. Although each different modality (monochrome images, bag-of-words, sequence of characters) uses a different set of hyperparameters, we follow a similar recipe for determining the values for each one:• We use a large batch size of 100. This is so each batch has a representative sample of the targets to reassign in each training step.• The warm-up period is calculated by observing when the auxiliary objective starts to converge.• The final value for λ in training is set so that the NAT loss was approximately 1% of the total loss. • The initial value for λ is set as approximately 1% of the final value of λ.• The transition phase typically lasts 100 epochs of training.• During the transition phase, the value of α is incrementally increased from 0 to 1.We now explicitly list the hyperparameters used for each experiment: • A batch size of 100.• A warm-up period of 10 × d epochs, during which λ = 0.001.• A transition period lasts for 250 epochs, where λ is incrementally increased to 0.25, and α is incremented from 0 to 1. • The ADAM optimizer BID20 ) with a learning rate (α = 10 −4) Figure 5: Architecture of the encoder (left) and decoder (right) used for the MNIST experiments. Between each subsampling layer in the encoder, a single convolution layer is applied with a filter shape of 3 × 3 with border padding to keep the same shape before and after the convolution. Similarly, in the decoder one transpose convolutional layer is applied between each upsampling layer, 3 × 3 filter shape and shape-preserving padding. • A batch size of 100.• A warm-up period of 1, 000 epochs, during which λ = 10 −4.• A transition period lasts for 100 epochs, where λ is incrementally increased to 0.01, and α is incremented from 0 to 1.• The ADAM optimizer BID20 ) with a learning rate (α = 10 −5).• Dropout with a keep-probability of 0.95 in the hidden layers of the encoder and decoder. • A batch size of 100.• A warm-up period of 100 epochs, during which λ = 0.01.• A transition period lasts for 100 epochs, where λ is incrementally increased to 1, and α is incremented from 0 to 1.• The ADAM optimizer BID20 ) with a learning rate (α = 10 −3).
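The staged (α, λ) recipes listed above, together with the stopping criterion from Section 2.3, can be captured by two small helpers. The linear ramp during the transition phase is an assumption; the text only states that both quantities are increased incrementally.

```python
def natac_schedule(epoch, warmup_epochs, transition_epochs,
                   lambda_init=1e-4, lambda_final=1e-2):
    """Three-stage NATAC schedule: warm-up (alpha = 0, small lambda),
    a gradual transition, then clustering (alpha = 1, large lambda)."""
    if epoch < warmup_epochs:
        return 0.0, lambda_init
    t = min(1.0, (epoch - warmup_epochs) / float(transition_epochs))
    return t, lambda_init + t * (lambda_final - lambda_init)

def should_stop(unique_targets_per_epoch):
    """Stop training once the number of unique targets no longer decreases
    from one epoch to the next."""
    return (len(unique_targets_per_epoch) >= 2
            and unique_targets_per_epoch[-1] >= unique_targets_per_epoch[-2])
```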
Neural clustering without needing a number of clusters
1,396
scitldr
Recent work on explanation generation for decision-making problems has viewed the explanation process as one of model reconciliation where an AI agent brings the human mental model (of its capabilities, beliefs, and goals) to the same page with regards to a task at hand. This formulation succinctly captures many possible types of explanations, as well as explicitly addresses the various properties -- e.g. the social aspects, contrastiveness, and selectiveness -- of explanations studied in social sciences among human-human interactions. However, it turns out that the same process can be hijacked into producing "alternative explanations" -- i.e. explanations that are not true but still satisfy all the properties of a proper explanation. In previous work, we have looked at how such explanations may be perceived by the human in the loop and alluded to one possible way of generating them. In this paper, we go into more details of this curious feature of the model reconciliation process and discuss similar implications to the overall notion of explainable decision-making. One of the root causes 1 for the need of an explanation is that of model differences between the human and the AI agent. This is because, even if an agent makes the best decisions possible given its model, they may appear to be suboptimal or inexplicable if the human has a different mental model of its capabilities, beliefs and goals. Thus, it follows that the explanation process, whereby the AI agent justifies its behavior to the human in the loop, is one of model reconciliation. The Model Reconciliation Process M R, M R h, π takes in the agent model M R, the human mental model of it M R h, and the agent decision π which is optimal in M R as inputs and produces a modelM R h where π is also optimal.• An Explanation is the model differenceM R h ∆M R h. Thus, by setting the mental modelM R h ← M R h + (through means of some form of interaction / communication), the human cannot come up with a better foil or decisionπ, and hence we say that the original decision π has been explained. This is referred to as the contrastive property of an explanation. This property is also the basis of persuasion since the human, given this information, cannot come up with any other alternative to what was done. So how do we compute this model update? It turns out that there are several possibilities BID2 ), many of which have the contrastive property. Minimal Explanations These minimize the size of an explanation and ensure that the human cannot find a better foil using the fewest number of model updates. These are referred to as minimally complete explanations or MCEs. DISPLAYFORM0 Monotonic Explanations It turns out that MCEs can become invalid on updating the mental model further, while explaining a later decision. Minimally monotonic explanations or MMEs, on the other hand, maintain the notion of minimality as before but also ensure that the given decision π never becomes invalid with further explanations. DISPLAYFORM1 So far, the agent was only explaining its decision with respect to and in terms of what it knows to be true. Constraint refers to the fact that valid model updates considered during the search for an explanation were always towards the target model M R which is, of course, the agent's belief of the ground truth. This means that the content of the model update is also always grounded in (the agent's belief of) reality. 
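Since the formal definitions of MCE and MME are elided above, the following breadth-first sketch only illustrates the flavour of the search for a minimally complete explanation: candidate model updates drawn from the difference between the agent model and the mental model are tried in order of increasing size until the agent's decision becomes optimal for the human. The helpers apply_updates, cost_of, and optimal_cost stand in for a planner / plan-cost oracle and are hypothetical; the search is worst-case exponential in the number of model differences.

```python
from itertools import combinations

def minimally_complete_explanation(diff, mental_model, plan,
                                   apply_updates, cost_of, optimal_cost):
    """diff: list of unit model updates in the symmetric difference between
    the agent model M_R and the mental model M_R^h.  Returns the first
    (hence smallest) set of updates under which the agent's plan is optimal
    in the updated mental model, i.e. a minimally complete explanation."""
    for k in range(len(diff) + 1):
        for updates in combinations(diff, k):
            m_hat = apply_updates(mental_model, updates)
            if cost_of(plan, m_hat) <= optimal_cost(m_hat):
                return set(updates)
    return None
```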
In the construction of lies or "alternative facts" to explain, we start stripping away at these two considerations. There may be many reasons to favor them over traditional explanations: -One could consider cases where team utility is improved because of a lie. Indeed, authors in BID6 ) discuss how such considerations makes it not only preferable but also necessary that agents learn to deceive. -A specific case of the above can be seen in terms of difficulty of explanations -a lie can lead to an explanation that is shorter and/or easier to explain... or are more likely to be accepted by the human. These deal with cases when the agent provides a model update that negates parts of its ground truth -e.g. saying it does not have a capability it actually has. This is, in fact, a curious outcome of the non-monotonicity of the model reconciliation process. Consider the case where the initial estimate of the mental model is empty or φ -i.e. we start by assuming that the human has no expectations of the agent. Furthermore, let the minimally complete and minimally monotonic explanations for the model reconciliation process M R, φ, π produce intermediate models M DISPLAYFORM0 which involves the agent stating that its model does not contain parts which it actually does.• A Lie of Omission can emerge from the model reconciliation process φ, M However, they happen to be the easiest to compute due to the fact that they are constrained by a target model (which is empty) and do not requite any "imagination". More on this when we discuss lies of commission. In lies of omission, the agent omitted constraints in its model that actually existed. It did not make up new things (and having the target model as M R in the original model reconciliation process prevented that). In lies of commission, the agent can make up new aspects of its decision-making model that do not belong to its ground truth model. Let M be the space of models induced by M R and M R h. 3 Then:• A Lie of Commission can emerge from the model reconciliation process M, M R h, π where M ∈ M. We have dropped the target here from being M R to any possible model. Immediately, the computational problem arises: the space of models was rather large to begin with -O(2 DISPLAYFORM0 -and now we have an exponentially larger number of models to search through without a target -O(2 DISPLAYFORM1 . This should be expected: after all, even for humans, computationally it is always much easier to tell the truth rather than think of possible lies. 2 As per the definition of an MME, if the mental model is between the MME and the agent model, then there is no need for an explanation since optimal decisions in those models are equivalent.3 This consists of the union of the power sets of the set representation of models M R and M R h following BID2 . 4 "A lie is when you say something happened with didn't happen. But there is only ever one thing which happened at a particular time and a particular place. And there are an infinite number of things which didn't happen at that time and that place. And if I think about something which didn't happen I start thinking about all the other things which didn't happen." BID5 The problem becomes more interesting when the agent can expand on M to conceive of lies that are beyond its current understanding of reality. This requires a certain amount of imagination from the agent: -One simple way to expand the space of models is by defining a theory of what makes a sound model and how models can evolve. 
Authors in (Bryce, Benton, and Boldt 2016) explore one such technique in a different context of tracking a drifting model of the user.-A more interesting technique of model expansion can borrow from work in the space of storytelling BID8 in imagining lies that are likely to be believable -here, the system extends a given model of decisionmaking by using word similarities and antonyms from a knowledge base like WordNet to think about actions that are not defined in the model but may exist, or are at least plausible, in the real world. Originally built for the purpose of generating new storylines, one could imagine similar techniques being used to come up with false explanations derived from the current model. In all the discussion so far, the objective has been still the same as the original model reconciliation work: the agent is trying to justify the optimality of its decision, i.e. persuade the human that this was the best possible decision that could have been made. At this point, it is easy to see that in general, the starting point of this process may not require a decision that is optimal in the robot model at all, as long as the intermediate model preserves its optimality so that the human in the loop cannot come up with a better foil (or negates the specific set of foils given by the human (Sreedharan, Srivastava, and Kambhampati 2018)).The Persuasion Process M R h, π takes in the human mental model M R h of a decision-making task and the agent's decision π and produces a modelM R h where π is optimal. Note here that, in contrast to the original model reconciliation setup, we have dropped the agent's ground truth model from the definition, as well as the requirement that the agent's decision be optimal in that model to begin with. The content ofM R h is left to the agent's imaginationfor the original model reconciliation work for explanations BID2 ) these updates were consistent with the agent model. In this paper, we saw what happens to the reconciliation process when that constraint is relaxed. So far we have only considered explicit cases of deception. Interestingly, existing approaches in model reconciliation already tend to allow for misconceptions to be ignored if not actively induced by the agent. In trying to minimize the size of an explanation, the agent omits a lot of details of the agent model that were actually used in coming up with the decision, as well as decided to not rectify known misconceptions of the human, since the optimality of the decision holds irrespective of them being there. Such omissions can have impact on the the human going forward, who will base their decisions on M R h which is only partially true. 5 Humans, in fact, make such decision all the time while explaining -this is known as the selective property of an explanation BID8.Furthermore, MCEs and MMEs are not unique. Even without consideration of omitted facts about the model, the agent must consider the relative importance BID11 ) of model differences to the human in the loop. Is it okay then to exploit these preferences towards generating "preferred explanations" even if that means departing from a more valid explanation?It is unclear what the prescribed behavior of the agent should be in these cases. Indeed, a variant of model reconciliation -contingent explanations -that engages the human in dialogue to better figure out the mental model can explicitly figure out gaps in the human knowledge and exploit that to shorten explanations. 
On the face of it, this sounds worrisome, though perfectly legitimate in so far as preserving the various well-studied properties of explanations go. In this paper we have only considered cases of deception where the agent explicitly changes the mental model. Interestingly, in this multi-model setup, it is also possible to deceive the human without any model updates at all. A parallel idea, in dealing with model differences, is that of explicability ) - DISPLAYFORM0 Thus, the agent, instead of trying to explain its decision, sacrifices optimality and instead conforms to the human expectation (if possible). Indeed, the notion of explanations and explicability can be considered under the same framework where the agent gets to trade off the cost (e.g. length) of an explanation versus the cost of being explicable (i.e. departure from optimality). Unfortunately, this criterion only ensures that the decision the agent makes is equivalent to one that the human would expect though not necessarily for the same reasons. For example, it is quite conceivable that the agent's goal is different to what the human expects though the optimal decisions for both the goals coincide. Such decisions may be explicable for the wrong reasons, even though the current formulation allows it. Similar notions can apply to other forms of explainable behavior as well, as we discuss in. Indeed, authors in BID7 explore how an unified framework of decision-making can produce both legible as well as obfuscated behavior.
Model Reconciliation is an established framework for plan explanations, but can be easily hijacked to produce lies.
1,397
scitldr
We consider new variants of optimization algorithms. Our algorithms are based on the observation that mini-batch of stochastic gradients in consecutive iterations do not change drastically and consequently may be predictable. Inspired by the similar setting in online learning literature called Optimistic Online learning, we propose two new optimistic algorithms for AMSGrad and Adam, respectively, by exploiting the predictability of gradients. The new algorithms combine the idea of momentum method, adaptive gradient method, and algorithms in Optimistic Online learning, which leads to speed up in training deep neural nets in practice. Nowadays deep learning has been shown to be very effective in several tasks, from robotics (e.g. BID15), computer vision (e.g. BID12 ; BID9), reinforcement learning (e.g. BID18, to natural language processing (e.g.). Typically, the model parameters of a state-of-the-art deep neural net is very high-dimensional and the required training data is also in huge size. Therefore, fast algorithms are necessary for training a deep neural net. To achieve this, there are number of algorithms proposed in recent years, such as AMSGRAD , ADAM BID13 ), RMSPROP , ADADELTA , and NADAM BID6 ), etc. All the prevalent algorithms for training deep nets mentioned above combines two ideas: the idea of adaptivity in ADAGRAD BID7 BID17 ) and the idea of momentum as NESTEROV'S METHOD BID19 ) or the HEAVY BALL method BID20 ). ADAGRAD is an online learning algorithm that works well compared to the standard online gradient descent when the gradient is sparse. The update of ADAGRAD has a notable feature: the learning rate is different for different dimensions, depending on the magnitude of gradient in each dimension, which might help in exploiting the geometry of data and leading to a better update. On the other hand, NESTEROV'S METHOD or the Momentum Method BID20 ) is an accelerated optimization algorithm whose update not only depends on the current iterate and current gradient but also depends on the past gradients (i.e. momentum). State-of-the-art algorithms like AMSGRAD and ADAM BID13 ) leverages these two ideas to get fast training for neural nets. In this paper, we propose an algorithm that goes further than the hybrid of the adaptivity and momentum approach. Our algorithm is inspired by OPTIMISTIC ONLINE LEARNING BID4;;; BID0 ). OPTIMISTIC ONLINE LEARNING considers that a good guess of the loss function in the current round of online learning is available and plays an action by exploiting the good guess. By exploiting the guess, those algorithms in OPTIMISTIC ONLINE LEARNING have regret in the form of O(T t=1 g t − m t), where g t is the gradient of loss function in round t and m t is the "guess" of g t before seeing the loss function in round t (i.e. before getting g t). This kind of regret can be much smaller than O(√ T) when one has a good guess m t of g t. We combine the OPTIMISTIC ONLINE LEARNING idea with the adaptivity and the momentum ideas to design new algorithms in training deep neural nets, which leads to NEW-OPTIMISTIC-AMSGRAD and NEW-OPTIMISTIC-ADAM. We also provide theoretical analysis of NEW-OPTIMISTIC-AMSGRAD. The proposed OPTIMISTIC-algorithms not only adapt to the informative dimensions and exhibit momentums but also take advantage of a good guess of the next gradient to facilitate acceleration. We evaluate our algorithms with BID13 ), and BID5 ). Experiments show that our OPTIMISTIC-algorithms are faster than the baselines. 
We should explain that BID5 proposed another version of optimistic algorithm for ADAM, which is referred to as ADAM-DISZ in this paper. We apply the idea of BID5 ) on AMSGRAD, which leads to AMSGRAD-DISZ. Both ADAM-DISZ and AMSGRAD-DISZ are used as baselines. Both AMSGRAD and ADAM BID13 ) are actually ONLINE LEARNING algorithms. They use REGRET ANALYSIS to provide some theoretical guarantees of the algorithms. Since one can convert an online learning algorithm to an offline optimization algorithm by online-to-batch conversion BID3 ), one can design an offline optimization algorithm by designing and analyzing its counterpart in online learning. Therefore, we would like to give a brief review of ONLINE LEARNING and OPTIMISTIC-ONLINE LEARNING. In the typical setting of online learning, there is a LEARNER playing an action and then receiving a loss function in each round t. Specifically, the learner plays an action w t ∈ K in round t, where w t is chosen in a compact and convex set K ⊆ R n, known as the DECISION SPACE. Then, the learner sees the LOSS FUNCTION t (·) and suffers loss t (w t) for the choice. No distributional assumption is made on the loss functions sequence {1 (·), 2 (·),..., T (·)} in ONLINE LEARNING. Namely, the loss functions can be adversarial. The goal of an online learner is minimizing its REGRET, which is DISPLAYFORM0 We can also define AVERAGE REGRET as Regret T:= DISPLAYFORM1, which is REGRET divided by number of rounds T. In ONLINE LEARNING literature, NO-REGRET ALGORITHMS means online learning algorithms satisfying Regret T → 0 as T → ∞. In recent years, there is a branch of works in the paradigm of OPTIMISTIC ONLINE LEARNING (e.g. BID4 ; ; ; BID0). The idea of OPTIMISTIC ONLINE LEARNING is as follows. Suppose that, in each round t, the learner has a good guess m t (·) of the loss function t (·) before playing an action w t. (Recall that the learner receives the loss function after the learner commits an action!) Then, the learner should exploit the guess m t (·) to choose an action w t, as m t (·) is close to the true loss function t (·). DISPLAYFORM0 where R(·) is a 1-strong convex function with respect to a norm (·) on the constraint set K, L t−1:= t−1 s=1 g s is the cumulative sum of gradient vectors of the convex loss functions (i.e. g s := ∇ s (w s) ) up to but not including t, and η is a parameter. The OPTIMISTIC-FTRL of has update DISPLAYFORM1 where m t is the learner's guess of the gradient vector g t:= ∇ t (w t). Under the assumption that loss functions are convex, the regret of OPTIMISTIC-FTRL satisfies Regret T ≤ O(T t=1 g t − m t *), which can be much smaller than O(√ T) of FTRL if m t is close to g t. Consequently, OPTIMISTIC-FTRL will have much smaller regret than FTRL. On the other hand, if m t is far from g t, then the regret of OPTIMISTIC-FTRL would be a constant factor worse than that of FTRL without optimistic update. In the later section, we provide a way to get m t. Here, we just want to emphasize the importance of leveraging a good guess m t for updating w t to get a fast convergence rate (or equivalently, small regret). We also note that the works of;; ) has been shown to accelerate the convergence of some zero-sum games. ADAM BID13 ) is a very popular algorithm for training deep nets. It combines the momentum idea BID20 with the idea of ADAGRAD BID7 ), which has individual learning rate for different dimensions. 
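Before moving to the algorithmic details of AMSGRAD, the FTRL and OPTIMISTIC-FTRL updates of Section 2.2 can be made concrete in their simplest Euclidean instantiation, with R(w) = ½‖w‖² and an L2-ball decision set K; this special case is only meant to illustrate how the guess m_t is folded into the update.

```python
import numpy as np

def project_l2_ball(w, radius):
    """Euclidean projection onto the L2 ball of the given radius (decision set K)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def ftrl_step(cum_grad, eta, radius):
    """FTRL with R(w) = ||w||^2 / 2:
    w_t = argmin_{w in K} <w, L_{t-1}> + R(w)/eta = Proj_K(-eta * L_{t-1})."""
    return project_l2_ball(-eta * cum_grad, radius)

def optimistic_ftrl_step(cum_grad, m_t, eta, radius):
    """Optimistic-FTRL: fold the guess m_t of the upcoming gradient into the
    cumulative gradient before solving the same regularized problem."""
    return project_l2_ball(-eta * (cum_grad + m_t), radius)
```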
The learning rate of ADAGRAD in iteration t for a dimension j is proportional to the inverse of Σ t s=1 g s [j] 2, where g s [j] is the j th element of the gradient vector g s in time s. This adaptive learning rate may help for accelerating the convergence when the gradient vector is sparse BID7 ). However, when applying ADAGRAD to train deep nets, it is observed that the learning rate might decay too fast BID13 Get mini-batch stochastic gradient vector g t ∈ R d at w t.5: DISPLAYFORM0 6: DISPLAYFORM1 8: DISPLAYFORM2 (element-wise division) 9: end for ) proposes using a moving average of gradients (element-wise) divided by the root of the second moment of the moving average to update model parameter w (i.e. line 5,6 and line 8 of Algorithm 1). Yet, ADAM BID13 ) fails at some online convex optmization problems. AMSGRAD fixes the issue. The algorithm of AMSGRAD is shown in Algorithm 1. The difference between ADAM and AMSGRAD lies on line 7 of Algorithm 1. ADAM does not have the update of line 7. adds the step to guarantee a nonincreasing learning rate, ηt √v t. which helps for the convergence (i.e. average regret Regret T → 0.) For the parameters of AMSGRAD, it is suggested that β 1 = 0.9, β 2 = 0.99 and η t = η/ √ t for a number η. As mentioned in the introduction, BID5 proposed one version of optimistic algorithm for ADAM, which is referred to as ADAM-DISZ in this paper. BID5 did not propose an optimistic algorithm for AMSGRAD but such an extension is straightforward which is referred to as AMSGRAD-DISZ.In this section, we propose a new algorithm for training deep nets: NEW-OPTIMISTIC-AMSGRAD, shown in Algorithm 2. NEW-OPTIMISTIC-AMSGRAD has an optimistic update, which is line 9 of Algorithm 2. It exploits the guess m t+1 of g t+1 to get w t+1, since the vector h t+1 uses m t+1. Notice that the gradient vector is computed at w t instead of w t− 1 2 and the moving average of gradients is used to update w t+ 1 2. One might want to combining line 8 and line 9and getting a single line: DISPLAYFORM0. From this, we see that w t+1 is updated from w t− 1 2 instead of w t. Therefore, while NEW-OPTIMISTIC-AMSGRAD looks like just doing an additional update compared to AMSGRAD, the difference of update is subtle. We also want to emphasize that although the learning rate on line 9 contains 4 1−β1 factor. We suspect that it is due to the artifact of our theoretical analysis. In our experiments, the learning rate on line 9 does not have the factor of 4 1−β1. That is, in practice, we implement line 9 as w t+1 = w t+ DISPLAYFORM1 We leave closing the gap between theory and practice as a future work. We see that NEW-OPTIMISTIC-AMSGRAD inherits three properties• Adaptive learning rate of each dimension as ADAGRAD BID7 ). (line 8)• Exponentially moving average of the past gradients as NESTEROV'S METHOD BID19 ) and the HEAVY-BALL method BID20 The first property helps acceleration when the gradient has sparse structure. The second one is the well-recognized idea of momentum which can achieve acceleration. The last one, perhaps less known outside the ONLINE LEARNING community, can actually achieve acceleration when the prediction of the next gradient is good. We are going to elaborate this property in the later section where we give the theoretical analysis of NEW-OPTIMISTIC-AMSGRAD.To obtain m t, we use the extrapolation algorithm of . Extrapolation studies estimating the limit of sequence using the last few iterates BID1 ). Some classical works include Anderson acceleration (Walker & Ni. 
), minimal polynomial extrapolation BID2 ), reduced rank extrapolation BID8 ). These method typically assumes that the sequence {x t} ∈ R d has a linear relation DISPLAYFORM2 for an unknown matrix A ∈ R d×d (not necessarily symmetric). The goal is to use the last few iterates {x t} to estimate the fixed point x * on. adapt the classical extrapolation methods to the iterates/updates of Get mini-batch stochastic gradient vector g t ∈ R d at w t.5: DISPLAYFORM3 6: DISPLAYFORM4 DISPLAYFORM5 Obtain z by solving (U U + λI)z = 1. Get c = z/(z 1) 5: Output: Σ r−1 i=0 c i x i, the approximation of the fixed point x *.an optimization algorithm and propose an algorithm that produces a solution that is better than the last iterate of the underlying optimization algorithm in practice. The algorithm of (Scieur et al. FORMULA0) (shown in Algorithm 3) allows the iterates {x t} to be nonlinear DISPLAYFORM0 where e t is a second order term (namely, satisfying e t 2 = O( x t−1 − x * 2 2). Some theoretical guarantees regarding the distance between the output and x * is provided in .In NEW-OPTIMISTIC-AMSGRAD, we use Algorithm 3 to get m t. Specifically, m t is obtained by• Call Algorithm 3 with input being a sequence of some past r + 1 gradients, {g t, g t−1, g t−2, . . ., g t−r} to obtain m t, where r is a parameter.• Set m t:= Σ r−1 i=0 c i g t−r+i from the output of Algorithm 3.If the past few gradients can be modeled by FORMULA9 approximately, the extrapolation method should be expected to work well in predicting the gradient. In practice, it helps to achieve faster convergence. NEW-OPTIMISTIC-ADAM By removing line 7 in Algorithm 2, the step of making monotone weighted second moment, we obtain an algorithm which we call it NEW-OPTIMISTIC-ADAM, as the ing algorithm can be viewed as an OPTIMISTIC-variant of ADAM. We provide the regret analysis here. We denote the Mahalanobis norm · H = ·, H· for some PSD matrix H. For the PSD matrix diag{v t}, where diag{v t} represents the diagonal matrix such that its i th diagonal element isv t [i] in Algorithm 2, we define the the corresponding Mahalanobis norm · ψt:= ·, diag{v t} 1/2 ·, where we use the notation ψ t to represent the matrix diag{v t} 1/2. We can also define the the corresponding dual norm DISPLAYFORM0 We assume that the model parameter w is in d-dimensional space. That is, w ∈ R d. Also, the analysis of NEW-OPTIMISTIC-AMSGRAD is for unconstrained optimization. Thus, we assume that the constraint K of the benchmark in the regret definition, min w∈K T t=1 t (w), is a finite norm ball that contains the optimal solutions to the underlying offline unconstrained optimization problem. Now we can conduct our analysis. First of all, we can decompose the regret as follows. DISPLAYFORM1 where the first inequality is by assuming that the loss function t (·) is convex and that we use the notation g t:= ∇ t (w t) which we adopt throughout the following proof for brevity. Given the decomposition, let us analyze the first term DISPLAYFORM2 Astute readers may realize that the bound in Lemma 1 is actually the bound of AMSGRAD. Indeed, since in online learning setting the loss vectors g t come adversarially, it does matter how g t is generated. Therefore, the regret of DISPLAYFORM3 − w *, g t can be bounded in the same way as AMSGRAD. In Appendix B, we provide the detail proof of Lemma 1. Now we switch to bound the other sums DISPLAYFORM4, h t in. The proof is available in Appendix C. 
DISPLAYFORM5 Combining and Lemma 1 and 2 leads to DISPLAYFORM6 Now we can conclude the following theorem. The proof is in Appendix D. DISPLAYFORM7 One should compare the bound with that of AMSGRAD , which is DISPLAYFORM8 where η t = η/ √ t in their setting. We need to compare the last two terms in with the last term in. We are going to show that, under certain conditions, the bound is smaller than that of AMSGRAD. Let us suppose that g t is close to m t so that DISPLAYFORM9 is much smaller than the last term of. Yet, the last term of, DISPLAYFORM10 ψt−1) *, might be actually o(√ T) and consequently might also be smaller than the last term of. To see this, let us rewrite DISPLAYFORM11 Assume that if each DISPLAYFORM12 in the inner sum is bounded by a constant c, we will get DISPLAYFORM13 Yet, the denominator v t−1 is non-decreasing so that we can actually have a smaller bound. That is, in practice, v t−1 [i] might grow over time, and the growth rate is different for different dimension i. DISPLAYFORM14 then the last term is just O(log T), which might be better than that of the last term on. One can also get a data dependent bound of the last term of. To summarize, when m t is close to g t, NEW-OPTIMISTIC-AMSGRAD can have a smaller regret (thus better convergence rate) than ADAM (Kingma & Ba FORMULA0) and AMSGRAD . Algorithm 4 ADAM-DISZ BID5 1: Required: parameter β 1, β 2, and η t. 2: Init: DISPLAYFORM0 Get mini-batch stochastic gradient vector g t ∈ R d at w t.5: DISPLAYFORM1 6: DISPLAYFORM2 7: DISPLAYFORM3 We are aware of Algorithm 1 in BID5, which was also motivated by OPTIMISTIC ONLINE LEARN-ING 2. For comparison, we replicate ADAM-DISZ in Algorithm 4. We are going to describe the differences of the algorithms and the differences of the contributions between our work and their work. FORMULA0 ) showing that if both players use some kinds of OPTIMISTIC-update, then acceleration to the convergence of the minimax value of the game is possible. BID5 was inspired by these related works and showed that OPTIMISTIC-MIRROR-DESCENT can avoid the cycle behavior in a bilinear zero-sum game, which accelerates the convergence. Our work is about solving min x f (x) (e.g. empirical risk) quickly. We also show that ADAM-DISZ suffers the non-convergence issue as ADAM. The proof is available in Appendix E.Theorem 2. There exists a convex online learning problem such that ADAM-DISZ has nonzero average regret (i.e. One might wonder if the non-convergence issue can be avoided if one let the weighted second moment of ADAM-DISZ be monotone by adding the stepv t = max(v t−1, v t) as AMSGRAD, which we call it AMSGRAD-DISZ. Unfortunately, we are unable to prove if the step guarantees convergence or not. To demonstrate the effectiveness of our proposed method, we test its performance with various neural network architectures, including fully-connected neural networks, convolutional neural networks (CNN's) and recurrent neural networks (RNN's). The illustrate that NEW-OPTIMISTIC-AMSGRAD is able to speed up the convergence of state-of-art AMSGRAD algorithm, making the learning process more efficient. For AMSGRAD algorithm, we set the parameter β 1 and β 2, respectively, to be 0.9 and 0.999, as recommended in. We tune the learning rate η over a fine grid and report the under best-tuned parameter setting. For NEW-OPTIMISTIC-AMSGRAD and AMSGRAD-DISZ, we use same β 1, β 2 and learning rate as those for AMSGRAD to make a fair comparison of the enhancement brought by the optimistic step. 
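For concreteness, the following NumPy sketch shows one way the two pieces described earlier can fit together: forming the guess m_{t+1} from the last r + 1 stochastic gradients with the extrapolation scheme of Algorithm 3 (solve (UᵀU + λI)z = 1, set c = z/(zᵀ1), and combine the stored gradients), and then applying the practical, factor-free version of line 9 on top of an ordinary AMSGrad half-step. The exact form of the vector h_{t+1} is not spelled out here, so using the moving average with the guess in place of the unseen gradient is our assumption; the variable names are ours as well.

```python
import numpy as np

def extrapolate_guess(grads, lam=1e-8):
    """Guess the next gradient from the last r+1 gradients (Algorithm-3 style).

    grads: list [g_{t-r}, ..., g_t] of 1-D arrays.
    Returns m_{t+1} = sum_{i=0}^{r-1} c_i * grads[i]."""
    G = np.stack(grads, axis=1)                # d x (r+1)
    U = G[:, 1:] - G[:, :-1]                   # d x r matrix of consecutive differences
    A = U.T @ U + lam * np.eye(U.shape[1])
    z = np.linalg.solve(A, np.ones(U.shape[1]))
    c = z / z.sum()
    return G[:, :-1] @ c                       # weighted combination of past gradients

def opt_amsgrad_step(state, grad, grads_buf, lr=1e-3, b1=0.9, b2=0.999,
                     eps=1e-8, r=15):
    """One sketched NEW-OPTIMISTIC-AMSGRAD step (practical variant).

    state holds w (the iterate actually evaluated), w_half (w_{t-1/2}),
    and the AMSGrad moments theta, v, v_hat."""
    w, w_half, theta, v, v_hat = state
    theta = b1 * theta + (1 - b1) * grad       # moving average of gradients
    v = b2 * v + (1 - b2) * grad * grad
    v_hat = np.maximum(v_hat, v)               # keep the second moment monotone
    denom = np.sqrt(v_hat) + eps
    w_half = w_half - lr * theta / denom       # AMSGrad-style half step
    grads_buf.append(grad)
    if len(grads_buf) > r + 1:
        grads_buf.pop(0)
    guess = (extrapolate_guess(grads_buf) if len(grads_buf) == r + 1
             else np.zeros_like(grad))
    h = b1 * theta + (1 - b1) * guess          # assumed form of h_{t+1}
    w = w_half - lr * h / denom                # optimistic step using the guess
    return (w, w_half, theta, v, v_hat), grads_buf
```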
We use the same weight initialization for all algorithms. The remaining tuning parameter of NEW-OPTIMISTIC-AMSGRAD is r, the number of previous gradients that we use to predict the next move. We conduct NEW-OPTIMISTIC-AMSGRAD with different values of r and observe similar performance (See Appendix A). Hence, we report r = 15 for all experiments for tidiness of plots. To follow previous works of and BID13, we compare different methods on MNIST, CIFAR10 and IMDB datasets in our experiment. For MNIST, we use a noisy version named as MNIST-back-rand in BID14 to increase the training difficulty. Our experiments start with fully connected neural network for multi-class classification problems. MNIST-back-rand dataset consists of 12000 training samples and 50000 test samples, where random is inserted in original MNIST hand written digit images. The input dimension is 784 (28×28) and the number of classes is 10. We investigate a multi-layer neural networks with input layer followed by a hidden layer with 200 cells, which is then connected to a layer with 100 neurons before the output layer. All hidden layer cells are rectifier linear units (ReLu's). We use mini-batch size 128 to calculate stochastic gradient in each iteration. Model performance is evaluated by multi-class cross entropy loss. The training loss with respect to number of iterations is reported in FIG0. On this dataset, we empirically observe obvious improvement of NEW-OPTIMISTIC-AMSGRAD in terms of both convergence speed and training loss. On the other hand, AMSGRAD-DISZ performs similarly to AMSGRAD in general. Convolutional Neural Networks (CNN) have been widely studied and is important in various deep learning applications such as computer vision and natural language processing. We test the effectiveness of NEW-OPTIMISTIC-AMSGRAD in deep CNN's with dropout. We use the CIFAR10 dataset, which includes 60,000 images (50,000 for training and 10,000 for testing) of size 32 × 32 in 10 different classes. ALL-CNN architecture proposed in Springenberg et al. FORMULA0 is implemented with two blocks of 3 × 3 convolutional filter, 3 × 3 convolutional layer with stride 2 and dropout layer with keep probability 0.5. Another block of 3 × 3, 1 × 1 convolutional layer and a 6 × 6 global averaging pooling is added before the output layer. We apply another dropout with keep probability 0.8 on the input layer. The cost function is multi-class cross entropy. The batch size is 128. The images are all whitened. The training loss is provided in FIG5. The shows that NEW-OPTIMISTIC-AMSGRAD accelerates the learning process significantly and gives lowest training cost after 10000 iterations. For this dataset, the performance of AMSGRAD-DISZ is worse than original AMSGRAD. As another important application of deep learning, natural language processing tasks often benefit from considering sequence dependency in the models. Recurrent Neural Networks (RNN's) achieves this goal by adding hidden state units that act as "memory". Long-Short Term Memory (LSTM) is the most popular structure in building RNN's. We use IMDB movie review dataset from BID16 to test the performance of NEW-OPTIMISTIC-AMSGRAD in RNN's under the circumstance of high data sparsity. IMDB is a binary classification dataset with 25000 training and test samples respectively. Our model includes a word embedding layer with 5000 input entries representing most frequent words in the dataset and each word is embedded into a 32 dimensional space. 
The output of embedding layer is passed to 100 LSTM units, which is then connected to 100 fully connected ReLu's before reaching the output layer. Binary cross-entropy loss is used and the batch size is 128. We provide the in figure 3. We observe a considerable improvement in convergence speed. In the first epoch, the is more exciting. At epoch 0.5, NEW-OPTIMISTIC-AMSGRAD already achieves the training loss that vanilla AMSGRAD can produce with more than 1 epoch. The sample efficiency is significantly improved. On this dataset, AMSGRAD-DISZ performs less effectively and may be trapped in local minimum. We remark that in each iteration, only a small portion of gradients in embedding layer is non-zero. Thus, this experiment demonstrates that NEW-OPTIMISTIC-AMSGRAD could also perform well with sparse gradient. (See additional experiments in Appendix A.) In this paper, we propose NEW-OPTIMISTIC-AMSGRAD, which combines optimistic learning and AMSGRAD to strengthen the learning process of optimization problems, in particular, deep neural networks. The idea of adding optimistic step can be easily extended to other optimization algorithms, e.g ADAM and ADAGRAD. We provide OPTIMISTIC-ADAGRAD algorithm and theoretical in Appendix F. A potential direction based on this work is to improve the method for predicting next gradient. We expect that optimistic acceleration strategy could be widely used in various optimization problems. In NEW-OPTIMISTIC-AMSGRAD, parameter r, the number of previous gradients used, may affect the performance dramatically. If we choose r too small (e.g r < 10), the optimistic updates will start at very early stage, but the information we collect from the past is limited. This may make our training process off-track at first several iterations. On the other hand, if r is chosen to be too large (e.g r >= 30), although we may get a better prediction of next gradient, the optimistic step will start late so NEW-OPTIMISTIC-AMSGRAD may miss great chances to improve the learning performance at early stages. Additionally, we need more room to store past gradients, hence the operational time will increase. The empirical impact of different r value is reported in FIG7. We suggest 10 ≤ r ≤ 20 as an ideal range, and r = 15 tend to perform well in most training tasks. Actually, we may make the algorithm more flexible by "early start". For example, when we set r = 20, we can instead start adding optimistic step at iteration 10, and gradually increase the number of past gradients we use to predict next move from 10 to 20 in next 10 iterations. After iteration 21, we fix the moving window size for optimistic prediction as 20. This may bring enhancement to NEW-OPTIMISTIC-AMSGRAD because it can seek opportunities to accelerate learning in first several iterations, which is critical for indicating a better direction towards the minimum loss. We also conduct experiments on NEW-OPTIMISTIC-ADAM and ADAM-DISZ. The experiment setting is similar to NEW-OPTIMISTIC-AMSGRAD, where we fix β 1, β 2 and compare the performance using the best learning rate with respect to ADAM. We provide a brief summary of the in FIG8, with r = 15. The improvement brought by NEW-OPTIMISTIC-ADAM is obvious, indicating that adding optimistic step could also enhance the performance of ADAM optimizer. B PROOF OF LEMMA 1 DISPLAYFORM0 Proof. The proof of this lemma basically follows that of . 
We have that DISPLAYFORM1 By rearranging the terms above and summing it from t = 1,..., T, DISPLAYFORM2 where the first inequality is due to Young's inequality and the second one is due to the constraint 0 < β 1 ≤ 1 so that DISPLAYFORM3 where the first inequality is due to.Continue the analysis, we have DISPLAYFORM4 where the last equality is by telescoping sum. DISPLAYFORM5 Proof. From the update, w t = w t− 1 2 DISPLAYFORM6 Therefore, we have that DISPLAYFORM7 where (a) is by Hölder's inequality and (b) is by Young's inequality. D PROOF OF THEOREM 1 DISPLAYFORM8 Proof. Recall that from FORMULA20, we show that DISPLAYFORM9 To proceed, let us analyze DISPLAYFORM10 above. Notice that DISPLAYFORM11 and DISPLAYFORM12 where we use the update rule, w t = w t− 1 2 DISPLAYFORM13. Therefore, we have that DISPLAYFORM14 where the second inequality is due to that the sequence {v t [i]} is non-decreasing and the last equality is because DISPLAYFORM15 To summarize, DISPLAYFORM16 E PROOF OF THEOREM 2Theorem 2 There exists a convex online learning problem such that ADAM-DISZ has nonzero average regret (i.e. We basically follow the same setting as Theorem 1 of . In each round, the loss function t (w) is linear and the learner's decision space is DISPLAYFORM0 where C ≥ 4. For this loss function sequences, the point w = −1 achieves the minimum regret, i.e. − 1 = arg min w∈K T t=1 f t (w) when T → ∞. Consider the execution of ADAM-DISZ with DISPLAYFORM1 β 2 < 1 in this case, which satisfies the conditions of BID13.The goal is to show that for all t, w 3t+1 = 1, and w t > 0. Let us denoteŵ t+1: DISPLAYFORM2 Assume the initial point is 1. As pointed it out by , the assumption of the initial point is without loss of generality. If the initial point is not 1, then one can translate the coordinate system to a new coordinate system so that the initial point is 1 and then choose the loss function sequences for the new coordinate system. Therefore, the base case x 1 = 1 is true. Now assume that for some t > 0, x 3t+1 = 1 and x s > 0 for all s ≤ 3t + 1. Observe that.∇f t (w) = C, for t mod 3 = 1 −1, otherwise, According to the update of ADAM-DISẐ DISPLAYFORM3 So,ŵ 3t+2 = w 3t+2. DISPLAYFORM4 Forŵ 3t+4, let us first consider the case thatŵ 3t+3 < 1 DISPLAYFORM5 (3t + 1)(1 − β 2)) + η (3t + 1)(β 2 C 2 + (1 − β 2)) + 2η (3t + 3)(β 2 C 2 + (1 − β 2)) is a decreasing function of t and has its largest value at t = 1 for any t > 0, which is 1 − (DISPLAYFORM6) ≤ 0.12. To summarize, we have that w 3t+4 = 1. Now if it is the case thatŵ 3t+3 ≥ 1 w 3t+4 = min(ŵ 3t+3, 1) + 2η (3t + 3)(β 2 v 3t+2 + (1 − β 2)) − η (3t + 3)(β 2 v 3t+1 + (1 − β 2)) = 1 + 2η (3t + 3)(β 2 v 3t+2 + (1 − β 2)) − η (3t + 3)(β 2 v 3t+1 + (1 − β 2)) > 1where the last inequality is because 2 β 2 v 3t+1 + (1 − β 2) ≥ β 2 v 3t+2 + (1 − β 2)To see this, β 2 v 3t+2 + (1 − β 2) = β 2 (β 2 v 3t+1 + (1 − β 2)) + (1 − β 2) = (β 2 2)v 3t+1 + 1 − β 2 2 So, the above inequality is equivalent to DISPLAYFORM7 which is true. This means that w 3t+4 = 1. Therefore, we have completed the induction. To summarize, we have 3t+1 (w 3t+1) + 3t+2 (w 3t+2) + 3t+3 (w 3t+3) − 3t+1 (−1) − 3t+2 (w −1) − 3t+3 (−1) ≥ 2C − 4. That is, for every 3 steps, the algorithms suffers regret at least 2C − 4, this means that the regret over T rounds would be (2C − 4)T /3, which is not sublinear to T. Now we have completed the proof. One may follow the analysis of Theorem 2 of to generalize the so that ADAM-DISZ does not converge for any β 1, β 2 ∈ such that β 1 < √ β 2. 
The optimistic update step can also be extended to other algorithms. For example, based on ADAGRAD, we propose OPTIMISTIC-ADAGRAD by including the optimistic update step. Algorithm 5 OPTIMISTIC-ADAGRAD (UNCONSTRAINED): get the current gradient g_t at w_t; ...; update w by subtracting η diag(G_t)^{−1/2} m_{t+1}, where m_{t+1} is the guess of g_{t+1}; end for. We provide the regret analysis here. Let us recall the notation and assumptions first. We denote the Mahalanobis norm ‖·‖_H = √⟨·, H·⟩ for some PSD matrix H. We let ψ_t(x) := ⟨x, diag{v̂_t}^{1/2} x⟩ for the PSD matrix diag{v̂_t}, where diag{v̂_t} represents the diagonal matrix whose i-th diagonal element is v̂_t[i] in Algorithm 2. Consequently, ψ_t(·) is 1-strongly convex with respect to the norm ‖·‖_{ψ_t} := √⟨·, diag{v̂_t}^{1/2} ·⟩; namely, ψ_t(·) satisfies ψ_t(u) ≥ ψ_t(v) + ⟨∇ψ_t(v), u − v⟩ + ½‖u − v‖²_{ψ_t}. The Bregman divergence is defined as B_{ψ_t}(u, v) := ψ_t(u) − ψ_t(v) − ⟨∇ψ_t(v), u − v⟩, and ψ_t is called the distance generating function. We can also define the associated dual norm as ψ*_t(x) := ⟨x, diag{v̂_t}^{−1/2} x⟩. We assume that the model parameter w lies in d-dimensional space, that is, w ∈ R^d. It suffices to analyze Algorithm 6, which holds for any convex set K; the algorithm reduces to Algorithm 5 when K = R^d.
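A minimal NumPy sketch of one unconstrained OPTIMISTIC-ADAGRAD step is given below. Because part of the Algorithm 5 listing is missing above, the half-step structure (updating w_{t−1/2} with g_t before the optimistic step) is our own reading by analogy with Algorithm 2, and the variable names are assumptions.

```python
import numpy as np

def optimistic_adagrad_step(w_half, G, grad, guess, lr=0.1, eps=1e-10):
    """One sketched unconstrained OPTIMISTIC-ADAGRAD step.

    w_half: w_{t-1/2};  G: running sum of squared gradients;
    grad: g_t evaluated at w_t;  guess: m_{t+1}, the guess of g_{t+1}."""
    G = G + grad * grad                          # per-dimension accumulator G_t
    scale = lr / (np.sqrt(G) + eps)
    w_half = w_half - scale * grad               # AdaGrad-style half step (assumed)
    w_next = w_half - scale * guess              # optimistic step with the guess
    return w_next, w_half, G
```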
We consider new variants of optimization algorithms for training deep nets.
1,398
scitldr
Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising for delivering high-quality results, but it is challenging to meet stringent latency requirements because of the memory-bound execution pattern of RNNs. We propose a big-little dual-module inference scheme that dynamically skips unnecessary memory access and computation to speed up RNN inference. Leveraging the error-resilient feature of the nonlinear activation functions used in RNNs, we propose to use a lightweight little module that approximates the original RNN layer, which is referred to as the big module, to compute activations of the insensitive region that are more error-resilient. The expensive memory access and computation of the big module can be reduced because its results are only needed in the sensitive region. Our method can reduce the overall memory access by 40% on average and achieve 1.54x to 1.75x speedup on a CPU-based server platform with negligible impact on model quality. Recurrent Neural Networks (RNNs) play a critical role in many natural language processing (NLP) tasks, such as machine translation, speech recognition, and speech synthesis, owing to their capability of modeling sequential data. These RNN-based services, deployed in both data-center and edge devices, often process inputs in a streaming fashion, which demands real-time interaction. For instance, in cloud-based translation tasks, multiple requests need to be served under very stringent latency limits, where inference runs concurrently and individually. For on-device speech recognition in automated assistants, latency is the primary concern in pursuing a fast response. However, serving RNN-based models in latency-sensitive scenarios is challenging due to low data reuse, and thus low resource utilization, as memory-bound General Matrix-Vector multiplication (GEMV) is the core compute pattern of RNNs. Accessing the weight matrix from off-chip memory is the bottleneck of GEMV-based RNN execution, as the weight data almost always cannot fit in on-chip memory. Moreover, accessing weights repeatedly at each time-step, especially in sequence-to-sequence models, makes the memory-bound problem more severe. Subsequently, the on-chip computing resources are under-utilized. Although batching is a workaround for low utilization, using a large batch size is not favored in latency-sensitive scenarios such as speech recognition and translation. In essence, RNN inference is not a simple GEMV. With non-linear activation functions following the GEMV operation, the RNN inference operation is an "activated" GEMV. These nonlinear activation functions, as used in neural networks, bring error resilience. As shown in Figure 1, the sigmoid and tanh functions in gated RNNs such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have insensitive regions (the green shaded regions) where the outputs are saturated and resilient to errors accumulated in the pre-activation. In other words, not all computations in RNNs need to be accurate. Can we leverage this error resilience in RNNs to reduce the memory access and eventually achieve speedup? To this end, we propose a big-little dual-module inference scheme that regards the original RNN layer as the big module and uses a parameterized little module to approximate the big module to help reduce redundant weight accesses. The philosophy of dual-module inference is using approximate results computed by the memory-efficient little module in the insensitive region, and using accurate results computed by the memory-intensive big module in the sensitive region.
For this reason, the final outputs are a mixture of the big and little modules' results. With the memory-efficient little module computing the insensitive region, we can skip the expensive data access and computation of the big module there and thus reduce the overall memory access and computation cost. The (in)sensitive region is dynamically determined using the little module. Because of the error resilience, using approximate results in the insensitive region has a negligible impact on the overall model quality but creates a significant acceleration potential. Given the trade-off between accuracy and efficiency, the little module needs to be sufficiently accurate while being as lightweight as possible. To achieve this, we first use a dimension reduction method - random projection - to reduce the parameter size of the little module and thus reduce data accesses. Then, we quantize the weights of the little module to lower the overhead further. Because we only need the little module outputs in the insensitive region, which is error-resilient, we can afford aggressively low bit-widths. Compared with common sparsification schemes, our hybrid approach avoids indexing overheads and therefore achieves practical speedup. We evaluate our method on language modeling and neural machine translation using RNN-based models and measure the performance, i.e., wall-clock execution time, on a CPU-based server platform. With the overall memory access data reduced by 40% on average, our method can achieve 1.54x to 1.75x speedup with negligible impact on model quality. In this section, we discuss the error resilience of RNNs. As shown in Fig. 1, the nonlinear activation functions - sigmoid and tanh - have insensitive regions where the output activations are resilient to errors introduced in the pre-activation accumulation. We take a single LSTM layer for language modeling over the PTB dataset as an illustrative example. The baseline perplexity (PPL) is 80.64. We consider two cases: adding a random error vector drawn from a normal distribution into the pre-activation accumulation in the sensitive regions of the four gates; and adding errors to the insensitive regions. We separate the (in)sensitive regions by 50% based on the activation magnitude. As listed in Table 1, we report the PPL on the testing set and the average cosine similarity between the activations of the baseline model and the error-introduced model. Before applying the nonlinear activation functions, the cosine similarities of the two cases - adding errors in the sensitive region or in the insensitive region - are at the same level. However, we observe that after the nonlinear gates, the cosine similarity in the insensitive case is much closer to one (i.e., fewer output errors) than in the sensitive case. We further compare the PPL of these two cases, and we observe that introducing errors in the insensitive region causes little quality degradation. The selection of which neurons belong to the (in)sensitive region is dynamic and input-dependent, as can be seen in Figure 2. Unlike static weight sparsity, where unused connections can be pruned offline in advance, the dynamic region speculation requires a very lightweight criterion for real-time processing. Taking all of this into account, we propose a dual-module inference method that efficiently determines the (in)sensitive region and significantly saves memory access and computational cost. Firstly, we explain the dual-module inference by taking a fully-connected (FC) layer as an example and then extend it to LSTM and GRU.
For an FC layer with unit batch size, the operation is typically formulated as a = ϕ(y), y = W x + b, where W is a weight matrix (W ∈ R n×d), x is an input vector (x ∈ R d), b is a bias vector (b ∈ R n), a is an activated output vector (a ∈ R n), and ϕ is an activation function. The core computation is matrix-vector multiplication (GEMV), i.e., W x. Both the amount of computation and memory access are O(nd); therefore, it is memory-bounded since the operation intensity is O according to the Roofline model analysis . Accessing weights from the off-chip memory is the bottleneck in terms of both the latency and energy. Our work aims at reducing the memory access of weight matrices for GEMV-based RNN inference. We show in Section 2 that not all values in y need accurate computation, and those that belong to the insensitive region can afford some level of approximation. In other words, we only need accurate computation and expensive memory access in the sensitive region of y and skip computation and memory access to weights that contribute to the insensitive region of y. With that, we still need approximated in the insensitive region. Therefore, we propose to learn a lightweight little module from the original trained layer, here we refer the original layer as the big module. Essentially, our little module is executed in a low-dimensional and low-precision space, thus termed as LL module; by contrast, the original big module with high dimension and high precision is called HH module. Let the outputs from these two modules be y LL and y HH, respectively. If the LL module approximates the HH module well, the final output vector -a mixture of from the HH and the LL modules -can be assembled by where m ∈ {0, 1} n is a binary mask vector for the output switching. m i equals 1 in the sensitive region while it switches to 0 in the insensitive region. The overall saving comes from skipping memory access to the big module while paying the overhead of accessing and computing of the little module. As the HH module is the original pre-trained layer, we only need to construct the LL module. Delivering a lightweight little module at inference time is crucial to achieving real wall-clock time speedup. As discussed earlier, the sparsification method usually suffers from severe indexing overheads; therefore, we turn to other approaches. In this work, we propose a hybrid compression with dimension reduction and data quantization to keep the little module as efficient as possible in computation and storage. The low dimension and low precision give birth to the desired LL module. We emphasize two objects that should be reached in the design of LL module: much lower computation and memory overheads than the HH module; approximating the outputs of HH module accurately. First, we introduce sparse random projection to reduce the dimension of x from R d to R k where k d. Subsequently, the parameter size of the LL module is O(nk), which is much smaller compared with the parameter size O(nd) of the HH module. Random projection is a common technique for dimension reduction that preserves distances in Euclidean space (; ; ;). The dimension reduction step can be formulated as where P is a sparse random matrix (P ∈ k×d, the probability of P ij being −1, 0, and 1 is 1 6, 2 3, and 1 6, respectively). Note that k is configurable according to actual needs to balance the accuracy loss and inference cost. We choose the value of k according to: where n is the number of rows in W and is a real number in. 
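A small NumPy sketch of the dimension-reduction step just described is given below: the sparse projection matrix P has entries −1, 0, +1 with probabilities 1/6, 2/3, 1/6, and each input is projected once before the little-module GEMV. The equation for choosing k is not reproduced above, so here k is simply taken as a parameter; any constant scaling of P is also omitted, on the assumption that it can be absorbed by the learned little-module weights.

```python
import numpy as np

def make_sparse_projection(k, d, seed=0):
    """P in R^{k x d}: entries are -1, 0, +1 with prob. 1/6, 2/3, 1/6."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 0.0, 1.0], size=(k, d), p=[1/6, 2/3, 1/6])

def little_module_forward(P, W_LL, b_LL, x):
    """Low-dimensional approximation y_LL of the big module's pre-activation."""
    x_LL = P @ x                 # dimension reduction: R^d -> R^k
    return W_LL @ x_LL + b_LL    # O(nk) instead of O(nd) weight accesses
```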
Second, after the dimension reduction, we quickly construct a lightweight little module in the lowdimensional space to approximate the pre-trained big module. The parameters of the latter (i.e., W HH and b HH) are kept frozen while the parameters of the former (i.e., W LL and b LL) are updated by stochastic gradient descent (SGD) to minimize the following loss function: where S is the mini-batch size. Essentially, for each pair of big-little modules, we apply linear regression on the little module to approximate the function of the big module and optimize the mean square error of the two. Apparently, the parameter size of W LL is O(nk), much smaller than the original weight W HH of O(nd) in the high-dimensional space. Even if further considering the projection cost of O(kd), the overhead is still much lower than the vanilla inference. In this way, the memory-bound issue in GEMV-based models can be greatly alleviated; the computational complexity is also reduced. The SGD overhead for constructing the above module can be amortized by the pattern of "construct-once-inference-forever". Finally, based on the constructed low-dimensional module, we also apply data quantization technique to reduce the parameter precision. Data quantization can further shrink the storage space of LL parameters due to the shorter bit-width. The input x is also quantized during run-time to reduce the computation cost. In our design, we apply one-time uniform quantization on W LL to avoid complicated calculations. Although some other accurate quantization methods are available as well, we find that one-time quantization works well in our dual-module inference given in Equation. This error tolerance is benefit from the fact that the computation in the insensitive region has a small influence on the final outputs. The dual-module inference relies on a binary mask m to switch between outputs of the "accurate & costly" HH module and the "approximated & efficient" LL module. Hence, the generation of m is a crucial factor to control the overall performance by adjusting the trade-off between accuracy and efficiency. Thanks to the saturation region of the nonlinear activation functions in RNNs, such as sigmoid and tanh, we observe a unipolar or bipolar distribution of their outputs, as depicted in Figure 3. This affords two excellent opportunities: It is possible to remove the majority of the computation and access from the costly HH module by setting the peak areas in Figure 3 as insensitive regions; The saturation output values in those regions such as near 0 in sigmoid and near ±1 in tanh additionally allow inaccurate computations because the outputs are insensitive to approximated values. According to the above observations and analysis, we design a specific criterion for each activation function. In particular, they are governed by where θ sigmoid > 0, θ − tanh < 0, and θ + tanh > 0 are constant thresholds. Note that these thresholds can be searched to a target insensitive ratio using validation dataset or be tuned at run-time that acts as a knob for accuracy-efficiency trade-off. The overall implementation is provided in Algorithm 1. After the construction of the LL model, the consequent dual-module inference needs five steps: Dimension reduction and data quantization for each dynamical input x as x Step 1: Step 2: Step 3: Generating m according to Equation (We discuss how to apply the proposed dual-module inference for an FC layer to RNNs, including LSTM and GRU. 
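Before moving to LSTM and GRU, the five FC-layer inference steps above can be condensed into a short NumPy sketch for a sigmoid layer. The exact per-function thresholding rule is not fully legible above, so a simple magnitude test on the little-module pre-activation stands in for it; indexing whole rows of W_HH stands in for the skipped memory accesses, and the input quantization step is omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dual_module_fc(x, W_HH, b_HH, P, W_LL, b_LL, theta=2.0):
    """Dual-module inference for a sigmoid FC layer (sketch).

    theta > 0: outputs whose little-module pre-activation has |y_LL| >= theta
    are treated as insensitive (stand-in for the per-function criterion)."""
    # Steps 1-2: reduce dimension and run the cheap little module.
    y_LL = W_LL @ (P @ x) + b_LL
    # Step 3: speculate the sensitive region from the little-module result.
    mask = np.abs(y_LL) < theta              # True (1) = sensitive, needs big module
    # Step 4: compute the big module only for the sensitive output rows.
    y = y_LL.copy()
    idx = np.nonzero(mask)[0]
    y[idx] = W_HH[idx] @ x + b_HH[idx]       # only these rows of W_HH are accessed
    # Step 5: assemble the mixed pre-activation and apply the nonlinearity.
    return sigmoid(y)
```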
We will explain the LSTM implementation for illustration, while the extension to GRU is quite straightforward. The dynamics of an LSTM cell can be described as where f, i, o are the states of forget, input, and output gate, respectively, and g is the input activation. Each of them has its own bias vector and weight matrices. c and h are the cellular and hidden states of the hidden layer, respectively. σ(·) and θ(·) are sigmoid function and tanh function, respectively. The computation of each gate is similar to an FC-like layer; therefore, Algorithm 1 still holds. The first difference is the two GEMV computations in each gate; we apply dimension reduction, construction of the LL module, and data quantization on both GEMV computations. The second difference is that there is an additional temporal dimension in RNNs. We should guarantee the approximation performance of the LL module at all time steps. Taking the forget gate as an example, the linear map works for both x LL (t) = P x x HH (t) and h LL (t − 1) = P h h HH (t − 1). The loss function for constructing the LL module is slightly modified to Here the minimization considers not only S training samples in each mini-batch but also T time steps. The data quantization, switching mask (i.e., m) generation, and output assembling is the same as Algorithm 1 describes. Applying to other gates is similar so we do not discuss them to avoid repetition. Note that the input x and hidden state h can have different sizes, termed as d x and d h, respectively. For simplicity, we set P x ∈ R k×dx and P h ∈ R k×d h to let x LL and h LL to the same length k. For the g gate with tanh function, we set |θ − tanh | = |θ + tanh | also for simplicity; however, different magnitudes are allowed. The target of our dual-module inference method is to reduce the expensive off-chip memory access of the big module with the help of the little module. We introduce an insensitive ratio as the number of outputs using the little module over entire outputs. The ratio can be interpreted as the zero ratio in mask m as in Equation 1. In other words, the higher insensitive ratio will have less memory access to the big module. For example, obtaining a ratio of 50% in reducing 50% of weight matrix accessing in a GEMV operation. The choice of accurate ratio determines the model inference quality, and it is a knob to trade-off model inference quality vs. latency at run-time. The overhead of dual-module inference is small due to the use of dimension reduction and quantization. When choosing reduced dimension k and low-precision bit-width of the little module, we use Equation 3 with = 0.5 and INT8 quantization by default. We also explore different levels of dimension reduction and quantization in Section 4.3 and Section 4.4. As shown in Figure 4, we compare memory access data and operations between the single-module -the base case -and the little module of dual-module inference using a set of LSTM and GRU layers. On average, the little module accounts 10% storage overhead and 40% operation overhead compared with the base case. Note that we count the number of operations in Figure 4 regardless of precision; and the little module computation overhead can be further reduced using low-precision compute kernel as we used in performance evaluation. We first evaluate the model inference quality and execution time under different insensitive ratio and then conduct two sensitivity studies on dimension reduction and quantization. 
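For the gate-wise construction described above, a PyTorch sketch of fitting one little gate module with the time-summed regression loss might look as follows. The big gate's weights stay frozen, the projections P_x and P_h are fixed, and only W_LL and b_LL are trained; for brevity the two projected inputs are concatenated and handled by a single W_LL, whereas the text applies the construction to the two GEMVs separately. All names and the plain SGD loop are illustrative assumptions.

```python
import torch

def fit_little_gate(W_x, W_h, b, P_x, P_h, inputs, hiddens, epochs=5, lr=1e-3):
    """Fit y_LL = W_LL [P_x x; P_h h] + b_LL to the frozen big gate's pre-activation.

    inputs:  tensor of shape (S, T, d_x) -- mini-batch of input sequences
    hiddens: tensor of shape (S, T, d_h) -- corresponding hidden states h_{t-1}"""
    n = W_x.shape[0]
    k = P_x.shape[0] + P_h.shape[0]
    W_LL = torch.zeros(n, k, requires_grad=True)
    b_LL = torch.zeros(n, requires_grad=True)
    opt = torch.optim.SGD([W_LL, b_LL], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        # Frozen big-module pre-activation and little approximation, all time steps at once.
        y_HH = inputs @ W_x.T + hiddens @ W_h.T + b                   # (S, T, n)
        x_LL = torch.cat([inputs @ P_x.T, hiddens @ P_h.T], dim=-1)   # (S, T, k)
        y_LL = x_LL @ W_LL.T + b_LL
        loss = ((y_HH.detach() - y_LL) ** 2).mean()                   # MSE over S and T
        loss.backward()
        opt.step()
    return W_LL.detach(), b_LL.detach()
```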
Our method is evaluated on CPU-based server platform (Intel(R) Xeon(R) CPU E5-2698 v4) as most inference workloads run on CPUs . We use PyTorch to train the little module and evaluate inference quality. The baseline implementation is the PyTorch CPU version with Intel MKL (version 2019.4) as the back-end BLAS kernel library. Our custom kernel implementation uses a multi-threaded MKL dot-product kernel at BLAS level-1 to compute the big module instead of BLAS level-2 or level-3 kernels. The kernel-wise performance is measured as wall-clock time and averaged with 1000 runs, assuming cold cache at the execution of each RNN cell representing the real-world cases, for example in the decoder of seq2seq model. We first evaluate our method on single-layer LSTM & GRU used in language modeling tasks and then on multi-layer stacked LSTM in GNMT model used in machine translation tasks -a standard benchmark model for inference as in MLPerf 1. We train the little module while freezing the parameters of the big module, and we use the same training set and validation set to run SGD optimization. We first evaluate our method on single-layer LSTMs/GPUs. Our implementations are adapted from the word-level language modeling example from PyTorch with same hyper-parameters to train baseline models. We report word-level perplexity (PPL) as the measure of model quality. As listed in Table 2, the baseline LSTM model achieves 80.64 PPL at the latency of 1.477ms. Then, we varying the insensitive ratio to show the quality-performance trade-off; the larger insensitive ratio indicates more are from the little module and less memory access to compute the big module. As we increase the insensitive ratio, we observe the degradation of quality as the perplexity increases during a gradual reduction in execution time. When the insensitive ratio is 50%, the perplexity is slightly increased to 81.36, which is negligible in language modeling tasks, while the inference speedup is 1.67x. We observe a similar quality-performance trade-off for LSTM with 750 hidden units. Comparing the case of base LSTM with 750 hidden units with dual-module LSTM with 1500 hidden units and 50% insensitive ratio, although the memory access reduction is at the same level, our proposed dualmodule approach achieves much better model quality because we kept the expressive power of a larger LSTM layer. We further report the using single-layer GRU on word-level language modeling tasks as in Table 3. Using dual-module inference on GRUs expresses the similar quality-performance trade-off as of LSTMs. Our dual-module method is generally applicable to both LSTMs and GRUs. Given the promising on language modeling, we further investigate Neural Machine Translation (NMT), which is a promising end-to-end learning approach for automated translation . The base model 2 consists of a four-layer stacked LSTM in both the encoder and the decoder of the sequence-to-sequence modeling. We focus on the speedup of the decoder since it is the most memory intensive and the most time-consuming part (95%). The decoder has a four-layer unidirectional LSTM with hidden size 1024 with residual connections starting from the third layer, i.e., the input size of the third and fourth layer is 2048. Our experiments show de-tokenized BLEU score to measure the model inference quality on the public WMT16 English-German dataset. The baseline model obtains a BLEU score of 24.32. We replace the LSTM layers in the decoder with our proposed dual-module-based LSTM layers. 
Similar to single-layer LSTM , using the little module computed in the insensitive region can reduce overall memory access while maintaining model quality. As listed in Table 4, our method can achieve imperceptible BLEU score degradation while speedup inference by 1.75x for the first two LSTM layers and 1.70x for the last two LSTM layers. When compromising more translation quality, i.e., decreasing the BLEU score by 2.4, our method can achieve more than 2x speedup. Dimension reduction is an integral part of our dual-module inference method to reduce the number of parameters and memory footprint. Here, we study the impact of different levels of dimension reduction on the model quality and performance. We conduct experiments on language modeling using single-layer LSTM of 1500 hidden units. We quantize the little module to INT8 and reduce the hidden dimension from 1500 to three different levels, which are calculated by Sparse Random Projection. We fix the insensitive ratio to be 50% across this set of experiments. As we can see in Table 5, the higher dimension of the little module, the better approximation the little module can perform. For instance, when we reduce hidden size to 966 and quantize to INT8, the dual-module inference can achieve slightly better quality -PPL of 80.40 -and 1.37x speedup. More aggressive dimension reduction can further have more speedup at the cost of more quality degradation: hidden dimension reduced to 417 and 266 can have 1.67x and 1.71x speedup but increase PPL by 0.72 and 2.87, respectively. We further show the overhead of performing the computation of the little module. As listed in the last three columns in Table 5, we measure the execution time of performing dimension reduction on inputs by Sparse Random Projection, computation of the little module, and computation of the big module; the execution time is normalized to the baseline case, i.e., the execution time of standard LSTM, to highlight the percentage of overheads. When the hidden dimension is reduced to 966, the overhead of the little module accounts 22% while the execution time of the big module is cut off by half 3. In our experiments, we choose = 0.5 as the default parameter in sparse random projection as it demonstrated good quality and speedup trade-off by our study. When further reducing the hidden dimension to 266, there is only a slight improvement on speedup compared with the hidden size of 417 in the little module, where the overhead of the little module is already small enough, but the quality dropped significantly. Quantizing the weights of the little module is another integral part of keeping memory footprint small. We show different quantization levels the impact on model quality and parameter size. After training the little module, we can quantize its weights to lower precision to reduce the memory accessing on top of dimension reduction. As we can see in Table 6, more aggressive quantization leads to smaller parameter size that can reduce the overhead of computing the little module; on the other hand, the approximation of the little module is compromised by quantization. We can quantize the little module up to INT4 without significant quality degradation. Using lower precision would degrade the quality while decreasing the parameter size. For performance evaluation, we choose INT8 as the quantization level since we leverage off-the-shelf INT8 GEMM kernel in MKL. We expect more speedup once the little module overhead can be further reduced by leveraging INT4 compute kernels. 
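The quantization applied to the little module is described as a one-time uniform mapping; a minimal symmetric per-tensor version is sketched below. The exact scheme (per-tensor vs. per-channel, rounding mode) is not specified in the text, so this is only one plausible instantiation.

```python
import numpy as np

def quantize_uniform(W, bits=8):
    """One-time symmetric uniform quantization of the little-module weights."""
    qmax = 2 ** (bits - 1) - 1                     # e.g. 127 for INT8
    scale = np.abs(W).max() / qmax
    dtype = np.int8 if bits == 8 else np.int32     # container type for the codes
    W_q = np.clip(np.round(W / scale), -qmax, qmax).astype(dtype)
    return W_q, scale                              # dequantize as W_q * scale
```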
As we aim at the memory-bound problem of RNN-based inference applications, we limit the discussion on related work to RNN inference acceleration. Although we only evaluate our dual-module inference method on standard LSTMs/GRUs, we believe our method can be applied to many newly released sequence modeling networks (; as we leverage the commonly observed error-resilience of non-linear activation functions. Compressing DNN models via data quantization, weight sparsity, and knowledge distillation is promising to deliver efficient deployment for inference. propose a quantization method for RNNs where both weights and activations are quantized to binary or ternary. propose a hybrid ternary quantization method based on the different distributions of weights and activations. Weight pruning, i.e., inducing weight sparsity, has been proposed to reduce the parameter size of a pre-trained model (b; a). While fine-grained pruning at element-wise could reduce the number of parameters (; ;), indexing nonzero weights causes extra memory cost and would offset the benefits of reducing parameter size; it is hard to gain practical acceleration on general-purpose hardware or need hardware specialization . Although structural pruning and knowledge distillation could achieve speedup, the applicability on more complicated tasks such NMT using large-scale dataset is unstudied; besides, those methods require extensive retraining via regularization that would increase the training cost and hard to find a solution. Model compression would inevitably compromise the compressive power of RNNs. Our method, by no means, is supposed to replace model compression but provides an orthogonal approach to accelerate RNN inference. Using the analogy of knowledge distillation, we do not simply deploy a student network learned from the teacher network. Instead, we let the teacher network, applied with model compression or not, help with the student -the little module learned from the base moduleand collaboratively perform inference with reduced memory access and computation. Instead of model compression, many work propose to skip computations dynamically based on certain criterion. propose dynamic execution with layer-wise early exit. leverage a special feature of LSTM that using threshold-based pruning on output gates and generate a mask, and then using the mask to skip computation as well as data access of masked-out neurons of the other three gates. utilize temporal input sparsity but need to enforce input similarity with threshold clipping. selectively skip updating the hidden states for some inputs. However, these work either depend on special cell structure or rely on the temporal similarity of inputs which is not evaluated on NLP tasks such as NMT. We are the first that propose a general and principled method to reduce memory access and computation of Gated RNNs, including both LSTMs and GRUs. In this paper, we describe a big-little dual-module inference method to mitigate the memory-bound problem in serving RNN-based models under latency-sensitive scenarios. We leverage the error resilience of nonlinear activation functions by using the lightweight little module to compute for the insensitive region and using the big module with skipped memory access and computation to compute for the sensitive region. With overall memory access reduced by near half, our method can achieve 1.54x to 1.75x wall-clock time speedup without significant degradation on model quality. 
We compare our proposed dual-module inference approach with the automated gradual pruning method, which is a popular pruning method with an open implementation. Firstly, compared with weight pruning, our method achieves better quality with practical speedup (1.54x to 1.75x in wall-clock time) on commodity CPUs, while element-wise weight pruning requires specialized hardware to gain real computational speedup given its irregular sparsity. Moreover, our dual-module inference method can be further applied on top of pruned models to reduce execution time by reducing memory access.
We accelerate RNN inference by dynamically reducing redundant memory access using a mixture of accurate and approximate modules.
1,399
scitldr